When developing an application, sometimes we need to install additional dependencies and/or modify the environment. When we run the application on our own machine, it should work perfectly fine. On the other hand, running the app on other machines is another story. Other machines probably won’t have the same environment, operating system, or dependencies installed. Without any modification to the other machines’ environment, the app will most likely fail to run.
So how do we deal with this problem?
A great way to ensure your application runs well on other machines is by using Docker. Docker is a platform for packaging an application and its dependencies together. This allows the application to run the same way on any machine, regardless of its operating system. Docker also helps developers build and deploy applications across environments easily.
Benefits of Docker
Docker containers only contain a single application and its dependencies (including the necessary packages and libraries). Consequently, Docker containers are quite lightweight: they are quick to start and stop, and they don't consume a lot of resources, which can save costs.
Docker containers are also great in terms of scalability. Although one container only holds one application, you can run multiple instances of Docker containers together. With containers you can decompose giant services into smaller, compact services, making them easier to scale. When you want to scale, all you need to do is move containers to a new server or deploy them across a cluster of servers.
Docker Containers vs Virtual Machines
Concept-wise, Docker containers are quite similar to virtual machines. They both implement virtualization, but they virtualize different components. Virtual machines contain their own operating system, application, and dependencies. On the other hand, Docker containers only contain a single application and its dependencies, and share the host's operating system kernel with other containers. Docker containers are more portable, quick, and flexible, making them better for projects that require fast-paced deliveries and constant changes. Virtual machines are more suitable for applications that need stronger isolation and security, because each one has its own operating system kernel and security privileges.
Dockerfiles, Images, and Containers
A Dockerfile is a text file that consists of the commands needed to build a Docker image. Dockerfiles contain configurations for what the Docker image needs to install and update, and they include everything needed for the image to execute, such as dependencies and application code. To run these commands and build the image, you use docker build. Below is an example of a Dockerfile from my project.
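The original file is not reproduced here, so the following is a minimal sketch that uses the same instructions discussed below; the paths, environment variable, user name, and entry point are all illustrative placeholders, not the project's actual values.

```dockerfile
# Base image: Python 3.9, as in the example described below
FROM python:3.9

# Working directory inside the container (path is illustrative)
WORKDIR /app

# Environment variable (name and value are illustrative)
ENV PYTHONUNBUFFERED=1

# Copy the dependency list, then install the dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code into the image
COPY . .

# Run as a non-root user (user name is illustrative)
USER appuser

# Default command when a container starts (entry point is illustrative)
CMD ["python", "app.py"]
```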
You may be wondering what all the commands in the file above do. So, here is the explanation:
FROM: Initializes the build and sets the base image for the following commands. Every Dockerfile has to have a FROM command. In the example, the base image used is Python 3.9.
WORKDIR: Tells Docker where the working directory for the container will be; subsequent commands run relative to this directory.
ENV: Used to set environment variables.
RUN: Executes commands and commits the results as a new layer of the image.
COPY: Copies new files or directories into the filesystem of the container. Here we are copying the requirements.txt file into the current directory.
USER: Sets the user name used to run the image.
CMD: Provides the default command for an executing container. Each Dockerfile can only have one effective CMD instruction; if more than one is listed, only the last one takes effect.
A Docker image is a layered file system needed to run Docker containers. You can also think of Docker images as read-only templates created from Dockerfiles. Because Docker images can't be edited, if you need to make any changes, make them in your Dockerfile and rebuild the image. To run an image, use docker run.
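For instance, assuming a Dockerfile in the current directory and an image tag of myapp (the tag is an illustrative placeholder), building and running look like this:

```shell
# Build an image from the Dockerfile in the current directory,
# tagging it "myapp" (tag name is illustrative)
docker build -t myapp .

# Start a container from that image
docker run myapp

# Publish a host port and remove the container when it exits
docker run --rm -p 8000:8000 myapp
```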
Docker containers are the running instances of Docker images. These containers hold the application and its dependencies. Once a Docker container is created, it starts performing the tasks and processes it was configured for.
When you have multiple Docker containers, you can use Docker Compose to define, manage, and run them all at once. Docker Compose reads a YAML file that configures your application's services. With Docker Compose, you can create separate containers and easily make them communicate with each other.
Here is a Docker compose file example from my class project. There are three main parts, which are version, service configuration, and volume configuration.
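The project's actual file is not reproduced here, so the sketch below only illustrates those three parts; the service names, ports, image, and credentials are placeholders, not the real project's values.

```yaml
# Illustrative docker-compose.yml showing the three main parts:
# version, service configuration, and volume configuration.
version: "3.8"

services:
  web:
    build: .             # build the image from the Dockerfile in this directory
    ports:
      - "8000:8000"      # host:container port mapping
    depends_on:
      - db               # start the database container first
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential
    volumes:
      - db-data:/var/lib/postgresql/data   # persist data across restarts

volumes:
  db-data:
```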
To run (and, if necessary, build) the containers defined in the Compose file, run docker-compose up in your terminal. After you run the command, Compose starts each service and streams its logs to your window.
Besides the docker-compose up command, there are several other Docker Compose commands that you can use:
docker-compose down: Stops and removes all the service containers.
docker-compose build: Builds a docker image for all of the service containers.
docker-compose build [service-name]: Builds the image for a single service instead of all of them.
docker-compose stop: Stops the running containers.
docker-compose start: Starts running containers that were previously stopped.
docker-compose rm: Removes stopped containers.
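The commands above can be strung together into a typical workflow; the service name web is an illustrative placeholder:

```shell
# Build images (if needed) and start all services in the background
docker-compose up -d

# Rebuild the image for a single service (name is illustrative)
docker-compose build web

# Stop the running containers without removing them
docker-compose stop

# Start the previously stopped containers again
docker-compose start

# Stop and remove the service containers
docker-compose down
```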
For a better experience, you can also use the Docker Desktop application. Instead of typing in the commands, you can just click buttons to start, stop, restart, and delete your containers or Compose projects.
You can also view all the information related to your Docker images in the app.