What is Docker?
Docker is usually described as a “software platform that allows you to deploy applications,” but what does that really mean?
Let’s start with a simple example: you want to write an application that uses some kind of database, like Postgres, and an additional service, like Node or a message broker such as RabbitMQ. Until now, you would have needed to install all of those dependencies locally to get your app working. Not an easy task, right?
No one can guarantee that future versions of the software will work the same as the current version. Also, software may behave differently on different operating systems. This issue often led to the frustrating “It’s working on my machine” situation that we all love to hate. Whether it was QA versus R&D or users versus tech support, it was difficult to reproduce the same behavior of the application on different machines.
This is where Docker shines. It gives you the bare minimum layer of an operating system (most of the time, it’s Linux) running as an isolated system process that you can manipulate as you like. The only thing you need to install now is Docker.
So, how does Docker work?
Docker is built around two core concepts: images and containers. An image is a file that acts as a template from which many containers can be created, and Docker spawns those containers as isolated processes in the system memory.
These images can be downloaded from a Docker registry (such as Docker Hub), and you can also create your own images and upload them there. This makes it easy to share your application with others or deploy it to different environments without worrying about the dependency and compatibility issues that come with installing software locally.
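To make this concrete, here’s a quick sketch of that workflow with the standard Docker CLI; the <your_user> and <your_image> names are placeholders for your own account and image:
# Download the official nginx image from Docker Hub
$ docker pull nginx
# Spawn two independent containers from the same image
$ docker run -d --name web1 nginx
$ docker run -d --name web2 nginx
# Tag and upload your own image (assumes you've already run 'docker login')
$ docker tag <your_image> <your_user>/<your_image>:1.0
$ docker push <your_user>/<your_image>:1.0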
Where do you start with Docker?
The installation process may differ based on your operating system. It’s a breeze on Linux, but you may encounter issues trying to install it on Windows.
Based on my experience, it’s better to install WSL first (and verify that it’s WSL2), install one of the Linux distros from the Microsoft Store, and then install Docker. You can ask ChatGPT for guidance on this.
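If you go that route, here’s a rough sketch of the WSL part; run these in an elevated PowerShell (the exact steps may vary with your Windows version):
# Install WSL along with the default Ubuntu distro
> wsl --install
# Verify that your distro runs under WSL 2 (the VERSION column should show 2)
> wsl -l -v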
Most of the time, you’ll want to install ‘Docker Desktop’, which provides the Docker engine with an additional simple GUI to manage it.
To verify that you have a functional Docker installation, you can run the ‘Hello World’ container:
# On Linux, you may need 'sudo' before this command
$ docker run hello-world
# You should see this message
Hello from Docker!
# This message shows that your installation appears to be working correctly.
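If that worked, a few everyday commands are worth trying next; all of these are standard Docker CLI commands, and <container_id> is a placeholder for the ID shown in the first column of ‘docker ps -a’:
# List containers (-a includes stopped ones, like the finished hello-world)
$ docker ps -a
# List the images stored on your machine
$ docker images
# Show a container's output
$ docker logs <container_id>
# Remove a stopped container
$ docker rm <container_id>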
Now, you can start reading the documentation and learning the commands to operate Docker, but there’s also an alternative to using commands.
You can operate Docker via configuration files, which I find more convenient, first and foremost because they can be saved locally or in a git repository. This means that your work will be preserved for the future and can be easily shared with others. They also eliminate the need to memorize all the flags and arguments required by Docker.
There are basically two kinds of files you may need: Dockerfiles and Compose files. A Dockerfile is a simple text file that contains instructions (including shell commands) for Docker to run in order to build a customized image. A Compose file is a YAML file that configures all your containers: their source images, their ports, their volumes, and more.
The Dockerfile can look like this:
# Start from a small official Python image
FROM python:alpine
# Work in the /app directory inside the image
WORKDIR /app
# Copy the dependency list and install it
COPY requirements.txt .
RUN pip install -r requirements.txt
To build an image from this Dockerfile, you’ll need to run the following command in the same folder:
$ docker build -t <image_name> .
In this example, Docker will download a Python image, copy the requirements.txt file from the current folder into the image’s /app directory, and run the ‘pip install’ command. Once completed, you’ll see a new image with the chosen name in your image list, ready to use. However, if you don’t need to customize the image, you won’t need to write a Dockerfile at all.
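Once the image is built, you can start a container from it and look around; a minimal sketch, assuming you tagged the image as above (python:alpine ships with the ‘sh’ shell):
# Start a throwaway container and open a shell inside it
$ docker run --rm -it <image_name> sh
# Then, inside the container, 'pip list' will show the installed packages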
The Compose file is not required, but it’s very helpful: it stores the configuration of your containers and spares you the time and hassle of typing long Docker commands.
The docker-compose.yml can look like this:
services:
  server:
    container_name: web_server
    image: nginx
    restart: always
    volumes:
      - "./html:/usr/share/nginx/html"
    ports:
      - "8080:80"
In this example, Docker will download an image of Nginx, map the ‘html’ folder in the current path to the html files folder in the container, and forward port 80 on the container to port 8080 on the host.
All you need to do to start this setup is type:
$ docker compose up
After completion, you’ll see ‘nginx’ under your images list and the container ‘web_server’ under the containers list. If you open the address http://localhost:8080, you’ll see the HTML files you put in the html folder.
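You can also check it from the terminal; assuming curl is installed, this should print whatever you put in html/index.html:
$ curl http://localhost:8080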
In the same way, you can choose any other image from Docker Hub, such as Node, Postgres, Ubuntu, etc., and have it running on your host within a few seconds. No more frustrating setups and long installations.
Say you’ve changed your mind and now want another version of Node or Postgres? No worries! All you need to do is change the image tag in the Compose file and run Docker Compose again.
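For example, pinning Nginx to a specific version in the Compose file above is a one-line change (1.25 is just an illustration; any tag published on Docker Hub works):
services:
  server:
    # was 'image: nginx', which implicitly means the 'latest' tag
    image: nginx:1.25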
Note - it’s better to clean up your current containers after you change your Docker Compose file:
# This will delete your current containers
$ docker compose down
# This will delete your images as well (useful if images need to be rebuilt)
$ docker compose down --rmi all
Now, let’s say you need a more complex setup, like a Python application with PostgreSQL and pgAdmin. You can write this docker-compose.yml file:
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: password
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data

  pgadmin:
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin@example.com
      PGADMIN_DEFAULT_PASSWORD: password
    ports:
      - "8080:80"
    depends_on:
      - db

  app:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - db

volumes:
  db_data:
And this Dockerfile:
FROM python:alpine
WORKDIR /app
# Copy the whole project into the image
COPY . .
RUN pip install -r requirements.txt
# Run the application when a container starts
CMD ["python", "app.py"]
This will install PostgreSQL with the default password as described, and its data will be preserved in the named volume ‘db_data’. pgAdmin will also be installed with the admin user and password as described and can be accessed through port 8080 on the host. Additionally, the Python application will be built from the Dockerfile and exposed on port 5000 on the host. Note that the app container can reach the database at the hostname ‘db’, since Compose connects all the services over a shared network.
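Once the stack is up, here are a couple of quick sanity checks; both are standard Compose subcommands, and the psql call works because the official postgres image allows local connections from inside its own container:
# Open a psql shell inside the database container
$ docker compose exec db psql -U postgres
# Follow the Python application's logs
$ docker compose logs -f app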
All of this setup can be easily modified and deleted in seconds without leaving a trace. Furthermore, you will be able to copy the Docker files and your source to any PC with Docker installed and get the same results.
Summary
We’ve briefly described what Docker is and why it’s useful and efficient for almost anyone. We’ve also seen how easy it is to configure a complex setup with a single file and change it according to our needs.
Obviously, this is only the tip of the iceberg, and I welcome you to learn more about this topic as it will save you time and hassle in the future.