1) Deployment
Let’s see how we can deploy the translation app from the last post using Docker Compose. We need a folder with a docker-compose.yaml file at its root; see this repository for the translation app’s deployment.
The docker-compose.yaml file describes each microservice: which container image should be pulled, how the containers connect to each other, whether they should be accessible on the network and through which port, etc.
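For our app, the file looks roughly like this (a minimal sketch: image names, versions and ports are illustrative, not the exact contents of the repository):

```yaml
version: "3.7"

services:
  frontend_flask:            # user-facing Flask app
    image: frontend_flask:latest
    ports:
      - "5000:5000"          # exposed on the host so users can reach the frontend

  backend_fastapi:           # FastAPI translation backend
    image: backend_fastapi:latest

  database:                  # MySQL database storing past translations
    image: mysql:8.0
```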
Let’s look at each part in detail.
2) Networking
For the translation app to work, the 3 microservices need to be able to talk to each other on a network:
- The Flask frontend sends German text through HTTP requests to the FastAPI backend, which sends back the translated text
- The Flask frontend sends queries to the MySQL database, and gets the results back
As you can see in the docker-compose.yaml file, each container is defined by a name, e.g. database or backend_fastapi:
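For instance, trimmed down to the service names (sketch):

```yaml
services:
  database:                  # each service name doubles as the container's name
    image: mysql:8.0

  backend_fastapi:
    image: backend_fastapi:latest
```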
When deploying the app, Docker Compose creates a network that connects the containers. Then, each container can be reached through its name. For instance, here is how we send requests to the backend from inside the Flask frontend container:
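A sketch of what this looks like in the frontend code (the port, endpoint path and response format are assumptions):

```python
import requests

# "backend_fastapi" is the service name from docker-compose.yaml;
# Docker's embedded DNS resolves it to the backend container.
BACKEND_URL = "http://backend_fastapi:8000/translate"

def translate(german_text: str) -> str:
    # Send the German text to the backend and return the translation
    response = requests.post(BACKEND_URL, json={"text": german_text})
    response.raise_for_status()
    return response.json()["translation"]
```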
To connect to the MySQL database, we simply specify database as the hostname in the connection parameters.
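For example, with a MySQL client library such as mysql-connector-python (a sketch; the user, password and database name are placeholders):

```python
import mysql.connector

connection = mysql.connector.connect(
    host="database",               # the MySQL service name in docker-compose.yaml
    user="flask_user",             # placeholder credentials
    password="not-a-real-password",
    database="translations",
)
```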
As we’ll see in the next post, things get way more complicated when operating on a Kubernetes cluster.
3) Docker secrets
We need to provide a root password when creating the MySQL database. Ideally, we should grant the Flask frontend its own user and password so it can insert and query previous translations.
Docker secrets are a secure and convenient way to handle sensitive data. The syntax is as follows:
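With Docker Swarm, secrets can be created from the command line, for instance (a sketch; the secret names are illustrative):

```sh
# Create the secrets from standard input (requires swarm mode)
printf "a-strong-root-password" | docker secret create mysql_root_password -
printf "a-strong-user-password" | docker secret create mysql_flask_password -
```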
Secrets can then be passed to containers in the docker-compose.yaml deployment file, e.g.:
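For the MySQL container, that could look like this (a sketch; the official MySQL image supports the _FILE variant of its environment variables, which points to a file instead of holding the value itself):

```yaml
services:
  database:
    image: mysql:8.0
    secrets:
      - mysql_root_password
    environment:
      # The official MySQL image reads the root password from this file
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql_root_password

secrets:
  mysql_root_password:
    external: true   # created beforehand with `docker secret create`
```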
Quoting the Docker docs:
the decrypted secret is mounted into the container in an in-memory filesystem. The location of the mount point within the container defaults to /run/secrets/<secret_name> in Linux containers
This means that the value of the secret is then available in a file, inside the container, at /run/secrets/<secret_name>. Instead of hardcoding this path inside scripts, it is best practice to pass the path as an environment variable, as in the example just above. This way, if the name of the secret changes at some point, you won’t have to edit any of the scripts using this secret, but just the docker-compose.yaml file.
4) Data persistence
When a container is removed, all the data it produced is lost with it. This is obviously a problem for some types of containers, e.g. databases: you want to be able to restore the data after the container was shut down, on purpose or by accident. Fortunately, Docker volumes are there to persist data generated by and used by Docker containers. Let’s see an example with the MySQL database of the translation app:
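Here is roughly what that looks like (a sketch; the volume name is illustrative):

```yaml
services:
  database:
    image: mysql:8.0
    volumes:
      # Map the named volume to MySQL's data directory
      - mysql_data:/var/lib/mysql

volumes:
  mysql_data:    # created by Docker Compose if it does not exist yet
```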
At the bottom, the volume is created if it does not exist. Then, under the container definition section, we map this volume to the /var/lib/mysql folder inside the container, where MySQL stores its data. If we need to delete the volume at some point, we can run docker volume rm <volume_name>.
5) Database dump
In the case of our translation app, we want a table ready to store our German texts and their translations as soon as the container is created. We need to be able to load the corresponding SQL instructions without manually connecting to the container each time it’s launched. With the official MySQL container, it is easy to load a database dump at startup.
We write a dump.sql script with the following instructions:
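Something along these lines (a sketch; the database, table and column names are assumptions, not the exact dump from the repository):

```sql
-- Create the database and the table storing past translations
CREATE DATABASE IF NOT EXISTS translations;
USE translations;

CREATE TABLE IF NOT EXISTS translation (
    id INT AUTO_INCREMENT PRIMARY KEY,
    german_text TEXT NOT NULL,
    english_text TEXT NOT NULL
);
```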
After saving the script in the deployment repository, we can load it inside the MySQL container through the docker-compose.yaml file: map the folder containing the dump script to the /docker-entrypoint-initdb.d folder that lives inside the MySQL container. More details are available on the MySQL official image repo.
Loading the ./mysql-dump/dump.sql instructions into the MySQL container:
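In docker-compose.yaml, this is a simple bind mount (sketch):

```yaml
services:
  database:
    image: mysql:8.0
    volumes:
      # Every .sql file in this folder is executed the first time the
      # container starts with an empty data directory
      - ./mysql-dump:/docker-entrypoint-initdb.d
```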
6) Startup order
In order for the app to be up and running as soon as the user can access the frontend, we need to ensure both the backend and the database are available first. Docker Compose lets us control startup and shutdown order with the depends_on option:
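For the frontend, that gives something like (sketch):

```yaml
services:
  frontend_flask:
    image: frontend_flask:latest
    depends_on:
      # Start the backend and the database before the frontend
      - backend_fastapi
      - database
```

Note that depends_on only controls the order in which containers are started; it does not wait for the backend or the database to actually be ready to accept connections.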
7) Conclusion
That is all for the Docker Compose deployment. Technically, I ended up deploying with Docker Swarm on a single node, which is pretty much the same thing. The reason is that the Docker secrets feature is missing in Docker Compose; see the details in the project repo.
In the next post, we’ll deploy the translation app on a Kubernetes cluster. This requires much more work but it makes an app scalable and resilient.