15/07/2021
Docker Compose
So, what is compose?
Contrary to what many people think, Compose is a specification. This excerpt is from the specification page on GitHub:
The Compose specification establishes a standard for the definition of multi-container platform-agnostic applications.
So, just like JPA (yes, in case you are unaware, JPA is a specification too), Compose is a specification, and it can have many implementations, perhaps the most popular being docker-compose. We will focus on two implementations here (docker-compose and Compose V2). According to the Docker Compose page, this is its definition:
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
Well, here they mention a YAML file; if you don't know what that is, take a look at this: YAML Ain't Markup Language. I won't explain YAML in this article.
So, how can I use this beautiful tool that will help me create and start all my services?
It is very simple indeed: you create a YAML file, put all your services and configuration there, and then run a command like: docker compose -f compose.yaml up -d
and that's it. Let's go through this miraculous file to understand it better.
The compose File
According to the specification, the compose file is a YAML file with some specific top-level elements, here they are:
- Version: Indicates the specification version (not the Docker version). It is DEPRECATED and no longer required, but it is kept for backward compatibility. Useful tip: if the implementation finds an unknown field, it SHOULD warn the user, but the compose file won't be invalid.
- Services: The most important one, IMO. It defines our services; a service is an abstract definition of a computing resource within an application which can be scaled/replaced independently of other components, like databases, applications, message brokers and so on. Generally speaking, our containers.
- Networks: This is pretty obvious: here we can define networks that our services use to communicate with each other. Here we begin to depend a little on the platform, because this part of our ecosystem is always provided by the platform we are running on.
- Volumes: Roughly speaking, this is where we define our storage. It is very useful if we need to tune some storage settings; this is also provided by the platform, so it is platform driven.
- Configs: This is also pretty obvious: here we can define configuration data for our services.
- Secrets: This one is a flavor of Configs, but it is reserved for sensitive data, and it is restricted to this usage.
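To make configs and secrets a bit more concrete, here is a minimal, hypothetical sketch of how they can appear at the top level and be referenced by a service (the file names app_config.ini and db_password.txt are assumptions for illustration, not from this article's repo):

```yaml
# Hypothetical sketch: exposing a config and a secret to a service.
services:
  app:
    image: app
    configs:
      - app_config   # mounted into the container by the platform
    secrets:
      - db_password  # same idea, but treated as sensitive data

configs:
  app_config:
    file: ./app_config.ini    # hypothetical local file

secrets:
  db_password:
    file: ./db_password.txt   # hypothetical local file
```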
There are a lot of cool things you can accomplish with a compose file; if you want to know more, refer to the specification or to the documentation of the implementation you are using.
Enough talking, let's create a compose file to run an application. Let's say we need to create a partner integration solution, almost like this one, designed by AWS.
We have a 'Business Service' layer, a 'Partner Integration' layer, and we have our partner (Partner 1, Partner 2...) layer.
Let's say I'm responsible for developing only the first layer, the business one; another team will take the integration layer, and our partners already have theirs built.
In this scenario, if we use a microservice architecture, I will be actively working on only one microservice, the business one. But in order for me to run an E2E test, I'll somehow need the partner integration microservice running. Of course I could run it in a test environment, but here we want to run it locally, in a very easy way. (And it is my company, I run my microservices where I want :hehe:)
Let's say both microservices (business and partner integration) use MongoDB to persist some data, and they communicate using REST.
So, if you have been paying attention up to this point, you know we will need an environment with:
- At least one MongoDB instance.
- An instance of the partner integration microservice.
- An optional instance of our business microservice (this one can be running inside an IDE, for example, for debugging).
Also, we need to make sure our entire environment can communicate: my business microservice needs to talk to the partner integration one, both need to access my MongoDB instance, and so on.
So, how can I build my compose file in order to make this happen?
Before we jump into the compose file, I want to say that you will need a Dockerfile in order to build your images if they don't come from a repository (like Docker Hub). But let's focus on Compose, so you can just use the Dockerfile I provide in the repo below.
Our compose file to define these services will be like this:
services:
  mongo:
    image: mongo
    container_name: mongo
    ports:
      - 27017:27017
    networks:
      custom_network:
        aliases:
          - mongodb.pigmeo.dev
    volumes:
      - mongo_storage:/data/db
  business:
    image: business
    container_name: business
    ports:
      - 8085:8085
    build:
      context: ./
      dockerfile: Dockerfile
      args:
        app: business
    networks:
      custom_network:
        aliases:
          - business.pigmeo.dev
  partner-integrator:
    image: partner-integrator
    container_name: partner-integrator
    ports:
      - 8090:8090
    build:
      context: ./
      dockerfile: Dockerfile
      args:
        app: partner-integrator
    networks:
      custom_network:
        aliases:
          - partner-integrator.pigmeo.dev
networks:
  custom_network:
volumes:
  mongo_storage:
Let's understand this file:
First, we have the services element; right below it we define a service called mongo (it could be any other name, like potato). This service uses the official mongo image, so we don't need to provide a Dockerfile or another build method for it; Docker will pull this image from any repository you have configured (like Docker Hub). We named our container mongo and we publish a port: 27017:27017. This means that port 27017 of the host machine (before the :) will be mapped to port 27017 of the container (after the :). After that we set up the network, telling our container to connect to the network named custom_network (could be potato as well); inside this network it will use the alias mongodb.pigmeo.dev. We also set up a volume: basically we are telling our container to save its data inside this volume, mounted at the path /data/db. Remember, the name of our storage could be anything else.
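Since the host side comes first, you can remap the host port without touching the container side. For example, if port 27017 were already taken on your machine, a sketch like this (port 27018 is just an illustrative choice) would still expose MongoDB:

```yaml
ports:
  - 27018:27017   # host port 27018 -> container port 27017
```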
I'll explain only one of our custom services, to save some time (and keyboard, haha).
First we name our services (business and partner-integrator); remember, you can use any name you want. The next elements are very similar to the mongo ones, but here we define a build element. We need to tell our implementation how to build the image we need, because this is not an image that comes from a public repo; we want to build it ourselves. In this case we just point to a Dockerfile, and we use the args element to pass some data to our Dockerfile at build time. We also specify an alias for each service inside the same custom_network.
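The actual Dockerfile lives in the repo linked below; just to illustrate how the app build arg could be consumed, here is a hypothetical sketch (the base image, jar paths, and naming convention are assumptions, not the repo's real Dockerfile):

```dockerfile
# Hypothetical sketch: one Dockerfile parameterized by the "app" build arg
FROM openjdk:11-jre-slim
ARG app
COPY ${app}/target/${app}.jar /opt/app.jar
# ARG values are not available at run time, so the path is baked in at COPY time
ENTRYPOINT ["java", "-jar", "/opt/app.jar"]
```

This is why the compose file can reuse the same Dockerfile for both services: only the args value changes per service.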
Notice that all of our services connect to the same network.
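This shared network is what makes the aliases useful: each service can reach the others by those names. As an illustration, the business service could be configured with something like the following (these environment variable names are hypothetical, not from the repo):

```yaml
services:
  business:
    environment:
      # hypothetical variables the business service might read
      MONGO_URI: mongodb://mongodb.pigmeo.dev:27017/business
      PARTNER_URL: http://partner-integrator.pigmeo.dev:8090
```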
Right below, we define our network, the custom_network. For this example we just need a network, so we only define its name and the platform will provide a default one. If we need to tune or customise the network, that's possible; if you want to learn more, please refer to the specification or the docs of the implementation you want to use.
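Just to give an idea of what such tuning can look like, a network definition can take a driver and addressing options; the driver and subnet below are illustrative values, not something this example needs:

```yaml
networks:
  custom_network:
    driver: bridge          # the default driver on a single Docker host
    ipam:
      config:
        - subnet: 172.28.0.0/16   # pin the address range, if you care about it
```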
Then, last but not least, we define the volume used by the mongo service. Here we also just define its name, like with the network, but it is possible to tune and customise it as well if you want to.
Now with our compose file written we just need to pick an implementation (I'm really enjoying Compose V2) and run it:
docker compose -f compose.yaml up -d
(Pay attention to the -d argument: it runs everything in detached mode, so the containers run in the background and your command line is released.)
With this command, Docker will use the file compose.yaml (-f compose.yaml) to first build our images (using what is defined in the build element), or pull the ones that come from public repos, and then it will start our containers, putting them in the same network we defined.
If you want to stop it all, just run this command: docker compose -f compose.yaml down
Docker will read the compose file and stop and remove all the services defined in it. Really simple, don't you think? And now, every time you need to start your big environment, you just run the same compose file again.
If you want to test this example, it is on my GitHub.
I also made a little PPT about this, feel free to use it :)
Source: Compose spec