One way to "containerize" an application is to make an image that contains everything the application needs. For example, most of our web applications require at least a web server, a PHP runtime, and a database server. You could choose to make a single image that contains all of these dependencies. Although that makes the application very portable, as in easy to deploy anywhere, it really limits the ease of scaling. It is not easy, for example, to move the database server to a different machine: to accomplish that, you would have to rebuild the image to exclude the database server and create a new image just for the database server.
Separating the database in this case means that two containers need to be started for the application to run correctly. The containers also need to be able to communicate with each other somehow. That is where docker-compose comes to the rescue.
This tool allows us to manage multi-container applications. Of course the scope of this article is limited, but detailed information can be found here:
The containers that should be started for a multi-container application are listed in a file named docker-compose.yml.
This file is in the YAML format. To find out more about YAML, be sure to visit this site: http://yaml.org/
Basically, the docker-compose file consists of a few sections:
Currently there are three major versions of the file structure. It is usually best to pick the latest and greatest version, which at the time of writing is version 3.
In the services section we define the information about the containers we need to start. As an added advantage, we can define options such as mapping/publishing a port to the host or mounting a volume. Without docker-compose we would have to specify these options using the docker run command, with no way to store the desired options in a file.
The volumes section defines named volumes that are shared between services. This section is optional.
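As a sketch of what the optional volumes section can look like, the fragment below declares a named volume and mounts it in a service; the volume name scuti_dbdata is a made-up example, not something from this article's setup:

```yaml
services:
  db:
    image: mysql:latest
    volumes:
      # mount the named volume "scuti_dbdata" (hypothetical name) at MySQL's data directory
      - scuti_dbdata:/var/lib/mysql

# top-level section declaring the named volume so services can share it
volumes:
  scuti_dbdata:
```

Unlike a host path, a named volume is created and managed by Docker itself, so the data survives even if the containers are removed.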
The networks section defines networks that enable communication between containers. When containers do not need to communicate with each other, this section can be omitted.
So let's start writing our first docker-compose.yml file. The first step is to create a new directory to store this file; after that we can create the file itself.
The first thing we need to specify in the file is the docker-compose version. In our case we will specify version 3:
version: '3'
This one line is enough. After we specify the version we can move on to specifying our services:
services:
The YAML format relies heavily on indentation to determine structure. The "services" key should be placed at the top level, like the "version" key. To place keys inside the section, we indent those lines with two spaces:
services:
  web:
Now we have added a new subsection for our "web" service. The "web" service will be the container running both nginx and php-fpm. We could make our own custom image using a Dockerfile, but for now we will use the ready-to-go image from webdevops: https://store.docker.com/community/images/webdevops/php-nginx.
This image already does what we're looking for so there is no need to customize it at this point.
services:
  web:
    image: webdevops/php-nginx:latest
If no further configuration is required, then this is all there is to defining a service. However, we probably want to mount our PHP project in the container so that the web server and PHP can actually access the files. We also need to be able to access the web server from at least the host system, so we need to publish the TCP ports it will use.
services:
  web:
    image: webdevops/php-nginx:latest
    volumes:
      - /opt/scuti/project:/app
In the above example you can see we added a new subsection called volumes to the "web" service. With the volumes key we can mount a directory from the host system inside our container. The first path is the source (on the host's file system), the second is the path inside the container, and the two are separated by a colon (":").
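Docker also accepts an optional third field after another colon for mount options. For example, ro makes the mount read-only inside the container, which can be useful when the container should never modify the project files; a sketch using the same paths as above:

```yaml
services:
  web:
    image: webdevops/php-nginx:latest
    volumes:
      # the trailing ":ro" makes /app read-only inside the container
      - /opt/scuti/project:/app:ro
```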
Next let's add the ports that we want to be able to access on the container:
services:
  web:
    image: webdevops/php-nginx:latest
    volumes:
      - /opt/scuti/project:/app
    ports:
      - 8000:80
      - 44300:443
As we can see, this works the same way as the volumes subsection: we specify the port on the host first and then the port inside the container. So in this case nginx will use ports 80 and 443 inside the container, and we can access them on the host on ports 8000 and 44300 (using 80 and 443 would work fine in most cases as well).
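If the host:container order is hard to remember, newer revisions of the version 3 file format (3.2 and up) also support an equivalent long form for ports, where each field is named explicitly; a sketch:

```yaml
services:
  web:
    image: webdevops/php-nginx:latest
    ports:
      # equivalent to the short form "8000:80"
      - target: 80        # port inside the container
        published: 8000   # port on the host
        protocol: tcp
```

The short form is more compact, but the long form leaves no doubt about which port is which.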
This configuration would be sufficient if our application only uses PHP. However in most cases we need to use a database as well. So let's add another service to our docker-compose.yml file:
services:
  web:
    image: webdevops/php-nginx:latest
    volumes:
      - /opt/scuti/project:/app
    ports:
      - 8000:80
      - 44300:443
  db:
    image: mysql:latest
    volumes:
      - /var/lib/scuti_mysql:/var/lib/mysql
    ports:
      - 33060:3306
    environment:
      - MYSQL_ROOT_PASSWORD=my-very-secret-password
      - MYSQL_DATABASE=scuti
This newly added service looks very similar to the previous one: it specifies an image to use, volumes to mount, and ports to publish. But there is also a new subsection called environment, which allows us to specify environment variables. Some images use these variables to configure certain things; in this case, the mysql image lets us specify the desired password for the (MySQL) root user and the name of the default database to be created.
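If we would rather keep values like passwords out of docker-compose.yml (for example, out of version control), compose also supports an env_file key that reads KEY=value lines from a separate file; db.env below is a hypothetical filename:

```yaml
services:
  db:
    image: mysql:latest
    env_file:
      # hypothetical file containing lines like MYSQL_ROOT_PASSWORD=...
      - ./db.env
```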
Now we have defined two services, which is great because we need both of them. But if we started the containers with this configuration, they would have no way to communicate with each other. This is where "networks" come into play:
version: '3'
services:
  web:
    image: webdevops/php-nginx:latest
    volumes:
      - /opt/scuti/project:/app
    ports:
      - 8000:80
      - 44300:443
    networks:
      - scutinet
  db:
    image: mysql:latest
    volumes:
      - /var/lib/scuti_mysql:/var/lib/mysql
    ports:
      - 33060:3306
    environment:
      - MYSQL_ROOT_PASSWORD=my-very-secret-password
      - MYSQL_DATABASE=scuti
    networks:
      - scutinet
networks:
  scutinet:
    driver: bridge
This would be the complete file. Please note that not only was a "networks" subsection added to each service, but there is also a new "networks" section at the top level. The subsections only reference the network name, while the top-level section defines the network driver to use. We don't need to specify the driver (by default the bridge driver is used), but it is considered good practice to define it; that way there can be no mistake about which network driver is used.
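On a shared network, containers can reach each other by service name, so the PHP application would connect to MySQL using the hostname db. If we also want compose to start the database before the web service, we can add a depends_on key; note that this only controls start order, not whether MySQL is actually ready to accept connections:

```yaml
services:
  web:
    image: webdevops/php-nginx:latest
    depends_on:
      # start the "db" service before "web"
      - db
```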
The example above will work perfectly fine, but at some point we might have to customize one of the images, for example because we found out that a certain PHP module is not installed by default. We can do that by creating a new Dockerfile in a subdirectory of the directory that holds the docker-compose.yml file and modifying the docker-compose.yml file to look like this:
version: '3'
services:
  web:
    build:
      context: ./php-nginx
    volumes:
      - /opt/scuti/project:/app
    ports:
      - 8000:80
      - 44300:443
    networks:
      - scutinet
  db:
    image: mysql:latest
    volumes:
      - /var/lib/scuti_mysql:/var/lib/mysql
    ports:
      - 33060:3306
    environment:
      - MYSQL_ROOT_PASSWORD=my-very-secret-password
      - MYSQL_DATABASE=scuti
    networks:
      - scutinet
networks:
  scutinet:
    driver: bridge
So instead of using the "image" key in a service's subsection, we use the "build" key with a "context" key inside the new "build" subsection. The "context" key holds the path to the directory containing the Dockerfile that needs to be built.
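A minimal Dockerfile in ./php-nginx could look like the sketch below. The package name php-soap is only an assumption for illustration, and the exact install command depends on the base image's distribution (the webdevops images are Debian-based, hence apt-get):

```dockerfile
# ./php-nginx/Dockerfile (sketch)
FROM webdevops/php-nginx:latest

# Install an extra PHP module; package name and manager are assumptions
RUN apt-get update \
 && apt-get install -y --no-install-recommends php-soap \
 && rm -rf /var/lib/apt/lists/*
```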
The docker-compose command is used to start and stop the containers specified in the docker-compose.yml file. Basically, it tells the Docker daemon what needs to be done. A full explanation of everything you can do with this command would go beyond the scope of this article, so instead we will discuss the two most important subcommands: up and down.
After creating a new docker-compose.yml file, the first thing we want to do is try to start the containers. To do that, we use the docker-compose up command:
docker-compose up -d
This will try to start all services in the docker-compose.yml file in a detached state (running in the background).
Sometimes however you might only want to start one or more of the services:
docker-compose up -d web db
This will try to start the "web" and "db" services, in our case the only two services defined, but imagine that we have twenty different services defined.
To stop the containers we can run:
docker-compose down
The docker-compose commands should always be executed from the directory that holds the docker-compose.yml file.
If an image listed in the docker-compose.yml was not pulled or built before, docker-compose will take care of that for us. Sometimes we need to rebuild one of the images (for example when the Dockerfile was changed). We can do that by specifying the --build option:
docker-compose up -d --build
Since most of our projects use the Laravel framework it would be strange not to mention Laradock.
Laradock is a collection of mostly preconfigured images that can be controlled using docker-compose. This is very useful in the sense that it takes fairly little time to set up an environment for a project (note that although it is meant for Laravel projects, we can use it for other types of projects as well). In most cases it does not even require us to write a single docker-compose.yml or Dockerfile.
Using Laradock can be accomplished with a few simple steps:
git clone https://github.com/Laradock/laradock.git
cd laradock
cp env-example .env
docker-compose up -d nginx mysql phpmyadmin
That's all folks, we can now access our application on port 80 on our host.