Around 2008 a new feature set in the Linux kernel was released, called Linux Containers (LXC). This feature, consisting of multiple smaller functions (such as namespaces and control groups), allows us to run multiple isolated Linux systems on the same kernel. While this sounds somewhat similar to using virtual machines, the big difference is that a virtual machine runs its own kernel, with a layer of virtualization between the host OS and the VM's OS. This layer forms a bridge between the two operating systems, allowing the virtual machine to make use of the host machine's memory, file system, graphics environment, and so on.
Virtual machines have their advantages too. For example, it is possible to run almost any OS inside any other OS. Thanks to the virtualization layer, we don't need to care about differences in file system type, the way we communicate with our hardware, and many other things.
Containers can still take advantage of the virtualization layer, but then they would have to be started inside a virtual machine. A big advantage, however, is that containers (in many cases) do not need the virtualization layer at all, as they can take full advantage of the host OS kernel.
LXC basically isolates the system running in a container from the other containers and from the host, allowing access only where specified.
Because containers are not bound to a virtualization layer, resources can be used directly, in exactly the same way as if the system were running on its own dedicated hardware.
Why should we use containers? One reason might be that during development we do not need to give up half of the resources available on our machine to be reserved for a virtual machine. Another advantage is that it becomes very easy to move these containers. Imagine, for example, that the company decides the applications should no longer be hosted by Amazon but on a different platform: instead of configuring all the new servers from scratch, we can simply move the containers over.
Another advantage containers have over virtual machines is startup time: since there is no operating system to boot, a container starts much faster.
So far we have discussed LXC and the container architecture, a big part of Docker. But there is more, much more.
Docker is a tool we can use to control our containers and also to share them with other users. Of course, there are countless ways to use containers as well.
We will explain how to do all of that later on. For now it is important to know how Docker is different from LXC.
While LXC support comes from the Linux kernel itself, to be able to use Docker we have to install it first. Once we have installed the Docker package, a new service becomes available. This service manages all Docker containers and their resources, and we communicate with it using the docker command-line client. Another important task Docker can perform is pulling (downloading) images from the internet. How this helps us will be explained later on.
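As a small taste of what talking to the Docker service looks like, the commands below are a sketch, assuming Docker is installed and its daemon is running; the hello-world image is a small demonstration image from Docker's public registry.

```shell
# Pull (download) an image from the internet.
docker pull hello-world

# Start a container from that image; the Docker service creates and
# runs it, and the container prints a greeting and exits.
docker run hello-world

# Ask the service which containers exist (including stopped ones).
docker ps -a
```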
To make container configuration easy and to provide a way to store certain "container setups", Docker provides us with "images". These images are somewhat similar to virtual machine images. However, they are built using a file that describes the configuration of the image.
This file is called the Dockerfile. The Dockerfile contains information about the binaries and libraries that need to be installed. Using a Dockerfile we can build an image.
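A minimal Dockerfile might look like the sketch below. The base image, package, and file names are illustrative assumptions, not something prescribed by Docker.

```dockerfile
# Start from an existing base image (here: Ubuntu from Docker Hub).
FROM ubuntu:22.04

# Install the binaries and libraries our application needs.
RUN apt-get update && apt-get install -y curl

# Copy our (hypothetical) application script into the image.
COPY app.sh /usr/local/bin/app.sh

# Command to run when a container is started from this image.
CMD ["/usr/local/bin/app.sh"]
```

Running `docker build -t myimage .` in the folder containing this Dockerfile would then produce an image named myimage.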
Once an image is built, we can use it to start as many container instances as the host system allows us (limited by resources).
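For example, assuming we have built an image called myimage (a hypothetical name), the same image can back several containers at once. A sketch, requiring a running Docker daemon:

```shell
# Each "docker run" creates a fresh, independent instance of the image.
docker run -d --name instance1 myimage
docker run -d --name instance2 myimage
docker run -d --name instance3 myimage
```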
As mentioned in the previous section, a container is basically an instance of an image. It is very comparable to classes and their instances in object-oriented programming.
There are certain things that can only be configured when we start a container. This includes which (network) ports are exposed, which folders of the host are mounted in the container, and with which other containers this container can communicate over the network.
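These start-time options map to flags of `docker run`. The ports, paths, image name, and container names below are illustrative assumptions; the commands are a sketch requiring a running Docker daemon.

```shell
# Expose container port 80 on host port 8080, and mount the host
# folder /srv/data at /data inside the container.
docker run -d --name web -p 8080:80 -v /srv/data:/data myimage

# Start a second container that is allowed to reach the first over
# the network (--link is the classic mechanism; user-defined
# networks are the modern alternative).
docker run -d --name worker --link web myimage
```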