Approach Description
1- DCF Parsing:
The first component takes the services from the Docker Compose file, referenced through the depends_on or links properties, and generates a dependency graph $G$. Let
$G = (V, E)$ be a dependency graph, where $V = \{v_1, v_2, v_3, \ldots, v_n\}$ is the set of containers and $v_i$, $i \in [1, n]$, is the unique identifier of the $i$-th container; each container/service is mapped to one node of the graph, identified by $v_i$. $E = \{\{v_i, v_j\}\}$, where $v_i, v_j \in V$ and $i \neq j$, is the set of calls or requests between two different containers.
Figure 5 shows an example of a Docker Compose file and the generated graph with five nodes and six edges. In this example, five services (cbedb, cbedbadmin, cbemq, haproxy, cbeapp) were converted into a graph $G = (V, E)$ with five nodes and six edges, where $V = \{0, 1, 2, 3, 4\}$ and
$E = \{\{1,0\}, \{3,1\}, \{3,2\}, \{3,4\}, \{4,0\}, \{4,2\}\}$. The dependency graph is used to extract the dependencies between containers.
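To illustrate this parsing step, the sketch below builds such a graph from a Compose file. It is a minimal sketch, not the tool's actual implementation: it assumes PyYAML is installed, assigns integer node ids to services in declaration order, and the function name parse_dependency_graph is ours.

```python
import yaml

def parse_dependency_graph(compose_path):
    """Build a dependency graph G = (V, E) from a Docker Compose file.

    Nodes are integer ids assigned to service names in declaration
    order; an edge links a service to each service named in its
    `depends_on` or `links` properties.
    """
    with open(compose_path) as f:
        compose = yaml.safe_load(f)

    services = compose.get("services", {})
    node_id = {name: i for i, name in enumerate(services)}  # service -> v_i

    edges = set()
    for name, spec in services.items():
        spec = spec or {}
        deps = spec.get("depends_on", [])
        if isinstance(deps, dict):            # long-form depends_on with conditions
            deps = list(deps)
        links = spec.get("links", [])
        for dep in list(deps) + list(links):
            dep = dep.split(":")[0]           # links may use "service:alias"
            edges.add(frozenset((node_id[name], node_id[dep])))

    return set(node_id.values()), edges
```

Run on the Compose file of Figure 5, this sketch would yield the five nodes and six edges listed above; edges are stored as frozensets to match the unordered pairs $\{v_i, v_j\}$.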
2- Optimisation:
We propose a many-objective optimisation technique to generate the optimal allocation of containers in Docker Swarm mode. This technique takes as input
a dependency graph $G$ of $n$ containers, the property values associated with the containers (i.e., their priorities and power consumption values), and the current allocation of containers to nodes in Docker Swarm mode, a.k.a. the current Swarm State $S(t)$. The technique then launches the optimisation on the current Swarm State given the set of described objectives. Finally, it generates a new Swarm State $S(t+1)$, which specifies the new placement of the containers.
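To make the inputs and outputs of this step concrete, here is a simplified sketch of how an objective vector could be evaluated for a candidate Swarm State. The three objectives shown (peak per-node power, dependency edges cut across nodes, priority imbalance) are illustrative stand-ins, not the paper's exact formulations, and all names are ours; a many-objective search (e.g., NSGA-III) would minimize this vector over candidate states to produce $S(t+1)$.

```python
def evaluate_state(state, edges, power, priority):
    """Objective vector for a candidate Swarm State.

    `state` maps container id -> node name; `power` and `priority`
    map container id -> its property value. Illustrative objectives
    only; the tool's exact formulations may differ.
    """
    # Objective 1: peak per-node power consumption (minimize).
    node_power = {}
    for c, node in state.items():
        node_power[node] = node_power.get(node, 0.0) + power[c]
    peak_power = max(node_power.values())

    # Objective 2: dependent container pairs split across nodes,
    # i.e. edges of G cut by the allocation (minimize).
    cut_edges = sum(1 for e in edges if len({state[c] for c in e}) > 1)

    # Objective 3: spread of priority load across nodes (minimize).
    node_prio = {}
    for c, node in state.items():
        node_prio[node] = node_prio.get(node, 0) + priority[c]
    imbalance = max(node_prio.values()) - min(node_prio.values())

    return (peak_power, cut_edges, imbalance)
```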
3- DCF Generation:
The third component takes the new Swarm State $S(t+1)$ and returns a Docker Compose file that encodes the newly generated Swarm State, applying the changes to the container properties so that they reflect the new scheduling. Note that a Docker Swarm service is based on a declarative model: once the service is running and a node has already been allocated, the running containers cannot be moved or replaced. To mitigate this limitation, we generate a new
Docker Compose file by changing the constraints property in the file.
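A minimal sketch of this rewriting step, again assuming PyYAML and that $S(t+1)$ is a mapping from container id to target node hostname. The node.hostname placement constraint is standard Swarm/Compose syntax; the function and variable names are ours.

```python
import yaml

def write_new_compose(compose_path, new_state, id_to_service, out_path):
    """Rewrite the Compose file so each service is pinned to the node
    chosen in the new Swarm State S(t+1) via a placement constraint.
    """
    with open(compose_path) as f:
        compose = yaml.safe_load(f)

    for container_id, node_name in new_state.items():
        service = compose["services"][id_to_service[container_id]]
        deploy = service.setdefault("deploy", {})
        placement = deploy.setdefault("placement", {})
        # Pin the service to its target node; Swarm applies this
        # declaratively on the next deployment.
        placement["constraints"] = [f"node.hostname == {node_name}"]

    with open(out_path, "w") as f:
        yaml.safe_dump(compose, f, default_flow_style=False)
```

Redeploying the stack from the rewritten file (e.g., with docker stack deploy) then lets Swarm's declarative model converge to the new placement.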
Global Architecture of the scheduler and its framework
The global view of the various components of our scheduler, inter-connected with its framework, is depicted in the figure below. We use Node-exporter to extract node-level metrics, including the number of containers per node, CPU and memory consumption, and network I/O per node, and cAdvisor to monitor the performance of each container in the experiment, including CPU and memory usage. This real-time data is scraped by Prometheus and plotted in our monitoring interfaces via HTTP. Through the dashboard, the user can change container placements or properties, which are propagated to the Docker cluster via our API before the scheduler runs. When CPU or memory utilization surpasses 80\%, the tool issues a warning.
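As a concrete example of the monitoring path, the sketch below queries the Prometheus HTTP API for per-node CPU utilization derived from Node-exporter metrics and emits the 80\% warning. The Prometheus address and the check_cpu function are assumptions for illustration; the query expression and the /api/v1/query endpoint are standard Prometheus usage.

```python
import requests

PROMETHEUS_URL = "http://localhost:9090"  # assumed address of the Prometheus server

# Standard Node-exporter expression: per-node CPU utilization in percent.
CPU_QUERY = (
    '100 - (avg by (instance) '
    '(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)'
)

def check_cpu(threshold=80.0):
    """Fetch per-node CPU utilization from Prometheus; warn above threshold."""
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query",
                        params={"query": CPU_QUERY})
    resp.raise_for_status()
    for result in resp.json()["data"]["result"]:
        instance = result["metric"]["instance"]
        utilization = float(result["value"][1])
        if utilization > threshold:
            print(f"WARNING: {instance} CPU at {utilization:.1f}% "
                  f"(threshold {threshold}%)")

if __name__ == "__main__":
    check_cpu()
```

An analogous query over node_memory_MemAvailable_bytes would cover the memory side of the same 80\% rule.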