With ctlabs.rb we can create lab environments similar to docker-compose, but especially designed for complex network setups and easy-to-access containers that feel like virtual machines. Additionally, the network is separated into a management part and a data part, so we can use the management network to run ansible (or saltstack, puppet, ...) to manage the nodes.
We can also use terraform to create cloud VMs in GCP/AWS. The terraform code creates a VM with all necessary dependencies to run ctlabs.rb. It is available here: ctlabs-terraform (a short workflow sketch follows the feature list below).
containers that feel like VMs
Management Network
Data Network
configuration automation
Cloud VMs
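The terraform part follows the standard terraform workflow. A minimal sketch, assuming we are inside a checkout of the ctlabs-terraform repository and have already configured credentials for the chosen cloud provider (GCP or AWS):

terraform init    # download the required providers
terraform plan    # review the resources that will be created
terraform apply   # create the cloud VM with the ctlabs.rb dependencies installed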
We use the script ctlabs.rb to bring up a lab environment that we have defined in a lab configuration YAML file. The configuration file format is explained in lab configuration.
Existing Lab Configurations:
Lab Configuration explained:
E.g. the following output shows how we bring up the lpic210 lab environment. Each line beginning with the word running shows a node being started and each line beginning with the word connect() shows a link being brought up.
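The invocation itself might look like the following sketch (passing the lab YAML directly to the script is an assumption; consult the script's usage output for the exact flags):

./ctlabs.rb lpic210.yml    # assumption: the lab configuration YAML is passed as an argument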
We can add a playbook that should be run once the environment has been spun up. The command is set as part of the controller node in the lab configuration, e.g. lpic210.yml:
...
      ansible :
        type : controller
        gw  : 192.168.40.1
        nics :
          eth0: 192.168.40.3/24
        vols : ['/root/ctlabs-ansible/:/root/ctlabs-ansible/:Z,rw']
        play : ansible-playbook -i ./inventories/lpic210.ini ./playbooks/lpic2.yml --skip-tags sssd
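Since the ctlabs-ansible directory is mounted into the controller, the same play can also be re-run by hand. A sketch, assuming the container is named after the node (here ansible) and the play runs from the mounted /root/ctlabs-ansible directory; check podman ps for the actual container name:

podman exec -it -w /root/ctlabs-ansible ansible \
  ansible-playbook -i ./inventories/lpic210.ini ./playbooks/lpic2.yml --skip-tags sssd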
Once we have brought up the environment we can use the usual podman/docker commands. E.g. with docker ps we can list the running containers.
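For example, on a podman host a quick overview of the lab containers could look like this:

podman ps --format 'table {{.Names}}\t{{.Status}}'    # or simply: podman ps / docker ps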
To bring down a lab environment we use the -d flag. We also need to provide the same configuration file we used to bring up the environment.
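A sketch of bringing the same lab back down (only the -d flag is documented above; how the configuration file is passed is an assumption):

./ctlabs.rb -d lpic210.yml    # same YAML file that was used to bring the lab up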
If we use a cloud VM (as described in cloud vms) we can use the command enter <container> to enter a container.
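For example, to get a shell in the controller node defined above (the node name comes from the lab configuration):

enter ansible    # the enter helper is provided on the terraform-provisioned cloud VM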
We can also get a visual overview of the lab by using the server.rb script. We can start it manually or use systemd to auto-start it; a service file is included in the ctlabs repository. By default it listens on port 4567. If we used the terraform configuration as described here, we can connect to the web interface via the public IP of the VM.
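A sketch of enabling the bundled service with systemd (the unit file name and its location inside the repository are assumptions; use the file actually shipped with ctlabs):

sudo cp ctlabs/ctlabs-server.service /etc/systemd/system/    # assumption: file name and path
sudo systemctl daemon-reload
sudo systemctl enable --now ctlabs-server.service            # server.rb then listens on port 4567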
We can find the public IP of our GCP VM in the console under Compute Engine -> VM instances. (see below)
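Alternatively, the external IP can be read with the gcloud CLI:

gcloud compute instances list    # the EXTERNAL_IP column shows the public address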
The overview is divided into the following sections:
Connections
Topology
The Connections section shows us how the interfaces are connected with each other.
The Topology section shows us how the nodes are logically connected.
The Topology view is divided into the following sections:
Data Network
Management Network
The Management Network is used to manage the nodes and the Data Network is the Core Network, i.e. the network used for the services.
The Connections view is also divided into the following sections:
Data Network
Management Network
We can inspect the YAML configuration in the Configuration View and the generated ansible inventory of the last used lab in the Inventory View.
The ansible inventory is used by the ansible plays: