The configuration consists of several sections:

- Global Section
- Topology Section
  - virtual machine
  - nodes
  - links
- Existing Labs
---
# ------------------------------------------
# File        : ctlabs/labs/net/net01.yml
# Description : Simple BGP Setup
# ------------------------------------------

name: net01
desc: BGP Lab

defaults:
  controller:
    linux:
      image: ctlabs/c9/ctrl
  switch:
    mgmt:
      image: ctlabs/c9/ctrl
      ports: 12
    linux:
      image: ctlabs/c9/base
      ports: 4
  host:
    linux:
      image: ctlabs/c9/base
    kali:
      image: ctlabs/kali/base
  router:
    frr:
      image: ctlabs/c9/frr
      caps : [SYS_NICE,NET_RAW,NET_BIND_SERVICE]
    mgmt:
      image: ctlabs/c9/frr
      caps : [SYS_NICE,NET_RAW,NET_BIND_SERVICE]

topology:
  - vm:
      name: net01-vm1
      dns : [192.168.10.11, 192.168.20.11, 169.254.169.254]
      mgmt:
        vrfid : 40
        net   : 192.168.40.0/24
        gw    : 192.168.40.1
      nodes:
        ansible:
          type : controller
          gw   : 192.168.40.1
          nics :
            eth0: 192.168.40.5/24
          vols : ['/root/ctlabs-ansible/:/root/ctlabs-ansible/:Z,rw']
          play : ansible-playbook -i ./inventories/net01.ini ./playbooks/ctlabs.yml -t setup,bind
        sw0:
          type : switch
          kind : mgmt
          ipv4 : 192.168.40.10/24
          gw   : 192.168.40.1
        ro0:
          type: router
          kind: mgmt
          gw  : 192.168.15.1
          nics:
            eth0: 192.168.40.1/24
            eth1: 192.168.15.2/29
        natgw:
          type : gateway
          ipv4 : 192.168.15.1/29
          snat : true
        sw1:
          type: switch
        sw2:
          type: switch
        sw3:
          type: switch
        ro1:
          type : router
          kind : frr
          nics :
            eth1: 192.168.10.1/24
            eth2: 192.168.12.1/30
        ro2:
          type : router
          kind : frr
          nics :
            eth1: 192.168.20.1/24
            eth2: 192.168.12.2/30
        h1:
          type : host
          gw   : 192.168.10.1
          nics :
            eth1: 192.168.10.12/24
        h2:
          type : host
          kind : kali
          gw   : 192.168.20.1
          nics :
            eth1: 192.168.20.12/24
        ns1:
          type : host
          gw   : 192.168.10.1
          nics :
            eth1: 192.168.10.11/24
        ns2:
          type : host
          gw   : 192.168.20.1
          nics :
            eth1: 192.168.20.11/24
      links:
        - [ "ro0:eth1", "natgw:eth1" ]
        - [ "ro1:eth1", "sw1:eth1" ]
        - [ "ro1:eth2", "sw2:eth1" ]
        - [ "ro2:eth1", "sw2:eth2" ]
        - [ "ro2:eth2", "sw3:eth1" ]
        - [ "sw1:eth2", "h1:eth1" ]
        - [ "sw1:eth3", "ns1:eth1" ]
        - [ "sw3:eth2", "h2:eth1" ]
        - [ "sw3:eth3", "ns2:eth1" ]
In the global section we provide lab-wide definitions, such as the lab name or a short description.
name: net01
desc: BGP Lab
name     : lab name
desc     : short lab description
defaults : container images and node defaults
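Putting these keys together, a minimal global section could look like the following sketch (values taken from the net01 lab above; the defaults body is shortened):

name: net01
desc: BGP Lab

defaults:
  host:
    linux:
      image: ctlabs/c9/base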
We also define Node Kinds in the defaults section and reference them later in the Nodes Section. This way we don't have to repeat common settings on each individual node.
defaults:
  controller:
    linux:
      image: ctlabs/c9/ctrl
  switch:
    mgmt:
      image: ctlabs/c9/ctrl
      ports: 8
    linux:
      image: ctlabs/c9/base
      ports: 2
  host:
    linux:
      image: ctlabs/c9/base
    kali:
      image: ctlabs/kali/base
  router:
    frr:
      image: ctlabs/c9/frr
      caps : [SYS_NICE,NET_RAW,NET_BIND_SERVICE]
Example: here we specify a host of kind kali:

nodes:
  h2:
    type: host
    kind: kali
    ...
which would create a container named h2 that runs the image ctlabs/kali/base.
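In other words, the image is resolved by looking up the node's type and then its kind in the defaults section, i.e. conceptually defaults[type][kind].image. For h2 above the lookup walks this path (comments added for illustration):

defaults:
  host:                         # selected by "type: host"
    kali:                       # selected by "kind: kali"
      image: ctlabs/kali/base   # image used for the h2 container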
Node Types
controller (mgmt network)
switch
host
router
gateway
Default capability set, implicitly applied to all nodes:
NET_ADMIN
NET_RAW
SYS_ADMIN
AUDIT_WRITE
AUDIT_CONTROL
Node Kinds

Custom-defined properties of a node type, e.g.:
defaults:
  host:
    slapd:
      image: ctlabs/d11/base
      caps : [SYS_PTRACE]
The default Node Kind for all Node Types is linux.
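For illustration, a host defined without an explicit kind falls back to linux and therefore, with the defaults above, to the image ctlabs/c9/base (the node name h3 is hypothetical):

nodes:
  h3:
    type: host   # no "kind" given -> kind: linux is implied,
                 # so the container runs ctlabs/c9/base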
We can define multiple Virtual Machines under the topology section; each VM has its own set of nodes. We could, for example, provision one VM on GCP and another on AWS and connect the two via a VPN, VXLAN, ...
topology:
  - vm:
      name: net01-vm1
      dns: [169.254.169.254]
      nodes:
        sw1: ...
        sw2: ...
  - vm:
      name: net01-vm2
      dns: [1.1.1.1]
      nodes:
        sw3: ...
        sw4: ...
A node definition begins with its node name. We have to provide at least the node type. The default node kind is linux, which is valid for all node types, i.e. if we don't provide a kind, it is implicitly set to linux. If a kind other than linux is used, it needs to be defined in the defaults section.
sw0:
  type  : switch
  kind  : mgmt
  ports : 10
  ipv4  : 192.168.40.11/24
  gw    : 192.168.40.1
ro1:
  type : router
  kind : frr
  gw   : 192.168.15.1
  nics :
    eth1: 192.168.15.2/29
    eth2: 192.168.10.1/24
    eth3: 192.168.20.1/24
Supported Node Types are:
controller
switch
host
router
gateway
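The gateway type is the only one not walked through above; in the net01 lab it provides the exit to the outside world. Reading the example, the snat flag appears to enable source NAT for traffic leaving the lab (my interpretation of the sample, not a verified semantic):

natgw:
  type : gateway
  ipv4 : 192.168.15.1/29   # shares the /29 transfer net with router ro0
  snat : true              # source NAT for outbound traffic (assumed semantics)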
In the links section we define how the containers are connected to each other. A link is defined as a "src, dst" pair; e.g. the entry [ "ro1:eth1", "sw1:eth1" ] defines a link between the containers ro1 and sw1, whereby interface eth1 of ro1 is connected to interface eth1 of sw1.
links:
  - [ "ro1:eth1", "sw1:eth1" ]
  - [ "ro1:eth2", "sw2:eth1" ]
  - [ "ro2:eth1", "sw2:eth2" ]
  - [ "ro2:eth2", "sw3:eth1" ]
  - [ "sw1:eth2", "h1:eth1" ]
  - [ "sw3:eth2", "h2:eth1" ]
A link is defined as a pair of
["<node_a>:<nic_aX>", "<node_b>:<nic_bX>"]
Every node has a management interface, which is implicitly eth0, so eth0 cannot be used for the data network.
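This is also why all data-plane links in the example lab attach to eth1 and higher, while eth0 only appears on the management network (ansible and ro0 above):

links:
  - [ "sw1:eth2", "h1:eth1" ]   # OK: data interfaces start at eth1
# - [ "sw1:eth2", "h1:eth0" ]   # invalid: eth0 is the implicit mgmt interface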