Here you can find frequently asked questions.

Check the mailing list archive to find additional communications between users and developers.
You can subscribe and ask questions at:

Google Groups

Q: What is RouteFlow?
Basically, RouteFlow is an open source solution for providing legacy IP routing services (e.g., OSPF, RIP) over OpenFlow networks. As the routing protocols run, flow entries corresponding to the FIB generated by the routing engine (Quagga) are installed in the OpenFlow devices.

In its basic mode of operation you can map one OpenFlow device to one Virtual Machine, with Quagga providing the IP control plane and RouteFlow monitoring routing table changes and installing the corresponding flow entries in the OpenFlow device. Routing protocol messages entering the OpenFlow devices are sent to the corresponding VM running Quagga and, vice versa, routing protocol messages generated by Quagga are sent out through the physical ports in the OpenFlow data plane.

In more elaborate scenarios, routing protocol messages can be kept inside the virtual environment, and more flexible mappings can be defined between the physical OpenFlow-enabled devices and the virtual control plane, where Quagga runs in a VM with virtual interfaces corresponding to the physical ports.

Q: When I try to load the Open vSwitch kernel module I get an "Invalid module format" error:
$sudo insmod datapath/linux-2.6/openvswitch_mod.ko
insmod: error inserting 'datapath/linux-2.6/openvswitch_mod.ko': -1 Invalid module format

Make sure you unload the Linux bridge module first:
$sudo rmmod bridge
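To double-check that the bridge module is really gone before retrying the insmod (a minimal sketch, reusing the module path from the question above; lsmod should print nothing once the bridge module has been removed):
$lsmod | grep bridge
$sudo insmod datapath/linux-2.6/openvswitch_mod.ko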

Q: Why do we need two OVS instances for the connectivity in the virtual control plane? 

You need one virtual switch to provide connectivity among the VMs (i.e., forming a virtual network), and another one to deliver RouteFlow control messages to the RF-Slave daemons in the VMs. 
Unfortunately, you cannot run a non-OpenFlow OVS instance and an OpenFlow-enabled one simultaneously. The workaround we have found is to run two OpenFlow-enabled OVS instances, each attached to its own OpenFlow controller: one acting as a dumb switch (i.e., running a simple learning-switch application) and the other running the RouteFlow controller application that interacts with the RouteFlow server and configures the OVS accordingly.

The following picture may help in understanding the current implementation:
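In command terms, a minimal sketch of the idea (the bridge names br0/br1 and the controller TCP ports are purely illustrative, not the actual RouteFlow defaults):
ovs-vsctl add-br br0
ovs-vsctl set-controller br0 tcp:127.0.0.1:6633    # controller running the simple learning-switch application
ovs-vsctl add-br br1
ovs-vsctl set-controller br1 tcp:127.0.0.1:6634    # controller running the RouteFlow controller application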

Q: I do not want to use Quagga as the routing engine; does RouteFlow still work?

You can use any Linux-based routing engine that modifies the IP routing and ARP tables of the Linux networking stack. The RF-Slave watches for Linux netlink updates and will accordingly request the installation/update/removal of OpenFlow entries. If you want, you can manually add the IP route entries:

For instance, to insert a route to a new network (given its address and subnet mask) through the eth1 interface you can type either of:
# route add -net <network> netmask <netmask> dev eth1
# ip route add <network>/<prefix> dev eth1

To route all traffic via a gateway reachable through the eth1 network interface:

# route add default gw <gateway> dev eth1
# ip route add default via <gateway> dev eth1
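You can watch the netlink updates that the RF-Slave reacts to (for example, while adding the routes above) with the standard iproute2 monitor:
# ip monitor route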

Q: How does an end-host get the gateway IP address in the beginning?

You will need to configure the gateway IP manually on the end-host, or you can install a DHCP server in the virtual machine so that end-hosts obtain the gateway IP automatically.
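A minimal sketch of the manual option on the end-host (the addresses and interface name are purely illustrative):
# ip addr add 172.31.1.100/24 dev eth0     (host address on the router interface's subnet)
# ip route add default via 172.31.1.1      (the RouteFlow router interface acting as gateway)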

Q: What is the function of the rf-slave?

The rf-slave is a daemon running in the virtual machines where Quagga is executing. Its principal function is to monitor changes in the Linux routing table. When it detects a change, it sends the route information to the rf-server. In addition, it helps in the mapping between the VM interfaces and the Open vSwitch ports they attach to, by sending probe packets that act as a location/attachment discovery technique.

Q: What is the function of the rf-server? Does the rf-server do any treatment of the data it receives from the Quagga VMs?
The rf-server holds the core logic of the system. It receives the events registered by the routeflowc NOX module and takes decisions over those events (e.g., packet-in, datapath-join). It also receives information about route changes from the rf-slave running in the Quagga VMs, which will trigger a flow install/modification in the corresponding OpenFlow switch. The server is responsible for deciding what to do with packets that arrive at the controller, so it handles the protocol packets generated by Quagga and sends them out through the datapath switches. It is also responsible for Virtual Machine registration and for keeping the synchronization with the datapaths.

Q: How does the routeflowc calculate the path for a packet?

When you say "calculate the path for a packet", are you talking about the network routes and the flows to install in the datapaths that match these paths?
If so, routeflowc does not calculate the routes. This is done in the virtual environment by the Quagga instances themselves. The routing engine sends protocol messages (like Hello packets from OSPF) that hit the OVS and are sent to the controller according to the flow entry configured in the OVS. The controller receives a packet-in event and delivers it to routeflowc, which in turn sends the information about the event to the rf-server. The rf-server replies with a command to routeflowc instructing it to inject the packet into the datapath associated with the virtual machine in which the routing protocol packet originated. The packet will be sent out through the port in the datapath and will arrive at other switches.

On the other switches, routing protocol packets will match a pre-installed flow with the action set to send them to the controller. The reverse process happens, resulting in the delivery of the packets to the interfaces of the VMs associated with the datapaths. Note that there is a mapping between ports in the datapaths and interfaces in the VM: if a packet arrives at a specific port in the datapath, it will be delivered to the corresponding interface in the VM. Conversely, if a packet goes out from a VM interface, it will be sent out through the mapped port in the datapath.
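As an illustration, a minimal sketch of that kind of pre-installed entry using ovs-ofctl, with OSPF (IP protocol 89) as an example and a hypothetical bridge name br0 (the exact match fields RouteFlow installs may differ):
ovs-ofctl add-flow br0 "dl_type=0x0800,nw_proto=89,actions=controller"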

So, routing paths are calculated by Quagga as if it were running on each switch. The rf-slave detects the route entries and sends the information to the rf-server, which will send a command to routeflowc to install a flow in the corresponding switch of the OpenFlow network.

Q: What about OpenFlow rules for MAC re-writing, TTL decrement, and checksum updates? 

Source and destination MAC header re-writing are mandatory actions that need to be supported by the OpenFlow switches for correct IP forwarding. The OpenFlow entries currently installed by RouteFlow include the corresponding MAC re-writing actions.
TTL decrement and checksum update actions should be included as well to fully conform to legacy IP forwarding behaviour. However, the current version of RouteFlow does not include these actions because of lacking support in some data-plane OpenFlow devices.
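For illustration, a minimal sketch (using ovs-ofctl, with a hypothetical bridge name, destination prefix, output port and MAC addresses) of the kind of entry RouteFlow installs, including the MAC re-writing actions:
ovs-ofctl add-flow br0 "dl_type=0x0800,nw_dst=10.1.0.0/16,actions=mod_dl_src:00:01:00:02:00:01,mod_dl_dst:00:01:00:03:00:a1,output:1"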

Q: Why do all the interfaces in the VM share the same MAC address? 

While IP routing works with interfaces having the same or different MAC addresses, we have opted for assigning the same MAC address to all router interfaces for the following reasons:
  • Commercial equipment usually works this way: "one MAC address per box"
  • We can differentiate virtual routers assigned to the same physical OF switch by giving each virtual router a different MAC address.
  • Reducing the number of flow table entries: the destination MAC field is part of the flow match and must be one of the router's addresses (otherwise the packet should be discarded). If each interface had a different MAC address, the number of required flow entries would multiply: packets arriving on different interfaces would carry different destination MAC addresses but would all share the same IP routing rules. You can think of it as if you enforced including the input port in the flow entry; you would then need to replicate the IP routing rules for every possible input port. See the example below:
           FLOW MATCH                      #                    ACTION
DST_MAC              DST_IP                #    PORT     SRC_MAC              DST_MAC
00:01:00:02:00:01    <dst IP prefix 1>     #    port1    00:01:00:02:00:01    00:01:00:03:00:A1
00:01:00:02:00:01    <dst IP prefix 2>     #    port5    00:01:00:02:00:01    00:01:00:03:00:A5

Note: In OpenFlow 1.1, with multiple table support, the implementation would be more efficient and scalable. A first table could check whether the destination MAC address corresponds to the router; if so, the instruction would be to go to the "pure" IP routing table, which could be shared among the multiple MAC addresses (i.e., interfaces) or kept separate (in the case of virtual routers, i.e., the router multiplexing use case). The "pure" IP routing flow table would match packets only on the destination IP address and would perform longest-prefix matching, as expected from any traditional IP forwarding engine.
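As a rough sketch of that two-table structure (using a recent ovs-ofctl with OpenFlow 1.3 syntax purely for illustration; bridge name, prefix, port and MAC addresses are hypothetical):
ovs-ofctl -O OpenFlow13 add-flow br0 "table=0,dl_dst=00:01:00:02:00:01,actions=goto_table:1"
ovs-ofctl -O OpenFlow13 add-flow br0 "table=1,dl_type=0x0800,nw_dst=10.1.0.0/16,actions=mod_dl_src:00:01:00:02:00:01,mod_dl_dst:00:01:00:03:00:a1,output:1"
The first table only checks whether the destination MAC belongs to the router; the second is the shared IP routing table matched by destination prefix.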

Q: What would the schematics of the Tutorial look like in the case of more than one datapath switch?

See Tutorial 2.

Q: How do I update the RouteFlow code inside a pre-configured VM?

1. Get the latest code*:
cd /opt/RouteFlow
sudo git pull
(*Verify that the git remote points to the main RouteFlow repository; some early versions of the pre-configured VM pointed to a fork.)

2. Compile:
sudo make
3. Copy the rf-slave binary into the LXC containers:
sudo cp /opt/RouteFlow/build/rf-slave /var/lib/lxc/rfvmA/rootfs/opt/rf-slave/
sudo cp /opt/RouteFlow/build/rf-slave /var/lib/lxc/rfvmB/rootfs/opt/rf-slave/
sudo cp /opt/RouteFlow/build/rf-slave /var/lib/lxc/rfvmC/rootfs/opt/rf-slave/
sudo cp /opt/RouteFlow/build/rf-slave /var/lib/lxc/rfvmD/rootfs/opt/rf-slave/
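To confirm that each container received the freshly built binary, you can compare checksums (assuming the destination layout used above):
md5sum /opt/RouteFlow/build/rf-slave /var/lib/lxc/rfvm?/rootfs/opt/rf-slave/rf-slave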

Q: How can I contribute to the RouteFlow Community?

You can start by joining the RouteFlow discuss mailing list and helping to answer questions.

If you have a feature or bug you would like to work on, send a mail to the mailing list.

You can also suggest improvements to documentation, new use case scenarios, etc.