
Demo at SRS SuperComputing 2011 with switches from IBM, NEC and Pronto

posted Nov 25, 2011, 12:18 PM by Christian Esteve Rothenberg   [ updated Nov 28, 2011, 4:38 AM ]
The SC11 SCinet Research Sandbox (SRS) did an excellent job in putting together a 10 Gbps, multi-vendor OpenFlow network testbed that allowed researchers to test a number of OpenFlow applications, some of them in the field of High Performance Computing. 

We are glad that RouteFlow was one of the 10 accepted submissions and also made it to the top six showcased as part of the Disruptive Technologies sessions in the technical program. Here are the slides summarizing the demo.

See below for the network diagram and a description of how we deployed RouteFlow to operate with OpenFlow switches from IBM, NEC and Pronto in a virtual slice configured in FlowVisor.



The experience was great, though we (mostly Marcelo ;)) had to do some configuration and fast programming work to get the demo running. No panic: most of the required work was due to the GUI (which we had not proof-tested with random-looking 64-bit datapath IDs such as the one of the IBM switch) and to the collection of statistics for display in a Web browser that needed to be remotely accessible.

We used VMPlayer to run the RouteFlow environment (the same pre-configured VM from tutorial 2 plus the GUI extensions, available in the github fork by Alisson). VMPlayer in NAT mode would not work, since the host Linux machine was also doing NAT to get OpenFlow and web traffic in and out of the machine, towards the OpenFlow testbed and the devices accessing the GUI. We changed VMPlayer to host-only networking and set up the NAT manually on the Linux machine:
sudo iptables -t nat -A PREROUTING -p tcp --dport 6633 -j DNAT --to-destination 172.16.230.129:6633
sudo iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 172.16.230.129:80 
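
For the DNAT rules above to actually deliver traffic to the VM, the host also has to forward packets towards the host-only network and make sure the replies come back through it. The exact rules depend on the host setup; a minimal sketch of what we would expect to need, assuming eth0 is the host's external interface and vmnet1 the host-only one (both names are just examples):

sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -A FORWARD -o vmnet1 -p tcp -d 172.16.230.129 -j ACCEPT
sudo iptables -A FORWARD -i vmnet1 -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo iptables -t nat -A POSTROUTING -o vmnet1 -j MASQUERADE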

Apparently there is another way of setting these NAT rules directly in VMPlayer's virtual network configuration.
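
We did not try it, but if the VM is kept on VMPlayer's NAT network, the same port forwarding can presumably be declared in the vmnet NAT configuration file instead; the path and section below are from memory and should be treated as an assumption (the guest IP would be the one on the NAT network):

/etc/vmware/vmnet8/nat/nat.conf
[incomingtcp]
6633 = 172.16.230.129:6633
8080 = 172.16.230.129:80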
 
The good news: from the core RouteFlow components we only needed to apply the patch that removes any VLAN tags inserted by the OF switch in packets handed to the controller. The rest of the code worked fine, but required some configuration effort:

Much of the effort to get the demo working went into manually editing the LXC container configurations (e.g., the names of the virtual interfaces) and configuring the Quagga instances. The main problem is the lack of an automated or user-controlled mapping of VMs to physical switches. With Mininet, 99% of the time the switches register in the order they are started, so it is easy to have the VMs and end-hosts pre-configured with the desired IP addressing. In a real network, however, datapaths may join in any order, so we had to discover which VM each DP was associated with and then manually change the VM configuration to match our target network addressing and OSPF configuration.

We simply kept every potential configuration in each VM and then changed zebra.conf as required. We also configured every VM with a larger number of virtual interfaces, with names that could match any switch it might be mapped to. This is a limitation of the current RouteFlow version, but one that should be easy to overcome. For example, if a physical switch has ports 6, 49, 50 and 51 allocated to the OpenFlow datapath, the corresponding VM running Quagga and RF-Slave needs to have at least four interfaces named eth6, eth49, eth50 and eth51.
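
To illustrate with the example above, each container's LXC configuration ends up with one veth entry per OpenFlow port of the switch it maps to. A sketch of one such entry (the bridge and veth pair names are just placeholders), repeated likewise for eth49, eth50 and eth51:

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br-rfvm1
lxc.network.veth.pair = rfvm1.6
lxc.network.name = eth6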


All in all it was an excellent experience! Having FlowVisor in the picture was seamless, as expected. The only caveat is that, in order to strictly slice the network by ports, the number of entries installed in the switches was larger than required: we saw entries with the same IP subnet replicated for each possible in-port.

The highlight of the first day was seeing traffic between the virtualized end-hosts flowing through the hardware-based OF switches, with the QEMU-based end-host VMs pinging each other with RTTs lower than 1 ms!

We are thankful to the SRS organization and to the Indiana University staff for the booth space to run RouteFlow continuously and for their support in setting up the demo!


 
 
 

