The SC11 SCinet Research Sandbox (SRS) did an excellent job in putting together a 10 Gbps, multi-vendor OpenFlow network testbed that allowed researchers to test a number of OpenFlow applications, some of them in the field of High Performance Computing.
We are glad that RouteFlow was one of the 10 accepted submissions and also made it into the top six showcased in the Disruptive Technologies sessions of the technical program. Here are the slides summarizing the demo.
The network diagram below shows how we deployed RouteFlow to operate with OpenFlow switches from IBM, NEC, and Pronto in a virtual slice configured in FlowVisor.
The experience was great, though we (mostly Marcelo ;)) had to do some configuration and fast programming work to get the demo running. No panic: most of the required work was due to the GUI (which we had not proof-tested against random-looking 64-bit datapath IDs such as the one of the IBM switch) and to the collection of statistics for display in a Web browser that needed to be remotely accessible. We used VMPlayer to run the RouteFlow environment (the same pre-configured VM from tutorial 2 plus the GUI extensions, available in the GitHub fork by Alisson). VMPlayer in NAT mode would not work with the host Linux machine also doing NAT to get OpenFlow and web traffic in and out of the machine to the OpenFlow testbed and the devices accessing the GUI. So we switched VMPlayer to host-only networking and set up the NAT manually on the Linux machine:
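The exact rules we used are not reproduced here, but a minimal sketch of this kind of manual NAT setup would look like the following. All interface names, ports, and the VM address are assumptions for illustration (`eth0` as the physical NIC, `vmnet1` as VMPlayer's host-only interface, `192.168.56.101` as the RouteFlow VM):

```shell
# Enable IP forwarding on the host Linux machine
sysctl -w net.ipv4.ip_forward=1

# Masquerade traffic leaving through the physical NIC (eth0 is an assumption)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Forward incoming OpenFlow (6633) and web (80) traffic to the RouteFlow VM
# (VM address and ports are hypothetical -- adjust to your setup)
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 6633 \
         -j DNAT --to-destination 192.168.56.101:6633
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j DNAT --to-destination 192.168.56.101:80

# Allow forwarding between the physical NIC and the host-only interface
iptables -A FORWARD -i eth0 -o vmnet1 -j ACCEPT
iptables -A FORWARD -i vmnet1 -o eth0 \
         -m state --state ESTABLISHED,RELATED -j ACCEPT
```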
Apparently there is also a way of setting NAT rules directly within VMPlayer's veth configuration.
The good news: from the core RouteFlow components, we only needed to apply the patch that removes any VLAN tags inserted by the OpenFlow switch before the packet is handed to the controller. The rest of the code worked fine but required some configuration effort.
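To give an idea of what that patch does, here is a minimal sketch (not the actual RouteFlow code) of stripping an outer 802.1Q tag from a raw Ethernet frame before it reaches the controller logic:

```python
# Minimal sketch, not the RouteFlow patch itself: remove the outer 802.1Q
# VLAN tag (4 bytes) from a raw Ethernet frame, if one is present.
VLAN_TPID = b"\x81\x00"  # EtherType 0x8100 marks an 802.1Q-tagged frame


def strip_vlan(frame: bytes) -> bytes:
    """Return the frame with its outer VLAN tag removed, if present."""
    # Bytes 12-13 hold the EtherType; 0x8100 means a 4-byte tag follows
    # (2 bytes TPID + 2 bytes TCI) before the real EtherType.
    if frame[12:14] == VLAN_TPID:
        return frame[:12] + frame[16:]
    return frame
```

For example, a frame tagged with VLAN 100 (`... 81 00 00 64 08 00 ...`) comes back with the four tag bytes removed and the original IPv4 EtherType restored in place.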
All in all, it was an excellent experience! Having FlowVisor in the picture was seamless, as expected. The only caveat is that strictly slicing the network by ports required more flow entries than would otherwise be needed: we saw entries with the same IP subnet replicated in the switches for each possible in-port.
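The effect can be illustrated with a small sketch (not FlowVisor's code; port numbers and the rule are made up): when a slice is defined by switch ports, a flow that wildcards the in-port gets rewritten into one concrete entry per port of the slice.

```python
# Hedged illustration of port-based slicing overhead (hypothetical values):
# one wildcard rule becomes N concrete rules, one per port in the slice.
slice_ports = [1, 2, 3]  # ports assigned to our slice (assumption)
flow = {"nw_dst": "172.16.1.0/24", "action": "output:4"}  # hypothetical rule

# FlowVisor-style expansion: same IP subnet, replicated per possible in-port
expanded = [dict(flow, in_port=p) for p in slice_ports]
```

With three ports in the slice, the switch table ends up holding three entries that differ only in their `in_port` match, which is exactly the replication we observed.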
The best part of the first day was seeing traffic between the virtualized end-hosts flowing through the hardware-based OpenFlow switches, with the QEMU-based end-host VMs pinging each other at RTTs below 1 ms!
We are thankful to the SRS organizers and to the Indiana University staff for the booth space to run RouteFlow continuously, and for their support in setting up the demo!