The main goal of the FIBRE project is the design, implementation and validation of a shared Future Internet research facility, supporting joint experimentation by European and Brazilian researchers.
This week the project is organizing its First Open Workshop in the Brazilian city of Salvador (BA).
As part of the live demonstrations, RouteFlow will be executed as an experimenter application that requests OpenFlow switches and end-hosts (VMs) located in different testbeds.
The FIBRE control and management framework (CMF) takes care of providing control and visualization of the requested slice, which includes reserving the flow space (policed by FlowVisor), booting up the VMs, and stitching together the desired OpenFlow topology, based on hardware- or software-based OpenFlow devices. Allocated OpenFlow switches are pointed (via FlowVisor) to the experimenter's OpenFlow controller IP and port. In our case, and in order to show the seamless operation for the experimenter, our RouteFlow configuration is exactly the same (not a single source-code modification required) as in Tutorial 2, featuring an OSPF-routed network with 4 OpenFlow switches.
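As an illustration of the slicing step described above, the commands below sketch how a slice could be created and flow space delegated to the experimenter's controller with FlowVisor's fvctl tool. The slice name, controller address, datapath ID and permission value are hypothetical, and the exact fvctl syntax varies across FlowVisor versions:

```shell
# Hypothetical slice setup (names, addresses and match fields are examples,
# not the actual FIBRE configuration; syntax follows FlowVisor 1.x fvctl).

# Create a slice pointing at the experimenter's RouteFlow controller.
fvctl add-slice rf-demo tcp:198.51.100.10:6633 experimenter@example.org

# Delegate all traffic on an allocated datapath to that slice.
fvctl add-flowspace rf-fs-1 00:00:00:00:00:00:00:01 100 any rf-demo=7
```

These are configuration commands against a running FlowVisor instance, so they are shown here only as a sketch of the workflow.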
For those familiar with Mininet, the experience is very similar, except that you get real hardware in a topology that spans hundreds or thousands of kilometers, with the associated realistic delays. In the experiments we carried out, we measured 2 ms end-to-end ping RTTs between VMs in the CPqD island in Campinas and the USP island in São Paulo.
Find below a copy of the poster [PDF]. Hopefully you will soon see a full scientific paper describing in more detail the experiments on top of federated OpenFlow islands spanning from Brazil to Europe.
[Post by Allan Vidal]
Building upon the refactored version of RouteFlow we released 2 months ago, we have a new version with several improvements.
* What's new *
The most important change in this new version is the new configuration system. It is now possible to associate datapath ports to VM ports in a predefined way, making RouteFlow much more reliable and easier to configure. This was a long-standing issue that affected lots of users, and we hope this will make RouteFlow easier for everyone. The updated project README contains information about the configuration scheme.
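As a hedged illustration (the exact file name and column set may differ from what ships in the README; all IDs below are made up), the predefined associations boil down to a CSV-style mapping of each VM port to a datapath port:

```
vm_id,vm_port,ct_id,dp_id,dp_port
12A0A0A00000,1,0,99,1
12A0A0A00000,2,0,99,2
12A0A0A00001,1,0,98,1
```

Each row ties one VM port to one datapath port under a given controller (ct_id), which is what makes the mapping deterministic regardless of the order in which switches register.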
Other changes include:
- RouteFlow now works at a more granular level, associating individual datapath and VM ports rather than entire datapaths and VMs as it used to
- RFServer was entirely rewritten in Python, making it much easier to understand, extend and modify
- RFProxy now has a standard implementation across NOX and POX, making it easier to port to other controllers
- Logging messages are clearer and standardized
- Several minor architectural and code clean-ups
* What's next *
- Improve documentation
- Address High Availability
- On-the-fly configuration changes
- New routing abstractions implemented as services on top of the RF-Server
- Exploration of possibilities opened by the use of a central database (e.g., keep state history and allow queries like "show me flow table at timestamp x")
- RouteFlow with NOX requires Ubuntu 11.04 (POX users should be fine in newer versions). We will be adding support for Ubuntu 11.10 and 12.04
- Embrace OpenFlow v1.X. We have working prototypes of NOX and software-based reference switch using OpenFlow 1.1 and 1.2
- Extensions to support LDP label information
- ... a number of additions under investigation by students and project collaborators (see the TODO section in the README file)
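As a sketch of the kind of query a central database would enable (the database, collection and field names below are purely hypothetical), "show me flow table at timestamp x" could become a range query over timestamped snapshots in the mongo shell:

```shell
# Hypothetical example: fetch the most recent flow-table snapshot for a
# datapath at or before a given timestamp (names are illustrative only).
mongo rfdb --eval '
  printjson(db.flow_tables.find({ dp_id: "0x99",
                                  timestamp: { $lte: 1325376000 } })
                          .sort({ timestamp: -1 })
                          .limit(1)
                          .toArray())
'
```

This assumes each flow-table update is stored as its own document with a timestamp, rather than overwritten in place.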
* Other news *
We now have a main page for the code at GitHub.
This is intended to be more or less like a business card; the main website is still updated with news and resources as usual.
We also plan to improve the project documentation through GitHub's wiki, including:
- explanations about rftest1 and rftest2 scripts
- a guide on how to port RFProxy to other controllers
- an explanation of the new RouteFlow association algorithm
Prior work on centralized Routing Control Platforms (RCPs) has shown many benefits in flexible routing, enhanced security, and ISP connectivity management tasks. In this paper, we discuss RCPs in the context of OpenFlow/SDN, describing potential use cases and identifying deployment challenges and advantages. We propose a controller-centric hybrid networking model and present the design of the RouteFlow Control Platform (RFCP) along with the prototype implementation of an AS-wide abstract BGP routing service.
- C. Esteve Rothenberg, Marcelo R. Nascimento, Marcos R. Salvador, Carlos Corrêa, Sidney Lucena, and Robert Raszuk. "Revisiting Routing Control Platforms with the Eyes and Muscles of Software-Defined Networking." In ACM SIGCOMM Workshop on Hot Topics in Software Defined Networking (HotSDN), Helsinki, Finland, Aug 2012. [PDF]
We're glad to announce an entirely new version of RouteFlow, with many new features in response to our first year's experiences and the requests from users and developers!
The version has been in an experimental branch for some time now and is stable enough to become mainstream.
In this new version, we have introduced:
- Centralized database and IPC
- We leverage MongoDB for storing the core system's state and the OpenFlow network statistics. A JSON-based IPC service (aka the RouteFlow protocol) is also implemented on top of it.
- Cleaner code base
- POX support
- Support for using the new POX controller was added.
- Web monitoring interface (requires POX)
- Inspect network topology, RouteFlow internal messages and network state.
- Open vSwitch v1.4
- To attach the virtual interfaces (eth1 to ethX) of the VMs.
- Also used in the control network (attaching eth0); running in bridge mode, it removes the requirement of a second controller instance to act as a simple L2 switch.
- Tools for testing
- A new module (rftest) introduces several scripts to facilitate testing and environment creation.
- SNMP support
- Export OpenFlow stats via SNMP. [Contribution by Joe Stringer]
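To give a feel for the JSON-based IPC, here is a sketch of what a message stored in MongoDB might look like. The field names are illustrative assumptions, not the exact RouteFlow protocol wire format:

```shell
# Illustrative RouteFlow protocol message (field names are assumptions,
# not the actual schema): a route announcement as a JSON document.
cat > /tmp/rfmsg.json <<'EOF'
{
  "type": "RouteMod",
  "vm_id": "0x12A0A0A00000",
  "dp_id": "0x99",
  "address": "172.31.1.0/24",
  "gateway": "172.31.1.1"
}
EOF
# Any JSON tool can validate/inspect the message before it is queued.
python3 -m json.tool /tmp/rfmsg.json
```

Because messages are plain JSON documents, they stay human-readable and easy to extend without breaking older components.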
The list is long!
- Foremost, we want to make RouteFlow easier to configure. Currently, there's no trivial way to associate VMs and datapaths statically, but we want to solve this through a new configuration approach.
- RouteFlow with NOX requires Ubuntu 11.04 (POX users should be fine in newer versions). We will be adding support for Ubuntu 11.10 and 12.04.
- Embrace OpenFlow v1.X. We have working prototypes of NOX and software-based reference switch using OpenFlow 1.1 and 1.2.
- Extensions to support LDP label information.
- Exploration of possibilities opened by the use of a central database (e.g., keep state history and allow queries like "show me flow table at timestamp x").
- Address High Availability.
- New routing abstractions implemented as services on top of the RF-Server.
- ... a number of additions under investigation by students and project collaborators.
Stay tuned for further news!
Thank you all!
Project Launch !!! :)
I just realized that, coinciding with our second demo at ONS, RouteFlow has completed its first year as an open-source initiative!
There is no better time to pull some visitor numbers and comment on the past 365 days:
Visits: 11,859 Unique Visitors: 5,241 Pageviews: 24,153
From over 1,100 cities of 90+ countries all over the globe!
Distribution of page visitors per city (left) and country (right) over one year.
Our estimates indicate more than 1,000 downloads of the full pre-configured VMs.
Github statistics became available only recently and indicate an increasing user base.
Beyond these good numbers, it has been a great year! We demoed RouteFlow over commercial hardware at SuperComputing and participated in both ONS events. We have established a number of collaborations in the academic and service-provider domains, and have good expectations of growing this network and seeing real use cases in operation!
We would like to thank the many types of contributions received so far, from those reporting bugs to those providing the fixes, defining requirements and envisioning real-world use cases for RouteFlow deployments.
Highlights of recent collaborations include:
- Web-based UI & Internet2 HW pilot [C. Small, Indiana]
- Aggregated BGP Routing Service [C. Corrêa, Unirio]
- SNMP plugin [J. Stringer, Google]
- Optimal BGP best path reflection [R. Raszuk, NTT-MCL]
- OpenFlow v1.1 and v1.2 [w/ Ericsson]
- Open Label Switched Router [OSRF; Google]
There are a number of scenarios that we believe are worth exploring, including multi-path, Fast ReRoute, BGP-Sec, IPv6, high availability, etc.
We welcome researchers interested in any topic around split routing architectures to validate their ideas on top of RouteFlow!
We are glad that two MSc students (@UNIRIO, @UNICAMP) have already completed their thesis work around RouteFlow. A number of students are following the same path!
We are looking for new project extensions to continue a committed development of RouteFlow. Moreover, we hope to be able to announce soon more pilots, operational scenarios and official project collaborations.
Finally, we shall not forget to thank our funding sources:
RouteFlow is partially supported by the Experimental High-Speed Network Project – GIGA, which is supported by the Brazilian Financing Agency for Studies and Projects (FINEP).
We had a great week at the Open Networking Summit (ONS) 2012 edition! Our demo booth, in collaboration with Indiana University, was packed, and we had great discussions with existing and prospective users, including new service-provider use cases!
In addition to the well-known interest in the datacenter arena, I would highlight Google's presentation on their OpenFlow deployment in the WAN to re-architect the G-scale backbone interconnecting their datacenters.
Pay attention to the presentations by Urs Hölzle and Amin Vahdat, and watch for the glue between Quagga routing engines and the OpenFlow domain!
The world has finally seen a real operational SDN at production scale -- including sub-second reconvergence using tunnels and IS-IS, and very neat work on dynamic, application-tailored traffic engineering!
After one year of going public with RouteFlow, the time for evolution has come! We have developed a new version of RouteFlow that incorporates the contributions from users and the needs of real deployments and user-defined routing services. An upcoming post will detail what is new in the new RouteFlow architecture but, fundamentally, we have rewritten the IPC to include a NoSQL datastore (e.g. MongoDB) that lets the three RF components communicate via extensible JSON-based messages (i.e. the RouteFlow protocol). In order to reflect the new changes and their actual roles, the envisioned extensibility of the RF-Slaves, the multi-controller support (POX is on the way) and the potential Master/Slave deployments, we have renamed two components as follows:
RF-Slave -> RF-Client
The code of the new architecture has been available for some time in a branch and will soon become the mainstream code base. Find below links to the resources and ONS demo material:
Code: New design architecture in the NewRouteFlow github branch
- We know! There is room for improving our multimedia presentation skills :)
put together at the Internet2 Joint Techs!
Among many interesting topics around networking technology, research-network experiments, design and operations, there is a presentation on OpenFlow/SDN by NEC.
Two presentations from Indiana University colleagues on RouteFlow:
Ronald explains the main challenges faced by real OpenFlow demos over a shared infrastructure. It is worth seeing how to use FlowVisor to slice your OpenFlow network. You will see the configuration used for the SC11 SRS demo of RouteFlow.
Chris has put together a nice set of slides explaining RouteFlow's core functionality, the path ahead on new abstractions and a GUI that simplify network management, and future work on RouteFlow. More on this in an upcoming post on the exciting outlook for 2012.
Thanks to Ronald, Chris, and Matt Davy (plus all involved team members) for not only spreading the OpenFlow word but also showcasing running code such as RouteFlow!
The SC11 SCinet Research Sandbox (SRS) did an excellent job in putting together a 10 Gbps, multi-vendor OpenFlow network testbed that allowed researchers to test a number of OpenFlow applications, some of them in the field of High Performance Computing.
We are glad that RouteFlow was one of the 10 submissions accepted and also made it to the top six showcased as part of the Disruptive Technologies sessions in the technical program. Here are the slides summarizing the demo.
See below the network diagram and how we deployed RouteFlow to operate with OpenFlow switches from IBM, NEC and Pronto in a virtual slice configured in FlowVisor.
The experience was great, though we (mostly Marcelo ;)) had to do some configuration and fast programming work to get the demo running. No panic: most of the required work was due to the GUI (which we had not proof-tested with random-looking 64-bit DP IDs such as the one of the IBM switch) and to the collection of statistics to display in a Web browser that needed to be remotely accessible. We used VMPlayer to run the RouteFlow environment (the same pre-configured VM of tutorial 2 plus the GUI extensions, available in the GitHub fork by Alisson). VMPlayer in NAT mode would not work with the host Linux machine also doing NAT to get OpenFlow and web traffic in and out of the machine to the OpenFlow testbed and the devices accessing the GUI. We changed VMPlayer to host-only mode and set up the NAT manually on the Linux machine:
sudo iptables -t nat -A PREROUTING -p tcp --dport 6633 -j DNAT --to-destination 172.16.230.129:6633
sudo iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 172.16.230.129:80
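For completeness, a DNAT setup like the above typically also needs IP forwarding enabled and a masquerade rule so replies can leave the host. These lines are our assumption of the usual companion settings, not part of the original demo notes, and eth0 is a placeholder for the host's outbound interface:

```shell
# Assumed companion settings (not from the original demo notes).
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```
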
Apparently there is another way of setting NAT rules within VMPlayer's veth configuration.
The good news: from the core RouteFlow components, we only needed to apply the patch that removes any VLAN tags inserted by the OF switch when packets are handed to the controller. The rest of the code worked fine but required some configuration effort:
Much of the effort to get the demo working was spent on manual configuration of the LXC containers (e.g., names of the virtual interfaces) and of the Quagga instances. The main problem is the lack of automated or user-controlled mapping of VMs to physical switches. With Mininet, 99% of the time the switches register in the order they are started, so it is easy to have the VMs and end-hosts pre-configured with the desired IP addressing. In a real network, however, datapaths may join in any order, so we needed to discover which VM each DP was associated with and then manually change the VM configuration to match our target network addressing and OSPF configuration. We simply kept every potential configuration in each VM and then changed zebra.conf as required. We also configured every VM with a larger number of virtual interfaces, with names that could match any switch it could be mapped to. This is a current limitation, but one that is easy to work around in the current RouteFlow version: if a physical switch has ports 6, 49, 50 and 51 allocated to the OpenFlow datapath, the corresponding VM running Quagga and RF-Slave needs to have at least the four interfaces named eth6, eth49, eth50 and eth51.
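As an illustration of that interface-naming requirement, the container configuration for such a switch would carry one veth entry per allocated OpenFlow port, named after the port number. The fragment below uses the legacy lxc.network.* syntax of that era, with all surrounding options omitted:

```
lxc.network.type = veth
lxc.network.name = eth6
lxc.network.type = veth
lxc.network.name = eth49
lxc.network.type = veth
lxc.network.name = eth50
lxc.network.type = veth
lxc.network.name = eth51
```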
All in all, it was an excellent experience! Having FlowVisor in the picture was seamless, as expected. The only caveat is that the number of entries installed in the switches was larger than strictly required in order to slice the network by ports: we saw entries in the switches with the same IP subnet replicated for each possible in-port.
The best part of the first day was seeing the traffic between the virtualized end-hosts flowing through the hardware-based OF switches, and the QEMU-based end-host VMs pinging with RTTs lower than 1 ms!
We are thankful to the SRS organization, the Indiana University staff for the booth space to run RouteFlow continuously, and for their support in setting up the demo!