OTV Use Case

In this article, I will demonstrate a use case of Overlay Transport Virtualization (OTV) technology from Cisco. As the name suggests, it is an overlay technology used to extend Layer 2 networks, usually between data centers. The current implementation is only supported on Cisco Nexus 7000 switches running NX-OS 5.1 and above. In NX-OS 5.1, OTV requires IP Multicast support in the transport network; from NX-OS 5.2 onwards, Unicast transport is also supported. Together with the use case, I will try to go over the concepts of OTV. Let's set the stage first.

In the figure above, there are 3 types of servers in each data center, connected to each other via 4 DCI routers that are of interest to us. The requirements are as follows -

    • Geographic redundancy for the servers
    • Application (App) servers communicate with each other on the same subnet (VLAN 6) to decide Master/Slave relationship
    • App servers communicate with Database (DB) servers on the same subnet (VLAN 6)
    • DB servers communicate with each other on the same subnet (VLAN 6) to exchange Heartbeat messages and decide Master/Slave relationship
    • Element Manager Servers (EMS) communicate with each other on the same subnet (VLAN 5) to decide Master/Slave relationship
    • Each server must be reachable from remote sites connected to the Core network (VLAN 10)

It is clear from the requirements that the servers do not need SVIs for VLANs 5 and 6, which removes some of the complexity associated with FHRPs. The servers are managed via VLAN 10.

OTV Terminology

First some OTV terminology -

    • Edge Device - This is the device which performs all OTV functions. The OTV Edge device is connected to the Layer 2 segments and the IP transport network. Currently, this functionality is only available on Nexus 7000 switches. It is supported in VDCs as well. Like traditional switches, Edge devices perform MAC learning and aging. NOTE: The OTV Edge device does not necessarily have to be the Edge router connected to the WAN network; it can be an internal device.
    • Internal Interfaces - These are Layer 2 interfaces on the OTV Edge device. These can be "trunk" or "access" ports.
    • Join Interfaces - These are Layer 3 interfaces on the OTV Edge device which connect to the IP transport network. These can be physical or logical interfaces; Loopback interfaces are not currently supported but are planned for future releases. The IP address of the OTV Join interface is used as the source IP address for OTV encapsulation.
    • Overlay Interface - This is a multicast-enabled multi-access network over which all OTV encapsulated Layer2 frames are carried.
    • OTV Control-Group - A unique multicast-group is required to carry OTV traffic between OTV Edge devices.
    • OTV Data-Group - A multicast group (usually a range of multicast groups) used to carry Layer 2 multicast traffic over the overlay network.
    • Extended VLANs - A group of VLANs allowed to be extended over the overlay network.
    • Site VLAN - In case of dual-attached sites (as in the figure above), the OTV Edge devices need to elect an Authoritative Edge Device (AED) per VLAN so that only one device forwards traffic for that VLAN. For this election, the OTV Edge devices communicate over the Site VLAN on the local site.
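
The AED election rule described in the verification section below (the Edge device with the lower System-ID takes even-numbered VLANs, the higher takes odd) can be sketched as follows. The Python function and the System-ID values are purely illustrative:

```python
def elect_aed(vlan, system_ids):
    """Pick the Authoritative Edge Device for a VLAN at a dual-homed site.

    Illustrative sketch: the Edge device with the lower System-ID is
    authoritative for even-numbered extended VLANs, the higher System-ID
    for odd-numbered extended VLANs.
    """
    low, high = sorted(system_ids)
    return low if vlan % 2 == 0 else high

# Example: two Edge devices at one site (System-IDs are made up)
site_edges = ["0018.bad8.0101", "0018.bad8.0202"]
print(elect_aed(6, site_edges))   # even VLAN -> lower System-ID
print(elect_aed(5, site_edges))   # odd VLAN  -> higher System-ID
```

This per-VLAN split is what lets both Edge devices at a dual-homed site carry traffic while still guaranteeing a single forwarder per VLAN.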

Control Plane

Inherently, OTV uses the IS-IS protocol to exchange MAC address reachability information between Edge devices (no IS-IS knowledge is required for OTV as such). However, before Edge devices can exchange MAC address information, they need to form a peer relationship. This adjacency can be formed in 2 ways, depending on the type of transport network - Multicast enabled (discussed next) and Unicast enabled (not discussed in this article).

Multicast Enabled Transport Network

If the transport network is multicast enabled, a specific multicast group can be used by OTV Edge devices to exchange control protocol messages. All OTV Edge devices are configured to join a specific Any Source Multicast (ASM) group, in which any device can be both a source and a receiver. Cisco recommends basing the multicast transport network on PIM Bidirectional (BiDir); however, PIM Sparse Mode (PIM-SM) with a static Rendezvous Point (RP) works equally well.

    1. In the figure above, each OTV Edge device sends an IGMP (IGMPv3) Report message to join the specific ASM group. This way, the multicast-enabled transport network is aware of the "IGMP Hosts" (i.e. OTV Edge devices) requesting traffic for that specific ASM group.
    2. The OTV Control protocol on each Edge device generates OTV Hello messages that need to be sent to all other OTV Edge devices. This is required to communicate its existence and trigger the establishment of control-plane adjacencies.
    3. The OTV Hello message is sent over the overlay network and hence encapsulated within an OTV frame. The source is the Join interface IP address and the destination is the ASM group IP address.
    4. The multicast enabled transport network replicates this packet to all interested OTV Edge devices.
    5. The OTV Edge devices decapsulate the packets and process the OTV Hello messages.
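
The outer-header logic of the steps above can be sketched in Python. The dictionary-based "packet" is purely illustrative; the addresses match the Join interface and control-group used in the configuration section of this article:

```python
def encapsulate_hello(join_ip, control_group):
    """Sketch of the outer IP header fields used for an OTV Hello.

    Illustrative only: the source is the Edge device's Join interface
    address and the destination is the shared ASM control-group, so the
    multicast transport replicates one packet to every other Edge device.
    """
    return {"src": join_ip, "dst": control_group, "payload": "IS-IS Hello"}

pkt = encapsulate_hello("10.1.1.1", "239.0.0.1")
print(pkt["src"], "->", pkt["dst"])  # 10.1.1.1 -> 239.0.0.1
```

The key point is that the Edge device never needs to know its peers up front; joining the ASM group is enough for discovery.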

The same process occurs in the other direction and results in the OTV control-plane adjacencies between OTV Edge devices. The exchange of MAC address information is now possible. The MAC address advertisement process is quite similar to above -

    1. Consider that the OTV Edge devices in the left data center learn the MAC addresses of servers in VLANs 5 and 6 on their internal interfaces. This is done via traditional MAC learning.
    2. An OTV Update message is created containing the MAC addresses of the servers. This Update message is OTV encapsulated and sent to the transport network. Again, the source is the Join interface IP address and the destination is the ASM group IP address.
    3. The multicast enabled transport network replicates this packet to all interested OTV Edge devices; in this case, the OTV Edge devices in the right data center.
    4. The recipient OTV Edge devices decapsulate the packets and hand them to the OTV control process.
    5. The MAC address information is imported into the CAM table; however, instead of pointing to physical interfaces, the entries map each MAC address to the IP address of the originating OTV Edge device.

The MAC address information is carried in IS-IS Type-Length-Value (TLV) fields. By default, the MAC address information ages out after 30 minutes.
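
A toy model of this import-and-age behaviour, assuming a plain dictionary as the OTV route table (the MAC name and remote Edge IP are made up):

```python
MAC_AGE_SEC = 1800  # 30-minute default aging for OTV-learned MAC addresses

# Hypothetical OTV route table: MAC -> (remote Edge IP, last-refresh time)
otv_routes = {}

def import_update(mac, edge_ip, now):
    """Import a MAC advertised in an IS-IS TLV from a remote Edge device."""
    otv_routes[mac] = (edge_ip, now)

def expire(now):
    """Age out entries not refreshed within MAC_AGE_SEC."""
    stale = [m for m, (_, t) in otv_routes.items() if now - t > MAC_AGE_SEC]
    for mac in stale:
        del otv_routes[mac]

import_update("ems2-mac", "10.2.2.1", now=0)
expire(now=1801)
print(otv_routes)  # entry aged out -> {}
```

In the real protocol the entries are refreshed by periodic IS-IS updates, so active MAC addresses never actually age out.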

Data Plane : Unicast Traffic

Once OTV adjacencies are formed and MAC address information is exchanged by OTV Edge devices, traffic can flow across the overlay network. If App Server 1 wants to communicate with DB Server 1 (i.e. within the same site), the Layer2 switch looks up the CAM table and it points to a local interface. The switch can now perform traditional switching.

However, if EMS Server 1 in left data center wants to communicate with EMS Server 2 in right data center, traffic needs to flow over the overlay network.

    1. When the OTV Edge device receives the traffic, it performs CAM table lookup.
    2. This time, the OTV Edge device points to the IP address of the remote OTV Edge device.
    3. The OTV Edge device then encapsulates the frame: the source is its OTV Join interface IP address and the destination is the remote OTV Edge device's Join interface IP address.
    4. The encapsulated traffic is sent to the transport network. Note that this is unicast traffic now.
    5. When the remote OTV Edge device receives the traffic, it decapsulates the frame exposing the original Layer2 frame.
    6. It then looks up its CAM table which points to a local interface, which means the MAC address of EMS Server 2 is local to the site.
    7. The frame is delivered to EMS Server 2.
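
The forwarding decision above can be sketched as follows. The CAM table contents, MAC names, and the remote Join interface IP 10.2.2.1 are hypothetical:

```python
# Hypothetical CAM/OTV route table: local MACs map to a physical
# interface, while remote MACs learned via the overlay map to the Join
# interface IP of the advertising remote Edge device.
cam_table = {
    "ems1-mac": ("interface", "Ethernet2/1"),   # local EMS Server 1
    "ems2-mac": ("overlay", "10.2.2.1"),        # remote Edge device IP
}

def forward(dst_mac, frame, local_join_ip="10.1.1.1"):
    kind, next_hop = cam_table[dst_mac]
    if kind == "interface":
        return ("switch", next_hop, frame)       # traditional L2 switching
    # Encapsulate: unicast OTV packet toward the remote Join interface
    return ("encap", {"src": local_join_ip, "dst": next_hop, "frame": frame})

print(forward("ems1-mac", "frame-a"))
print(forward("ems2-mac", "frame-b"))
```

Note that the inter-site frame travels the transport network as ordinary unicast IP; the transport routers need no knowledge of OTV at all.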

Data Plane : Multicast Traffic

Although it is not relevant to this use case, it is possible that a host/server in one data center is a multicast source and a host/server in another data center is a multicast receiver on the same Layer 2 segment. Let's consider this scenario - EMS Server 1 is a multicast source.

Multicast-enabled Transport Network

Cisco recommends enabling a Source Specific Multicast (SSM) group range in the transport network to carry this Layer 2 multicast traffic between sites over the overlay network. This group range is specified as the OTV Data-Group (GD) and is different from the ASM group (the OTV Control-Group).

    1. EMS Server 1 sends multicast-traffic to group GS.
    2. The OTV Edge device creates a mapping between GS and GD.
    3. The OTV Edge device advertises this mapping to remote OTV Edge devices using OTV Control protocol (IS-IS). It also includes the VLAN (VLAN 6 in this case) and the IP address of the originating OTV Edge device advertising the mapping.
    4. EMS Server 2 in the right data center sends an IGMP message to join the group GS.
    5. The OTV Edge device snoops this IGMP message and realizes that EMS Server 2 is interested in group GS and it belongs to VLAN 6.
    6. The OTV Edge device sends a Group-Membership Update (GM-Update) message to all remote OTV Edge devices in OTV Control protocol.
    7. The remote OTV Edge device receives this GM-Update message and updates its Outgoing Interface List (OIL) that group GS traffic needs to be forwarded over the overlay network.
    8. Since the OTV Edge device in the right data center knows the mapping between GS and GD, it sends out an IGMPv3 Report message to the transport network for the SSM group GD, with the source set to the Join interface IP address of the OTV Edge device in the left data center.
    9. The multicast traffic for group GS is encapsulated in OTV packet with source as the IP address of the Join interface of the OTV Edge device in the left data center and destination as the SSM multicast group GD and forwarded to the transport network.
    10. The transport network forwards the multicast traffic as per the SSM-tree.
    11. The remote OTV Edge device receives this traffic for multicast group GD, decapsulates the packet, and delivers it to the interested receiver in VLAN 6.
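
The mapping and join logic in the steps above can be sketched in Python. The group addresses GS = 239.1.1.1 and GD = 232.0.0.1 are illustrative (GD falls inside the 232.0.0.0/24 data-group range used in the configuration section):

```python
# Hypothetical sketch: the left Edge device maps the site-facing group
# GS to a transport SSM group GD and advertises (VLAN, GS, GD, source
# Edge IP) via the OTV Control protocol; the right Edge device then
# issues an IGMPv3 source-specific join toward the transport network.
def advertise_mapping(vlan, gs, gd, src_edge_ip):
    return {"vlan": vlan, "gs": gs, "gd": gd, "source": src_edge_ip}

def igmpv3_join(mapping):
    # Source-specific join: (source Edge Join IP, data-group GD)
    return (mapping["source"], mapping["gd"])

m = advertise_mapping(6, "239.1.1.1", "232.0.0.1", "10.1.1.1")
print(igmpv3_join(m))  # ('10.1.1.1', '232.0.0.1')
```

Because the join names both the source and the group, the transport network builds an SSM tree rooted at the originating Edge device with no RP involved.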

Data Plane : Broadcast Traffic

Broadcast frames need to reach all remote sites so that Layer 2 semantics are preserved across the overlay. With a multicast-enabled transport, the OTV Edge device encapsulates broadcast frames and sends them to the OTV Control-Group, letting the transport network replicate them to all remote OTV Edge devices. OTV also applies several optimizations to reduce the amount of flooded traffic, described below.

STP Isolation

By default, Spanning Tree Protocol (STP) BPDUs are not forwarded across the overlay by OTV Edge devices. Hence, every site is an independent STP domain.

Unknown Unicast Handling

In the current release of NX-OS, the OTV Edge device does not flood any unknown unicast frames across the overlay. However, like in traditional switches, the unknown unicast traffic is still flooded out the regular Ethernet interfaces.

ARP Optimization

An ARP Request (a Layer 2 broadcast frame) is allowed across the overlay network to all remote sites, and the corresponding ARP Reply (a Layer 2 unicast frame) is allowed back. In addition, the OTV Edge devices snoop and cache ARP Reply messages. Any subsequent ARP Request for a cached address is answered by the OTV Edge device rather than flooded across the overlay network. This requires the OTV ARP aging timer to be lower than the MAC address aging timer.
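
A minimal sketch of this ARP caching behaviour, with made-up addresses and illustrative timer values:

```python
# Illustrative timers only: the ARP cache timer must be lower than the
# MAC aging timer so a cached ARP entry never outlives the MAC route
# it depends on.
MAC_AGING_SEC = 1800      # 30-minute default mentioned earlier
ARP_CACHE_SEC = 480       # illustrative value, must be < MAC_AGING_SEC

arp_cache = {}

def snoop_reply(ip, mac):
    """Cache a snooped ARP Reply crossing the overlay."""
    arp_cache[ip] = mac

def handle_arp_request(ip):
    if ip in arp_cache:
        return ("local-reply", arp_cache[ip])   # answered by Edge device
    return ("flood-overlay", None)              # first request crosses overlay

snoop_reply("192.168.5.2", "ems2-mac")
print(handle_arp_request("192.168.5.2"))
print(handle_arp_request("192.168.5.3"))
```

Only the first ARP Request for a given host pays the cross-site flooding cost; subsequent requests are handled locally.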

OTV Encapsulation

OTV adds 42 bytes of overhead to all packets travelling across the overlay network. The OTV Edge device removes the CRC and 802.1Q fields from the original Layer 2 frame, then adds an OTV Shim header which carries the 802.1Q field (including the priority P-bit value) and the Overlay ID, plus an external IP header for the transport network. All OTV packets have the Don't Fragment (DF) bit set to 1 in the external IP header.
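
Because the DF bit is set, the transport path must accommodate the original frame plus the 42-byte overhead; a quick arithmetic sketch:

```python
OTV_OVERHEAD = 42  # bytes added to every encapsulated packet

def max_site_payload(transport_mtu):
    """Largest original Layer 2 frame that fits without fragmentation.

    Since every OTV packet carries DF=1, the transport path MTU must
    cover the original frame plus the 42-byte OTV overhead.
    """
    return transport_mtu - OTV_OVERHEAD

print(max_site_payload(1500))  # 1458
print(max_site_payload(9000))  # 8958
```

This is why the Join interface in the configuration below uses a raised MTU: either the transport MTU is increased by at least 42 bytes, or hosts must send frames small enough to fit.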

OTV Configuration

OTV requires the Transport Services license (TRANSPORT_SERVICES_PKG) to be installed on the Nexus switches. The configuration steps are -

Step 1 : Enable OTV feature

feature otv

From NX-OS 5.2(1) onwards, the OTV Site-ID configuration is mandatory and must match on all Edge devices belonging to the same site.

otv site-identifier 0x1

Step 2 : Create VLANs to be extended across the overlay

vlan 5
 name EMS-VLAN
vlan 6
 name App-DB-VLAN

Step 3 : Create Site VLAN

It is recommended that a dedicated VLAN be used as Site VLAN. It is not extended across the overlay.

vlan 15
 name Site-VLAN
!
otv site-vlan 15

Step 4 : OTV Join Interface

Only one Join interface can be specified per overlay on each device.

interface Ethernet 1/1
 description [ OTV Join Interface ]
 no switchport
 mtu 9000
 ip address 10.1.1.1/30
 ip igmp version 3
 no shutdown
end

Step 5 : OTV Internal Interfaces

These are the Layer2 (trunk or access) interfaces on the OTV Edge devices.

interface Ethernet 2/1-2
 description [ OTV Internal Interface ]
 switchport
 switchport mode trunk
 switchport trunk allowed vlan 5,6,15
 no shutdown
end

Step 6 : OTV Overlay Interface

The Overlay interface ID must match on all sites. Multiple overlay interfaces can be created. But, a VLAN can only be assigned to one overlay interface.

interface overlay 1
 description [ OTV Overlay Interface ]
 otv join-interface Ethernet 1/1
 otv control-group 239.0.0.1
 otv data-group 232.0.0.0/24
 otv extend-vlan 5,6
 no shutdown
end

There are some useful output verification commands to check the state of OTV Overlay interface and adjacency.

    • show otv - displays OTV overlay interface state and information
    • show otv adjacency - displays all OTV adjacencies formed with remote OTV Edge devices
    • show otv vlan - displays the AED for each VLAN. The OTV Edge device with lower System-ID becomes authoritative for all even-numbered extended VLANs while the OTV Edge device with higher System-ID becomes authoritative for all odd-numbered extended VLANs.
    • show otv route - displays the MAC address entries that are either local to the site or reachable via the overlay interface

Conclusion

OTV does provide a way to extend Layer 2 segments over a Layer 3 transport network. OTV configuration is very simple, and the multi-homing and loop-prevention features are great. However, the current implementation has certain limitations -

    • Maximum of 6 sites with 2 Edge devices per site
    • Requires a multicast-enabled transport network (Unicast transport supported from NX-OS 5.2 onwards)
    • Feature available on Nexus 7000 switches only