OTV Technology Introduction

Presentation Transcript


  1. OTV Technology Introduction
  Natale Ruello, Technical Marketing Engineer – Nexus 7000

  2. Addressing Business Goals with LAN Extensions
  Business goals and the LAN extension attributes that address them:
  • Availability (99.999% global availability): enable distributed clusters to improve application availability without compromising network resiliency.
  • Adaptability (service velocity and on-demand capacity): unleash compute virtualization beyond a single physical data center for fast service and capacity additions.
  • Cost optimization (maximize asset utilization): supports migration of workloads and consolidation of servers across locations to avoid power/cooling hot spots or compute/network idleness.
  • Streamline operations & reduce OPEX: enables improved change management methods across multiple physical locations; a non-disruptive model for minimal operational overhead.

  3. LAN Extension: Enabling IT Solutions with LAN Extensions
  Clustering solutions that commonly require Layer 2 connectivity between Data Center A and Data Center B:
  • VMware VMotion
  • MSCS cluster
  • Solaris Sun Cluster Enterprise
  • RAC (Real Application Cluster)
  • HACMP
  • Legato Automated Availability Manager
  • Metrocluster / Metro Cluster
  • BACnet (building automation/control)

  4. Overlay Transport Virtualization (OTV)
  • Overlay - a solution that is independent of the infrastructure technology and services, flexible over various interconnect facilities.
  • Transport - transporting services for Layer 2 and Layer 3 Ethernet and IP traffic.
  • Virtualization - provides virtual connections, connections that are in turn virtualized and partitioned into VPNs, VRFs, and VLANs.
  OTV delivers a virtual Layer 2 transport for LAN extensions.

  5. Challenges with LAN Extensions: Real Problems Solved by OTV
  • Extensions over any transport (IP, MPLS)
  • Failure boundary preservation
  • Site independence / isolation
  • Optimal bandwidth utilization (no head-end replication)
  • Resiliency / multi-homing
  • Built-in end-to-end loop prevention
  • Multi-site connectivity (inter- and intra-DC)
  • Scalability: VLANs, sites, MACs; ARP, broadcasts/floods
  • Operational simplicity: only 5 CLI commands
  • Topology flexibility
  (Diagram: North and South Data Centers joined by a LAN extension, each site remaining its own fault domain)

  6. Traditional Layer 2 VPNs
  • EoMPLS
  • Dark Fiber
  • VPLS

  7. Flooding Behavior
  • Traditional Layer 2 VPN technologies rely on flooding to propagate MAC reachability.
  • The flooding behavior causes failures to propagate to every site in the L2 VPN.
  • A solution that provides Layer 2 connectivity, yet restricts the reach of the flood domain, is necessary in order to contain failures and preserve resiliency.
  (Diagram: MAC 1 flooded from Site A propagates to Sites B and C)

  8. Pseudo-Wire Maintenance
  • Before any learning can happen, a full mesh of pseudo-wires/tunnels must be in place.
  • For N sites, there will be N*(N-1)/2 pseudo-wires (e.g., 10 sites require 45 pseudo-wires), which makes it complex to add or remove sites.
  • Head-end replication for multicast and broadcast results in sub-optimal bandwidth utilization.
  • A simple overlay protocol with built-in functionality and point-to-cloud provisioning is key to reducing the cost of providing this connectivity.

  9. Multi-Homing
  • Traditional Layer 2 VPNs require additional protocols to support multi-homing.
  • STP is often extended across the sites of the Layer 2 VPN, which becomes very difficult to manage as the number of sites grows.
  • Malfunctions at one site will likely impact all sites on the VPN.
  • A solution that natively provides automatic detection of multi-homing, without the need to extend the STP domains, is key.
  (Diagram: two active edge devices dual-homing an L2 site into the L2 VPN)

  10. What Can Be Improved
  Data plane learning -> control plane learning:
  • Move to a control plane protocol that proactively advertises MAC addresses and their reachability, instead of the current flooding mechanism.
  Pseudo-wires and tunnels -> dynamic encapsulation:
  • No static tunnel or pseudo-wire configuration required.
  • Optimal replication of traffic done closer to the destination, which translates into much more efficient bandwidth utilization in the core.
  Multi-homing -> native built-in multi-homing:
  • Ideally, a multi-homed solution should allow load balancing of flows within a single VLAN across the active devices in the same site, while preserving the independence of the sites.
  • STP confined within the site (each site with its own STP root bridge).

  11. Overlay Transport Virtualization: Technology Pillars
  OTV is a "MAC in IP" technique for supporting Layer 2 VPNs over ANY transport. Its pillars:
  • Protocol learning
  • Dynamic encapsulation
  • No pseudo-wire state maintenance
  • Built-in loop prevention
  • Optimal multicast replication
  • Preserved failure boundaries
  • Multi-point connectivity
  • Seamless site addition/removal
  • Point-to-cloud model
  • Automated multi-homing

  12. OTV at a Glance
  • Ethernet traffic between sites is encapsulated in IP: "MAC in IP".
  • Dynamic encapsulation based on the MAC routing table.
  • No pseudo-wire or tunnel state maintained.
  (Diagram: communication between MAC 1 at the West site and MAC 2 at the East site; the Ethernet frame is encapsulated into an IP packet at the West OTV edge device (IP A) and decapsulated at the East OTV edge device (IP B))
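  A minimal sketch of the resulting packet on the wire for the example above, assuming a frame sent from MAC 1 (West, behind edge device IP A) to MAC 2 (East, behind edge device IP B); the exact OTV header format is not shown on the slide, so the layout depicts only the "MAC in IP" principle:

    +--------------------------+-----------------------------------+
    | Outer IP header          | Original Ethernet frame           |
    | src = IP A, dst = IP B   | src MAC = MAC 1, dst MAC = MAC 2  |
    +--------------------------+-----------------------------------+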

  13. OTV Data Plane: Unicast
  • The MAC table contains MAC addresses reachable through IP addresses.
  • No pseudo-wire state is maintained.
  • The encapsulation is done based on a Layer 2 destination lookup.
  • The encapsulation is done in hardware by the Forwarding Engine.
  (Diagram, animated: MAC 1 at the West site sends a frame to MAC 3 at the East site; a Layer 2 lookup at the West edge device resolves MAC 3 to IP B, the frame is encapsulated as IP A -> IP B across the core, and the East edge device decapsulates it and performs a Layer 2 lookup to deliver it to MAC 3)
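  To make the Layer 2 lookup concrete, here is an illustrative MAC table at the West edge device for the example above (the VLAN number and the local interface name are assumptions for illustration, not taken from the slide):

    VLAN  MAC address  Next hop
    ----  -----------  ----------------------------------
    100   MAC 1        Eth 2 (local interface)
    100   MAC 3        IP B (remote edge device, overlay)

  A frame destined to MAC 3 resolves to an IP next hop and is encapsulated toward IP B; a frame destined to MAC 1 is switched locally with no encapsulation.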

  14. Building the MAC Tables: The OTV Control Plane
  • The OTV control plane proactively advertises MAC reachability (control-plane learning).
  • The MAC addresses are advertised in the background once OTV has been configured.
  • No protocol-specific configuration is required.
  (Diagram: edge devices at the West (IP A), East (IP B), and South (IP C) sites exchange MAC address reachability across the core)

  15. OTV Control Plane: MAC Address Advertisements – Multicast Core
  • Every time an edge device learns a new MAC address, the OTV control plane advertises it together with its associated VLAN IDs and IP next hop.
  • The IP next hops are the addresses of the edge devices through which these MACs are reachable in the core.
  • A single update reaches all neighbors: the OTV update is replicated by the core.
  (Diagram, animated: an OTV update sent by the West edge device (IP A) is replicated by the multicast core and delivered to the East (IP B) and South-East (IP C) edge devices)

  16. Multicast Groups in the Core
  OTV will leverage the multicast capabilities of the core. Summary of the multicast groups used by OTV (see the configuration sketch below):
  • An ASM/Bidir group to exchange MAC reachability.
  • An SSM group range for the multicast data generated by the site.
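  These two groups map directly onto two of the interface commands shown on the configuration slide later in the deck; a minimal sketch reusing the same example group addresses:

    interface Overlay0
      ! ASM/Bidir group used to exchange MAC reachability (control plane)
      otv control-group 239.1.1.1
      ! SSM group range used to carry the site's multicast data traffic
      otv data-group 232.192.1.2/32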

  17. What If Core Multicast Is Not an Option? OTV in Unicast Mode – The Adjacency Server Mode
  The use of multicast in the core provides significant benefits:
  • Reduces the amount of hellos and updates OTV must issue.
  • Streamlines neighbor discovery and site additions and removals.
  • Optimizes the handling of broadcast and multicast data traffic.
  However, multicast may not always be available. The OTV Adjacency Server mode of operation provides a unicast-based solution.

  18. Adjacency Server
  • Despite the name, this is NOT a physical server; it is just a mode of operation of the edge devices.
  • An OTV node which sends a multicast packet on a non-multicast-capable network will unicast-replicate (head-end) the packet.
  • One of the OTV edge devices is configured as an Adjacency Server and is responsible for communicating the IP addresses at which the other edge devices can be reached.
  • The group of IP addresses is called the overlay Adjacency List (oAL).
  • Two configuration steps (see the sketch below):
  1. Configure an OTV edge device to be the Adjacency Server.
  2. Configure the other edge devices to point to the Adjacency Server to retrieve each other's IP addresses.
  (Diagram: a core with no support for multicast, with one edge device acting as Adjacency Server)
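  A minimal sketch of the two steps, reusing the commands and the example server address from the Adjacency Server configuration slide later in the deck:

    ! Step 1 - on the edge device acting as Adjacency Server:
    interface Overlay0
      otv adjacency-server

    ! Step 2 - on every other edge device (mutually exclusive with step 1):
    interface Overlay0
      otv use-adjacency-server 10.10.10.10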

  19. Adjacency Server
  • At first, the Adjacency Server knows about no other OTV edge devices: its oAL is empty.
  • Once the other OTV edge devices start sending the Adjacency Server their site ID and IP address, the Adjacency Server builds up its oAL.
  • The contents of the oAL are advertised and sent via unicast to each member of the oAL.
  • Now the edge devices can communicate with each other.
  (Diagram: Sites 2-5 register "Site 2, IP B" through "Site 5, IP E" with the Adjacency Server at Site 1 (IP A), which builds the oAL: Site 1/IP A, Site 2/IP B, Site 3/IP C, Site 4/IP D, Site 5/IP E)

  20. STP BPDU Handling
  • When STP is configured at a site, an edge device will send and receive BPDUs on the internal interfaces.
  • An OTV edge device will not originate or forward BPDUs on the overlay network.
  • An OTV edge device can become (but is not required to be) the root of one or more spanning trees within the site.
  • An OTV edge device will take the typical action when receiving Topology Change Notification (TCN) messages.
  (Diagram: the BPDUs stop at the OTV edge devices; none cross the core)

  21. Unknown Unicast Packet Handling
  • Flooding of unknown unicast over the overlay is not required and is therefore suppressed: any unknown unicast that reaches the OTV edge device will not be forwarded onto the overlay.
  • The assumption here is that the end points connected to the network are not silent or unidirectional.
  • MAC addresses of unidirectional hosts are learned and advertised by snooping the hosts' ARP replies.
  (Diagram: a frame from MAC 1 to MAC 3 arrives at the edge device, there is no MAC 3 in the MAC table, and the frame is not flooded onto the overlay)

  22. Controlling ARP Traffic: Proxy ARP
  • OTV edge devices can proxy ARP replies on behalf of remote hosts; ARP traffic spanning multiple sites can thus be significantly reduced.
  • An ARP cache is maintained by every OTV edge device, populated by snooping ARP replies.
  • Initial ARP requests are broadcast to all sites; subsequent ARP requests are suppressed and answered locally.
  • The ARP cache could also be populated at MAC learning time; this would allow the suppression of all ARP-related broadcasts.
  (Diagram, animated: the first ARP request for IP A crosses the overlay one time, the ARP reply is snooped and cached by the AED, and subsequent ARP requests for IP A are answered locally with a proxy ARP reply)

  23. OTV Solves Layer 2 Fault Propagation: Summary
  • STP isolation: BPDUs are not forwarded over the overlay.
  • Unknown unicasts are not flooded across sites (selective flooding is optional).
  • Cross-site ARP traffic is reduced with proxy ARP.
  • Broadcasts can be controlled based on a white list as well as a rate-limiting profile.

  24. Multi-Homing: Loop Condition Handling
  • OTV includes the logic necessary to avoid the creation of loops in multi-homed site scenarios.
  • Each site will have its own STP domain, separate and independent from the STP domains in other sites, even though all sites are part of a common Layer 2 domain.
  (Diagram: STP domain 1 and STP domain 2 at the two sites, with no STP running across the core)

  25. Authoritative Edge Device
  • OTV provides loop-free multi-homing by electing a designated forwarding device per site for each VLAN. The designated forwarder is referred to as the Authoritative Edge Device (AED).
  • The edge devices at the site peer with each other on the internal interfaces to elect the AED.
  • The AED is the only edge device that will forward multicast and broadcast traffic between a site and the overlay.
  (Diagram: one AED elected among the edge devices at each multi-homed site)

  26. Multi-Homing: AED & Broadcast A broadcast packet gets to all the Edge Devices within a site. The AED for the VLAN is the only Edge Device that forwards broadcast packets on the overlay network. All the Edge Devices at a remote site will receive the broadcast packet, but only the AED at the remote site will forward the packet into the site. Once sent into the site, the packet gets to all switches on the site specific Spanning Tree. Broadcast stops here Broadcast stops here OTV OTV OTV OTV Core OTV Bcast pkt AED AED

  27. Multi-HomingAED & Unicast Forwarding One AED is elected for each VLAN on each site Different AEDs can be elected for each VLAN to balance traffic load Only the AED forwards unicast traffic to and from the overlay Only the AED advertises MAC addresses for any given site/VLAN Unicast routes will point to the AED on the corresponding remote site/VLAN AED AED OTV OTV OTV OTV IP A Core OTV IP B AED AED

  28. Configuration: OTV CLI configuration

    interface Overlay0
      description otv-demo
      ! Connects to the core; used to join the overlay network.
      ! Its IP address is used as the source IP for the OTV encapsulation.
      otv join-interface Ethernet1/1
      ! ASM/Bidir group in the core used for the OTV control plane.
      otv control-group 239.1.1.1
      ! SSM group range used to carry the site's multicast data traffic.
      otv data-group 232.192.1.2/32
      ! Site VLANs being extended by OTV.
      otv extend-vlan 100-150
      ! VLAN used within the site for communication between the site's edge devices.
      otv site-vlan 100

  29. Configuration: OTV CLI configuration with Adjacency Server

    interface Overlay0
      description otv-demo
      ! Connects to the core; its IP address is used as
      ! the source IP for the OTV encapsulation.
      otv join-interface Ethernet1/1
      ! Configures this edge device as an Adjacency Server...
      otv adjacency-server
      ! ...or points to a remote edge device acting as the Adjacency Server
      ! (mutually exclusive with the previous command).
      otv use-adjacency-server 10.10.10.10
      ! Site VLANs being extended by OTV.
      otv extend-vlan 100-150
      ! VLAN used within the site for communication between the site's edge devices.
      otv site-vlan 100

  30. Nexus 7000 Rollout Plan
  • EFT: target start date mid-January 2010
  • FCS: Q1 CY2010
