
Neutron hybrid mode


Presentation Transcript


  1. Neutron hybrid mode Vinay Bannai, SDN Architect, Nov 8, 2013

  2. ABOUT PAYPAL PayPal offers flexible and innovative payment solutions for consumers and merchants of all sizes. • 137 Million Active Users • $300,000 Payments processed by PayPal each minute • 193 markets / 26 currencies • PayPal is the World’s Most Widely Used Digital Wallet

  3. Introduction • Data Center Architecture • Neutron Basics • Overlays vs Physical Networks • Use Cases • Problem Definition • Hybrid Solution • Performance Data • Analysis • Q&A

  4. Data center architecture (diagram): traffic flows from the Internet through the Core (Layer-3 router), Aggregation (Layer-3 switch) and Access (Layer-3 switch) layers down to the racks, with bisection bandwidth shown at each layer.

  5. New datacenter architecture (diagram): the same Core/Aggregation/Access hierarchy, plus an Edge layer of vswitches on the hypervisors hosting the VMs; bisection bandwidth is shown at each layer.

  6. Datacenter with vswitches (diagram): racks of hypervisors, each running a vswitch that connects its VMs to the Layer-3 access switch.

  7. Neutron Basics

  8. Overlay networks • Overlays provide connectivity between VMs and Network Devices using tunnels • The physical core network does not need to be re-provisioned constantly • The tunneling encap/decap is done at the edge in the virtual switch • Decouples the tenant network address from the physical Data Center network address • Easy to support overlapping addresses • Tunneling techniques in vogue • VXLAN • STT • NVGRE
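
To make the encap/decap point concrete, here is a minimal sketch of adding a tunnel port on the edge vswitch by hand, assuming stock OVS on the hypervisor; in a Neutron deployment the OVS agent or controller creates these ports automatically, and the bridge name, port name, and peer IP below are illustrative. The talk's tests used STT, but VXLAN is shown since it is available in stock OVS.

    # Tunnel bridge plus one VXLAN port toward a peer hypervisor (names and
    # 10.x.x.51 are assumptions); key=flow lets the flow table pick the
    # tenant segmentation ID per packet.
    ovs-vsctl add-br br-tun
    ovs-vsctl add-port br-tun vxlan-peer1 -- set Interface vxlan-peer1 \
        type=vxlan options:remote_ip=10.x.x.51 options:key=flow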

  9. Physical networks • Physical Networks connect VMs and Network Devices using provider networks • VMs are first class citizens with the hypervisor and the networking devices • No tunneling protocols used • Tenant separation is achieved by using VLANs or IP subnetting • Hard to achieve overlapping address spaces • Underlying network needs to be provisioned with VLANs
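
For contrast, a rough sketch of VLAN-based tenant separation on the physical side; the bond interface name and VLAN ID are assumptions, and in a real deployment the VLAN is provisioned on the switches and exposed through a Neutron provider network rather than created by hand.

    # Illustrative only: a tagged subinterface on the provisioned underlay.
    # Traffic for this provider network is carried with VLAN tag 200 end to end.
    ip link add link bond0 name bond0.200 type vlan id 200
    ip link set bond0.200 up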

  10. Physical vs overlay (diagram): side-by-side comparison of a tenant on an overlay network (VMs attached at L2 through the network virtualization layer running over the L3 fabric) and a tenant on a physical network (VMs attached directly to the L2 network).

  11. Pros & Cons

  12. Use cases • Production Environment: production website across multiple data centers; needs low latency and high throughput • Bridged Mode • Mergers & Acquisitions / Private Community Cloud: needs address isolation and overlapping addresses, plus flexibility, low latency and high throughput • Overlay Mode • Development & QA Environment: development, QA & staging; needs flexibility and high throughput but can tolerate higher latency • Bridged and Overlay Mode

  13. Problem statement • Support flexibility, low latency, high throughput and overlapping address space all at the same time • Support both bridged and overlay networks • VMs on a hypervisor should be able to choose networks • Need a consistent deployment pattern • Configurable by automation tools (Puppet, Chef, Salt, etc.)
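
As an illustration of the "VMs choose their network" requirement, a hedged sketch using the Nova CLI of that era; the flavor, image, and network IDs are placeholders taken from neutron net-list.

    # Boot one VM on the bridged provider network and one on the overlay network.
    nova boot --flavor m1.small --image rhel-6.4 \
        --nic net-id=<bridged-net-id> vm-bridged
    nova boot --flavor m1.small --image rhel-6.4 \
        --nic net-id=<overlay-net-id> vm-overlay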

  14. Hybrid vswitch (diagram): compared with a typical vswitch, the hybrid setup attaches the tenant VMs (Ta, Tb, Tc) to br-int on the hypervisor; bridged traffic (e.g. VLAN 200) goes from br-int through br-bond and out the bonded production interface, overlay traffic goes through br-tun and the IP interface, and the management interface stays separate.
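
A sketch of how the bridges in the diagram could be wired together; in practice the Neutron OVS agent creates these links itself (and uses a veth pair between br-int and br-bond for bridged traffic), so this is only to make the picture concrete.

    # br-int carries all VM ports; overlay traffic is patched over to br-tun,
    # bridged (flat/VLAN) traffic leaves through br-bond and the bonded NICs.
    ovs-vsctl add-br br-int
    ovs-vsctl add-br br-tun
    ovs-vsctl add-port br-int patch-tun -- set Interface patch-tun \
        type=patch options:peer=patch-int
    ovs-vsctl add-port br-tun patch-int -- set Interface patch-int \
        type=patch options:peer=patch-tun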

  15. Configuration of hybrid mode • Create the neutron networks • Flat Network • neutron net-create bridged-flat --provider:network_type=flat --provider:physical_network=<Physnet> • neutron subnet-create --allocation-pool start=10.x.x.100,end=10.x.x.200 bridged-flat --gateway 10.x.x.1 10.0.0.0/23 --name bridged-flat-subnet --enable_dhcp=False • VLAN Network • neutron net-create bridged-vlan --provider:network_type=vlan --provider:physical_network=<Physnet> --provider:segmentation_id=<vlan-id> • neutron subnet-create --allocation-pool start=10.x.x.100,end=10.x.x.200 bridged-vlan --gateway 10.x.x.1 10.0.0.0/23 --name bridged-vlan-subnet

  16. Contd. • Neutron networks (contd.) • Overlay Network • neutron net-create overlay-net • neutron subnet-create --allocation-pool start=10.x.x.100,end=10.x.x.200 overlay-net --gateway 10.x.x.1 10.0.0.0/23 --name overlay-net-subnet • On the compute node • Configure the bond • ovs-vsctl add-br br-bond0 • Configure the OVS • ovs-vsctl br-set-external-id br-bond0 bridge-id br-bond0 • ovs-vsctl set Bridge br-bond0 fail-mode=standalone • ovs-vsctl add-bond br-bond0 bond0 eth0 eth1
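
The commands above assume the OVS agent knows which bridge backs the provider physical network; below is a minimal sketch of the matching plugin settings on the compute node, where the config file path, physnet name, and local_ip are illustrative assumptions.

    # Hypothetical excerpt (openstack-config from openstack-utils): map the
    # provider physical network used above to br-bond0 and enable tunneling.
    PLUGIN_INI=/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
    openstack-config --set "$PLUGIN_INI" OVS bridge_mappings physnet1:br-bond0
    openstack-config --set "$PLUGIN_INI" OVS enable_tunneling True
    openstack-config --set "$PLUGIN_INI" OVS local_ip 10.x.x.50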

  17. Performance data • To measure latency and throughput, we ran the following tests • Within a rack (L2 switching) • Bare metal to Bare metal • Bridged VM to Bridged VM • Tunneled VM to Tunneled VM • Across racks (L3 switching) • Bare metal to Bare metal • Bridged VM to Bridged VM • Tunneled VM to Tunneled VM • Across the Network Gateway • Bare metal to Bare metal (outside the cloud) • Bridged VM to Bare metal (outside the cloud) • Tunneled VM to Bare metal (outside the cloud)

  18. Hypervisor, VM and OS details • Compute Hypervisors • 2 sockets, 16 cores/socket SandyBridge @ 2.6GHz (32 Hyper Threaded) • 2 x 10G ports (Intel PCIe) • RAM : 256GB • Disk: 4 x 600GB in RAID-10 • RHEL 6.4 running OVS • VM • vCPUs: 2 • RAM: 8GB • Disk: 20GB • RHEL 6.4

  19. Test setup (diagram): half rack with two fault zones; L3 gateways for overlays; the racks are addressed from the X.X.X.X/23 and Y.Y.Y.Y/23 subnets.

  20. Testing methodology • Tunneled VM uses STT (OVS) • Bridged VM uses Flat Network (OVS) • Used nttcp 1.47 for throughput • Bi-directional TCP with varying buffer size • Buffer size in bytes: [64, … 65536] • MTU size: 1500 bytes (on both bare metal and VMs) • Used ping for latency measurement (60 samples) • Used Python scripts and paramiko to run the tests • Tests done with other traffic present (Dev/QA) • Around 470+ active VMs • Around 100 hypervisors • Multiple half racks
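
A simplified stand-in for the test driver (the original used Python scripts with paramiko); the target host and buffer list are placeholders, and the nttcp flag usage follows classic ttcp conventions, so treat it as an assumption.

    # Sweep TCP throughput over the buffer sizes from the slide, then take
    # 60 ping samples for latency; 10.x.x.101 is an assumed peer address.
    TARGET=10.x.x.101
    for buf in 64 128 256 512 1024 2048 4096 8192 16384 32768 65536; do
        nttcp -t -l "$buf" "$TARGET"    # transmit side; peer runs the receiver
    done
    ping -c 60 "$TARGET"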

  21. Test setup for same rack

  22. Within a rack (L2 switching): throughput

  23. Within a rack (L2 switching): ping latency

  24. Analysis • Observations • Results for buffer size < MTU size • Tunneled VMs tend to have the best overall throughput • Bridged VMs tend to do better than bare metal • OVS and tunnel optimizations at play • Results for buffer size > MTU size • Tunneled VMs and bare metal perform about the same • Bridged VMs beat both bare metal and tunneled VMs (??) • OVS and tunnel optimizations apply for buffer sizes smaller than MTU • OVS optimizations apply for buffer sizes greater than MTU • Tunneled and Bridged VMs have a slightly higher latency than bare metal

  25. Test setup across racks

  26. Across racks (L3 switching): throughput

  27. Across racks (L3 switching): ping latency

  28. Analysis • No bridged VMs in the tests (setup problem) • Results for buffer size < MTU size • Tunneled VMs tend to have the best overall throughput • OVS and tunnel optimizations at play • Results for buffer size > MTU size • Tunneled VMs and bare metal perform about the same • OVS and tunnel optimizations apply for buffer sizes smaller than MTU • Tunneled and Bridged VMs have a slightly higher latency than bare metal

  29. Test setup across L3 gateway

  30. Across network gateway: throughput

  31. Across network gateway: ping latency

  32. Analysis • Tunneled VMs tend to have similar if not better throughput than bare metal or bridged VMs • Tunneled VMs have a slightly higher latency • Bridged VMs tend to have the same overall throughput as the hypervisor • Bridged VMs tend to have the same latency as the hypervisor • Latency from a tunneled VM across the L3 gateway is higher than from physical VMs due to extra hops, but the tests need to be re-run

  33. Conclusion & Future work • Understand your network requirements • Latency, bandwidth/throughput, flexibility • Overlay vs Physical • Hybrid Mode • Performance Analysis • Make your deployment patterns simple and repeatable • Future work • Additional performance tests • VXLAN, NVGRE • Varying MTU size • Setup without background traffic • Let me know if you are interested in collaborating

  34. Thank you vbannai@paypal.com
