
Serverless Network Services in OpenStack Data Centers






Presentation Transcript


  1. Serverless Network Services in OpenStack Data Centers. OpenStack Summit Boston, May 2017. Eran Gampel, CEO at Cloudigo; Erez Cohen, VP CloudX Program at Mellanox

  2. Virtual Networking • Mature: provides virtualization, isolation, advanced services, and more • But does it fit high-IO-performance use cases? • More and more use cases for high-performance virtual networking are emerging [Diagram: VMs attached to a hypervisor vSwitch over a NIC]

  3. Existing Solutions for Virtualized High-Performance IO • OVS-DPDK • VPP • SR-IOV

  4. SR-IOV • Single Root I/O Virtualization (SR-IOV) specifies a standard way of bypassing the VMM’s involvement in data movement by providing independent memory space, interrupts, and DMA streams for each virtual machine • SR-IOV allows a physical network adapter to appear as multiple PCIe network devices [Diagram: hypervisor vSwitch path vs. an SR-IOV NIC exposing a Physical Function (PF) and Virtual Functions (VFs) directly to VMs]
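On Linux, VFs are typically created through the standard sysfs interface of the PF. A minimal sketch, assuming an SR-IOV-capable NIC whose PF appears as ens1f0 (the interface name is illustrative):

```shell
# Query how many VFs this Physical Function supports
cat /sys/class/net/ens1f0/device/sriov_totalvfs

# Create 4 Virtual Functions on the PF (requires root)
echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs

# Each VF now appears as its own PCIe network device
lspci | grep -i "virtual function"
```

Each VF can then be passed through to a VM, giving it the direct hardware path described on the slide.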

  5. SR-IOV: Pros and Cons • Pros • Lower processor utilization and network latency • High IO performance • Consistent performance up to line rate • Cons • Static VF allocation; VFs rely on PF configuration • Most NICs include only limited switching capabilities for VFs on the same NIC • No local virtual services such as Security Groups (SG) [Diagram: VMs attached to Virtual Functions (VFs) on an SR-IOV NIC]

  6. High-Performance Virtualized Services: Challenges • Relatively high CPU footprint for virtual network services • Rely on the virtual switch’s features; no custom actions unless traffic is redirected to a controller • Overhead (CPU, latency and bandwidth) increases with the number of flows • For advanced services provided in a VM or appliance, traffic needs to be steered to the service [Diagram: compute node with SR-IOV and vSwitch steering traffic to a virtual services VM on a network node]

  7. Web Application Evolution - Progression to Serverless [Diagram: monolithic application (Tier 1, Tier 2, Tier 3) evolving into microservices and then serverless services]

  8. Serverless for Networking Services - Is It Possible? [Diagram: a monolithic appliance decomposed into discrete functions (L2, routing, NAT, LB, ACL, QoS, DPI, IPS, tunneling, connection tracking, shaping, HA, management) running as a serverless virtual appliance on a Smart NIC]

  9. The Rise of the Smart NIC • Modern data centers demand advanced, smart NICs • High bandwidth: 100G today, 200G this year • Low latency • Transport offloads • Kernel bypass • Advanced virtualization support • Flow-based switch • Software programmable • Smart NICs are the future • ASIC: highest performance and efficiency • ASIC + FPGA: high efficiency and flexibility • CPU / System on a Chip: most programmable

  10. Mellanox ConnectX Family Introduction: ASIC-Based Smart NIC Features • Gen 3 and Gen 4 x16 PCIe • 2 network ports: 10, 25, 40, 50 and 100G • Stateless offloads (incl. overlay offload) • Highest DPDK performance (over 131 Mpps) • Single Root IO Virtualization (SR-IOV) • HW-based QoS, high availability • Accelerated Switching and Packet Processing (ASAP2): in-host network services offload or acceleration, VNF acceleration • Remote Direct Memory Access (RDMA): storage and application transport acceleration

  11. Accelerated Switching and Packet Processing (ASAP2) • NIC contains an advanced embedded flow-based switch/router (eSwitch) • Offloads “Match -> Action” operations • HW-based classification, steering, encap/decap, header rewrite and more • Open-source, standard control APIs: TC, DPDK [Diagram: ConnectX-5]
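TC is one of the standard control APIs the slide names. A minimal sketch of offloading one “Match -> Action” rule with tc flower, assuming a VF representor ens1f0_0 and uplink ens1f0 (both names illustrative):

```shell
# Attach an ingress qdisc to the VF representor
tc qdisc add dev ens1f0_0 ingress

# Match TCP destination port 80 and redirect to the uplink;
# skip_sw asks the driver to install the rule in hardware (the eSwitch) only
tc filter add dev ens1f0_0 ingress protocol ip flower skip_sw \
    ip_proto tcp dst_port 80 \
    action mirred egress redirect dev ens1f0

# Inspect the installed rule and its statistics
tc -s filter show dev ens1f0_0 ingress
```

With skip_sw, the command fails rather than silently falling back to software if the NIC cannot offload the rule.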

  12. ASAP2 Implementation Example - OVS Offload • Zero CPU utilization on the hypervisor, compared to 2 cores with OVS over DPDK • Same CPU load on the VM
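On a kernel and NIC that support it, OVS hardware offload is enabled through a single Open vSwitch setting. A sketch (the service name varies by distribution):

```shell
# Enable hardware offload of OVS datapath flows
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true

# Restart OVS to apply (the unit may be openvswitch-switch on Debian/Ubuntu)
systemctl restart openvswitch

# Verify which datapath flows landed in hardware
ovs-appctl dpctl/dump-flows type=offloaded
```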

  13. OpenStack Virtual Services for SR-IOV • Line-rate SR-IOV including distributed stateful services: SG, Virtual Router, LB • Linear scaling up to line rate without using extra CPU resources [Diagram: logical view of a tenant overlay (virtual switches, virtual router, security groups for VM 1-3) mapped onto compute nodes running the Cloudigo engine on SR-IOV NICs, managed by the Neutron server]
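In OpenStack, a VM gets an SR-IOV VF by attaching a Neutron port with vnic_type=direct. A sketch; the network, flavor and image names are illustrative:

```shell
# Create a direct (SR-IOV) port on an existing tenant network
openstack port create --network tenant-net --vnic-type direct sriov-port

# Boot a VM on that port; Nova schedules it to a host with a free VF
openstack server create --flavor m1.small --image cirros \
    --nic port-id=sriov-port vm-with-sriov
```

With stock Neutron, security groups are not applied to such direct ports, which is exactly the gap the stateful services on this slide aim to close.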

  14. Cloudigo’s Programmable Network Infrastructure • Programmable infrastructure with built-in, ultra-efficient network services • Seamless offload of the core discrete functions to commodity HW [Diagram: Cloudigo SW programmable engine (user-defined LB, NAT, DDoS, FW, offload learning engine) on top of the ASIC layer / NIC adapter]

  15. Cloudigo - HW + SW Engine • Thin layer with minimal latency and resource usage • Zero-copy ports (SR-IOV-like) [Diagram: Cloudigo SW programmable engine (DDoS, FW, NAT, 3rd-party LB) serving VMs over SR-IOV VFs, with a Cloudigo-installed HW pipeline (routing, LB) on the NIC’s external ports]

  16. OpenStack Virtual Services for SR-IOV • Line-rate SR-IOV including distributed stateful services: SG, Virtual Router, LB • Linear scaling without using extra CPU resources [Diagram: Cloudigo engine on a ConnectX-5 running logical switches, a logical router and security groups for VM 1-3 over SR-IOV VFs, with 3rd-party DDoS/LB services and a Cloudigo-installed pipeline on the external ports]

  17. CPU Utilization, VM Density and Latency Improvement • Test scenario: virtual routing and stateful security groups for VMs • Server: 2x E5-2690v4 (14 cores each), 28 cores total • OVS-DPDK: ~8 Gbps per core, so ~12 cores for 100 Gbps; 16 cores left for VMs = 4x16 = ~64 VMs; minimal latency for the OVS-DPDK layer = 33us • SR-IOV + Cloudigo: 1 core of the Cloudigo engine for 100 Gbps; 27 cores left for VMs = 4x27 = ~108 VMs; latency for the Cloudigo layer = ~0us • Result: ~90% CPU saving, much higher VM density (108 vs. 64 VMs), and latency for virtual network services almost eliminated [Diagram: VMs over OVS-DPDK vs. VMs over Virtual Functions (VFs) with Cloudigo on a Smart NIC]
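The VM-density figures follow from simple arithmetic over the slide’s stated assumptions (28 cores, 4 VMs per core, ~8 Gbps per OVS-DPDK core, 1 Cloudigo core for 100 Gbps):

```shell
# Cores left for VMs, times 4 VMs per core
echo "OVS-DPDK VMs: $(( (28 - 12) * 4 ))"   # prints 64
echo "Cloudigo VMs: $(( (28 - 1) * 4 ))"    # prints 108
# Virtual-network CPU cost: 12 cores vs. 1 core, a ~92% saving
```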

  18. Solution Use Cases • vCPE • vBNG • … [Diagram: Cloudigo engine terminating QinQ/PPPoE tunnels and providing routing, LB and security for VMs over SR-IOV]

  19. Questions? Eran Gampel, CEO at Cloudigo (eran@cloudigo.io) Erez Cohen, VP CloudX program at Mellanox (erezc@mellanox.com)
