
Accelerating OpenFlow Switching With Network Processors

Yan Luo*, Eric Murray*, Pablo Cascon†, Julio Ortega†
*Department of Electrical and Computer Engineering, University of Massachusetts-Lowell, Lowell, MA
†Department of Computer Engineering, University of Granada, Spain



Presentation Transcript


Overview

OpenFlow switching enables flexible management of enterprise network switches and experiments on regular network traffic. We have implemented an OpenFlow switching design that uses a network-processor-based acceleration card. The Netronome NFE-i8000 network acceleration card used in our design reduced CPU usage by factors of 2.8 and 5 (depending on traffic type) and increased throughput by up to 67%.

OpenFlow on PC
• The flow table exists on the computer acting as an OpenFlow switch.
• When the first packet of a new flow comes in, a flow table entry is created.
• Actions can be assigned for each flow.
• New packets coming in are looked up in the flow table.
• If a match is found, the specified action is taken.
Source: http://www.openflowswitch.org/documents/openflow-wp-latest.pdf

OpenFlow on NP
• A separate flow table exists in SRAM on the NP.
• The first packet of a new flow is sent to the host PC over the PCIe bus.
• The PC adds the entry to its own table.
• A message is sent back to the NP to add that entry to the NP flow table.
• New packets coming in are looked up in the NP flow table.
• If a match is found, the NP processes the packet without forwarding it to the host (see the lookup sketch below).

Our Design (the numbered steps correspond to the callouts in the OpenFlow NP Architecture figure)
(1) Packets enter the NP's Ethernet interface to be processed by the Microengines.
(2) Message passing takes place over PCIe and through a VNIC in user space.
(3) The OpenFlow switch software manipulates its flow table normally.
(4) Messages are passed back down to the NP, and the new entries are stored in SRAM.
(5) Packets continue to be processed on the NP until they are transmitted to their destination.

Software Components
• The OpenFlow software runs as a kernel module on the host PC.
• The Netronome Flow Driver is used to run the NP microcode that implements the card's flow table.
• A kernel module was created to send messages between the host and the NP over PCIe whenever the flow table on the host was updated (a sketch of such a message appears below).

Experiment Process and Measurement
Three PCs were used, as described in the experiment setup figure. The two end PCs were connected directly to the network processor card of the PC running the OpenFlow switch software. Packets were generated by Computer A with packETH and sent through the switch to Computer B. Throughput was measured with tcpstat on Computer B.

Results (figures): Throughput Results, Round Trip Time, CPU Usage Reduction.
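To make the NP fast path concrete, below is a minimal C sketch of the lookup decision described under OpenFlow on NP: a hit in the NP flow table lets the Microengines handle the packet locally, while a miss means the first packet of a new flow is punted to the host over PCIe. All names (flow_key, flow_entry, np_lookup, np_install_entry), the table size, and the hash are hypothetical illustrations, not the actual Netronome SRAM layout or microcode API.

```c
/*
 * Hypothetical sketch of the NP-side flow table and lookup.
 * Names, layout, table size, and hash are illustrative only and do not
 * reflect the actual Netronome SRAM format or Microengine code.
 */
#include <stdint.h>
#include <stdio.h>

#define FLOW_TABLE_SIZE 4096

struct flow_key {                    /* simplified OpenFlow match fields */
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

struct flow_entry {
    struct flow_key key;
    uint8_t valid;
    uint8_t action;                  /* e.g., output port chosen by the host */
};

static struct flow_entry flow_table[FLOW_TABLE_SIZE];   /* stands in for NP SRAM */

static uint32_t flow_hash(const struct flow_key *k)
{
    /* toy hash; a real design would use a hardware hash unit or a CRC */
    return (k->src_ip ^ k->dst_ip ^ k->src_port ^ k->dst_port ^ k->proto)
           % FLOW_TABLE_SIZE;
}

static int key_eq(const struct flow_key *a, const struct flow_key *b)
{
    return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
           a->src_port == b->src_port && a->dst_port == b->dst_port &&
           a->proto == b->proto;
}

/* Fast path: return the matching entry, or NULL if this packet starts a
 * new flow and must be sent to the host PC over PCIe. */
static struct flow_entry *np_lookup(const struct flow_key *k)
{
    struct flow_entry *e = &flow_table[flow_hash(k)];
    return (e->valid && key_eq(&e->key, k)) ? e : NULL;
}

/* Called when the host sends back an "add entry" message (design step 4). */
static void np_install_entry(const struct flow_key *k, uint8_t action)
{
    struct flow_entry *e = &flow_table[flow_hash(k)];
    e->key = *k;
    e->action = action;
    e->valid = 1;
}

int main(void)
{
    struct flow_key k = { 0x0a000001, 0x0a000002, 1234, 80, 6 };  /* a TCP flow */

    if (!np_lookup(&k)) {            /* first packet: miss, punt to host */
        np_install_entry(&k, 1);     /* host replies: install action "port 1" */
    }
    printf("second packet matched: %s\n", np_lookup(&k) ? "yes" : "no");
    return 0;
}
```

A real implementation would also handle hash collisions, entry expiration, and per-flow counters; this sketch only shows the match-or-punt decision.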
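The host-to-NP direction (steps (3) and (4) of Our Design, implemented by the custom kernel module listed under Software Components) can be pictured as a small message crossing PCIe whenever the host flow table changes. The message layout and the np_send_msg transport below are hypothetical stand-ins; the real module passes messages through the Netronome Flow Driver and the VNIC.

```c
/*
 * Hypothetical sketch of the host-to-NP "add flow" message sent over PCIe
 * whenever the host flow table is updated. The layout and np_send_msg are
 * illustrative stand-ins; the real kernel module talks to the Netronome
 * Flow Driver / VNIC message-passing interface.
 */
#include <stdint.h>

enum np_msg_type {
    NP_MSG_ADD_FLOW = 1,
    NP_MSG_DEL_FLOW = 2,
};

struct np_flow_msg {
    uint8_t  type;                   /* enum np_msg_type */
    uint8_t  action;                 /* action assigned by the OpenFlow software */
    uint8_t  proto;
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
} __attribute__((packed));

/* Placeholder transport (hypothetical): in the actual design this would go
 * through the Netronome driver's PCIe messaging interface. */
static int np_send_msg(const void *buf, unsigned int len)
{
    (void)buf;
    (void)len;
    return 0;
}

/* Hook invoked after the OpenFlow software on the host adds a flow table
 * entry (design step 3), so the NP can mirror it in SRAM (design step 4). */
int notify_np_add_flow(uint32_t src_ip, uint32_t dst_ip,
                       uint16_t src_port, uint16_t dst_port,
                       uint8_t proto, uint8_t action)
{
    struct np_flow_msg m = {
        .type     = NP_MSG_ADD_FLOW,
        .action   = action,
        .proto    = proto,
        .src_ip   = src_ip,
        .dst_ip   = dst_ip,
        .src_port = src_port,
        .dst_port = dst_port,
    };
    return np_send_msg(&m, sizeof(m));
}
```

A delete message (NP_MSG_DEL_FLOW) would follow the same pattern when an entry is removed or expires on the host.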
