
High-Level Abstractions for Programming Software Defined Networks

Presentation Transcript


  1. High-Level Abstractions for Programming Software Defined Networks Jennifer Rexford Princeton University http://www.cs.princeton.edu/~jrex Joint with Nate Foster, David Walker, Arjun Guha, Rob Harrison, Chris Monsanto, Joshua Reich, Mark Reitblatt, Cole Schlesinger

  2. Software Defined Networks

  3. Software Defined Networks decouple control and data planes

  4. Software Defined Networks decouple control and data planesby providing open standard API

  5. (Logically) Centralized Controller Controller Platform

  6. Protocols → Applications Controller Application Controller Platform

  7. Payoff • Cheaper equipment • Faster innovation • Easier management

  8. But How Should We Program SDNs? Network-wide visibility and control Controller Application Controller Platform Direct control via open interface Today’s controller APIs are tied to the underlying hardware

  9. OpenFlow Networks

  10. Data Plane: Packet Handling • Simple packet-handling rules • Pattern: match packet header bits • Actions: drop, forward, modify, send to controller • Priority: disambiguate overlapping patterns • Counters: #bytes and #packets • 1. src=1.2.*.*, dest=3.4.5.* → drop • 2. src=*.*.*.*, dest=3.4.*.* → forward(2) • 3. src=10.1.2.3, dest=*.*.*.* → send to controller
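
A minimal Python sketch of how a switch picks the highest-priority matching rule. This is plain Python, not OpenFlow or controller code; the rule set and field names mirror the slide, and the priority values are illustrative assumptions.

```python
# Minimal sketch of priority-based rule matching, mirroring the example rules
# on this slide. Plain Python only; priorities here are illustrative.

def matches(pattern, packet):
    """A pattern matches if every field agrees octet by octet, with '*' as wildcard."""
    for field, want in pattern.items():
        for w, g in zip(want.split('.'), packet[field].split('.')):
            if w != '*' and w != g:
                return False
    return True

rules = [  # (priority, pattern, action); higher priority wins on overlap
    (3, {'src': '10.1.2.3', 'dest': '*.*.*.*'}, 'send to controller'),
    (2, {'src': '1.2.*.*',  'dest': '3.4.5.*'}, 'drop'),
    (1, {'src': '*.*.*.*',  'dest': '3.4.*.*'}, 'forward(2)'),
]

def handle(packet):
    for _, pattern, action in sorted(rules, key=lambda r: r[0], reverse=True):
        if matches(pattern, packet):
            return action
    return 'send to controller'  # table miss

print(handle({'src': '1.2.9.9', 'dest': '3.4.5.6'}))  # drop
print(handle({'src': '9.9.9.9', 'dest': '3.4.7.7'}))  # forward(2)
```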

  11. Control Plane: Programmability Controller Application Controller Platform • Events from switches: topology changes, traffic statistics, arriving packets • Commands to switches: (un)install rules, query statistics, send packets

  12. E.g.: Server Load Balancing • Pre-install load-balancing policy • Split traffic based on source IP src=0* src=1*

  13. Seamless Mobility/Migration • See host sending traffic at new location • Modify rules to reroute the traffic

  14. Programming Abstractions for Software Defined Networks

  15. Network Control Loop Compute Policy Write policy Read state OpenFlow Switches

  16. Reading State SQL-Like Query Language

  17. Reading State: Multiple Rules • Traffic counters • Each rule counts bytes and packets • Controller can poll the counters • Multiple rules • E.g., Web server traffic except for source 1.2.3.4 • Solution: predicates • E.g., (srcip != 1.2.3.4) && (srcport == 80) • Run-time system translates into switch patterns: 1. srcip = 1.2.3.4, srcport = 80; 2. srcport = 80
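
A hedged sketch of the idea (the dict-based rule format is invented for illustration, not the actual Frenetic run-time): the negated predicate is compiled into a higher-priority "shadow" rule plus a lower-priority counting rule, since switch patterns cannot express "!=" directly.

```python
# Sketch: compiling (srcip != 1.2.3.4) && (srcport == 80) into two prioritized
# switch rules. The rule format is invented for this example.

def compile_negation(excluded_srcip, srcport):
    return [
        # Higher priority: shadow the excluded source so its packets never
        # reach the counting rule below.
        {'priority': 2, 'match': {'srcip': excluded_srcip, 'srcport': srcport},
         'count': False},
        # Lower priority: count all remaining traffic on that port.
        {'priority': 1, 'match': {'srcport': srcport}, 'count': True},
    ]

for rule in compile_negation('1.2.3.4', 80):
    print(rule)
# The controller then polls only the counters of the lower-priority rule.
```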

  18. Reading State: Unfolding Rules • Limited number of rules • Switches have limited space for rules • Cannot install all possible patterns • Must add new rules as traffic arrives • E.g., histogram of traffic by IP address • … packet arrives from source 5.6.7.8 • Solution: dynamic unfolding • Programmer specifies GroupBy(srcip) • Run-time system dynamically adds rules, e.g., the table [1. srcip = 1.2.3.4] grows to [1. srcip = 1.2.3.4, 2. srcip = 5.6.7.8]
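
A small sketch of dynamic unfolding (function and data-structure names are invented, not Frenetic's API): the run-time installs a per-source rule the first time a packet from an unknown source reaches the controller, so later packets are counted on the switch.

```python
# Sketch of dynamic unfolding for GroupBy(srcip): grow the flow table one
# exact-match rule per newly seen source IP. All names are illustrative.

flow_table = {}  # srcip -> rule, standing in for the switch's rule space

def on_packet_in(packet):
    srcip = packet['srcip']
    if srcip not in flow_table:                   # first packet from this source
        flow_table[srcip] = {'match': {'srcip': srcip}, 'action': 'count'}

for pkt in [{'srcip': '1.2.3.4'}, {'srcip': '5.6.7.8'}, {'srcip': '1.2.3.4'}]:
    on_packet_in(pkt)

print(sorted(flow_table))  # ['1.2.3.4', '5.6.7.8']
```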

  19. Reading: Extra Unexpected Events • Common programming idiom • First packet goes to the controller • Controller application installs rules

  20. Reading: Extra Unexpected Events • More packets arrive before rules installed? • Multiple packets reach the controller

  21. Reading: Extra Unexpected Events • Solution: suppress extra events • Programmer specifies “Limit(1)” • Run-time system hides the extra events (not seen by the application)
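
A sketch of how the run-time might suppress the duplicate events (the callback-based interface here is invented for illustration): at most one packet-in per group is delivered to the application, matching Limit(1).

```python
# Sketch of Limit(1): the run-time delivers only the first packet-in event
# per group (here, per source MAC) and hides later duplicates.

delivered = set()

def runtime_packet_in(packet, app_callback):
    key = packet['srcmac']           # grouping key, e.g. GroupBy([srcmac])
    if key not in delivered:
        delivered.add(key)
        app_callback(packet)         # only the first event reaches the app

events = []
for pkt in [{'srcmac': 'aa:aa'}, {'srcmac': 'aa:aa'}, {'srcmac': 'bb:bb'}]:
    runtime_packet_in(pkt, events.append)

print(len(events))  # 2: the duplicate from 'aa:aa' is hidden from the application
```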

  22. Frenetic SQL-Like Query Language • Get what you ask for • Nothing more, nothing less • SQL-like query language • Familiar abstraction • Returns a stream • Intuitive cost model • Minimize controller overhead • Filter using high-level patterns • Limit the # of values returned • Aggregate by #/size of packets • Traffic monitoring: Select(bytes) * Where(in:2 & srcport:80) * GroupBy([dstmac]) * Every(60) • Learning host location: Select(packets) * GroupBy([srcmac]) * SplitWhen([inport]) * Limit(1)

  23. Computing Policy Parallel and Sequential Composition Abstract Topology Views

  24. Combining Many Networking Tasks Monolithic application Monitor + Route + FW + LB Controller Platform Hard to program, test, debug, reuse, port, …

  25. Modular Controller Applications A module for each task Monitor Route FW LB Controller Platform Easier to program, test, and debug Greater reusability and portability

  26. Beyond Multi-Tenancy • Each module controls a different portion of the traffic (Slice 1, Slice 2, …, Slice n) • Relatively easy to partition rule space, link bandwidth, and network events across modules

  27. Modules Affect the Same Traffic Each module partially specifies the handling of the traffic FW LB Monitor Route Controller Platform How to combine modules into a complete application?

  28. Parallel Composition [ICFP’11, POPL’12] • Monitor on source IP: srcip = 5.6.7.8 → count; srcip = 5.6.7.9 → count • + Route on dest prefix: dstip = 1.2/16 → fwd(1); dstip = 3.4.5/24 → fwd(2) • Combined: srcip = 5.6.7.8, dstip = 1.2/16 → fwd(1), count; srcip = 5.6.7.8, dstip = 3.4.5/24 → fwd(2), count; srcip = 5.6.7.9, dstip = 1.2/16 → fwd(1), count; srcip = 5.6.7.9, dstip = 3.4.5/24 → fwd(2), count
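
A simplified sketch of parallel composition as a rule cross-product (illustrative only; the real compiler also handles overlapping patterns, priorities, and conflicts). Each output rule matches the intersection of one monitoring pattern and one routing pattern and performs both actions, reproducing the table on the slide.

```python
# Sketch: parallel composition of a monitoring policy and a routing policy.
# Patterns here use disjoint header fields, so "intersection" is just a merge.

monitor = [({'srcip': '5.6.7.8'}, ['count']),
           ({'srcip': '5.6.7.9'}, ['count'])]
route   = [({'dstip': '1.2/16'},   ['fwd(1)']),
           ({'dstip': '3.4.5/24'}, ['fwd(2)'])]

def parallel(p1, p2):
    combined = []
    for m1, acts1 in p1:
        for m2, acts2 in p2:
            combined.append(({**m1, **m2}, acts1 + acts2))
    return combined

for match, actions in parallel(monitor, route):
    print(match, '->', actions)
```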

  29. Example: Server Load Balancer • Spread client traffic over server replicas • Public IP address for the service (1.2.3.4) • Split traffic based on client IP • Rewrite the server IP address • Then, route to the replica (10.0.0.1, 10.0.0.2, or 10.0.0.3)

  30. Sequential Composition [NSDI’13] • Load balancer: srcip = 0*, dstip = 1.2.3.4 → dstip = 10.0.0.1; srcip = 1*, dstip = 1.2.3.4 → dstip = 10.0.0.2 • >> Routing: dstip = 10.0.0.1 → fwd(1); dstip = 10.0.0.2 → fwd(2) • Combined: srcip = 0*, dstip = 1.2.3.4 → dstip = 10.0.0.1, fwd(1); srcip = 1*, dstip = 1.2.3.4 → dstip = 10.0.0.2, fwd(2)
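
A simplified sketch of sequential composition (illustrative; the real compiler works symbolically over patterns). Each load-balancer rule's header rewrite is fed through the routing policy, and the actions are concatenated, yielding the combined rules shown on the slide.

```python
# Sketch: sequential composition of the load balancer and routing policies.
# Routing is keyed on the dstip that the load balancer writes into the packet.

load_balancer = [({'srcip': '0*', 'dstip': '1.2.3.4'}, {'dstip': '10.0.0.1'}),
                 ({'srcip': '1*', 'dstip': '1.2.3.4'}, {'dstip': '10.0.0.2'})]
routing = {'10.0.0.1': 'fwd(1)', '10.0.0.2': 'fwd(2)'}

def sequential(lb_rules, route_table):
    combined = []
    for match, rewrite in lb_rules:
        # Routing sees the packet *after* the rewrite, so look up the new dstip.
        combined.append((match, rewrite, route_table[rewrite['dstip']]))
    return combined

for match, rewrite, fwd in sequential(load_balancer, routing):
    print(match, '->', rewrite, ',', fwd)
```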

  31. Dividing the Traffic Over Modules • Predicates • Specify which traffic traverses which modules • Based on input port and packet-header fields • E.g., Monitor + Routing handles dstport != 80; Load Balancer >> Routing handles dstport = 80

  32. High-Level Architecture M2 Composition Spec M1 M3 Controller Platform

  33. Partially Specifying Functionality • A module should not specify everything • Leave some flexibility to other modules • Avoid tying the module to a specific setting • Example: load balancer plus routing • Load balancer spreads traffic over replicas • … without regard to the network paths Routing Load Balancer >> Avoid custom interfaces between the modules

  34. Abstract Topology Views [NSDI’13] • Present abstract topology to the module • Implicitly encodes the constraints • Looks just like a normal network • Prevents the module from overstepping Real network Abstract view

  35. Separation of Concerns • Hide irrelevant details • Load balancer doesn’t see the internal topology or any routing changes Routing view Load-balancer view

  36. High-Level Architecture View Definitions M2 Composition Spec M1 M3 Controller Platform

  37. Supporting Topology Views • Virtual ports • (V, 1): [(P1, 2)] • (V, 2): [(P2, 5)] • Simple firewall policy • in=1 → out=2 • Virtual headers • Push virtual ports • Route on these ports • From (P1, 2) to (P2, 5) (figure: virtual switch V in the firewall view, mapped onto physical switches P1 and P2 in the routing view)
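
A sketch of how the run-time might lower the firewall's policy from the abstract one-switch view onto the physical topology. All data structures here are invented for illustration; the virtual-header tagging and the routing between the two physical locations are elided.

```python
# Sketch: mapping a policy written against abstract switch V onto physical
# switches P1 and P2, using the virtual-port mapping from the slide.

view_map = {('V', 1): ('P1', 2),   # virtual port -> (physical switch, port)
            ('V', 2): ('P2', 5)}

# Firewall policy on the abstract view: allow traffic from port 1 to port 2.
virtual_policy = [({'in_port': ('V', 1)}, {'out_port': ('V', 2)})]

def lower(policy, mapping):
    """Rewrite virtual ports into physical locations. A real run-time would also
    push the virtual port as a header and install routing rules along the
    physical path from (P1, 2) to (P2, 5)."""
    return [({'in': mapping[m['in_port']]}, {'deliver_to': mapping[a['out_port']]})
            for m, a in policy]

print(lower(virtual_policy, view_map))
# [({'in': ('P1', 2)}, {'deliver_to': ('P2', 5)})]
```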

  38. Writing State Consistent Updates

  39. Writing Policy: Avoiding Disruption • Invariants • No forwarding loops • No black holes • Access control • Traffic waypointing

  40. Writing Policy: Path for New Flow • Rules along a path installed out of order? • Packets reach a switch before the rules do • Must think about all possible packet and event orderings.

  41. Writing Policy: Update Semantics • Per-packet consistency • Every packet is processed by • … policy P1 or policy P2 • E.g., access control, no loops or black holes • Per-flow consistency • Sets of related packets are processed by • … policy P1 or policy P2 • E.g., server load balancer, in-order delivery, …

  42. Writing Policy: Policy Update • Simple abstraction • Update entire configuration at once • Cheap verification • If P1 and P2 satisfy an invariant • Then the invariant always holds • Run-time system handles the rest • Constructing schedule of low-level updates • Using only OpenFlow commands! P1 P2

  43. Writing Policy: Two-Phase Update • Version numbers • Stamp packet with a version number (e.g., VLAN tag) • Unobservable updates • Add rules for P2 in the interior • … matching on version # P2 • One-touch updates • Add rules to stamp packets with version # P2 at the edge • Remove old rules • Wait for some time, then remove all version # P1 rules
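
A sketch of the two-phase update schedule (illustrative pseudocode in Python; install, set_stamp, remove, and wait are placeholder callbacks, not OpenFlow library calls).

```python
# Sketch: two-phase, per-packet-consistent update from policy version 1 to 2.

def two_phase_update(interior, edge, old_ver, new_ver,
                     install, set_stamp, remove, wait):
    # Phase 1 (unobservable): add new-policy rules in the interior that match
    # only packets stamped with new_ver; in-flight old packets are unaffected.
    for sw in interior:
        install(sw, new_ver)
    # Phase 2 (one touch per edge switch): stamp incoming packets with new_ver,
    # so each packet sees either the old policy or the new one, never a mix.
    for sw in edge:
        set_stamp(sw, new_ver)
    # Garbage collection: wait for old-version packets to drain, then delete
    # the old rules everywhere.
    wait()
    for sw in interior + edge:
        remove(sw, old_ver)

two_phase_update(
    interior=['S2', 'S3'], edge=['S1'], old_ver=1, new_ver=2,
    install=lambda sw, v: print(f"install P{v} rules on {sw} (match version {v})"),
    set_stamp=lambda sw, v: print(f"{sw}: stamp incoming packets with version {v}"),
    remove=lambda sw, v: print(f"remove version {v} rules on {sw}"),
    wait=lambda: print("wait for in-flight version-1 packets to drain"))
```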

  44. Writing Policy: Optimizations • Avoid two-phase update • Naïve version touches every switch • Doubles rule space requirements • Limit scope • Portion of the traffic • Portion of the topology • Simple policy changes • Strictly adds paths • Strictly removes paths

  45. Frenetic Abstractions Policy Composition Consistent Updates SQL-like queries OpenFlow Switches

  46. Related Work • Programming languages • FRP: Yampa, FrTime, Flask, Nettle • Streaming: StreamIt, CQL, Esterel, Brooklet, GigaScope • Network protocols: NDLog • OpenFlow • Language: FML, SNAC, Resonance • Controllers: ONIX, POX, Floodlight, Nettle, FlowVisor • Testing: NICE, FlowChecker, OF-Rewind, OFLOPS • OpenFlow standardization • http://www.openflow.org/ • https://www.opennetworking.org/

  47. Conclusion • SDN is exciting • Enables innovation • Simplifies management • Rethinks networking • SDN is happening • Practice: useful APIs and good industry traction • Principles: start of higher-level abstractions • Great research opportunity • Practical impact on future networks • Placing networking on a strong foundation

  48. Frenetic Project • Programming languages meets networking • Cornell: Nate Foster, Gun Sirer, Arjun Guha, Robert Soule, Shrutarshi Basu, Mark Reitblatt, Alec Story • Princeton: Dave Walker, Jen Rexford, Josh Reich, Rob Harrison, Chris Monsanto, Cole Schlesinger, Praveen Katta, Nayden Nedev http://frenetic-lang.org Short overview at http://www.cs.princeton.edu/~jrex/papers/frenetic12.pdf
