
Packet Caches on Routers: The Implications of Universal Redundant Traffic Elimination

Ashok Anand, Archit Gupta, Aditya Akella (University of Wisconsin, Madison); Srinivasan Seshan (Carnegie Mellon University); Scott Shenker (University of California, Berkeley).


Presentation Transcript


  1. Packet Caches on Routers: The Implications of Universal Redundant Traffic Elimination Ashok Anand, Archit Gupta, Aditya Akella (University of Wisconsin, Madison); Srinivasan Seshan (Carnegie Mellon University); Scott Shenker (University of California, Berkeley)

  2. Redundant Traffic in the Internet • Lots of redundant traffic in the Internet • Redundancy due to… • Identical objects • Partial content match (e.g. page banners) • Application headers • … [Figure: the same content traverses the same set of links at time T and again at time T + 5]

  3. Redundancy Elimination • Object-level caching • Application-layer approaches such as Web proxy caches • Store static objects in a local cache • [Summary Cache: SIGCOMM 98, Co-operative Caching: SOSP 99] • Packet-level caching • [Spring et al.: SIGCOMM 00] • WAN optimization products: Riverbed, Peribit, Packeteer, … [Figure: packet caches on both ends of an enterprise Internet access link] • Packet-level caching is better than object-level caching

  4. Benefits of Redundancy Elimination • Reduces bandwidth usage cost • Reduces network congestion at access links • Higher throughputs • Reduces transfer completion times

  5. Towards Universal RE • However, existing RE approaches apply only to point deployments • E.g., at stub network access links, or between branch offices • They benefit only the systems they are directly attached to • Why not make RE a native network service that everyone can use?

  6. Our Contribution • Universal redundancy elimination on routers is beneficial • Re-designing the routing protocol to be redundancy-aware gives further benefits • Redundancy elimination is practical to implement at high speeds

  7. Universal Redundancy Elimination at All Routers • Packet cache at every router • Upstream router removes redundant bytes; downstream router reconstructs the full packet (a minimal reconstruction sketch follows below) [Figure: Wisconsin sending to Berkeley and CMU over Internet2] • Total packets without RE = 18; with universal RE = 12 (ignoring tiny packets), a 33% reduction
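The slide only states the mechanism, so here is a minimal sketch of the downstream reconstruction step. The shim format (a list of literal regions and references into already-cached packets) and all names are illustrative assumptions, not the paper's on-the-wire encoding.

# Minimal sketch of reconstruction at a downstream router. The encoded packet
# is modeled as a list of regions: literal bytes, or references into packets
# the downstream cache already holds (the region format is an assumption made
# for illustration, not the paper's exact encoding).
def reconstruct(packet_cache, regions):
    payload = bytearray()
    for r in regions:
        if r["kind"] == "literal":
            payload += r["bytes"]
        else:  # reference to bytes the upstream router removed
            cached = packet_cache[r["pkt_id"]]
            payload += cached[r["offset"]:r["offset"] + r["len"]]
    return bytes(payload)

# Example: upstream replaced 1000 redundant bytes with a small reference;
# the downstream router expands the packet back to full size.
cache = {42: b"A" * 1500}
regions = [{"kind": "literal", "bytes": b"hdr"},
           {"kind": "ref", "pkt_id": 42, "offset": 0, "len": 1000}]
assert len(reconstruct(cache, regions)) == 1003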

  8. Benefits of Universal Redundancy Elimination • Subsumes benefits of point deployments • Also benefits Internet Service Providers • Reduces total traffic carried → better traffic engineering • Better responsiveness to sudden overload (e.g. flash crowds) • Re-design network protocols with redundancy elimination in mind → further enhance the benefits of universal RE

  9. Redundancy-Aware Routing • The ISP needs information about the traffic similarity between the CMU and Berkeley flows • The ISP needs to compute redundancy-aware routes [Figure: Wisconsin sending to Berkeley and CMU] • Total packets with RE = 12; with RE + redundancy-aware routing = 10 (a further 20% benefit; 45% overall vs. no RE)

  10. Redundancy-Aware Routing • Intra-domain routing for an ISP • Every N minutes • Each border router computes a redundancy profile over the first T seconds of the N-minute interval • Estimates how traffic is replicated across other border routers • High-speed algorithm for computing profiles • Centrally compute redundancy-aware routes • Route traffic for the next N minutes on the redundancy-aware routes • Redundancy elimination is applied hop-by-hop (a profile-construction sketch follows below)
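To make the profile step concrete, here is a small sketch of what an ingress border router could accumulate during the measurement window. Representing traffic as per-egress sets of content-chunk fingerprints is a simplification of mine; the paper describes its own high-speed sampling scheme.

from collections import defaultdict

# Sketch: summarize, for traffic entering at one border router, how much
# content is destined to each egress and how much is duplicated across
# pairs of egresses. Fingerprint sets stand in for sampled packet content.
def build_profile(samples):
    """samples: iterable of (egress_router, set_of_chunk_fingerprints).
    Returns per-egress distinct-chunk counts and pairwise shared-chunk
    counts; content unique to A w.r.t. B is distinct[A] - shared[(A, B)]."""
    seen_at = defaultdict(set)                  # egress -> fingerprints seen
    for egress, fps in samples:
        seen_at[egress] |= fps
    distinct = {e: len(fps) for e, fps in seen_at.items()}
    shared, egresses = {}, list(seen_at)
    for i, a in enumerate(egresses):
        for b in egresses[i + 1:]:
            shared[(a, b)] = len(seen_at[a] & seen_at[b])
    return distinct, shared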

  11. Redundancy Profile Example [Figure: Wisconsin sending to Berkeley and CMU (Pittsburgh) over Internet2] • Data_unique,Pittsburgh = 30 KB, Data_unique,Berkeley = 30 KB, Data_shared = 20 KB • Total_CMU = 50 KB, Total_Berkeley = 50 KB (a worked reading of these numbers follows below)
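One way to read these numbers, consistent with the "further 20% benefit" shown on the previous slide: if redundancy-aware routes keep the CMU and Berkeley traffic on a common upstream segment, hop-by-hop RE carries the 20 KB of shared content only once there.

% Worked reading of the example profile (my arithmetic, not from the slide):
% bytes carried on a shared upstream segment with RE applied
30\,\mathrm{KB} + 30\,\mathrm{KB} + 20\,\mathrm{KB} = 80\,\mathrm{KB}
% versus the two flows routed on disjoint paths, each carrying its full 50 KB
50\,\mathrm{KB} + 50\,\mathrm{KB} = 100\,\mathrm{KB}
% i.e. roughly a 20% reduction on that portion of the path.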

  12. Centralized Route Computation • Linear program • Objective: minimize the total traffic footprint on ISP links • The traffic footprint on a link is the link latency times the total unique content carried by the link • Compute narrow, deep trees that aggregate redundant traffic as much as possible • Impose flow conservation and capacity constraints (a schematic form of the LP is sketched below) [Figure: a centralized platform performs the route computation]
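A schematic form of the optimization, written in notation of my own; the paper's actual LP ties the per-link unique-content terms to the measured redundancy profiles and linearizes them, which is omitted here.

% Schematic only: E is the set of ISP links, lat_e the link latency,
% u_e the unique bytes a link carries after hop-by-hop RE, cap_e its capacity.
\begin{aligned}
\min \quad & \sum_{e \in E} \mathrm{lat}_e \cdot u_e
  && \text{(latency-weighted footprint)} \\
\text{s.t.} \quad & u_e \text{ determined by the chosen routes and the redundancy profiles}, \\
& \text{flow conservation at every router for each ingress--egress demand}, \\
& u_e \le \mathrm{cap}_e \quad \forall e \in E .
\end{aligned}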

  13. Inter-domain Routing • ISP selects neighbor AS and the border router for each destination • Goal: minimize impact of inter-domain traffic on intra-domain links and peering links. • Challenges: • Need to consider AS relationships, peering locations, route announcements • Compute redundancy profiles across destination ASes • Details in paper

  14. Trace-Based Evaluation • Trace-based study • RE + Routing: redundancy-aware routing • RE: shortest-path routing with redundancy elimination • Baseline: shortest-path routing without redundancy elimination • Packet traces • Collected at the University of Wisconsin access link • Separately captured outgoing traffic from a group of high-volume Web servers at the University of Wisconsin • Represents a moderate-sized data center • Rocketfuel ISP topologies • Results shown are for intra-domain routing on the Web server trace

  15. Benefits in Total Network Footprint • Average redundancy of this Web server trace is 50% using a 2 GB cache • AT&T topology • 2 GB cache per router • CDF of reduction in network footprint across AT&T border routers • RE gives a reduction of 10-35% • RE + Routing gives a reduction of 20-45%

  16. When is RE + Routing Beneficial? • Topology effect • E.g., multiple multi-hop paths between pairs of border routers • Redundancy profile • A lot of duplication across border routers

  17. Synthetic Trace-Based Study • Synthetic traces cover a wide range of situations • Duplicates striped across border routers in the ISP (inter-flow redundancy) • Low striping across border routers, but high redundancy within the traffic to a single border router (intra-flow redundancy) • Helps understand the topology effect

  18. Benefits in Total Network Footprint • Synthetic trace, average redundancy = 50% • AT&T (AS 7018) topology • Trace assumed to enter at Seattle • RE + Routing is close to RE at high intra-flow redundancy (about 50% benefit) • RE gives an 8% benefit at zero intra-flow redundancy • RE + Routing gives a 26% benefit at zero intra-flow redundancy

  19. Benefits in Max Link Utilization • Link capacities are either 2.5 or 10 Gbps • Comparison against traditional OSPF-based traffic engineering (SP-MaxLoad) • Max link utilization for SP-MaxLoad = 80% • RE offers 1-25% lower maximum link load • RE + Routing offers 10-37% lower maximum link load

  20. Evaluation Summary • RE significantly reduces network footprint • RE significantly improves traffic engineering objectives • RE + Routing further enhances these benefits • Highly beneficial for flash crowd situations • Highly beneficial in inter-domain traffic engineering

  21. Implementing RE on Routers [Figure: sampled fingerprints index a fingerprint table, whose entries point into a packet store] • Main operations • Fingerprint computation • Easy; can be done with CRC • Memory operations: reads and writes (a fingerprinting sketch follows below)
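A small sketch of per-packet fingerprinting with value-based sampling. The window size, sampling mask, and use of zlib.crc32 over each window are illustrative assumptions; a router would use a rolling hash or CRC hardware rather than recomputing a checksum per window.

import zlib

WINDOW = 64          # bytes covered by each fingerprint (assumed size)
SAMPLE_MASK = 0x1F   # keep roughly 1 in 32 fingerprints (assumed rate)

# Compute fingerprints over sliding windows of the payload and keep only a
# small, value-sampled subset, capped per packet as on the next slide (<10).
def fingerprints(payload, max_fps=10):
    fps = []
    for i in range(len(payload) - WINDOW + 1):
        fp = zlib.crc32(payload[i:i + WINDOW])
        if fp & SAMPLE_MASK == 0:        # value-based sampling
            fps.append((fp, i))          # keep the offset for match encoding
            if len(fps) == max_fps:
                break
    return fps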

  22. High-Speed Implementation • Reduced the number of memory operations per packet • Fixed number of fingerprints (<10 per packet) • Used lazy invalidation of fingerprints on packet eviction • Other optimizations in the paper • Click-based software prototype runs at 2.3 Gbps (approximately OC-48 speed)

  23. Summary • RE at every router is beneficial (10-50%) • Further benefits (10-25%) from redesigning the routing protocol to be redundancy-aware • OC-48 speed attainable in software

  24. Thank you

  25. Backup

  26. Flash Crowd Simulation • Flash crowd: volume increases at one of the border routers • Redundancy (20% -> 50%) • Inter-redundancy fraction (0.5 -> 0.75) • Max link utilization without RE is 50% • Traditional OSPF traffic engineering drives links to 95% utilization at a volume increase factor > 3.5 • SP-RE stays at 85%, and RA (redundancy-aware routing) lower still at 75%

  27. Impact of Stale Redundancy Profiles • RA relies on redundancy profiles • How stable are these redundancy profiles? • Used the same profile to compute the reduction in network footprint at later times (within an hour) • RA-stale is quite close to RA

  28. High-Speed Implementation • Use specialized hardware for fingerprint computation • Reduced the number of memory operations per packet • The number of memory operations is a function of the number of fingerprints; fixed the number of sampled fingerprints • Explicitly invalidating a packet's fingerprints when it is evicted requires memory operations; used lazy invalidation instead (a sketch follows below) • On lookup, a fingerprint pointer is checked for validity as well as existence • Store the packet store and fingerprint table in DRAM for high speed • Used a Cuckoo hash table, since a simple hash-based fingerprint table is too large to fit in DRAM
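To illustrate lazy invalidation, here is a sketch in which the packet store is a FIFO window of sequence numbers and each fingerprint entry records the sequence number of the packet it points to; on lookup, an entry is treated as valid only if that packet is still inside the window. The structure and field names are illustrative, not the paper's data layout.

# Sketch of a FIFO packet store with lazy fingerprint invalidation: eviction
# never touches the fingerprint table; stale pointers are detected at lookup.
class PacketStore:
    def __init__(self, capacity):
        self.capacity = capacity
        self.next_seq = 0          # sequence number assigned to the next insert
        self.packets = {}          # seq -> payload, a sliding FIFO window

    def insert(self, payload):
        seq = self.next_seq
        self.packets[seq] = payload
        self.next_seq += 1
        self.packets.pop(seq - self.capacity, None)  # drop oldest; no FP cleanup
        return seq

    def is_live(self, seq):
        return seq >= self.next_seq - self.capacity

def lookup(fp_table, store, fp):
    entry = fp_table.get(fp)                   # entry is (seq, offset) or None
    if entry is None or not store.is_live(entry[0]):
        return None                            # missing or lazily invalidated
    return entry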

  29. Base Implementation Details (Spring et al.) • Compute fingerprints per packet and sample them • Insert the packet into the packet store • Check whether a fingerprint already points to a stored packet (match detection) • Encode the matched region in the packet • Insert each fingerprint into the fingerprint table • As the store becomes full, evict packets in FIFO order • As a packet is evicted, invalidate its corresponding fingerprint pointers (a compact sketch of this loop follows below)
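A compact sketch of this per-packet loop, reusing the fingerprints(), PacketStore, and lookup() sketches above (so eviction is lazy rather than eager as on this slide). Matches are encoded only over the fingerprinted window; the real scheme also expands matches byte-by-byte around the matched window, which is omitted here.

# Sketch of the base encoding loop: fingerprint and sample, detect matches via
# the fingerprint table, replace matched windows with references, then index
# the new packet. The region format matches the reconstruction sketch earlier.
def process_packet(payload, fp_table, store):
    fps = fingerprints(payload)
    regions, literal_start = [], 0
    for fp, offset in fps:
        hit = lookup(fp_table, store, fp)
        if hit and offset >= literal_start:            # non-overlapping matches
            seq, cached_off = hit
            if offset > literal_start:
                regions.append({"kind": "literal",
                                "bytes": payload[literal_start:offset]})
            regions.append({"kind": "ref", "pkt_id": seq,
                            "offset": cached_off, "len": WINDOW})
            literal_start = offset + WINDOW
    regions.append({"kind": "literal", "bytes": payload[literal_start:]})
    my_seq = store.insert(payload)                     # FIFO insert
    for fp, offset in fps:
        fp_table[fp] = (my_seq, offset)                # index the new packet
    return regions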
