
NPS: A Non-interfering Web Prefetching System






Presentation Transcript


  1. NPS: A Non-interfering Web Prefetching System Ravi Kokku, Praveen Yalagandula, Arun Venkataramani, Mike Dahlin Laboratory for Advanced Systems Research Department of Computer Sciences University of Texas at Austin

  2. Summary of the Talk
  Prefetching should be done aggressively, but safely
  • Safe: non-interference with demand requests
  • Contributions:
    • A self-tuning architecture for web prefetching
      • Aggressive when spare resources are abundant
      • Safe when resources are scarce
    • NPS: a prototype prefetching system
      • Immediately deployable
  Department of Computer Sciences, UT Austin

  3. Outline
  Prefetch aggressively as well as safely
  • Motivation
  • Challenges/principles
  • NPS system design
  • Conclusion

  4. What is Web Prefetching?
  • Speculatively fetch data that will be accessed in the future
  • Typical prefetch mechanism [PM96, MC98, CZ01]
  [Diagram: client and server exchanging demand requests, responses plus hint lists, prefetch requests, and prefetch responses]

  5. Why Web Prefetching?
  • Benefits [GA93, GS95, PM96, KLM97, CB98, D99, FCL99, KD99, VYKSD01, …]
    • Reduces response times seen by users
    • Improves service availability
  • Encouraging trends
    • Numerous web applications getting deployed: news, banking, shopping, e-mail, …
    • Technology is improving rapidly: rising capacities and falling prices of disks and networks
  ⇒ Prefetch aggressively

  6. Why doesn’t everyone prefetch?
  • Prefetching consumes extra resources on servers, networks, and clients
  • Interference with demand requests
  • Two types of interference
    • Self-interference: applications hurt themselves
    • Cross-interference: applications hurt others
  • Interference at various components
    • Servers: demand requests queued behind prefetch requests
    • Networks: demand packets queued or dropped
    • Clients: caches polluted by displacing more useful data

  7. Example: Server Interference
  • Common load vs. response-time curve
  • Constant-rate prefetching reduces server capacity
  [Figure: average demand response time (s), 0 to 0.7, vs. demand connection rate (conns/sec), 100 to 800; curves for demand only, pfrate=1, and pfrate=5]
  ⇒ Prefetch aggressively, BUT SAFELY

  8. Outline
  Prefetch aggressively as well as safely
  • Motivation
  • Challenges/principles
    • Self-tuning
    • Decoupling prediction from resource management
    • End-to-end resource management
  • NPS system design
  • Conclusion

  9. Goal 1: Self-tuning System
  • Proposed solutions use “magic numbers”
    • Prefetch thresholds [D99, PM96, VYKSD01, …]
    • Rate limiting [MC98, CB98]
  • Limitations of manual tuning
    • Difficult to determine “good” thresholds
    • Good thresholds depend on spare resources
    • A “good” threshold varies over time
    • Sharp performance penalty when mistuned
  • Principle 1: Self-tuning
    • Prefetch according to spare resources
    • Benefit: simplifies application design

  10. Goal 2: Separation of Concerns
  • Prefetching has two components
    • Prediction: which objects are beneficial to prefetch?
    • Resource management: how many can we actually prefetch?
  • Traditional techniques do not differentiate
    • “Prefetch if prob(access) > 25%”
    • “Prefetch only the top 10 important URLs”
    • Wrong way! We lose the flexibility to adapt
  • Principle 2: Decouple prediction from resource management
    • Prediction: the application identifies all useful objects, in decreasing order of importance
    • Resource management: uses Principle 1
      • Aggressive when resources are abundant
      • Safe when there are no spare resources
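The decoupling on this slide can be sketched in a few lines. This is an illustrative sketch, not code from NPS: the predictor, the example URLs, and the probabilities are all assumed; the point is only that the predictor ranks every candidate without a threshold, while a separate resource manager decides how many to take.

```python
# Sketch (assumed names and data): prediction ranks ALL candidates;
# resource management separately decides how many to prefetch.

def predict(history):
    """Predictor: return every candidate URL, most valuable first.
    No probability cutoff is applied here (that would be tuning)."""
    scores = {"/news.html": 0.9, "/sports.html": 0.4, "/weather.html": 0.1}
    return [url for url, _ in sorted(scores.items(), key=lambda kv: -kv[1])]

def prefetch_plan(candidates, budget):
    """Resource manager: take as many top candidates as the current
    spare-resource budget allows; adapts as the budget changes."""
    return candidates[:max(0, budget)]

hints = predict(history=None)
print(prefetch_plan(hints, budget=2))  # abundant resources: aggressive
print(prefetch_plan(hints, budget=0))  # no spare resources: safe
```

Because the ordered hint list is independent of the budget, the same predictor output serves both the aggressive and the safe regimes.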

  11. Goal 3: Deployability
  • Ideal resource management vs. deployability
    • Servers
      • Ideal: OS scheduling of CPU, memory, disk, …
      • Problem: complexity (N-tier systems, databases, …)
    • Networks
      • Ideal: use differentiated services / router prioritization
      • Problem: every router would have to support it
    • Clients
      • Ideal: OS scheduling, transparent informed prefetching
      • Problem: millions of already-deployed browsers
  • Principle 3: End-to-end resource management
    • Server: external monitoring and control
    • Network: TCP Nice
    • Client: Javascript tricks

  12. Outline
  Prefetch aggressively as well as safely
  • Motivation
  • Principles for a prefetching system
    • Self-tuning
    • Decoupling prediction from resource management
    • End-to-end resource management
  • NPS prototype design
    • Prefetching mechanism
    • External monitoring
    • TCP Nice
  • Evaluation
  • Conclusion

  13. Prefetch Mechanism
  [Diagram: client, and a server machine hosting the munger, demand server, prefetch server, and hint server over a shared fileset; arrows show demand requests, hint lists, and prefetch requests/responses]
  1. Munger adds Javascript to HTML pages
  2. Client fetches an HTML page
  3. Javascript on the page fetches the hint list
  4. Javascript on the page prefetches the hinted objects

  14. End-to-end Monitoring and Control
  [Diagram: client, demand server, hint server, and monitor; client loop: while(1) { getHint(); prefetchHint(); }; monitor probes “GET http://repObj.html” and receives “200 OK…”; hint server logic: if (budgetLeft) send(hints); else send(“return later”);]
  • Principle: low response times ⇒ server not loaded
  • Periodic probing for response times
  • Estimation of spare resources (budget) at the server: AIMD
  • Distribution of budget
    • Controls the number of clients allowed to prefetch
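The probe-and-budget loop above can be sketched as follows. This is an approximation under assumed parameters (the response-time threshold, step sizes, and class names are all illustrative, not the paper's exact rules): fast probes additively grow the budget, a slow probe multiplicatively shrinks it, and the hint server grants hints only while budget remains.

```python
# Sketch of AIMD budget estimation driven by end-to-end probes.
# threshold_s, increase, and decrease are assumed values.

class Monitor:
    def __init__(self, threshold_s=0.1, increase=1, decrease=0.5):
        self.threshold_s = threshold_s   # "low response time" cutoff (assumed)
        self.increase = increase         # additive step when server looks idle
        self.decrease = decrease         # multiplicative back-off factor
        self.budget = 0                  # number of clients allowed to prefetch

    def on_probe(self, response_time_s):
        """Adjust the budget after each periodic probe of repObj.html."""
        if response_time_s < self.threshold_s:
            self.budget += self.increase                    # spare capacity
        else:
            self.budget = int(self.budget * self.decrease)  # loaded: back off
        return self.budget

    def handle_hint_request(self):
        """Hint-server side: spend one unit of budget per prefetching client."""
        if self.budget > 0:
            self.budget -= 1
            return "hints"
        return "return later"

m = Monitor()
for rtt in [0.02, 0.03, 0.02, 0.5]:  # three fast probes, then a slow one
    m.on_probe(rtt)
print(m.budget)  # grew to 3, then halved to 1
```

The AIMD shape mirrors TCP's congestion control: cautious growth while probes stay fast, sharp retreat the moment demand traffic shows signs of queueing.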

  15. Monitor Evaluation (1)
  • End-to-end monitoring makes prefetching safe
  [Figure: average demand response time (sec), 0 to 0.7, vs. demand connection rate (conns/sec), 0 to 800; curves for manual tuning with pfrate=5, manual tuning with pfrate=1, no-prefetching, and the monitor]

  16. Monitor Evaluation (2)
  • Manual tuning is too damaging at high load
  [Figure: bandwidth (Mbps), 0 to 80, vs. demand connection rate (conns/sec), 0 to 800; curves for no-prefetching, demand with pfrate=1, and prefetch with pfrate=1]

  17. Monitor Evaluation (2)
  • Manual tuning is too timid or too damaging
  • End-to-end monitoring is both aggressive and safe
  [Figure: same axes as the previous slide, adding demand and prefetch bandwidth curves under the monitor alongside no-prefetching and the pfrate=1 curves]

  18. Network Resource Management
  • Demand and prefetch on separate connections
  • Why is this required?
    • HTTP/1.1 persistent connections
    • In-order delivery within TCP
    • So prefetch data can delay demand data on a shared connection
  • How to ensure separation?
    • Prefetching on a separate server port
  • How to use the prefetched objects?
    • Javascript tricks (details in the paper)

  19. Network Resource Management
  • Prefetch connections use TCP Nice
  • TCP Nice
    • A mechanism for background transfers
    • End-to-end TCP congestion control
    • Monitors RTTs and backs off when congestion builds
  • Previous study [OSDI 2002]
    • Provably bounds self- and cross-interference
    • Utilizes significant spare network capacity
    • Server-side deployable
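The RTT-based back-off can be sketched as below. This is a simplified approximation of the Nice idea, not the OSDI 2002 implementation: the fraction, trigger, and window values are assumed, and real Nice operates inside the kernel's TCP stack per round trip.

```python
# Sketch of a Nice-style rule: an RTT well above the observed minimum
# signals queueing caused by competing (demand) traffic, so the
# background flow halves its congestion window.

class NiceWindow:
    def __init__(self, fraction=0.1, trigger=0.5, window=16):
        self.fraction = fraction   # how close to minRTT still counts as idle
        self.trigger = trigger     # share of marked packets that forces back-off
        self.window = window       # congestion window, in segments
        self.min_rtt = float("inf")
        self.max_rtt = 0.0

    def on_round(self, rtt_samples):
        """Examine one round of RTT samples; back off if queues are building."""
        self.min_rtt = min(self.min_rtt, min(rtt_samples))
        self.max_rtt = max(self.max_rtt, max(rtt_samples))
        cutoff = self.min_rtt + self.fraction * (self.max_rtt - self.min_rtt)
        marked = sum(1 for r in rtt_samples if r > cutoff)
        if marked > self.trigger * len(rtt_samples):
            self.window = max(1, self.window // 2)  # multiplicative back-off
        return self.window

conn = NiceWindow()
print(conn.on_round([0.020, 0.021, 0.020, 0.022]))  # RTTs near minimum: window kept
print(conn.on_round([0.200, 0.210, 0.190, 0.205]))  # RTTs inflated: window halved
```

Because the back-off triggers on early signs of queueing rather than on loss, a Nice flow retreats before demand packets are dropped, which is what bounds cross-interference.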

  20. End-to-end Evaluation
  [Diagram: demand server (Apache on port 80), prefetch server (Apache on port 8085), hint server, monitor, and client machines]
  • Measure average response times for demand requests
  • Compare with no-prefetching and hand-tuned configurations
  • Experimental setup
    • Network: cable modem, Abilene
    • Fileset/trace: IBM server
    • Client: httperf
    • Hint server: PPM prediction

  21. Prefetching with Abundant Resources
  • Both hand-tuned and NPS give benefits
  • Note: hand-tuned is tuned to its best setting

  22. Tuning the No-Avoidance Case
  • Hand-tuning takes effort
  • NPS is self-tuning

  23. Prefetching with Scarce Resources
  • Hand-tuned damages demand performance by 2-8x
  • NPS causes little damage to demand

  24. Conclusions
  • Prefetch aggressively, but safely
  • Contributions
    • A prefetching architecture
      • Self-tuning
      • Decouples prediction from resource management
      • Deployable: few modifications to existing infrastructure
  • Benefits
    • Substantial improvements with abundant resources
    • No damage with scarce resources
  • NPS prototype: http://www.cs.utexas.edu/~rkoku/RESEARCH/NPS/

  25. Thanks


  27. Client Resource Management
  • Resources: CPU, memory, and disk caches
  • Heuristics to control cache pollution
    • Limit the space prefetched objects may take
    • Short expiration times for prefetched objects
  • Mechanism to avoid CPU interference
    • Start prefetching only after all demand requests are done
  • Handles self-interference, the more common case
  • What about cross-interference?
    • Client modifications might be necessary
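The two cache-pollution heuristics above can be sketched as a small side cache. This is illustrative only, not NPS's actual client code (NPS works through Javascript in unmodified browsers); the quota, TTL, and class names are assumptions. Prefetched objects are confined to a fixed space budget and expire quickly, so they can never crowd out more than a bounded share of demand data.

```python
import time

# Sketch: a quota-limited, fast-expiring store for prefetched objects.

class PrefetchCache:
    def __init__(self, quota_bytes=1024, ttl_s=300):
        self.quota = quota_bytes   # cap on space prefetched data may occupy
        self.ttl = ttl_s           # short expiration for prefetched objects
        self.entries = {}          # url -> (size, expiry_time)

    def put(self, url, size, now=None):
        """Admit a prefetched object only if it fits within the quota;
        refuse (rather than evict demand data) when over budget."""
        now = time.time() if now is None else now
        self._expire(now)
        used = sum(s for s, _ in self.entries.values())
        if used + size > self.quota:
            return False
        self.entries[url] = (size, now + self.ttl)
        return True

    def get(self, url, now=None):
        now = time.time() if now is None else now
        self._expire(now)
        return url in self.entries

    def _expire(self, now):
        self.entries = {u: (s, e) for u, (s, e) in self.entries.items()
                        if e > now}

cache = PrefetchCache(quota_bytes=1000, ttl_s=300)
print(cache.put("/a.html", 600, now=0))  # fits within the quota
print(cache.put("/b.html", 600, now=0))  # refused: would exceed the quota
print(cache.get("/a.html", now=100))     # still fresh
print(cache.get("/a.html", now=400))     # expired after the TTL
```

Refusing admission instead of evicting keeps the mechanism self-interference-safe by construction: a prefetch burst can waste at most the quota, never demand-cache space.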
