MPAT: Aggregate TCP Congestion Management as a Building Block for Internet QoS

Manpreet Singh, Prashant Pradhan* and Paul Francis

Presentation Transcript


  1. MPAT: Aggregate TCP Congestion Management as a Building Block for Internet QoS — Manpreet Singh, Prashant Pradhan* and Paul Francis

  2. Each TCP flow gets equal bandwidth

  3. Our Goal: enable bandwidth apportionment among TCP flows in a best-effort network

  4. Transparency:
  • No network support:
    • ISPs, routers, gateways, etc. are unchanged
  • Clients unmodified
  • TCP-friendliness:
    • “Total” bandwidth should be the same

  5. Why is it so hard?
  • Fair share of a TCP flow keeps changing dynamically with time.
  [Figure: server-to-client path through a bottleneck link carrying a lot of cross-traffic]

  6. Why not open extra TCP flows?
  • pTCP scheme [Sivakumar et al.]: open more TCP flows for a high-priority application
  • Resulting behavior is unfriendly to the network
  • A large number of flows active at a bottleneck leads to significant unfairness in TCP

  7. Why not modify the AIMD parameters?
  • mulTCP scheme [Crowcroft et al.]: use different AIMD parameters for each flow
    • Increase more aggressively on successful transmission
    • Decrease more conservatively on packet loss
  • Unfair to the background traffic
  • Does not scale to larger differentials:
    • Large number of timeouts
    • Two mulTCP flows running together try to “compete” with each other

  8. Properties of MPAT
  • Key insight: send the packets of one flow through the open congestion window of another flow
  • Scalability
    • Substantial differentiation between flows (demonstrated up to 95:1)
    • Holds fair share (demonstrated up to 100 flows)
  • Adaptability
    • Changing performance requirements
    • Transient network congestion
  • Transparency
    • Changes only at the server side
    • Friendly to other flows

  9. MPAT: an illustration
  [Figure: server sending to an unmodified client; total congestion window = 10]

  10. MPAT: transmit processing
  • Send three additional red packets through the congestion window of the blue flow.
  [Figure: red-flow packets mapped onto both the TCP1 and TCP2 congestion windows]

  11. MPAT: implementation
  • Maintain a virtual mapping
  • New variable: MPAT window
  • Actual window = min(MPAT window, recv window)
  • Map each outgoing packet to one of the congestion windows
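A minimal send-side sketch of the bookkeeping this slide describes. All names (Flow, MpatAggregate, may_send, the initial cwnd of 2) are hypothetical illustrations, not the authors' implementation; the logic follows the slide: a flow may have min(MPAT window, recv window) packets outstanding, and each packet it sends is charged to whichever congestion window in the aggregate has room.

```python
# Hypothetical sketch of MPAT send-side bookkeeping (assumed names, not the paper's code).

class Flow:
    def __init__(self, flow_id, mpat_window, recv_window):
        self.flow_id = flow_id
        self.mpat_window = mpat_window      # apportioned share for this flow (packets)
        self.recv_window = recv_window      # advertised by the receiver
        self.in_flight = 0                  # packets currently outstanding for this flow

    def may_send(self):
        # Actual window = min(MPAT window, recv window)
        return self.in_flight < min(self.mpat_window, self.recv_window)

class MpatAggregate:
    def __init__(self, flows):
        self.flows = flows
        self.cwnd = {f.flow_id: 2 for f in flows}     # one standard TCP cwnd per flow
        self.charged = {f.flow_id: 0 for f in flows}  # packets charged to each cwnd
        self.packet_map = {}                          # (flow_id, seq) -> cwnd it was charged to

    def send(self, flow, seq):
        if not flow.may_send():
            return False
        # Map the packet to any congestion window with spare capacity,
        # not necessarily the window of the flow that owns the data.
        for owner, window in self.cwnd.items():
            if self.charged[owner] < window:
                self.charged[owner] += 1
                flow.in_flight += 1
                self.packet_map[(flow.flow_id, seq)] = owner
                # ... hand the packet to the network here ...
                return True
        return False  # every congestion window in the aggregate is full
```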

  12. MPAT: receive processing
  • For every ACK received on a flow, update the congestion window through which that packet was sent.
  [Figure: incoming ACKs for red-flow packets credited to the TCP1 and TCP2 congestion windows they were sent through]
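Continuing the hypothetical sketch above: the ACK arrives on the flow that owns the data, but the congestion window that grows is the one the packet was charged to at send time.

```python
# Receive-side processing for the MpatAggregate sketch above (assumed names).
def on_ack(agg, flow, seq):
    owner = agg.packet_map.pop((flow.flow_id, seq), None)
    if owner is None:
        return                      # duplicate or unknown ACK
    flow.in_flight -= 1
    agg.charged[owner] -= 1
    # Standard TCP additive increase, applied to the window that carried
    # the packet: roughly one extra packet per window's worth of ACKs.
    agg.cwnd[owner] += 1.0 / agg.cwnd[owner]
```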

  13. TCP-friendliness
  • Invariant: each congestion window experiences the same loss rate.
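One way to realize this invariant in the sketch above (an assumption for illustration, not necessarily the paper's exact mechanism): a loss is attributed to the congestion window through which the lost packet was sent, and only that window backs off, exactly as a standalone TCP flow would.

```python
# Loss handling for the MpatAggregate sketch above (assumed names).
def on_loss(agg, flow, seq):
    owner = agg.packet_map.pop((flow.flow_id, seq), None)
    if owner is None:
        return
    flow.in_flight -= 1
    agg.charged[owner] -= 1
    # Standard multiplicative decrease on the window that carried the packet,
    # so each window reacts to the losses of exactly the packets it sent.
    agg.cwnd[owner] = max(2, agg.cwnd[owner] / 2)
```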

  14. MPAT decouples reliability from congestion control
  • The red flow is responsible for the reliability of all red packets (e.g. buffering, retransmission, etc.)
  • Does not break the “end-to-end” principle.

  15. Experimental Setup
  • Wide-area network test-bed: PlanetLab
  • Experiments over the real Internet
  • User-level TCP implementation
  • Unconstrained buffer at both ends
  • Goal: test the fairness and scalability of MPAT

  16. Bandwidth Apportionment
  • MPAT can apportion available bandwidth among its flows, irrespective of the total fair share

  17. Scalability of MPAT
  • 95 times differential achieved in experiments

  18. Responsiveness
  • MPAT adapts very quickly to dynamically changing performance requirements

  19. Fairness
  • 16 MPAT flows
  • Target ratio: 1 : 2 : 3 : … : 15 : 16
  • 10 standard TCP flows in the background
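For concreteness, a hypothetical helper (not from the paper) showing how a target ratio like this could translate into per-flow MPAT windows, assuming the aggregate's total window is simply split in proportion to the targets:

```python
# Hypothetical helper: split an aggregate window into per-flow shares by target ratio.
def apportion(total_window, ratios):
    total = sum(ratios)
    return [total_window * r / total for r in ratios]

# Example: 16 flows with target ratio 1:2:...:16 sharing an aggregate window of
# 136 packets; flow i gets i packets (flow 1 gets 1, flow 16 gets 16).
shares = apportion(136, list(range(1, 17)))
```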

  20. Applicability in the real world
  • Deployment:
    • Enterprise networks
    • Grid applications
    • Gold vs. silver customers
    • Background transfers

  21. Sample Enterprise network (runs over the best-effort Internet)
  [Figure: sites in New York (web server), New Delhi (application server), San Jose (database server) and Zurich (transaction server)]

  22. Background transfers
  • Data that humans are not waiting for; non-deadline-critical
  • Examples:
    • Pre-fetched traffic on the Web
    • File system backup
    • Large-scale data distribution services
    • Background software updates
    • Media file sharing
    • Grid applications

  23. Future work
  • Benefit short flows:
    • Map multiple short flows onto a single long flow
    • Warm start
  • Middle box:
    • Avoid changing all the senders
  • Detect shared congestion:
    • Subnet-based aggregation

  24. Conclusions
  • MPAT is a very promising approach for bandwidth apportionment
  • Highly scalable and adaptive:
    • Substantial differentiation between flows (demonstrated up to 95:1)
    • Adapts very quickly to transient network congestion
  • Transparent to the network and clients:
    • Changes only at the server side
    • Friendly to other flows

  25. Extra slides…

  26. MPAT exhibits much lower variance in throughput than mulTCP
  [Figure: throughput over time, showing reduced variance for MPAT]

  27. Fairness across aggregates
  • Multiple MPAT aggregates “cooperate” with each other

  28. Multiple MPAT aggregates running simultaneously cooperate with each other

  29. Congestion Manager (CM)
  • An end-system architecture for congestion management; goal: to ensure fairness.
  • CM abstracts all congestion-related info into one place.
  • Separates reliability from congestion control.
  [Figure: sender-side CM architecture: TCP1–TCP4 use the CM API (data and callbacks); CM keeps per-“aggregate” statistics (cwnd, ssthresh, rtt, etc.) and contains a congestion controller, a scheduler for per-flow scheduling, and flow integration, with feedback from the receiver]

  30. Issues with CM
  • CM maintains one congestion window per “aggregate”
  • Unfair allocation of bandwidth to CM flows
  [Figure: flows TCP1–TCP5 grouped into Congestion Manager aggregates]

  31. mulTCP
  • Goal: design a mechanism to give N times more bandwidth to one flow than to another.
  • TCP throughput = f(α, β) / (rtt · sqrt(p))
    • α: additive increase factor
    • β: multiplicative decrease factor
    • p: loss probability
    • rtt: round-trip time
  • Set α = N and β = 1 − 1/(2N)
    • Increase more aggressively on successful transmission.
    • Decrease more conservatively on packet loss.
  • Does not scale with N:
    • The induced loss process is much different from that of N standard TCP flows.
    • Unstable controller as N increases.
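For context, a worked sketch of why these parameters target an N-fold differential, using a common AIMD sawtooth approximation (the explicit formula below is an assumption layered on the slide's f(α, β) form, not taken from the deck):

```latex
% AIMD(\alpha, \beta): window grows by \alpha per RTT and is multiplied by \beta on loss.
\[
  T(\alpha, \beta) \;\approx\; \frac{1}{\mathrm{rtt}}
  \sqrt{\frac{\alpha\,(1+\beta)}{2\,(1-\beta)\,p}}
  \qquad\text{(standard TCP: } \alpha = 1,\ \beta = \tfrac12,\
  T \approx \sqrt{3/2}\,\big/\,(\mathrm{rtt}\sqrt{p})\text{)}
\]
% Substituting mulTCP's \alpha = N, \beta = 1 - 1/(2N):
\[
  \frac{T\!\left(N,\ 1-\tfrac{1}{2N}\right)}{T\!\left(1,\ \tfrac12\right)}
  \;=\; \sqrt{\frac{4N^{2}-N}{3}}
  \;\approx\; \frac{2N}{\sqrt{3}} \quad\text{for large } N .
\]
```

The ratio equals 1 at N = 1 and grows roughly linearly in N, which is the intended differential; the slide's point is that this sawtooth model stops holding at larger N, where timeouts and the altered loss process dominate.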

  32. Gain in throughput of mulTCP

  33. Drawbacks of mulTCP
  • Does not scale with N:
    • Large number of timeouts
    • The loss process induced by a single mulTCP flow is much different from that of N standard TCP flows
  • Increased variance with N:
    • Amplitude increases with N
    • Unstable controller as N grows
  • Two mulTCP flows running together try to “compete” with each other

  34. TCP Nice
  • Two-level prioritization scheme
  • Can only give less bandwidth to low-priority applications
  • Cannot give more bandwidth to deadline-critical jobs
