
15-441 Computer Networking

This lecture discusses TCP timeouts, congestion sources and collapse, congestion control basics, TCP congestion control, and TCP fast recovery. It also explores the challenges of reliability, such as congestion-related losses, variable packet delays, reordering of packets, and sequence number reuse. The lecture further covers round-trip time estimation, TCP as a Go-Back-N variant, and retransmission timeouts. Additionally, it explains Karn's RTT estimator, the timestamp extension, timer granularity, delayed ACKs, and TCP ACK generation.



Presentation Transcript


  1. 15-441 Computer Networking TCP

  2. Overview • TCP timeouts • Congestion sources and collapse • Congestion control basics • TCP congestion control • TCP fast recovery Lecture 15: 10-25-01

  3. Reliability Challenges • Like reliability on links • Similar techniques (timeouts, acknowledgements, etc.) • New challenges • Congestion-related losses • Variable packet delays • What should the timeout be? • Reordering of packets • Ensure sequence numbers are not reused • How long do packets live? • MSL = 120 seconds based on IP behavior Lecture 15: 10-25-01

  4. TCP = Go-Back-N Variant • Receiver can only return a single “ack” sequence number to the sender. • Acknowledges all bytes with a lower sequence number • Starting point for retransmission • But: sender only retransmits a single packet. • Reason??? • Error control is based on byte sequences, not packets. • Retransmitted packet can be different from the original lost packet – why? • Packets can overlap – why? • Sliding window with cumulative acks • Ack field contains last in-order packet received • Duplicate acks sent when out-of-order packet received Lecture 15: 10-25-01

  5. Round-trip Time Estimation • Wait at least one RTT before retransmitting • Importance of accurate RTT estimators: • Low RTT estimate → unneeded retransmissions • High RTT estimate → poor throughput • RTT estimator must adapt to change in RTT • But not too fast, or too slow! • Spurious timeouts violate the “conservation of packets” principle – more than a window's worth of packets in flight Lecture 15: 10-25-01

  6. Initial Round-trip Estimator • Round-trip times exponentially averaged: • New RTT = α × (old RTT) + (1 − α) × (new sample) • Recommended value for α: 0.8 – 0.9 • 0.875 for most TCPs • Retransmit timer (RTO) set to β × RTT, where β = 2 • Every time the timer expires, the RTO is exponentially backed off • Like Ethernet • Not good at preventing spurious timeouts Lecture 15: 10-25-01
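
A minimal C sketch of this initial estimator, assuming the α = 0.875 and β = 2 values from the slide; the function names (rtt_sample, rto_timeout) and the starting values for srtt and rto are illustrative, not taken from a real TCP implementation.

```c
#include <stdio.h>

#define ALPHA 0.875   /* weight of the old estimate */
#define BETA  2.0     /* RTO = BETA * smoothed RTT */

static double srtt = 1.0;   /* smoothed RTT in seconds; conservative starting guess */
static double rto  = 3.0;   /* current retransmission timeout in seconds */

/* Fold a new RTT measurement (seconds) into the exponential average. */
void rtt_sample(double sample)
{
    srtt = ALPHA * srtt + (1.0 - ALPHA) * sample;
    rto  = BETA * srtt;
}

/* On timer expiry, back the RTO off exponentially (as Ethernet does). */
void rto_timeout(void)
{
    rto *= 2.0;
}

int main(void)
{
    double samples[] = { 0.10, 0.12, 0.30, 0.11 };
    for (int i = 0; i < 4; i++) {
        rtt_sample(samples[i]);
        printf("sample=%.2fs srtt=%.3fs rto=%.3fs\n", samples[i], srtt, rto);
    }
    return 0;
}
```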

  7. Jacobson’s Retransmission Timeout • Key observation: • At high loads, round-trip variance is high • Solution: • Base RTO on both the RTT and the standard deviation of the RTT • rttvar = δ × dev + (1 − δ) × rttvar • dev = linear deviation • Inappropriately named – actually the smoothed linear deviation Lecture 15: 10-25-01
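
A minimal C sketch of this timeout computation. The δ gain of 1/4, the 1/8 gain for the smoothed RTT, and the final RTO = srtt + 4 × rttvar are the values commonly used with Jacobson's algorithm rather than numbers stated on the slide; names are illustrative.

```c
#define RTT_GAIN 0.125   /* gain for the smoothed RTT */
#define DEV_GAIN 0.25    /* delta: gain for the smoothed linear deviation */

static double srtt   = 1.0;   /* smoothed RTT, seconds */
static double rttvar = 0.5;   /* smoothed linear deviation, seconds */
static double rto    = 3.0;   /* retransmission timeout, seconds */

void jacobson_sample(double sample)
{
    double err = sample - srtt;        /* how far off the current estimate was */
    srtt += RTT_GAIN * err;            /* move the smoothed RTT toward the sample */
    if (err < 0.0)
        err = -err;                    /* linear (absolute) deviation */
    rttvar = DEV_GAIN * err + (1.0 - DEV_GAIN) * rttvar;
    rto    = srtt + 4.0 * rttvar;      /* timeout tracks variance, not just the mean */
}
```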

  8. Retransmission Ambiguity • [Figure: two sender–receiver timelines. In the first, the ACK for the original transmission arrives only after the RTO fires and the packet is retransmitted; in the second, the original transmission is lost (X) and the ACK is for the retransmission. In both cases the sender cannot tell which copy the ACK acknowledges, so the RTT sample is ambiguous.] Lecture 15: 10-25-01

  9. Karn’s RTT Estimator • Accounts for retransmission ambiguity • If a segment has been retransmitted: • Don’t count RTT sample on ACKs for this segment • Keep backed off time-out for next packet • Reuse RTT estimate only after one successful transmission Lecture 15: 10-25-01
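
A small C sketch of Karn's rule, reusing the estimator update from the previous sketch; the struct fields and handler name are illustrative.

```c
void jacobson_sample(double sample);   /* estimator from the previous sketch */

struct segment {
    int    retransmitted;   /* set when the segment has been sent more than once */
    double send_time;       /* time of the (most recent) transmission */
};

void on_ack(const struct segment *seg, double now)
{
    if (seg->retransmitted) {
        /* Ambiguous: the ACK may be for the original or the retransmission.
         * Take no RTT sample and keep the backed-off timeout for the next packet. */
        return;
    }
    jacobson_sample(now - seg->send_time);   /* unambiguous sample: update estimator */
}
```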

  10. Timestamp Extension • Used to improve timeout mechanism by more accurate measurement of RTT • When sending a packet, insert current timestamp into option • 4 bytes for seconds, 4 bytes for microseconds • Receiver echoes timestamp in ACK • Actually will echo whatever is in timestamp • Removes retransmission ambiguity • Can get RTT sample on any packet Lecture 15: 10-25-01

  11. Timer Granularity • Many TCP implementations set the RTO in multiples of 200, 500, or 1000 ms • Why? • Avoid spurious timeouts – RTTs can vary quickly due to cross traffic • Make timer interrupts efficient • What happens for the first couple of packets? • Pick a very conservative value (seconds) Lecture 15: 10-25-01
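
A small C sketch of how coarse timer granularity interacts with the RTO: the computed timeout is rounded up to the next timer tick, and a conservative fixed value is used before any RTT samples exist. The 500 ms tick and 3 s initial value are illustrative choices, not mandated by the slide.

```c
#include <math.h>

#define TICK_MS        500.0    /* timer granularity, e.g. 500 ms */
#define INITIAL_RTO_MS 3000.0   /* conservative value before any RTT sample */

/* Return the RTO actually armed, given the computed value in milliseconds. */
double effective_rto(double computed_rto_ms, int have_rtt_sample)
{
    if (!have_rtt_sample)
        return INITIAL_RTO_MS;                        /* first couple of packets */
    return ceil(computed_rto_ms / TICK_MS) * TICK_MS; /* round up to a tick boundary */
}
```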

  12. Delayed ACKs • Problem: • In request/response programs, you send separate ACK and data packets for each transaction • Solution: • Don’t ACK data immediately • Wait 200 ms (must be less than 500 ms – why?) • Must ACK every other packet • Must not delay duplicate ACKs Lecture 15: 10-25-01

  13. TCP ACK Generation [RFC 1122, RFC 2581] • Event: in-order segment arrival, no gaps, everything else already ACKed → Receiver action: delayed ACK – wait up to 500 ms for the next segment; if no next segment arrives, send the ACK • Event: in-order segment arrival, no gaps, one delayed ACK pending → Receiver action: immediately send a single cumulative ACK • Event: out-of-order segment arrival with higher-than-expected seq. # (gap detected) → Receiver action: send a duplicate ACK, indicating the seq. # of the next expected byte • Event: arrival of a segment that partially or completely fills a gap → Receiver action: immediate ACK if the segment starts at the lower end of the gap Lecture 15: 10-25-01
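
The table above maps directly onto a small decision function. This C sketch uses illustrative flag and enum names; a real receiver would also manage the 500 ms delayed-ACK timer mentioned in the table.

```c
enum ack_action {
    DELAY_ACK,            /* start/continue the delayed-ACK timer (up to 500 ms) */
    SEND_CUMULATIVE_ACK,  /* one ACK immediately covers both pending segments */
    SEND_DUP_ACK,         /* repeat the seq. # of the next expected byte */
    SEND_IMMEDIATE_ACK    /* segment fills (part of) a gap: ACK right away */
};

enum ack_action on_segment_arrival(int in_order, int delayed_ack_pending, int fills_gap)
{
    if (in_order && !delayed_ack_pending)
        return DELAY_ACK;
    if (in_order && delayed_ack_pending)
        return SEND_CUMULATIVE_ACK;
    if (fills_gap)
        return SEND_IMMEDIATE_ACK;
    return SEND_DUP_ACK;   /* out of order, higher-than-expected seq. # */
}
```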

  14. Overview • TCP timeouts • Congestion sources and collapse • Congestion control basics • TCP congestion control • TCP fast recovery Lecture 15: 10-25-01

  15. Congestion • [Figure: sources on 10 Mbps and 100 Mbps access links share a 1.5 Mbps bottleneck link] • Different sources compete for resources inside the network • Why is it a problem? • Sources are unaware of the current state of the resource • Sources are unaware of each other • Manifestations: • Lost packets (buffer overflow at routers) • Long delays (queuing in router buffers) • In many situations will result in < 1.5 Mbps of throughput for the above topology (congestion collapse) Lecture 15: 10-25-01

  16. Causes & Costs of Congestion: Scenario 1 • Two senders, two receivers • One router, infinite buffers • No retransmission • Costs: large delays when congested, even as throughput approaches the maximum achievable Lecture 15: 10-25-01

  17. Causes & Costs of Congestion: Scenario 2 • One router, finite buffers • Sender retransmits lost packets Lecture 15: 10-25-01

  18. Causes & Costs of Congestion: Scenario 2 • (λ_in = original offered load, λ'_in = offered load including retransmissions, λ_out = goodput) • Always: λ_in = λ_out (goodput) • “Perfect” retransmission only when loss: λ'_in > λ_out • Retransmission of a delayed (not lost) packet makes λ'_in larger (than the perfect case) for the same λ_out • “Costs” of congestion: • More work (retransmissions) for a given “goodput” • Unneeded retransmissions: link carries multiple copies of a packet Lecture 15: 10-25-01

  19. Causes & Costs of Congestion: Scenario 3 • Four senders • Multihop paths • Timeout/retransmit • Q: what happens as λ_in and λ'_in increase? Lecture 15: 10-25-01

  20. Causes & Costs of Congestion: Scenario 3 • Another “cost” of congestion: • When a packet is dropped, any upstream transmission capacity used for that packet was wasted! Lecture 15: 10-25-01

  21. Congestion Collapse • Definition: an increase in network load results in a decrease of useful work done • Many possible causes • Spurious retransmissions of packets still in flight • Classical congestion collapse • How can this happen with packet conservation? • Solution: better timers and TCP congestion control • Undelivered packets • Packets consume resources and are dropped elsewhere in the network • Solution: congestion control for ALL traffic Lecture 15: 10-25-01

  22. Other Congestion Collapse Causes • Fragments • Mismatch of transmission and retransmission units • Solutions • Make network drop all fragments of a packet (early packet discard in ATM) • Do path MTU discovery • Control traffic • Large percentage of traffic is for control • Headers, routing messages, DNS, etc. • Stale or unwanted packets • Packets that are delayed on long queues • “Push” data that is never used Lecture 15: 10-25-01

  23. Congestion Control and Avoidance • A mechanism which: • Uses network resources efficiently • Preserves fair network resource allocation • Prevents or avoids collapse • Congestion collapse is not just a theory • Has been frequently observed in many networks Lecture 15: 10-25-01

  24. Approaches Towards Congestion Control • Two broad approaches towards congestion control: • End-end congestion control: no explicit feedback from the network; congestion inferred from end-system observed loss and delay; approach taken by TCP • Network-assisted congestion control: routers provide feedback to end systems – either a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM) or an explicit rate the sender should send at Lecture 15: 10-25-01

  25. Example: ATM Rate-based Flow Control • Provide explicit “per flow” feedback • Host tells the network how fast it is sending • Switches calculate how fast the host should be sending and include this information in explicit flow control packets that are sent periodically • Feedback can be binary or explicit • Binary: source slows down or speeds up • Explicit: switch specifies the rate • [Figure: rate values (16, 10, 7) carried in flow control packets as they are adjusted by switches along the path] Lecture 15: 10-25-01
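
A minimal C sketch of the explicit-rate case: each switch on the path lowers the rate field in the periodic flow-control packet to what it can currently support, so the source ends up with the minimum rate along the path. The struct and names are illustrative, not the actual ATM cell format.

```c
struct flow_control_pkt {
    double rate;   /* rate the source should use, adjusted hop by hop */
};

/* Called at each switch the flow-control packet traverses. */
void switch_adjust_rate(struct flow_control_pkt *p, double supported_rate)
{
    if (supported_rate < p->rate)
        p->rate = supported_rate;   /* source receives the path minimum */
}
```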

  26. Example: TCP • Very simple mechanisms in network • FIFO scheduling with shared buffer pool • Feedback through packet drops • TCP interprets packet drops as signs of congestion and slows down • This is an assumption: packet drops are not a sign of congestion in all networks • E.g. wireless networks • Periodically probes the network to check whether more bandwidth has become available. Lecture 15: 10-25-01

  27. Tradeoffs in Congestion Control • Explicit schemes that isolate traffic flows seem preferable, but require more support inside the networks • per-flow buffering, weighted fair queuing, policing • scalability??? • determining nature of the explicit feedback in heterogeneous environment • Diversity in networks makes TCP approach a good solution • Dropping packets is universally a natural response to congestion • But many open issues: how to isolate poorly behaved sources, diversity in TCP implementations, ... Lecture 15: 10-25-01

  28. Overview • TCP timeouts • Congestion sources and collapse • Congestion control basics • TCP congestion control • TCP fast recovery Lecture 15: 10-25-01

  29. Objectives • Simple router behavior • Distributedness • Efficiency: X(t) = Σ xᵢ(t) • Fairness: F(x) = (Σ xᵢ)² / (n · Σ xᵢ²) • Convergence: control system must be stable Lecture 15: 10-25-01
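
These two metrics translate directly into code. The C sketch below computes aggregate throughput X = Σ xᵢ and the fairness index (Σ xᵢ)² / (n · Σ xᵢ²); function names are illustrative.

```c
/* Aggregate throughput across n flows. */
double efficiency(const double *x, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += x[i];
    return sum;
}

/* Jain's fairness index: 1.0 when all allocations are equal, 1/n in the worst case. */
double fairness(const double *x, int n)
{
    double sum = 0.0, sumsq = 0.0;
    for (int i = 0; i < n; i++) {
        sum   += x[i];
        sumsq += x[i] * x[i];
    }
    return (sum * sum) / (n * sumsq);
}
```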

  30. Basic Control Model • Reduce speed when congestion is perceived • How is congestion signaled? • Either mark or drop packets • How much to reduce? • Increase speed otherwise • Probe for available bandwidth – how? Lecture 15: 10-25-01

  31. Linear Control • Many different possibilities for reaction to congestion and probing • Examine simple linear controls • Window(t + 1) = a + b × Window(t) • Different constants a_i/b_i for increase and a_d/b_d for decrease • Supports various reactions to signals • Increase/decrease additively • Increase/decrease multiplicatively • Which of the four combinations is optimal? Lecture 15: 10-25-01
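
A minimal C sketch of this linear control, with separate (a, b) pairs for increase and decrease; AIMD is the special case a_i > 0, b_i = 1, a_d = 0, 0 < b_d < 1. The concrete constants shown (add 1, halve on congestion) are illustrative.

```c
struct linear_control {
    double a_i, b_i;   /* applied when no congestion is signaled */
    double a_d, b_d;   /* applied when congestion is signaled (mark or drop) */
};

/* window(t+1) = a + b * window(t), choosing the pair based on the signal. */
double update_window(double window, int congested, const struct linear_control *c)
{
    if (congested)
        return c->a_d + c->b_d * window;
    return c->a_i + c->b_i * window;
}

/* AIMD: additive increase by 1, multiplicative decrease to 1/2. */
static const struct linear_control aimd = { 1.0, 1.0, 0.0, 0.5 };
```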

  32. Phase Plots • Simple way to visualize behavior of competing connections over time • [Figure: phase plot with User 1’s allocation x1 on the horizontal axis and User 2’s allocation x2 on the vertical axis] Lecture 15: 10-25-01

  33. Phase Plots • What are desirable properties? • What if flows are not equal? • [Figure: phase plot showing the fairness line, the efficiency line, the optimal point at their intersection, an overload region beyond the efficiency line, and an underutilization region below it] Lecture 15: 10-25-01

  34. Additive Increase/Decrease • Both x1 and x2 increase/decrease by the same amount over time • Additive increase improves fairness and additive decrease reduces fairness • [Figure: phase plot showing a step from T0 to T1 parallel to the fairness line] Lecture 15: 10-25-01

  35. Multiplicative Increase/Decrease • Both x1 and x2 increase by the same factor over time • Extension from the origin – constant fairness • [Figure: phase plot showing a step from T0 to T1 along a line through the origin] Lecture 15: 10-25-01

  36. Convergence to Efficiency • [Figure: phase plot with a starting point xH, the fairness line, and the efficiency line] Lecture 15: 10-25-01

  37. Distributed Convergence to Efficiency • [Figure: phase plot with point xH and a trajectory labeled a = 0, b = 1, with the fairness and efficiency lines] Lecture 15: 10-25-01

  38. Convergence to Fairness • [Figure: phase plot with points xH and xH', the fairness line, and the efficiency line] Lecture 15: 10-25-01

  39. Convergence to Efficiency & Fairness • [Figure: phase plot with points xH and xH', the fairness line, and the efficiency line] Lecture 15: 10-25-01

  40. What is the Right Choice? • Constraints limit us to AIMD • Can have a multiplicative term in the increase • AIMD moves towards the optimal point • [Figure: phase plot showing an AIMD trajectory from x0 through x1 and x2 toward the optimal point] Lecture 15: 10-25-01

  41. Overview • TCP timeouts • Congestion sources and collapse • Congestion control basics • TCP congestion control • TCP fast recovery Lecture 15: 10-25-01

  42. TCP Congestion Control • Motivated by ARPANET congestion collapse • Underlying design principle: packet conservation • At equilibrium, inject packet into network only when one is removed • Basis for stability of physical systems • Why was this not working? • Connection doesn’t reach equilibrium • Spurious retransmissions • Resource limitations prevent equilibrium Lecture 15: 10-25-01

  43. TCP Congestion Control - Solutions • Reaching equilibrium • Slow start • Eliminates spurious retransmissions • Accurate RTO estimation • Fast retransmit • Adapting to resource availability • Congestion avoidance Lecture 15: 10-25-01

  44. TCP Congestion Control • Packet loss is seen as a sign of congestion and results in a multiplicative rate decrease • Factor of 2 • TCP periodically probes for available bandwidth by increasing its rate • [Figure: sawtooth plot of rate vs. time] Lecture 15: 10-25-01

  45. TCP Congestion Control Open Questions • How can this be implemented? • Operating system timers are very coarse – how do you accurately calculate the transmission rate? • How does TCP know what is a good initial rate to start with? • Should work both for CDPD links (tens of Kb/s or less) and for supercomputer links (2.4 Gb/s and growing) Lecture 15: 10-25-01

  46. TCP Congestion Control Implementation • Implemented using a congestion window that limits how much data can be in the network • TCP also keeps track of how much data is in transit • Data can only be sent when the amount of outstanding data is less than the congestion window • The amount of outstanding data is increased on a “send” and decreased on an “ack” • (last sent – last acked) < congestion window • Window limited by both congestion and buffering • Sender’s maximum window = min(advertised window, cwnd) Lecture 15: 10-25-01
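
A minimal C sketch of the send gate described on this slide; variable names are illustrative and byte counts are simplified to 32-bit values.

```c
#include <stdint.h>

static uint32_t min_u32(uint32_t a, uint32_t b) { return a < b ? a : b; }

/* May more data be sent right now? */
int can_send(uint32_t last_byte_sent, uint32_t last_byte_acked,
             uint32_t cwnd, uint32_t advertised_window)
{
    uint32_t outstanding = last_byte_sent - last_byte_acked;   /* data in flight */
    uint32_t window = min_u32(advertised_window, cwnd);        /* sender's max window */
    return outstanding < window;
}
```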

  47. TCP Packet Pacing • Congestion window helps to “pace” the transmission of data packets • In steady state, a packet is sent when an ack is received • Data transmission remains smooth, once it is smooth • Self-clocking behavior • [Figure: self-clocking diagram with sender and receiver, packet spacings Pb and Pr, and ACK spacings As, Ar, Ab] Lecture 15: 10-25-01

  48. Congestion Avoidance • If loss occurs when cwnd = W • Network can handle 0.5W ~ W segments • Set cwnd to 0.5W (multiplicative decrease) • Upon receiving ACK • Increase cwnd by 1/cwnd • Implements AIMD Lecture 15: 10-25-01
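
A minimal C sketch of these two rules, keeping cwnd in units of segments for clarity; a real stack keeps cwnd in bytes and also runs slow start, which is omitted here.

```c
static double cwnd = 1.0;   /* congestion window, in segments */

/* Each new ACK in congestion avoidance: additive increase, ~1 segment per RTT. */
void on_new_ack(void)
{
    cwnd += 1.0 / cwnd;
}

/* Loss when cwnd = W: the network can handle roughly 0.5W..W segments,
 * so cut the window in half (multiplicative decrease). */
void on_loss(void)
{
    cwnd /= 2.0;
    if (cwnd < 1.0)
        cwnd = 1.0;
}
```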

  49. Congestion Avoidance Sequence Plot • [Figure: sequence number vs. time] Lecture 15: 10-25-01

  50. Congestion Avoidance Behavior • [Figure: congestion window vs. time – on packet loss + timeout the congestion window and rate are cut, then the sender grabs back bandwidth] Lecture 15: 10-25-01
