
Principles of Congestion Control


Presentation Transcript


  1. Principles of Congestion Control
  Congestion, informally: “too many sources sending too much data too fast for the network to handle.” Different from flow control!
  Manifestations (symptoms):
  • lost packets (buffer overflow at routers)
  • long delays (queueing in router buffers)
  A top-10 problem!
  3: Transport Layer

  2. Causes/costs of congestion: scenario 1
  Assumptions:
  • two senders, two receivers
  • one router, infinite buffers
  • no retransmission
  • C = link capacity
  Result: large delays when congested (nearing link capacity, i.e. at maximum achievable throughput)

  3. Causes/costs of congestion: scenario 2
  • one router, finite buffers
  • sender retransmits lost packets

  4. Causes/costs of congestion: scenario 2
  Let λin be the offered load and λout the throughput:
  • Ideally, no loss: λin = λout (top line in fig. (a))
  • “Perfect” retransmission, only when loss: λ'in > λout (fig. (b); e.g. 1/3 of bytes are retransmissions, so 2/3 of the bytes received per host are original data)
  • Retransmission of delayed (not lost) packets makes λ'in larger (than in the perfect case) for the same λout (second line in fig. (a))
  “Costs” of congestion:
  • more work (retransmissions) for the same amount of delivered original data (goodput)
  • unneeded retransmissions: link carries multiple copies of a packet

  5. Causes/costs of congestion: scenario 3
  • four senders
  • multihop paths
  • timeout/retransmit
  Q: what happens as λin and λ'in increase?

  6. Causes/costs of congestion: scenario 3
  Another “cost” of congestion:
  • when a packet is dropped, any upstream transmission capacity used for that packet was wasted!

  7. Approaches towards congestion control
  Two broad approaches towards congestion control:
  End-end congestion control:
  • no explicit feedback from network
  • congestion inferred from end-system observed loss, delay
  • approach taken by TCP (IP provides no feedback)
  Network-assisted congestion control:
  • routers provide feedback to end systems
  • a single bit indicating congestion (SNA, DECnet, TCP/IP ECN, ATM), or
  • an explicit rate at which the sender should send

  8. Case study: ATM ABR congestion control
  ABR (available bit rate): “elastic service”
  • if sender’s path is “underloaded”: sender should use available bandwidth
  • if sender’s path is congested: sender throttled to minimum guaranteed rate
  RM (resource management) cells:
  • sent by sender, interspersed with data cells
  • bits in RM cell set by switches (“network-assisted”): NI bit: no increase in rate (mild congestion); CI bit: congestion indication
  • RM cells returned to sender by receiver, with bits intact (note: a switch can also generate an RM cell)

  9. Case study: ATM ABR congestion control
  • two-byte ER (explicit rate) field in RM cell: a congested switch may lower the ER value in the cell, so the sender’s send rate is the minimum supportable rate of all switches on the path
  • EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell
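The ER mechanism above boils down to taking a minimum along the path: each congested switch may only lower the ER field of a passing RM cell, never raise it. A minimal sketch (the function name and rate values are illustrative, not from any real ATM stack):

```python
def er_after_path(requested_rate, switch_supported_rates):
    """Return the ER value a sender sees after an RM cell traverses the path.

    Each switch may lower ER to the rate it can support, so the returned
    value is the minimum supportable rate of all switches on the path,
    capped at what the sender originally requested.
    """
    er = requested_rate
    for supported in switch_supported_rates:
        er = min(er, supported)  # a switch never raises ER, only lowers it
    return er

# Sender requests 10 Mbps; switches on the path support 8, 12, and 5 Mbps.
print(er_after_path(10.0, [8.0, 12.0, 5.0]))  # -> 5.0
```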

  10. TCP Congestion Control
  • end-end control (no network assistance)
  • transmission rate limited by congestion window size, Congwin, measured in segments
  • w segments, each with MSS bytes, sent in one RTT:
    throughput = (w * MSS) / RTT bytes/sec
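The window-limited throughput formula above is easy to evaluate directly; a small sketch (function name and parameter values are illustrative):

```python
def tcp_throughput_bps(w, mss_bytes, rtt_s):
    """Throughput of a window-limited TCP sender: w segments of MSS bytes
    delivered per RTT, i.e. throughput = w * MSS / RTT (bytes/sec here)."""
    return w * mss_bytes / rtt_s

# e.g. w = 10 segments, MSS = 1460 bytes, RTT = 100 ms
print(tcp_throughput_bps(10, 1460, 0.100))  # about 146,000 bytes/sec
```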

  11. TCP congestion control
  Two “phases”:
  • slow start
  • congestion avoidance
  Important variables:
  • Congwin
  • threshold: defines the boundary between the slow start phase and the congestion avoidance phase
  “Probing” for usable bandwidth:
  • ideally: transmit as fast as possible (Congwin as large as possible) without loss
  • increase Congwin until loss (congestion)
  • on loss: decrease Congwin, then begin probing (increasing) again

  12. TCP Slowstart
  Slowstart algorithm:
    initialize: Congwin = 1
    for (each segment ACKed)
      Congwin++
    until (loss event OR Congwin > threshold)
  • exponential increase (per RTT) in window size (not so slow!): Host A sends one segment, then two, then four, one RTT apart
  • loss event: timeout (Tahoe TCP) and/or three duplicate ACKs (Reno TCP)
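Since Congwin++ per ACK amounts to doubling per RTT, the slowstart pseudocode above can be sketched as a per-RTT trace (a simplified model, not an implementation; names are illustrative):

```python
def slowstart_windows(threshold, max_rtts=10):
    """Window size at the start of each RTT during slow start: Congwin
    begins at 1 segment and doubles every RTT (one increment per ACKed
    segment) until Congwin > threshold, ending the slow start phase."""
    congwin = 1
    windows = []
    for _ in range(max_rtts):
        windows.append(congwin)
        if congwin > threshold:
            break  # slow start is over; congestion avoidance takes over
        congwin *= 2  # Congwin++ per ACK == doubling per RTT
    return windows

print(slowstart_windows(threshold=8))  # -> [1, 2, 4, 8, 16]
```

The exponential growth is visible immediately: five RTTs take the window from 1 to 16 segments.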

  13. TCP Congestion Avoidance
  Congestion avoidance:
    /* slowstart is over */
    /* Congwin > threshold */
    until (loss event) {
      after every w segments ACKed:
        Congwin++
    }
    threshold = Congwin/2
    Congwin = 1
    perform slowstart (1)
  (1): TCP Reno skips slowstart (“fast recovery”) after three duplicate ACKs
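Putting slow start and congestion avoidance together, a Tahoe-style sender's window over time can be sketched as a per-RTT trace (a simplified model under the assumptions of this slide: loss timing is given externally, initial threshold is chosen arbitrarily, names are illustrative):

```python
def tahoe_trace(loss_at, rtts):
    """Per-RTT Congwin trace for a Tahoe-style sender: slow start doubles
    Congwin until it reaches threshold, congestion avoidance then adds 1
    per RTT; a loss event sets threshold = Congwin/2 and restarts slow
    start from Congwin = 1 (Reno would instead skip slow start after
    three duplicate ACKs)."""
    congwin, threshold = 1, 16
    trace = []
    for t in range(rtts):
        trace.append(congwin)
        if t in loss_at:                   # loss event this RTT
            threshold = max(congwin // 2, 1)
            congwin = 1                    # Tahoe: back to slow start
        elif congwin < threshold:
            congwin *= 2                   # slow start
        else:
            congwin += 1                   # congestion avoidance
    return trace

print(tahoe_trace(loss_at={8}, rtts=12))
# -> [1, 2, 4, 8, 16, 17, 18, 19, 20, 1, 2, 4]
```

The trace shows the characteristic sawtooth: exponential growth to the threshold, linear growth beyond it, and a collapse to 1 on loss.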

  14. TCP Fairness
  Fairness goal: if N TCP sessions share the same bottleneck link, each should get 1/N of the link capacity
  TCP congestion avoidance (ignoring slow start) is AIMD (additive increase, multiplicative decrease):
  • increase window by 1 per RTT
  • decrease window by a factor of 2 on a loss event
  (Figure: TCP connections 1 and 2 share a bottleneck router of capacity R.)

  15. Why is TCP fair?
  Assume two competing sessions (same MSS and RTT):
  • additive increase gives a slope of 1 as throughput increases
  • multiplicative decrease reduces throughput proportionally
  (Figure: connection 1 throughput vs. connection 2 throughput, each axis up to R; the trajectory alternates between additive increase along slope 1 and halving on loss, converging toward the equal bandwidth share line.)
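The convergence argument above can be sketched numerically with a simple fluid model (an illustration under the slide's assumptions of equal MSS and RTT and synchronized losses; names and values are illustrative): additive increase leaves the gap between the two flows unchanged, while each multiplicative decrease halves it, so the shares approach R/2 each.

```python
def aimd_two_flows(x1, x2, capacity, rounds=50):
    """AIMD dynamics for two competing sessions sharing a bottleneck of
    capacity R: each adds 1 unit of throughput per RTT; whenever their
    sum exceeds R, a loss halves both. Halving shrinks the difference
    between the flows, additive increase preserves it, so the pair
    converges toward the equal bandwidth share R/2 each."""
    for _ in range(rounds):
        x1 += 1  # additive increase: +1 per RTT
        x2 += 1
        if x1 + x2 > capacity:  # loss: decrease window by factor of 2
            x1 /= 2
            x2 /= 2
    return x1, x2

# Start far from fair: connection 1 at 70, connection 2 at 10, capacity R = 100.
print(aimd_two_flows(70.0, 10.0, 100.0))
```

After a few sawtooth cycles the two throughputs are nearly equal, even though they started 60 units apart.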

  16. Effects of TCP latencies
  Q: what is the client’s latency from requesting an object from a WWW server to receiving it? (TCP connection establishment + data transfer delay)
  Notation, assumptions:
  • fixed congestion window of W segments; link rate R bps
  • S: MSS (bits)
  • O: object size (bits)
  • no retransmissions (no loss, no corruption)
  Two cases to consider:
  • WS/R > RTT + S/R: server receives the ACK for the first segment in a window before the window’s worth of data is sent
  • WS/R < RTT + S/R: server must wait for an ACK after sending a window’s worth of data

  17. Effects of TCP latencies
  Case 1 (e.g. W = 4, WS/R > RTT + S/R):
    latency = 2 RTT + O/R
  Case 2 (e.g. W = 2, WS/R < RTT + S/R):
    latency = 2 RTT + O/R + (K - 1)[S/R + RTT - WS/R], where K = O/(WS)
  (O is the number of bits in the entire object.)
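The two latency formulas above can be evaluated directly; a sketch under the slide's assumptions (fixed window, no loss; the function name and the example parameter values are illustrative):

```python
def tcp_static_window_latency(O, S, R, W, rtt):
    """Latency to fetch an O-bit object over a link of rate R bps with a
    fixed window of W segments of S bits each; 2*RTT covers connection
    establishment plus the request."""
    K = O / (W * S)              # number of windows covering the object
    if W * S / R > rtt + S / R:  # case 1: ACKs return before window drains
        return 2 * rtt + O / R
    # case 2: server stalls waiting for an ACK after each window
    return 2 * rtt + O / R + (K - 1) * (S / R + rtt - W * S / R)

# e.g. O = 100,000 bits, S = 1,000 bits, R = 1 Mbps, RTT = 2 ms
print(tcp_static_window_latency(100_000, 1_000, 1_000_000, W=4, rtt=0.002))
print(tcp_static_window_latency(100_000, 1_000, 1_000_000, W=2, rtt=0.002))
```

With these numbers, W = 4 keeps the pipe full (case 1, about 0.104 s), while W = 2 stalls once per window (case 2, about 0.153 s), illustrating how too small a window adds nearly one stall per window of data.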

  18. Summary on Transport Layer
  Principles behind transport layer services:
  • multiplexing/demultiplexing
  • reliable data transfer
  • flow control
  • congestion control
  Instantiation and implementation in the Internet: UDP, TCP
  Next: leaving the network “edge” (application and transport layers) and moving into the network “core”
