
An Introduction to Computer Networks



Presentation Transcript


  1. An Introduction to Computer Networks
Lecture 14: Congestion Control
University of Tehran, Dept. of EE and Computer Engineering
By: Dr. Nasser Yazdani

  2. Outline
• Allocating resources among competing users
  • Bandwidth on the links
  • Buffers in the routers
• Congestion control and resource allocation: two sides of the same coin
• Different congestion control policies
  • Reacting to congestion
  • Avoiding congestion

  3. Network Capacity?
• How does TCP determine the network capacity?
• Pumping too little traffic wastes resources in the network.
• Too much traffic leads to congestion.
• Solution: be adaptive. TCP adjusts the load it pumps into the network based on the network's current state.
• How TCP does this is studied under the title of "congestion control".

  4. Congestion
[Figure: a 10-Mbps and a 100-Mbps source feeding a shared 1.5-Mbps bottleneck link]
• Different sources compete for resources inside the network
• Why is it a problem?
  • Sources are unaware of the current state of the resources
  • Sources are unaware of each other
• In many situations this results in less than 1.5 Mbps of throughput (congestion collapse)

  5. Issues
[Figure: Source 1 (10-Mbps Ethernet) and Source 2 (100-Mbps FDDI) share a router and a 1.5-Mbps T1 link to the destination]
• Two sides of the same coin
  • pre-allocate resources so as to avoid congestion
  • control congestion if (and when) it occurs
• Two points of implementation
  • hosts at the edges of the network (transport protocol)
  • routers inside the network (queuing discipline)
• Underlying service model
  • best-effort (assume for now)
  • multiple qualities of service (later)

  6. Why is Congestion Bad?
• Wasted bandwidth: retransmission of dropped packets.
• Poor user service: unpredictable delay, reduced throughput. Increased load can even result in lower network throughput.
  » Switched nets: heavy traffic -> long queues -> lost packets -> retransmits
  » Ethernet: high demand -> many collisions
  » Compare with highways: too much traffic slows down throughput
• Solutions? Redesign the network.
  » Add capacity to congested links
  » Reroute traffic
  » What are the options?

  7. Evaluation
We need to know how good a resource allocation/congestion avoidance mechanism is.
• Fairness
• Power of the network (ratio of throughput to delay); maximize this ratio.
• Distributedness
• Efficiency
• Cost
[Figure: throughput/delay (power) vs. load, peaking at the optimal load]

  8. Evaluation (Fairness)
• What is a fair resource allocation?
• Equal share of bandwidth?
• How about the length of paths?
• Given a set of flow throughputs (x1, x2, …, xn), the fairness index is
  f(x1, x2, …, xn) = (Σ_{i=1}^{n} xi)² / (n · Σ_{i=1}^{n} xi²)
• Fairness is always between 0 and 1; 1 is completely fair and 0 is completely unfair.
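
As a quick illustration, here is a minimal sketch that evaluates this fairness index for a list of flow throughputs (the function name is ours, not from the lecture):

    def jain_fairness(throughputs):
        """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2)."""
        n = len(throughputs)
        total = sum(throughputs)
        sum_sq = sum(x * x for x in throughputs)
        return (total * total) / (n * sum_sq)

    # Equal shares are perfectly fair; one flow hogging the link is not.
    print(jain_fairness([1.0, 1.0, 1.0, 1.0]))  # 1.0
    print(jain_fairness([4.0, 0.0, 0.0, 0.0]))  # 0.25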

  9. Possible Solutions
• Redesign the network.
  » Add capacity to congested links
  » Very slow solution: takes days to months!
• Reroute traffic.
  » Alternate paths are not always available
  » Also too slow: takes tens of seconds
  » In practice, most routing algorithms are static
• Adjust the load in the network (load balancing).
  » What are the options?

  10. Principles of Congestion Control
• Congestion, informally: "too many sources sending too much data too fast for the network to handle"
• Different from flow control!
• Manifestations:
  • lost packets (buffer overflow at routers)
  • long delays (queuing in router buffers)
• A top-10 problem!

  11. Causes/costs of congestion: scenario 1
• two senders, two receivers
• one router, infinite buffers
• no retransmission
• large delays when congested
• maximum achievable throughput

  12. Another scenario
• four senders
• multihop paths
• timeout/retransmit
• Q: what happens as the offered loads λin and λ'in increase?

  13. Causes/costs of congestion: scenario 3
• Another "cost" of congestion:
  • when a packet is dropped, any upstream transmission capacity used for that packet was wasted!

  14. Framework
[Figure: Sources 1-3 send through a chain of routers to Destinations 1 and 2]
• Connectionless flows
  • sequence of packets sent between a source/destination pair
  • maintain soft state at the routers

  15. Framework (cont)
• Taxonomy: different approaches to congestion control
• Router-centric versus host-centric:
  • Router-centric: each router takes responsibility for dropping packets and for informing the sending hosts how much traffic they are allowed to send.
  • Host-centric: the end hosts observe how much traffic is successfully getting through the network and adjust their transmission rates accordingly.
• Reservation-based versus feedback-based:
  • Reservation-based: the end host asks the network for a certain amount of bandwidth when requesting a connection (or flow). If the network does not have enough bandwidth, it rejects the connection.
  • Feedback-based: the end hosts send data without first reserving any capacity and then adjust their sending rate according to the feedback they receive.
    • Explicit feedback
    • Implicit feedback

  16. Framework (cont)
• Window-based versus rate-based:
  • Window-based: the receiver advertises a window to the sender. The window corresponds to how much buffer space the receiver has, and it limits how much data the sender can transmit. The network can also do this, as in X.25.
  • Rate-based: how many bits per second the sender may send or the network can absorb.
• Rate-based control can support video.
• Rate-based control is still an open problem.

  17. Where to Prevent Collapse?
• Can end hosts prevent the problem?
  • Yes, but we must trust end hosts to do the right thing
  • E.g., the sending host must adjust the amount of data it puts into the network based on detected congestion
• Can routers prevent collapse?
  • No, not all forms of collapse
  • That doesn't mean they can't help:
    • sending accurate congestion signals
    • isolating well-behaved from ill-behaved sources

  18. What's Really Happening?
• Knee – point after which
  • throughput increases very slowly
  • delay increases fast
• Cliff – point after which
  • throughput decreases very fast to zero because of packet loss (congestion collapse)
  • delay approaches infinity
[Figure: throughput vs. load and delay vs. load, with the knee and the cliff marked; beyond the cliff the throughput collapses]

  19. Goals
• Operate near the knee point
• Remain in equilibrium
• How to maintain equilibrium?
  • Don't put a packet into the network until another packet leaves. How do you do it?
  • Use ACKs: send a new packet only after you receive an ACK (self-clocking)
  • This maintains the number of packets in the network "constant"
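
A minimal sketch of this self-clocking rule (hypothetical class and method names, not from the lecture): the sender keeps at most a window's worth of packets in flight and only injects a new one when an ACK signals that a packet has left the network.

    class SelfClockedSender:
        """Sketch: inject a new packet only when an ACK says one has left."""
        def __init__(self, window):
            self.window = window      # packets allowed in flight
            self.in_flight = 0

        def try_send(self):
            if self.in_flight < self.window:
                self.in_flight += 1   # a packet enters the network
                return True
            return False              # otherwise wait for an ACK

        def on_ack(self):
            self.in_flight -= 1       # a packet left; its ACK clocks the next send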

  20. Self-clocking
[Figure: self-clocking pipe diagram — the packet spacing imposed by the bottleneck (Pb, Pr) is reflected in the returning ACKs (Ar, Ab, As), which pace the sender's transmissions]

  21. How Do You Do It?
• Detect when the network approaches/reaches the knee point
• Stay there
• Questions
  • How do you get there?
  • What if you overshoot (i.e., go over the knee point)?
• Possible solution:
  • Increase the window size until you notice congestion
  • Decrease the window size if the network is congested

  22. TCP Congestion Control
• Idea
  • assumes a best-effort network (FIFO or FQ routers)
  • each source determines the network capacity for itself
  • uses implicit feedback
  • ACKs pace transmission (self-clocking)
• Challenge
  • determining the available capacity in the first place
  • adjusting to changes in the available capacity

  23. Additive Increase/Multiplicative Decrease
• Objective: adjust to changes in the available capacity
• New state variable per connection: CongestionWindow (cwnd)
• Idea:
  • increase cwnd when congestion goes down
  • decrease cwnd when congestion goes up
• MaxWindow = min(cwnd, AdvertisedWindow)
• EffectiveWindow = MaxWindow – (LastByteSent – LastByteAcked)
[Figure: sliding-window diagram; sequence numbers increase from LastByteAcked to LastByteSent]
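
Read literally, the two formulas above compute how much new data the sender may still inject. A minimal sketch (the function name is ours; variable names follow the slide):

    def effective_window(cwnd, advertised_window, last_byte_sent, last_byte_acked):
        # MaxWindow = min(cwnd, AdvertisedWindow)
        max_window = min(cwnd, advertised_window)
        # EffectiveWindow = MaxWindow - (LastByteSent - LastByteAcked)
        return max_window - (last_byte_sent - last_byte_acked)

    # e.g. with a 16 KB cwnd, a 32 KB advertised window and 4 KB outstanding,
    # the sender may still put 12 KB of new data into the network
    print(effective_window(16384, 32768, 20480, 16384))  # 12288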

  24. AIMD (cont)
• Question: how does the source determine whether or not the network is congested?
• Answer: a timeout occurs
  • a timeout signals that a packet was lost
  • packets are seldom lost due to transmission errors
  • a lost packet therefore implies congestion

  25. AIMD (cont)
• In practice: increment a little for each ACK
  Increment = MSS * (MSS / cwnd)
  cwnd += Increment
  (MSS = maximum segment size)
• Algorithm
  • increment cwnd by one packet per RTT (linear or additive increase)
  • divide cwnd by two whenever a timeout occurs (multiplicative decrease)
[Figure: packets flowing from source to destination, with one more packet allowed each RTT]
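
A minimal sketch of this slide's two AIMD rules with cwnd measured in bytes (class and method names are ours; the pseudocode on slide 30 additionally restarts slow start after a timeout):

    class AIMDWindow:
        def __init__(self, mss):
            self.mss = mss            # maximum segment size, in bytes
            self.cwnd = mss           # congestion window, in bytes

        def on_ack(self):
            # additive increase: about cwnd/MSS ACKs arrive per RTT,
            # so these small increments add up to roughly one MSS per RTT
            self.cwnd += self.mss * self.mss / self.cwnd

        def on_timeout(self):
            # multiplicative decrease: halve the window on the loss signal
            self.cwnd = max(self.mss, self.cwnd / 2)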

  26. AIMD (cont)
• Trace: sawtooth behavior
[Figure: cwnd (KB) vs. time (seconds), showing the AIMD sawtooth]

  27. TCP: Slow Start
• Question: how large should the initial cwnd be?
  • A small cwnd means the network is under-utilized
  • A big cwnd means congestion again
  • What is the optimal cwnd?
• Solution: start with cwnd = 1 and quickly increase cwnd until the network is congested -> this gives a rough estimate of the optimal cwnd
  • Each time a segment is acknowledged, increment cwnd by one (cwnd++)
  • The increase of cwnd is exponential

  28. Slow Start Example
[Figure: source/destination timeline — cwnd = 1: segment 1; ACK for segment 1 -> cwnd = 2: segments 2-3; ACK for segments 2+3 -> cwnd = 4: segments 4-7; ACK for segments 4+5+6+7 -> cwnd = 8]
• TCP slows down the increase of cwnd when cwnd >= ssthresh
• ssthresh = slow start threshold value
• After reaching ssthresh, TCP changes from slow start to congestion avoidance!

  29. Slow Start/Congestion Avoidance Example
• Assume that ssthresh = 8
[Figure: client/server exchange and a plot of cwnd (in segments) vs. round-trip times — cwnd grows 1, 2, 4, 8 during slow start, then 9, 10, … once it crosses ssthresh]

  30. Putting Everything Together: TCP Pseudocode

Initially:
    cwnd = 1;
    ssthresh = infinite;

New ACK received:
    if (cwnd < ssthresh)
        /* Slow start */
        cwnd = cwnd + 1;
    else
        /* Congestion avoidance */
        cwnd = cwnd + 1/cwnd;

Timeout:
    /* Multiplicative decrease */
    ssthresh = cwnd/2;
    cwnd = 1;    /* slow start again */
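
The same state machine as a runnable Python sketch (class and method names are ours; the window is counted in segments, as in the pseudocode):

    class TcpWindow:
        """Slow start + congestion avoidance + timeout reaction (sketch)."""
        def __init__(self):
            self.cwnd = 1.0
            self.ssthresh = float("inf")

        def on_new_ack(self):
            if self.cwnd < self.ssthresh:
                self.cwnd += 1.0              # slow start: +1 per ACK (doubles per RTT)
            else:
                self.cwnd += 1.0 / self.cwnd  # congestion avoidance: ~+1 per RTT

        def on_timeout(self):
            self.ssthresh = self.cwnd / 2.0   # multiplicative decrease
            self.cwnd = 1.0                   # slow start again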

  31. The Big Picture
[Figure: cwnd vs. time — slow start, a timeout, then slow start followed by congestion avoidance]

  32. Slow Start (cont)
• Exponential growth, but slower than sending everything at once
• Used…
  • when first starting a connection
  • when a connection goes dead waiting for a timeout
• Trace
[Figure: cwnd (KB) vs. time (seconds) for a slow-start trace]
• Problem: can lose up to half a cwnd's worth of data

  33. Packet Loss Detection
• Wait for the retransmission timeout (RTO)
• What's the problem with this?
  • The RTO is a performance killer
  • In the BSD TCP implementation, the RTO is usually more than 1 second
    • the granularity of the RTT estimate is 500 ms
    • the retransmission timeout is at least twice the RTT
• Solution: don't wait for the RTO to expire

  34. Fast Retransmit
[Figure: source/destination timeline — segments 1-3 are ACKed (ACK 2, ACK 3, ACK 4; cwnd grows 1, 2, 4); segment 4 is lost, so segments 5-7 each elicit another ACK 4; after 3 duplicate ACKs the source retransmits segment 4]
• Resend a segment after 3 duplicate ACKs
• Remember, a duplicate ACK means that an out-of-sequence segment was received
• Notes:
  • Duplicate ACKs can also be due to packet reordering!
  • If the window is small, you won't get duplicate ACKs!
• Set cwnd to 1 after each retransmit!!
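
A minimal sketch of the duplicate-ACK counting that drives fast retransmit (class and threshold names are ours): the third ACK carrying the same number triggers the retransmission.

    DUP_ACK_THRESHOLD = 3

    class FastRetransmitDetector:
        def __init__(self):
            self.last_ack = None
            self.dup_count = 0

        def on_ack(self, ack_no):
            """Return True when the 3rd duplicate ACK arrives (time to retransmit)."""
            if ack_no == self.last_ack:
                self.dup_count += 1
                return self.dup_count == DUP_ACK_THRESHOLD
            self.last_ack = ack_no      # new data acknowledged: reset the counter
            self.dup_count = 0
            return False

Feeding it ACK 4 four times in a row returns True on the fourth call, i.e. on the third duplicate.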

  35. Results
[Figure: cwnd (KB) vs. time (seconds) with fast retransmit]
• Compared to plain slow start, it avoids the long timeout wait in the 4.0–5.0 s period.
• How can we avoid slow start at each fast retransmission?

  36. Fast Recovery
• After a fast retransmit, set cwnd to half its previous value (the new ssthresh)
  • i.e., don't reset cwnd to 1, and thus avoid slow start again!
• But when the RTO expires, still do cwnd = 1
• Fast retransmit and fast recovery are implemented by TCP Reno, the most widely used version of TCP today
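
Putting the two loss reactions side by side, here is a hedged sketch of the Reno-style responses described above (it ignores the temporary window inflation that real Reno performs during recovery; RenoState and the function names are ours):

    from dataclasses import dataclass

    @dataclass
    class RenoState:
        cwnd: float = 1.0                 # congestion window, in segments
        ssthresh: float = float("inf")

    def on_triple_dup_ack(s: RenoState):
        """Fast retransmit + fast recovery: halve the window, skip slow start."""
        s.ssthresh = max(s.cwnd / 2.0, 2.0)
        s.cwnd = s.ssthresh               # continue in congestion avoidance

    def on_rto(s: RenoState):
        """Retransmission timeout: fall all the way back to slow start."""
        s.ssthresh = max(s.cwnd / 2.0, 2.0)
        s.cwnd = 1.0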

  37. Fast Retransmit and Fast Recovery
• Prevent expensive timeouts
• No need to slow start again
• At steady state, cwnd oscillates around the optimal window size.
[Figure: cwnd vs. time — slow start, then congestion avoidance punctuated by fast retransmits, giving the steady-state sawtooth]

  38. Congestion Control Summary
• Architecture: the end system detects congestion and slows down
• Starting point:
  • Slow start/congestion avoidance
    • packet drop detected by the retransmission timeout (RTO) as the congestion signal
  • Fast retransmit/fast recovery
    • packet drop detected by (three) duplicate ACKs
• Router support
  • Binary feedback scheme: explicit signaling
  • Today: Explicit Congestion Notification [RF99]

  39. Congestion Control vs. Congestion Avoidance
• Congestion control goal: stay left of the cliff
• Congestion avoidance goal: stay left of the knee
[Figure: throughput vs. load with the knee, the cliff and congestion collapse marked]

  40. Congestion Avoidance
• TCP's strategy
  • control congestion once it happens
  • repeatedly increase load in an effort to find the point at which congestion occurs, and then back off
• Alternative strategy
  • predict when congestion is about to happen
  • reduce the rate before packets start being discarded
  • call this congestion avoidance, instead of congestion control
• Two possibilities
  • router-centric: DECbit and RED gateways
  • host-centric: TCP Vegas

  41. DECbit
• Add a binary congestion bit to each packet header
• Router
  • monitors the average queue length over the last busy+idle cycle
  • sets the congestion bit if the average queue length > 1
  • attempts to balance throughput against delay
[Figure: queue length vs. time — the averaging interval spans the previous and current busy/idle cycles up to the current time]

  42. End Hosts
• The destination echoes the bit back to the source
• The source records how many packets resulted in a set bit
• If less than 50% of the last window's worth had the bit set
  • increase cwnd by 1 packet
• If 50% or more of the last window's worth had the bit set
  • decrease cwnd to 0.875 times its previous value
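
A small sketch of this DECbit end-host rule, applied once per window of packets (the function name is ours; cwnd is counted in packets):

    def decbit_adjust(cwnd, marked, total):
        """Adjust cwnd based on the fraction of packets that came back marked."""
        if marked / total < 0.5:
            return cwnd + 1        # additive increase by one packet
        return cwnd * 0.875        # multiplicative decrease to 7/8 of the window

    # e.g. 2 of 8 packets marked -> grow; 5 of 8 marked -> shrink
    print(decbit_adjust(8, 2, 8))   # 9
    print(decbit_adjust(8, 5, 8))   # 7.0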

  43. Random Early Detection (RED)
• Notification is implicit
  • just drop the packet (TCP will time out)
  • could be made explicit by marking the packet
• Early random drop
  • rather than wait for the queue to become full, drop each arriving packet with some drop probability whenever the queue length exceeds some drop level

  44. RED Details
• Compute the average queue length AvgLen
  AvgLen = (1 - Weight) * AvgLen + Weight * SampleLen
  • 0 < Weight < 1 (usually 0.002)
  • SampleLen is the instantaneous queue length, sampled each time a packet arrives
[Figure: a queue with MinThreshold and MaxThreshold marked against AvgLen]
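
The update above is an exponentially weighted moving average; a minimal sketch (names are ours, queue lengths in packets) shows why it reacts slowly to short bursts:

    RED_WEIGHT = 0.002  # typical value from the slide

    def update_avg_len(avg_len, sample_len, weight=RED_WEIGHT):
        """EWMA of the queue length, updated on each packet arrival."""
        return (1 - weight) * avg_len + weight * sample_len

    # a burst barely moves the average, which is the point:
    avg = 10.0
    for _ in range(50):
        avg = update_avg_len(avg, 100)   # 50 arrivals each seeing a queue of 100
    print(round(avg, 1))                 # ~18.6 — the average absorbs the burst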

  45. RED Details (cont)
• Two queue length thresholds:
  if AvgLen <= MinThreshold then
      enqueue the packet
  if MinThreshold < AvgLen < MaxThreshold then
      calculate probability P
      drop the arriving packet with probability P
  if MaxThreshold <= AvgLen then
      drop the arriving packet

  46. RED Details (cont)
• Computing probability P:
  TempP = MaxP * (AvgLen - MinThreshold) / (MaxThreshold - MinThreshold)
  P = TempP / (1 - count * TempP)
  • count is the number of newly arriving packets that have been queued (not dropped) since the last drop while AvgLen was between the two thresholds
• Drop probability curve
[Figure: P(drop) vs. AvgLen — zero up to MinThresh, rising linearly to MaxP at MaxThresh, then jumping to 1.0]
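
Combining the last two slides, a hedged sketch of the full RED drop decision (the threshold and MaxP values are illustrative examples; names are ours):

    import random

    MIN_THRESH, MAX_THRESH, MAX_P = 5, 15, 0.02   # example parameters (packets)

    count = 0   # packets queued since the last drop while between the thresholds

    def red_should_drop(avg_len):
        """Return True if the arriving packet should be dropped."""
        global count
        if avg_len <= MIN_THRESH:
            count = 0
            return False                          # light load: always enqueue
        if avg_len >= MAX_THRESH:
            count = 0
            return True                           # heavy load: always drop
        count += 1
        temp_p = MAX_P * (avg_len - MIN_THRESH) / (MAX_THRESH - MIN_THRESH)
        if count * temp_p >= 1:
            p = 1.0                               # force a drop once enough packets pass
        else:
            p = temp_p / (1 - count * temp_p)     # spreads drops more evenly in time
        if random.random() < p:
            count = 0
            return True
        return False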

  47. Tuning RED
• The probability that a particular flow's packet(s) are dropped is roughly proportional to that flow's share of the bandwidth.
• MaxP is typically 0.02, meaning that when AvgLen is halfway between the two thresholds, the router drops roughly one out of 50 packets.
• If traffic is bursty, then MinThreshold should be sufficiently large to allow link utilization to be acceptably high.
• The difference between the two thresholds should be larger than the typical increase in the calculated average queue length in one RTT; setting MaxThreshold to twice MinThreshold is reasonable.
• Penalty box for offenders

  48. Source-based Congestion Avoidance
• End hosts adapt their traffic before anything is lost in the network.
• Watch router queue lengths indirectly by checking the RTT.
• A collection of related algorithms (a sketch of the first check follows this slide):
  • Every 2 RTTs, check whether RTT > (minRTT + maxRTT)/2; if so, decrease cwnd by cwnd/8. Increase otherwise.
  • Check (currentWindow – oldWindow) × (currentRTT – oldRTT); if the result is positive, decrease cwnd by cwnd/8. Increase by 1 otherwise.
  • Check for flattening of the sending rate: increase cwnd by 1 each RTT and compare the throughput with the previous value; if the gain is less than half of one packet, decrease cwnd by one.
• TCP Vegas is like the last scheme, with the difference that it compares against an expected rate.
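
A minimal sketch of the first RTT-based check above (the function name and structure are ours; cwnd is counted in packets):

    def rtt_based_adjust(cwnd, rtt, min_rtt, max_rtt):
        """Every couple of RTTs: shrink if the current RTT is above the midpoint."""
        if rtt > (min_rtt + max_rtt) / 2:
            return cwnd - cwnd / 8       # queues seem to be building: back off
        return cwnd + 1                  # otherwise probe for more bandwidth

    # e.g. with min/max RTTs of 40/120 ms, an 85 ms sample triggers a decrease
    print(rtt_based_adjust(16, 0.085, 0.040, 0.120))  # 14.0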

  49. TCP Vegas
• Idea: the source watches for signs that a router's queue is building up and congestion is about to happen; e.g.,
  • the RTT grows
  • the sending rate flattens
[Figure: three traces — top: cwnd size; middle: average sending rate at the source; bottom: average queue length at the bottleneck router. In the shaded region, cwnd increases but the average rate stays flat.]

  50. TCP Vegas
• Keep track of the extra data in the network.
• Extra data is the amount sent beyond what the available bandwidth can carry.
• Maintain the "right" amount of extra data.
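
The lecture does not spell out the algorithm here, but a commonly described Vegas sketch of "keeping the right amount of extra data" looks like the following (the α/β thresholds, their values, and the names are our assumptions):

    ALPHA, BETA = 1, 3   # target range for "extra" packets in the network (examples)

    def vegas_adjust(cwnd, base_rtt, current_rtt):
        """Estimate the extra data queued in the network and nudge cwnd."""
        expected = cwnd / base_rtt               # rate if there were no queuing
        actual = cwnd / current_rtt              # rate actually being achieved
        extra = (expected - actual) * base_rtt   # packets sitting in router queues
        if extra < ALPHA:
            return cwnd + 1                      # too little extra data: increase linearly
        if extra > BETA:
            return cwnd - 1                      # too much extra data: decrease linearly
        return cwnd                              # in the sweet spot: leave cwnd alone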
