
Congestion Control in Computer Networks

Learn about congestion control algorithms and factors that contribute to congestion in computer networks. Understand the difference between congestion control and flow control, and explore general principles of congestion control.



Presentation Transcript


  1. Computer Networks 2 Congestion Control Veton Këpuska

  2. Congestion Control Algorithms • When too many packets are present in (a part of) the subnet, performance degrades; this situation is called congestion.

  3. Congestion Factors • Streams of input packets arriving from multiple lines (3-4 or more) all needing the same output line => queue buildup. • Adding more memory may help up to a point, but a study by Nagle suggests that even with infinite memory to accommodate larger queues, congestion gets worse, not better: by the time packets reach the front of a long queue, they have already timed out. • Slow processors: queues build up when the router's CPU is slow at bookkeeping tasks: • Queueing buffers, • Updating tables, etc. • Low-bandwidth lines. • Typically, upgrading one part of the system merely shifts the bottleneck somewhere else. The real problem is frequently a mismatch between parts of the system, and it will persist until all components of the system are in balance.

  4. Congestion Control vs. Flow Control • Congestion control has to do with making sure the subnet is able to carry the offered traffic. It is thus a global issue involving the behavior of • all the hosts, • all the routers, • the store-and-forward processing within the routers, and • all the other factors that tend to diminish the carrying capacity of the subnet. • Flow control relates to point-to-point traffic between a given sender and a given receiver. Its job is to make sure that a fast sender cannot continually transmit data faster than the receiver is able to absorb it. • It frequently involves direct feedback from the receiver telling the sender how things are doing at the other end.

  5. Congestion Control vs. Flow Control Example • Flow control problem: Consider a fiber-optic network with a capacity of 1000 Gbps on which a supercomputer is trying to transfer a file to a personal computer at 1 Gbps. Although there is no congestion, the supercomputer has to stop frequently to let the PC catch up. • Congestion control problem: Consider a store-and-forward network with 1-Mbps lines and 1000 large computers, half of which are trying to transfer files at 100 kbps to the other half, creating traffic of 500 x 100 kbps = 50,000 kbps = 50 Mbps. Here the problem is that the offered traffic exceeds what the network can handle.

  6. General Principles of Congestion Control • Analogy with control theory: • Open-loop, and • Closed-loop approach. • Open-loop approach: • The problem is solved at design time. • Once the system is running, midcourse corrections are NOT made. • Tools for open-loop control: • Deciding when to accept new traffic, • Deciding when to discard packets, and which ones, • Making scheduling decisions at various points in the network. • Note that all these decisions are made without regard to the current state of the network.

  7. General Principles of Congestion Control • Closed-loop approach • It is based on the principle of a feedback loop. Applied to congestion control, the approach has three parts: • Monitor the system to detect when and where congestion occurs, • Pass this information to places where action can be taken, • Adjust system operation to correct the problem.

  8. General Principles of Congestion Control • Monitoring - metrics that can be used to monitor subnet congestion: • Percentage of all packets discarded for lack of buffer space, • Average queue length, • Number of packets that time out and are retransmitted, • Average packet delay, • Standard deviation of packet delay. • Transferring congestion information: • Information sent to the source(s) of the traffic. Undesirable, because extra traffic is generated exactly when less is needed. • Another approach is to reserve a bit or field that is set when congestion rises above some threshold. • Routers/hosts periodically send probe packets out to explicitly ask about congestion. This information can then be used to route traffic around problem areas. • In all feedback schemes, the hope is that knowledge of congestion will cause the hosts to take appropriate action to reduce it. The time constant of any adjustment scheme is a critical and non-trivial problem: • Too fast an adjustment is responsive but can lead to oscillations. • Too slow an adjustment makes the response sluggish and of no real value.

  9. General Principles of Congestion Control • The presence of congestion means that the load is (temporarily) greater than the resources (in part of the system) can handle. Two solutions: • Increase the resources, and/or • Decrease the load.

  10. General Principles of Congestion Control • Increasing the resources: • Use dial-up telephone lines to temporarily increase the bandwidth between certain points. • On satellite systems, increasing transmission power often gives higher bandwidth. • Splitting traffic over multiple routes (instead of using only the best one) may effectively increase the bandwidth. • Use spare routers that are normally kept as backups to provide more capacity.

  11. General Principles of Congestion Control • Decreasing the load • It is not always possible to increase the capacity of the subsystem. The only other way to reduce congestion is then to decrease the load: • Denying service to some users (as AOL did in its early days), • Degrading service to some or all users, • Having users schedule their demands in a more predictable way.

  12. Congestion Prevention Policies: Open-loop Approach • An open-loop system is designed to minimize congestion in the first place rather than react after it has happened. • Try to achieve this goal at various levels/layers: • Data link layer: • Retransmission policy: • A sender with a quick timeout that uses go-back-n and retransmits all outstanding packets, vs. • Selective repeat with a slower timeout. • Out-of-order caching policy (if out-of-order packets are discarded by the receiver, they will have to be sent again later). • Acknowledgment policy (acknowledgments generate extra traffic, but they can be piggybacked onto reverse traffic). • Flow control policy - a tight control scheme reduces the data rate.

  13. Congestion Prevention Policies: Open-loop Approach • Network layer: • Virtual circuit versus datagram inside the subnet: • Many congestion control algorithms work only with virtual-circuit subnets. • Packet queueing and service policy: • Whether routers have one queue per input line, per output line, or both. • Processing the queue: • Round robin, or • Priority-based queue processing. • Packet discard policy • A bad policy can make the problem worse. • Routing algorithm • Spreading the traffic over all the lines can help, vs. • Directing traffic onto an already congested line. • Packet lifetime management - determines how long a packet may live before it is discarded. • Too long a lifetime means lost packets may clog up the network for a long time. • Too short a lifetime may cause packets to time out and be discarded before they even get a chance to reach the destination, inducing retransmissions.

  14. Congestion Prevention Policies: Open-loop Approach • Transport layer - the same issues as in the data link layer, with one addition: determining the timeout is harder, because the transit time across the network is less predictable than the transit time over a wire between two routers. The policy issues at the transport layer are: • Retransmission policy, • Out-of-order caching policy, • Acknowledgment policy, • Flow control policy, • Timeout determination.

  15. Congestion Prevention Policies in Virtual-Circuit Subnets • Admission control: • Simple algorithm - once congestion has been signaled, no more virtual circuits are set up until the problem has gone away. • Attempts to set up new transport layer connections will fail. • Alternate approach to admission control: • Allow new virtual circuits, but carefully route all new ones around the problem area.

  16. Alternate Approach to Admission Control Example • [Figure] (a) A congested subnet. (b) A redrawn subnet that eliminates the congestion; a new virtual circuit from A to B is routed around the congested area.

  17. Negotiated Approach to Congestion Control • An agreement is negotiated between the host and the subnet when a virtual circuit is set up. • The agreement specifies: • The volume and shape of the traffic, • The quality of service required, etc. • The subnet can guarantee the connection because it makes all necessary resources available during the setup. These resources typically include: • Table and buffer space in the routers, and • Bandwidth on the lines. • The agreement can be made • All the time, as part of the standard operating procedure (wastes resources), or • Only when the subnet is congested.

  18. Congestion Control in Datagram Subnets • Each router monitors the utilization of its output lines: • f - instantaneous line utilization (0 or 1), • u - averaged line utilization (0.0-1.0), • a - time constant that determines the amount of smoothing. u_new = a * u_old + (1 - a) * f • Whenever u moves above a threshold, the output line enters a "warning" state. Each newly arrived packet is checked to see whether its output line is in the warning state. • Several alternatives for the action taken when the line is in the warning state are presented next.
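The smoothing rule above can be sketched in a few lines of Python; the threshold value and all names are illustrative, not taken from the slides:

```python
def update_utilization(u_old, f, a=0.9):
    """Exponentially weighted moving average of line utilization.

    u_old: previous smoothed utilization (0.0-1.0)
    f:     instantaneous utilization sample (0 or 1)
    a:     smoothing constant; larger a means more smoothing
    """
    return a * u_old + (1 - a) * f

WARNING_THRESHOLD = 0.8  # illustrative value

u = 0.0
# A burst of busy samples (f = 1) pushes u toward the threshold.
for _ in range(20):
    u = update_utilization(u, 1)
line_in_warning = u > WARNING_THRESHOLD  # after 20 busy samples: True
```

Because u is a weighted average rather than the raw sample, a single idle or busy instant cannot flip the line in and out of the warning state.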

  19. Congestion Control in Datagram Subnets • Warning bit approach • Used in the DECNET architecture and also in frame relay. The warning state is signaled by setting a special bit in the packet's header. When the packet arrives at its destination, the transport entity copies the bit into the next acknowledgment sent back to the source. The source then cuts back on the traffic. • Note that since every router along the path can set the warning bit, traffic can increase only when no router in the whole path is in trouble.
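A minimal sketch of the warning-bit feedback path, assuming dictionary-based packets with illustrative field names:

```python
def forward(packet, router_congested):
    """One hop of forwarding: any congested router on the path
    sets the warning bit in the packet's header."""
    if router_congested:
        packet["warning"] = True
    return packet

def make_ack(packet):
    """The receiving transport entity echoes the bit in its ACK."""
    return {"warning": packet.get("warning", False)}

pkt = {"payload": b"data", "warning": False}
# Three hops; only the middle router is congested.
for congested in [False, True, False]:
    pkt = forward(pkt, congested)
ack = make_ack(pkt)
sender_should_slow_down = ack["warning"]  # True: one router set the bit
```

Since the bit is sticky across hops, the sender slows down if any router on the path is in the warning state, matching the observation that traffic can grow only when the entire path is trouble-free.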

  20. Congestion Control in Datagram Subnets • Choke packets • A direct approach, an alternative to the warning bit. • The router sends a choke packet back to the source host, with • the destination information found in the packet. • The original packet is also tagged so that it will not generate any more choke packets farther along the path. • When the source host gets the choke packet, it is required to reduce the traffic sent to the specified destination by X percent. • Since other packets aimed at the same destination would generate yet more choke packets, the host should ignore choke packets referring to that destination for a fixed time interval. • If after that interval the line is still congested, the flow is reduced further. • Hosts can reduce traffic by adjusting their policy parameters. For example: • The first choke packet causes the data rate to be reduced to 0.50 of its previous rate, • the next choke packet causes a reduction to 0.25, and so on.
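The multiplicative rate reduction with a hold-down interval described above might look like this; the class name, hold-down length, and halving factor are hypothetical choices for illustration:

```python
class ChokeAwareSender:
    """Host-side reaction to choke packets for one destination.

    Each honored choke packet halves the send rate; during the
    hold-down interval, further choke packets are ignored so that
    packets already in flight cannot trigger repeated cuts.
    """
    def __init__(self, rate_bps, holddown=2.0):
        self.rate_bps = rate_bps
        self.holddown = holddown      # seconds to ignore further chokes
        self.ignore_until = 0.0

    def on_choke_packet(self, now):
        if now < self.ignore_until:
            return                    # still in hold-down: ignore
        self.rate_bps *= 0.5          # cut rate to 0.50 of previous
        self.ignore_until = now + self.holddown

sender = ChokeAwareSender(rate_bps=1_000_000)
sender.on_choke_packet(now=0.0)   # rate -> 500_000
sender.on_choke_packet(now=1.0)   # ignored (inside hold-down)
sender.on_choke_packet(now=3.0)   # rate -> 250_000
```

This reproduces the 0.50, 0.25, ... sequence from the slide: the rate only drops again if congestion persists past the hold-down window.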

  21. Congestion Control in Datagram Subnets • Hop-by-hop choke packets • Example: communication between node A (San Francisco) and node D (New York). • Source A is sending at a rate of 155 Mbps. • It takes about 30 msec for a choke packet to travel from the destination host back to the source host. • In those 30 msec, another 4.6 Mbits have been sent. • See (a) in the figure on the next slide. • The hop-by-hop modification speeds up the response to congestion by having the choke packet take effect at every router along the reverse path from destination to source.

  22. Hop-by-Hop Choke Packets

  23. Congestion Control in Datagram Subnets • Random early detection • It is well known that dealing with congestion when it is first detected is more effective than letting it choke up the network and then trying to deal with it. • RED (Random Early Detection) builds on TCP's response to lost packets (the source slows down). • TCP was designed for wired networks, and wired networks are very reliable, so lost packets are mostly due to buffer overruns (caused by congestion) rather than transmission errors (usually not caused by congestion). • The idea is for routers to drop packets before the situation becomes hopeless. To determine when to start discarding, routers maintain a running average of their queue lengths. When the average queue length on some line exceeds a threshold, the line is said to be congested and action is taken. • If the router cannot tell which source is causing most of the trouble, picking a packet at random from the queue that triggered the action is probably as good as it can do. • If the source of the congestion is known, the router can send it a choke packet. • A problem with that approach is that it puts even more load on the already congested network. • A different strategy is to just discard the selected packet and not report it. • The source will eventually notice the missing acknowledgment and take action. Since the source knows that lost packets are generally caused by congestion and discards, it will respond by slowing down instead of trying harder. This implicit feedback only works if sources respond to lost packets by slowing down their transmission rate. • This approach cannot be used in wireless networks, where most losses are due to noise on the air link, not congestion.
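A simplified RED drop decision can be sketched as follows; the thresholds and maximum drop probability are illustrative values, not defaults from the slides:

```python
import random

def red_should_drop(avg_queue, min_th=5, max_th=15, max_p=0.1):
    """Simplified RED decision on the smoothed queue length.

    Below min_th: never drop. At or above max_th: always drop.
    In between: drop with probability rising linearly to max_p,
    so early, random drops nudge senders to slow down before
    the queue overflows.
    """
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p
```

Note the decision uses the running average queue length, not the instantaneous one, so short bursts pass through untouched while sustained congestion triggers drops.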

  24. Congestion Control in Datagram Subnets • Jitter control • For some applications the actual delay is not what matters; rather, it is the requirement that the delay remain constant: • Audio and video streaming. • The quality of transmission is related to the variation in packet arrival times - the jitter.

  25. Jitter Control • High jitter (some packets taking 20 msec and others 30 msec to arrive) will give uneven quality of the sound or movie.

  26. Jitter Control • Approaches and issues for jitter control: • The router is responsible for: • Checking, for each packet, how far behind or ahead of its nominal schedule it is, • Taking appropriate action (late packets are processed earlier, and vice versa). • An algorithm for determining which of several packets competing for an output line should go next can always choose the packet furthest behind in its schedule. This ensures that packets ahead of schedule get slowed down and packets behind schedule get sped up. • The solution is application dependent: • Video on demand: jitter can be eliminated by buffering data at the receiver. • Real-time video conferencing and Internet telephony require a different solution.
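The choose-the-most-late-packet rule can be sketched as follows (the packet fields and timings are illustrative):

```python
def pick_next_packet(queue, now):
    """Select the packet furthest behind its nominal schedule.

    Each packet carries a 'due' timestamp; lateness = now - due.
    Sending the most-late packet first speeds up packets behind
    schedule and holds back packets that are ahead of it.
    """
    return max(queue, key=lambda pkt: now - pkt["due"])

queue = [
    {"id": 1, "due": 10.0},  # 2 units ahead of schedule
    {"id": 2, "due": 7.0},   # 1 unit late
    {"id": 3, "due": 5.0},   # 3 units late -> goes first
]
next_pkt = pick_next_packet(queue, now=8.0)  # packet 3
```

Run repeatedly at each transmission opportunity, this policy pulls every flow's delay toward its nominal schedule, which is exactly the jitter-reduction effect described above.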

  27. QUALITY OF SERVICE • The growth of multimedia networking requires novel approaches and solutions to guarantee quality of service. • Flow: a stream of packets from a source to a destination. • The needs of each flow (whether the service is connection-oriented or connectionless) can be characterized by four primary parameters: • Reliability (e.g., number of bits in error), • Delay, • Jitter, • Bandwidth. • Together these characterizations determine the QoS (Quality of Service).

  28. Requirements • How stringent the QoS requirements of different applications are (see table):

  29. Requirements (cont.) • Reliability (e-mail, file transfer, Web access, remote login): • No bits of data may be delivered incorrectly. • Achieved by checksumming each packet at the source and verifying the checksum at the destination. • Delay (real-time applications such as telephony and video conferencing have strict delay requirements; interactive applications such as Web surfing and remote login are delay sensitive). • Jitter (video and audio are extremely sensitive to jitter). • Bandwidth (video needs high bandwidth).

  30. Techniques for Achieving Good Quality of Service • Over-provisioning • Router capabilities are an order of magnitude higher than required: • Capacity, • Buffer space, and • Bandwidth. • An expensive solution. • Buffering • Does not affect reliability or bandwidth, • Increases delay, • Minimizes jitter.
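The trade-off buffering makes (added delay in exchange for reduced jitter) can be illustrated with a small sketch; all timings are in milliseconds and all values are made up for the example:

```python
def playout_schedule(arrival_times, period, delay):
    """Receiver-side playout buffering.

    Packet i was sent at i * period; it is scheduled to play at
    i * period + delay. The fixed delay hides arrival jitter as
    long as every packet arrives before its playout slot.
    """
    play_times = [i * period + delay for i in range(len(arrival_times))]
    on_time = all(a <= p for a, p in zip(arrival_times, play_times))
    return play_times, on_time

# Packets sent every 20 ms arrive with 25-30 ms of variable delay:
arrivals = [25, 48, 70, 85]
play, ok = playout_schedule(arrivals, period=20, delay=30)
# play == [30, 50, 70, 90]: perfectly even 20 ms spacing, and every
# packet beats its slot, so the jitter is invisible to the user.
```

The cost is the 30 ms of added delay, which is why this works for streaming stored video but not for interactive uses such as telephony.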

  31. Techniques for Achieving Good Quality of Service • Traffic shaping • Problem: • Source packets may be emitted irregularly (which may cause congestion), • Buffering at the receiver is not always possible (e.g., video conferencing). • Solution: a method for regulating the average rate of data transmission (traffic shaping): • The leaky bucket algorithm (introduce a queue) • Constant bit rate, vs. • Constant packet rate (for packets of different sizes). • The token bucket algorithm • Allows some flexibility in the output rate (short bursts) when traffic arrives, vs. the fixed rate of the leaky bucket algorithm.
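A minimal token-bucket shaper, assuming byte-granularity tokens; the rate and bucket depth are illustrative:

```python
class TokenBucket:
    """Token-bucket traffic shaper.

    Tokens accumulate at `rate` per second up to `capacity`; a
    packet of `size` bytes may be sent only if that many tokens
    are available. Saved-up tokens allow short bursts up to the
    bucket depth, while the long-term rate stays bounded.
    """
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens (bytes) added per second
        self.capacity = capacity    # maximum bucket depth
        self.tokens = capacity      # start with a full bucket
        self.last = 0.0

    def allow(self, size, now):
        # Refill tokens for the time elapsed, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=1000, capacity=500)  # 1000 B/s, 500-byte burst
burst_ok = tb.allow(500, now=0.0)   # full bucket: burst goes out at once
too_soon = tb.allow(500, now=0.1)   # only ~100 tokens refilled: refused
later_ok = tb.allow(500, now=0.5)   # bucket full again: allowed
```

Contrast with the leaky bucket: there the output rate is rigidly constant, whereas here an idle period earns tokens that may be spent in a burst later.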

  32. Techniques for Achieving Good Quality of Service • Resource reservation: • Bandwidth - do not oversubscribe any output line, • Buffer space, • CPU cycles. • Admission control - assume at this point that the incoming traffic from some flow is well shaped and can follow a single route on which capacity was reserved in advance at the routers along the path. When such a flow is offered to a router, the router has to decide, based on its capacity and how many commitments it has already made for other flows, whether to admit or reject the flow. • Estimating the residual capacity available to handle this case is a complicated process, one which depends on the nature of the request (the degree to which the requesting application is tolerant of reduced reliability, delay, jitter, or bandwidth). • This calls for an accurate description that quantifies the requested flow - a "flow specification" (an example of a flow specification is shown in the figure).

  33. Techniques for Achieving Good Quality of Service • Proportional routing • An alternative to best-path routing: split the traffic over multiple routes in proportion to the capacity of the outgoing links. • A practical solution, considering that a typical router does not know the global state of the network (only local information). • Packet scheduling • Ensures that one flow does not grab most of a router's resources, degrading QoS for the remaining flows. • Fair queuing: a separate queue for each flow. When a line becomes idle, the router scans the queues round robin, taking the first packet from the next queue. With n hosts competing for one output line, each host gets to send one out of every n packets. Problem: flows with large packets get higher bandwidth. • Modified fair queuing: the round robin takes packet size into account. Problem: it still gives all hosts the same priority. Solution: weighted fair queuing. • [Figure] (a) A router with five packets queued for line O. (b) Finishing times for the five packets.
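The finishing-time idea behind byte-aware fair queuing can be sketched as follows for the unweighted case; the flow labels and packet lengths are illustrative:

```python
def fair_queue_order(flows):
    """Transmission order under simplified (unweighted) fair queuing.

    flows: dict flow_id -> list of packet lengths, all queued at once.
    Each packet's virtual finish number is the running sum of lengths
    within its own flow; packets are sent in ascending finish number,
    so a flow sending large packets cannot monopolize the line.
    """
    tagged = []
    for flow, lengths in flows.items():
        finish = 0
        for n, length in enumerate(lengths):
            finish += length
            tagged.append((finish, flow, n))
    return [(flow, n) for _, flow, n in sorted(tagged)]

order = fair_queue_order({"A": [100, 100], "B": [300], "C": [100]})
# A's and C's first packets (finish 100) precede B's long packet
# (finish 300), even though all arrived together.
```

Weighted fair queuing would divide each length by the flow's weight before summing, letting some flows receive a larger share of the line.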

  34. Integrated Services • The Internet Engineering Task Force (IETF) has invested a lot of effort in devising an architecture for streaming multimedia. The generic name for this work is flow-based algorithms or integrated services. • Aimed at both unicast and multicast applications: • Unicast application: a single user streaming a video clip from a news site. • Multicast application: a collection of digital television stations broadcasting their programs as streams of IP packets to many receivers at various locations. • Unicast can be viewed as a special case of multicast. • In many multicast applications, groups can change membership dynamically: • People enter a video conference, then get bored and switch to a soap opera or some other channel. • Under these conditions, having the senders reserve bandwidth in advance does not work well, since it would require each sender to track all entries and exits of its audience. • For a system designed to transmit television to millions of subscribers, it would not work at all.

  35. RSVP - Resource reSerVation Protocol • RSVP is the main IETF protocol of the integrated services architecture. • It is used for making the reservations; • other protocols are used for sending the data. • It allows multiple senders to transmit to multiple groups of receivers, • permits individual receivers to switch channels freely, and • optimizes bandwidth use while at the same time eliminating congestion.

  36. RSVP - Resource reSerVation Protocol • RSVP in its simplest form uses: • Multicast routing based on spanning trees. • The routing algorithm builds a spanning tree covering all group members (note that the routing algorithm is not part of RSVP). • The only difference from normal multicasting is a little extra information that is multicast to the group periodically, telling the routers along the tree to maintain certain data structures in their memories. • Example:

  37. RSVP - Resource reSerVation Protocol • (a) A network. • (b) The multicast spanning tree for host 1. • (c) The multicast spanning tree for host 2.

  38. RSVP - Resource reSerVation Protocol • Quality of reception and eliminating congestion: • Any receiver can send a reservation request message up the tree to the sender. • The message is propagated using reverse path forwarding. • At each hop, the router notes the reservation and reserves the necessary bandwidth. If insufficient bandwidth is available, it reports failure back to the receiver. • By the time the message gets to the source, bandwidth has been reserved all the way from the sender to the receiver.

  39. RSVP - Resource reSerVation Protocol • Host (receiver) 3 requests a channel to host (sender) 1. • Host (receiver) 3 then requests a second channel, to host (sender) 2. Note that two separate channels are needed from host 3 to router 3 because two independent streams are being transmitted. • Host (receiver) 5 requests a channel to host (sender) 1. First, dedicated bandwidth is reserved as far as router H. Router H, however, already has a feed from host 1, so if the necessary bandwidth has already been reserved, it does not have to reserve any more. Note that hosts 3 and 5 might have asked for different amounts of bandwidth (e.g., black-and-white vs. color transmission), so the capacity reserved must be large enough to satisfy the greediest receiver.
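The merging rule on a shared branch (reserve enough for the greediest receiver, not the sum) reduces to taking a maximum; a tiny sketch with hypothetical bandwidth figures:

```python
def merged_reservation(requests_mbps):
    """Bandwidth a router reserves on a link shared by several
    receivers of the same sender: the maximum request, not the sum,
    since one copy of the stream at the highest quality serves all.
    """
    return max(requests_mbps)

# Hypothetical numbers: host 3 wants a color feed (2.0 Mbps),
# host 5 a black-and-white one (0.5 Mbps), both from sender 1.
shared_link = merged_reservation([2.0, 0.5])   # 2.0 Mbps reserved
naive_sum = sum([2.0, 0.5])                    # 2.5 Mbps if not merged
```

This max-instead-of-sum merging is what lets RSVP scale to large multicast groups: adding a less demanding receiver downstream costs nothing on the shared upstream links.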

  40. RSVP - Resource reSerVation Protocol • Routers use the information provided during reservation requests to optimize bandwidth planning. • When making a reservation, a receiver can (optionally) specify • one or more sources that it wants to receive from, and • whether these choices are fixed for the duration of the reservation, or • whether the receiver wants to keep open the option of changing sources later. • Two receivers are set up to share a path only if they both agree not to change sources later on. • This fully dynamic arrangement decouples the reserved bandwidth from the choice of source: once a receiver has reserved bandwidth, it can switch to another source and keep the portion of the existing path that is valid for the new source. • Example: if host 2 is transmitting several video streams, host 3 may switch between them at will without changing its reservation; the routers do not care what program the receiver is watching.
