
Lecture 14

Lecture 14: High-speed TCP connections, protection against wraparound, keeping the pipeline full, estimating RTT, fairness of TCP congestion control, and Internet resource allocation and QoS.


Presentation Transcript


  1. Lecture 14 • High-speed TCP connections • Wraparound • Keeping the pipeline full • Estimating RTT • Fairness of TCP congestion control • Internet resource allocation and QoS

  2. Protection against wraparound • What is wraparound: a byte with sequence number x may be sent at one time, and later on the same connection another byte with the same sequence number x may be sent. • Wraparound is controlled by the 32-bit SequenceNum field. • The maximum lifetime of an IP datagram is 120 sec, so the wraparound time must be at least 120 sec. • This is fine for slow links but no longer sufficient for optical networks. • Time until wraparound by bandwidth: T1 (1.5 Mbps): 6.4 hours; Ethernet (10 Mbps): 57 minutes; T3 (45 Mbps): 13 minutes; FDDI (100 Mbps): 6 minutes; STS-3 (155 Mbps): 4 minutes; STS-12 (622 Mbps): 55 seconds; STS-24 (1.2 Gbps): 28 seconds.
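
A quick way to reproduce the table above is to divide the 2^32-byte sequence-number space by the link rate; a minimal C sketch, with the link rates hard-coded and nominal SONET rates assumed for the STS entries:

    #include <stdio.h>

    /* Seconds until the 32-bit sequence-number space (2^32 bytes)
       wraps around at a given link rate (bits per second). */
    int main(void) {
        const char  *name[] = { "T1", "Ethernet", "T3", "FDDI",
                                "STS-3", "STS-12", "STS-24" };
        const double bps[]  = { 1.5e6, 10e6, 45e6, 100e6,
                                155.52e6, 622.08e6, 1244.16e6 };

        for (int i = 0; i < 7; i++) {
            double seconds = 4294967296.0 * 8.0 / bps[i];   /* 2^32 bytes, in bits */
            printf("%-9s %8.0f s  (%.1f minutes)\n", name[i], seconds, seconds / 60.0);
        }
        return 0;
    }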

  3. Keeping the pipe full • The sequence number space should be twice as large as the window size; with the 32-bit SequenceNum and the 16-bit window this condition holds. • The window size (the number of bytes in transit) is given by the AdvertisedWindow field (16 bits). • The higher the bandwidth, the larger the window size needed to keep the pipe full. • Essentially we regard the network as a storage system, and the amount of data it holds is equal to bandwidth × delay.

  4. Required window size for a 100 msec RTT. Delay × bandwidth product by bandwidth: T1 (1.5 Mbps): 18 KB; Ethernet (10 Mbps): 122 KB; T3 (45 Mbps): 549 KB; FDDI (100 Mbps): 1.2 MB; STS-3 (155 Mbps): 1.8 MB; STS-12 (622 Mbps): 7.4 MB; STS-24 (1.2 Gbps): 14.8 MB.
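
The delay × bandwidth products above follow directly from link rate × RTT; a companion C sketch for the 100 ms RTT case (1 KB taken as 1024 bytes, nominal SONET rates assumed):

    #include <stdio.h>

    /* Delay x bandwidth product: how many bytes must be in flight
       to keep a pipe with a 100 ms RTT full. */
    int main(void) {
        const char  *name[] = { "T1", "Ethernet", "T3", "FDDI",
                                "STS-3", "STS-12", "STS-24" };
        const double bps[]  = { 1.5e6, 10e6, 45e6, 100e6,
                                155.52e6, 622.08e6, 1244.16e6 };
        const double rtt    = 0.100;                 /* seconds */

        for (int i = 0; i < 7; i++) {
            double bytes = bps[i] * rtt / 8.0;       /* bits -> bytes */
            printf("%-9s %10.1f KB\n", name[i], bytes / 1024.0);
        }
        return 0;
    }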

  5. Original Algorithm for Adaptive Retransmission • Measure SampleRTT for each segment/ACK pair. • Compute a weighted average of the RTT: EstimatedRTT = a × EstimatedRTT + (1 - a) × SampleRTT, where 0.8 < a < 0.9. • Set the timeout based on EstimatedRTT: TimeOut = 2 × EstimatedRTT.
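
A minimal sketch of this exponentially weighted average, assuming RTTs in milliseconds and a = 0.875 as one choice in the 0.8 to 0.9 range:

    /* Original adaptive retransmission: weighted average of the
       measured RTT, timeout set to twice the estimate. */
    static double EstimatedRTT = 0.0;

    double update_rtt(double SampleRTT) {
        const double a = 0.875;                /* any value in (0.8, 0.9) */
        if (EstimatedRTT == 0.0)
            EstimatedRTT = SampleRTT;          /* first measurement */
        else
            EstimatedRTT = a * EstimatedRTT + (1.0 - a) * SampleRTT;
        return 2.0 * EstimatedRTT;             /* TimeOut */
    }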

  6. Karn/Partridge Algorithm • Do not sample RTT when re-transmitting • Double timeout after each retransmission
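
Building on the update_rtt() sketch above, the two Karn/Partridge rules might look like this (the struct and handler names are made up for illustration):

    double update_rtt(double SampleRTT);       /* EWMA sketch from slide 5 */

    struct rtx_state {
        double timeout;        /* current retransmission timeout */
        int    retransmitted;  /* segment sent more than once?    */
    };

    void on_timeout(struct rtx_state *s) {
        s->timeout *= 2.0;     /* exponential backoff */
        s->retransmitted = 1;  /* the next ACK is ambiguous */
    }

    void on_ack(struct rtx_state *s, double SampleRTT) {
        if (!s->retransmitted)                 /* only sample unambiguous ACKs */
            s->timeout = update_rtt(SampleRTT);
        s->retransmitted = 0;
    }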

  7. Karn/Partridge Algorithm

  8. Jacobson/Karels Algorithm • New calculation for the average RTT: Diff = SampleRTT - EstimatedRTT; EstimatedRTT = EstimatedRTT + (d × Diff); Deviation = Deviation + d × (|Diff| - Deviation), where d is a fraction between 0 and 1. • Consider the variance when setting the timeout value: TimeOut = m × EstimatedRTT + f × Deviation, where m = 1 and f = 4. • Notes • The algorithm is only as good as the granularity of the clock (500 ms on Unix). • An accurate timeout mechanism is important to congestion control (later).
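
Putting the update rules together, a sketch with d = 1/8 (a typical choice) and the slide's m = 1, f = 4:

    #include <math.h>

    /* Jacobson/Karels: track both the mean RTT and its deviation. */
    static double EstimatedRTT = 0.0;
    static double Deviation    = 0.0;

    double jk_update(double SampleRTT) {
        const double d = 0.125, m = 1.0, f = 4.0;
        double Diff = SampleRTT - EstimatedRTT;

        EstimatedRTT += d * Diff;
        Deviation    += d * (fabs(Diff) - Deviation);

        return m * EstimatedRTT + f * Deviation;    /* TimeOut */
    }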

  9. Congestion Control Mechanisms • The sender must perform retransmissions to compensate for packets lost due to buffer overflow. • Unneeded retransmissions by the sender, caused by large delays, make routers use link bandwidth to forward unneeded copies of a packet. • When a packet is dropped along a path, the capacity used at each upstream router to forward it to the point where it was dropped is wasted.

  10. Delay/Throughput Tradeoffs

  11. Router with infinite buffer capacity

  12. Fairness of TCP congestion mechanism

  13. Flows and resource allocation • Flow: a sequence of packets with a common characteristic. • A layer-N flow → the common attribute is a layer-N attribute. • All packets exchanged between two hosts → a network-layer flow. • All packets exchanged between two processes → a transport-layer flow.

  14. Max-min fair bandwidth allocation • Goal: fairness in a best-effort network. • Consider: • Unidirectional flows. • Routers with infinite buffer space. • Link capacity is the only limiting factor.

  15. Algorithm • Start with an allocation of zero Mbps for each flow. • Increment the allocation of every flow equally until one of the links of the network becomes saturated; all flows passing through the saturated link now get an equal fraction of that link's capacity. • Increment equally the allocation of each flow that does not pass through the first saturated link until a second link becomes saturated; again, the flows through that link share its capacity equally. • Continue incrementing equally the allocations of all flows that do not use a saturated link, until every flow uses at least one saturated link.
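
A sketch of this progressive-filling computation on a made-up two-link, three-flow topology (capacities in Mbps; which flow crosses which link is hard-coded for the example):

    #include <stdio.h>

    #define NFLOWS 3
    #define NLINKS 2

    /* Raise all unfrozen flows at the same rate; when a link saturates,
       freeze every flow that crosses it. */
    int main(void) {
        double capacity[NLINKS]  = { 10.0, 4.0 };            /* Mbps */
        int uses[NFLOWS][NLINKS] = { {1,0}, {1,1}, {0,1} };  /* flow x link */
        double alloc[NFLOWS] = { 0 };
        int frozen[NFLOWS]   = { 0 };

        for (;;) {
            double inc = -1.0;              /* smallest increment that saturates a link */
            for (int l = 0; l < NLINKS; l++) {
                double load = 0.0; int n = 0;
                for (int f = 0; f < NFLOWS; f++)
                    if (uses[f][l]) { load += alloc[f]; if (!frozen[f]) n++; }
                if (n == 0) continue;
                double room = (capacity[l] - load) / n;
                if (inc < 0.0 || room < inc) inc = room;
            }
            if (inc < 0.0) break;           /* every flow uses a saturated link */

            for (int f = 0; f < NFLOWS; f++)  /* raise the unfrozen flows */
                if (!frozen[f]) alloc[f] += inc;

            for (int l = 0; l < NLINKS; l++) {  /* freeze flows on saturated links */
                double load = 0.0;
                for (int f = 0; f < NFLOWS; f++)
                    if (uses[f][l]) load += alloc[f];
                if (load >= capacity[l] - 1e-9)
                    for (int f = 0; f < NFLOWS; f++)
                        if (uses[f][l]) frozen[f] = 1;
            }
        }

        for (int f = 0; f < NFLOWS; f++)
            printf("flow %d: %.2f Mbps\n", f, alloc[f]);
        return 0;
    }

For this topology the output is 8, 2, and 2 Mbps: the two flows sharing the 4 Mbps link are capped at 2 Mbps each, and the remaining capacity of the 10 Mbps link goes to the flow that uses it alone.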

  16. QoS in a datagram network? • Buffer acceptance algorithms. • Explicit Congestion Notification. • Packet Classification. • Flow measurements

  17. Buffer acceptance algorithms • Tail Drop. • RED – Random Early Detection • RIO – Random Early Detection with In and Out packet dropping strategies.

  18. Explicit Congestion Notification (ECN) • The TCP congestion control mechanism discussed earlier has a major flaw: it detects congestion only after the routers have already started dropping packets. Network resources are wasted because packets are dropped at some point along their path, after using link bandwidth as well as router buffers and CPU cycles up to the point where they are discarded. • The question that comes to mind is: could routers prevent congestion by informing the source of the packets when they become lightly congested, but before they start dropping packets? This strategy is called source quench.

  19. Source quench • Send explicit notifications to the source, e.g., using ICMP. Yet sending more packets into a network that already shows signs of congestion may not be the best idea. • Alternatively, set a congestion notification flag in the IP header to inform the destination; then have the destination inform the source by setting a flag in the TCP header of the segments carrying acknowledgments.

  20. Problems with ECN (1) TCP must be modified to support the new flag. (2) Routers must be modified to distinguish between ECN-capable flows and those that do not support ECN. (3) IP must be modified to support the congestion notification flag. (4) TCP should allow the sender to confirm the congestion notification to the receiver, because acknowledgments could be lost.

  21. Maximum and minimum bandwidth guarantees • A. Packet classification. • Identify the flow a packet belongs to. • At what layer should this be done? Network layer? • At each router → too expensive. • The edge routers may be able to do it. • At the application layer? Difficult. • MPLS (Multiprotocol Label Switching): add an extra header in front of the IP header; a router then decides the output link based upon the input link and the MPLS header.

  22. Maximum and minimum bandwidth guarantees • B. Flow measurements • How to choose the measurement interval to accommodate bursty traffic? • Token bucket

  23. The token bucket filter • Characterized by: (1) a token rate R, and (2) the depth of the bucket, B. • Basic idea: the sender is allocated tokens at a given rate and can accumulate tokens in the bucket until the bucket is full. To send a byte the sender must have a token. The maximum burst can be of size B because at most B tokens can be accumulated.

  24. Example • Flow A generates data at a constant rate of 1 Mbps. Its filter will support a rate of 1 Mbps and a bucket depth of 1 byte. • Flow B alternates between 0.5 and 2.0 Mbps. Its filter will also support a rate of 1 Mbps, but needs a bucket depth of 1 Mb (a bucket depth is an amount of data, not a rate). • Note: a single flow can be described by many token buckets.

  25. Example

  26. Token bucket
  L = packet length
  C = # of tokens in the bucket
  ---------------------------------------------------
  if ( L <= C ) {
      accept the packet;
      C = C - L;
  } else
      drop the packet;

  27. A shaping buffer delays packets that do not conform to the traffic shape
  if ( L <= C ) {
      accept the packet;
      C = C - L;
  } else {
      /* the packet arrived early, delay it */
      while ( C < L ) { wait; }
      transmit the packet;
      C = C - L;
  }
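
Both fragments above leave the replenishment of C implicit; a self-contained policing sketch that refills tokens at rate R up to the depth B (names and units are illustrative, not from the slides):

    /* Token-bucket policer with explicit refill: tokens arrive at R
       bytes per second and accumulate up to the bucket depth B bytes. */
    typedef struct {
        double R;      /* token rate, bytes/second     */
        double B;      /* bucket depth, bytes          */
        double C;      /* current tokens, bytes        */
        double last;   /* time of the last update, sec */
    } token_bucket;

    /* Returns 1 if a packet of L bytes arriving at time now conforms. */
    int tb_accept(token_bucket *tb, double L, double now) {
        tb->C += tb->R * (now - tb->last);   /* refill since the last packet */
        if (tb->C > tb->B) tb->C = tb->B;    /* the bucket cannot overflow   */
        tb->last = now;

        if (L <= tb->C) {                    /* same test as on slide 26 */
            tb->C -= L;
            return 1;                        /* accept */
        }
        return 0;                            /* drop (policing) */
    }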

  28. A QoS Capable Router
