
Low-Latency Adaptive Streaming Over TCP

Authors: Ashvin Goel (University of Toronto), Charles Krasic (University of British Columbia), Jonathan Walpole (Portland State University). Presented by: Ameya S. Kulkarni and JongHwa Song.


Presentation Transcript


  1. Author Low-Latency Adaptive Streaming Over TCP • Ashvin Goel (University of Toronto) • Charles Krasic (University of British Columbia) • Jonathan Walpole (Portland State University) • Presented by Ameya S. Kulkarni and JongHwa Song

  2. Concept of Paper • Benefits of TCP for media streaming – congestion-controlled delivery, flow control, reliable delivery (packet-loss recovery) • Problem: TCP introduces latency at the application level • Proposed solution: an adaptive send-buffer-size tuning technique

  3. Agenda • First two sections present the challenge • Next section analyzes the ways in which TCP introduces latency • Adaptive send-buffer technique for reducing TCP latency • Effect on throughput, and the tradeoff between network throughput and latency • Implementation • Real streaming-application example • Justification of benefits

  4. The Challenge • Applications must adapt media quality in response to TCP's estimate of currently available bandwidth • Adaptive streaming applications use prioritized data dropping and dynamic rate shaping • With large send buffers, the application must make adaptation decisions far in advance of data transmission • Result: adaptation is unresponsive and performs poorly as available bandwidth varies over time

  5. TCP-Induced Latency • End-to-end latency consists of application-level latency and protocol latency • TCP introduces protocol latency in three ways: • Packet retransmission • Congestion control • Sender-side buffering

  6. TCP Congestion Window • Window size (CWND) = maximum number of unacknowledged, distinct packets in flight • The send buffer keeps copies of packets in flight • Throughput of a TCP stream = CWND / RTT • TCP induces latency in the following ways: • Packet retransmission – at least 1 RTT of delay • Congestion control – at least 1.5 RTT of delay • Sender-side buffering
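The relations on this slide can be sketched with simple arithmetic. This is an illustration only; the window size, segment size, and RTT values below are made-up examples, not figures from the paper:

```python
def tcp_throughput(cwnd_packets, mss_bytes, rtt_s):
    """Steady-state TCP throughput: one congestion window of data per round trip."""
    return cwnd_packets * mss_bytes / rtt_s

# Hypothetical numbers: 20-packet window, 1460-byte segments, 100 ms RTT.
throughput = tcp_throughput(20, 1460, 0.100)  # 292,000 bytes/s (~2.3 Mb/s)

# Minimum extra protocol latency per the slide above (in seconds):
rtt = 0.100
retransmission_delay = 1.0 * rtt  # packet retransmission: at least 1 RTT
congestion_delay = 1.5 * rtt      # congestion control: at least 1.5 RTT
```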

  7. Adaptive Send-Buffer Tuning • Latency is introduced because packets are blocked in the send buffer behind the packets in flight • Idea: reduce the send buffer size to CWND • The send buffer size should never be less than CWND • Tune the send buffer size to follow CWND as it changes • Since CWND changes dynamically over time, this technique is called adaptive send-buffer tuning, and such a TCP connection is a MIN_BUF TCP flow
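As a sketch of the idea above (not the kernel code), the tuning rule and its effect on queued packets can be illustrated as follows; the fixed 64-packet default buffer is an assumed example value:

```python
def tuned_buffer_size(cwnd):
    """Adaptive send-buffer tuning: the buffer tracks CWND and never falls
    below it, since the CWND packets in flight must be kept for possible
    retransmission."""
    return max(cwnd, 1)

def blocked_packets(buffer_size, cwnd):
    """Packets queued in the send buffer behind the CWND in-flight packets;
    these blocked packets are the source of sender-side buffering latency."""
    return max(0, buffer_size - cwnd)

# With a fixed (hypothetical) 64-packet buffer and CWND = 10, 54 packets
# queue behind the window; with the tuned buffer, none do.
fixed = blocked_packets(64, 10)                     # 54 blocked packets
tuned = blocked_packets(tuned_buffer_size(10), 10)  # 0 blocked packets
```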

  8. MIN_BUF TCP • Blocks an application from writing data to a socket when there are CWND packets in the send buffer • The application can write a packet to the socket only when an ACK arrives and the CWND opens up • MIN_BUF TCP moves the latency due to blocked packets to the application level • The application gains much greater control over sending time-critical data

  9. Evaluation • Forward-path congestion topology • Reverse-path congestion topology

  10. Comparison of Latencies

  11. Other Factors Affecting Latency • Experiments performed with smaller bandwidth (10 Mb/s) at the router • Experiments performed with smaller round-trip times (25 ms and 50 ms) • Observations: • Latency increases with less available bandwidth • Latency decreases with lower RTT

  12. Experiment with ECN • ECN – Explicit Congestion Notification • ECN MIN_BUF TCP still performs better than plain TCP flows • TCP with ECN but without MIN_BUF drops fewer packets but still suffers large delays because of blocked packets • Thus, MIN_BUF TCP gives the application more control and flexibility over what data is sent and when it is sent, e.g. high-priority data • Next: TCP throughput

  13. Consideration: Latency + Throughput (5. EFFECT ON THROUGHPUT) • When an ACK arrives, standard TCP has a packet buffered and sends it immediately; MIN_BUF TCP must wait for the application to write the next packet • Solution: adjust the buffer size to be slightly larger than CWND

  14. Considering Events • A. ACK arrival – Standard TCP: when an ACK arrives for the first packet in the TCP window, the window admits a new packet. MIN_BUF TCP: buffer one additional packet so it can be sent immediately • B. Delayed ACK – Purpose: save bandwidth in the reverse direction. Function: one ACK for every two data packets received, so each ACK arrival opens TCP's window by two packets. MIN_BUF TCP: buffer two additional packets instead of one

  15. Considering Events • C. CWND increase – TCP increments CWND by one every round-trip time, so an ACK arrival can release two packets, and with delayed ACKs, three additional packets. The byte-counting algorithm mitigates the impact of delayed ACKs on CWND growth • D. ACK compression – At routers, ACKs can be compressed so they arrive at the sender in a burst; in the worst case, the ACKs for all CWND packets arrive together. MIN_BUF TCP solution: allow 2 * CWND packets (the default send buffer size is large enough for this)

  16. Considering Events • E. Dropped ACK – When the reverse path is congested, ACK packets are dropped; a later ACK then acknowledges more than two packets and acts similarly to ACK compression

  17. MIN_BUF TCP Streams • Send-buffer limit: A * CWND + B (A > 0, B ≥ 0) • A handles any bandwidth reduction caused by ACK compression • B takes ACK arrivals, delayed ACKs, and CWND increase into account • With A ≥ 2 and B ≥ 1, the number of packets TCP can send and acknowledge is unaffected, so MIN_BUF TCP throughput is comparable to TCP's • Tradeoff between A and B: latency vs. throughput (every additional blocked packet increases latency) • Notation: MIN_BUF(A,B); the default is MIN_BUF(1,0)

  18. Evaluation • MIN_BUF(1,0) – the original scheme • MIN_BUF(1,3) – takes ACK arrivals, delayed ACKs, and CWND increase into account • MIN_BUF(2,0) – takes ACK compression and dropped ACKs into account

  19. Evaluation • X-axis: protocol latency in milliseconds • Y-axis: percentage of packets that arrive at the receiver within a delay threshold

  20. Evaluation • 160 ms threshold: requirement of interactive applications such as video conferencing • 500 ms threshold: requirement of media-control operations • Each experiment is performed 8 times, and latencies are accumulated over all the runs

  21. Forward-Path Topology • Packets delayed beyond the 160 ms threshold: MIN_BUF(1,0) and MIN_BUF(1,3) – less than 2%; MIN_BUF(2,0) – 10%; TCP – 30%

  22. Reverse-Path Topology • Acknowledgement drops slightly increase delay • Packets delayed beyond the 160 ms threshold: MIN_BUF(1,0) and MIN_BUF(1,3) – less than 10%; TCP – 40%

  23. Normalized Throughput • MIN_BUF(2,0): close to standard TCP • MIN_BUF(1,0): least throughput – TCP has no new packets in the send buffer after each ACK is received • MIN_BUF(1,3): 95%, achieving both low latency and good throughput

  24. System Overhead • Writing data to the kernel: high system overhead, because more system calls are invoked to transfer the same amount of data • MIN_BUF TCP writes one packet at a time (MIN_BUF(1,0) slightly higher overhead); standard TCP writes several packets at a time, amortizing the context-switching overhead

  25. System Overhead • Poll calls: MIN_BUF(1,0) has significantly more overhead • Standard TCP issues a poll call after every 14 writes; MIN_BUF(1,0) polls after every write – the ratio between the two is 12.66

  26. System Overhead • Total CPU time: MIN_BUF(1,0) uses three times as much, as a result of fine-grained writes • To reduce the overhead, use larger values of the MIN_BUF parameters

  27. IMPLEMENTATION • MIN_BUF TCP is implemented with a small modification to the Linux 2.4 kernel, using a new SO_TCP_MIN_BUF socket option • Limits the send buffer to A * CWND + MIN(B, CWND) segments (segments are packets of maximum segment size, or MSS) • The send buffer size is at least CWND, because A must be an integer greater than zero and B is zero or larger • Defaults: A = 1, B = 0
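The buffer limit on slide 27 is easy to state in code. This is an illustrative sketch of the formula only; the real change lives inside the Linux 2.4 kernel behind the SO_TCP_MIN_BUF option, not at application level:

```python
def send_buffer_limit(cwnd, a=1, b=0):
    """Send-buffer limit in MSS-sized segments: A * CWND + MIN(B, CWND).

    A must be an integer >= 1 and B >= 0, which guarantees the limit is
    never smaller than CWND itself.
    """
    if a < 1 or b < 0:
        raise ValueError("require A >= 1 and B >= 0")
    return a * cwnd + min(b, cwnd)

# The configurations evaluated earlier, at a CWND of 10 segments:
assert send_buffer_limit(10)           == 10  # MIN_BUF(1,0), the default
assert send_buffer_limit(10, a=1, b=3) == 13  # MIN_BUF(1,3)
assert send_buffer_limit(10, a=2, b=0) == 20  # MIN_BUF(2,0)
```

Note how the MIN(B, CWND) clamp matters when CWND is tiny: at CWND = 2, MIN_BUF(1,3) allows 4 segments, not 5.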

  28. IMPLEMENTATION • A. SACK correction – Add the "sacked out" term to A * CWND + MIN(B, CWND). The sacked-out term, maintained by a TCP SACK sender, is the number of selectively acknowledged packets. The correction ensures the send-buffer limit includes this window and is thus at least CWND + sacked out. Without this correction, TCP SACK is unable to send new packets for a MIN_BUF flow and assumes that the flow is application-limited

  29. IMPLEMENTATION • B. Alternate application-level implementation – The application would stop writing data when the socket buffer reaches a fill-level threshold. The problem with this approach is that the application has to poll the socket fill level; polling is potentially expensive in CPU consumption and inaccurate, since the application is not informed immediately when the fill level drops below the threshold

  30. IMPLEMENTATION • C. Application model – MIN_BUF TCP applications should explicitly align their data to network packets. Two benefits: (1) it minimizes any latency due to coalescing or fragmenting of packets below the application layer; (2) it ensures that low-latency applications are aware of the latency cost and throughput overhead of coalescing or fragmenting application data into network packets • For alignment, an application should write maximum-segment-size (MSS) packets on each write • The TCP_CORK socket option in Linux improves throughput and does not affect protocol latency
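The alignment rule above can be sketched as a simple chunker. The MSS constant here is an assumed example value; the real MSS is negotiated per connection and should be obtained from the kernel:

```python
MSS = 1448  # assumed segment size for illustration only; the real value
            # is per-connection and must be queried, not hard-coded

def mss_aligned_chunks(data, mss=MSS):
    """Split application data so each write() carries exactly one MSS-sized
    packet (the last chunk may be shorter), avoiding coalescing or
    fragmentation of application data below the application layer."""
    return [data[i:i + mss] for i in range(0, len(data), mss)]

chunks = mss_aligned_chunks(b"x" * 3000)
# -> three writes of 1448, 1448, and 104 bytes
```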

  31. Application-Level Evaluation • Evaluates the timing behavior of a real live stream; MIN_BUF TCP helps improve end-to-end latency • Qstream (an open-source adaptive streaming application) provides: • An adaptive media format • An adaptation mechanism

  32. Application-Level Evaluation • Adaptive media format: SPEG (scalable MPEG), a variant of MPEG-1 that supports layered encoding of video data, allowing dynamic data dropping • Adaptation mechanism: PSS (priority-progress streaming). The key idea is an adaptation period, which determines how often the sender drops data; within each adaptation period, the sender sends data packets in priority order, from the highest priority to the lowest
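The priority-progress idea within one adaptation period can be sketched as follows. This is a simplified illustration, not Qstream's code; the function name, the (priority, payload) representation, and the abstract "budget" standing in for the period's time limit are all assumptions:

```python
def pss_send_order(packets, budget):
    """Priority-progress streaming within one adaptation period (sketch):
    send packets in strict priority order, highest first, until the
    period's send budget runs out, then drop the rest.

    `packets` is a list of (priority, payload) pairs; `budget` stands in
    for the time/bandwidth available before the adaptation period ends.
    """
    ordered = sorted(packets, key=lambda p: p[0], reverse=True)
    return ordered[:budget], ordered[budget:]  # (sent, dropped)

# A base layer outranks its enhancement layers, so it survives a tight budget:
sent, dropped = pss_send_order([(1, "enh2"), (3, "base"), (2, "enh1")], budget=2)
# sent    -> [(3, "base"), (2, "enh1")]
# dropped -> [(1, "enh2")]
```

Dropping the lowest-priority data first is what lets the layered SPEG encoding degrade quality gracefully instead of stalling.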

  33. Evaluation Methodology

  34. Result

  35. Result

  36. CONCLUSIONS • Tuning TCP's send buffer enables low-latency streaming over TCP and shows the significant effect TCP has on latency at the application level • Extra (blocked) packets help recover throughput without increasing protocol latency • A layered media encoding was used for evaluation, reducing both end-to-end latency and variation in media quality

  37. Questions
