
8.4 Wide-Scale Internet Streaming Study



Presentation Transcript


  1. 8.4 Wide-Scale Internet Streaming Study. CMPT 820 – November 2nd, 2010. Presented by: Mathieu Spénard

  2. Goal • Measure the performance of the Internet while streaming multimedia content, from the user's point of view

  3. Previous Studies – TCP Perspective • Study the performance of the Internet • At backbone routers and campus networks • Some studies (Paxson, Bolliger et al.) mimic an FTP transfer, which is good for now but doesn't represent how entertainment-oriented services will evolve (few backbone video servers, lots of users) • Ping, traceroute, UDP echo packets, multicast backbone audio packets

  4. Problem? • Not realistic! These measurements do not represent what people experience at home when using real-time video streaming

  5. Study Real-Time Streaming • Use 3 different dial-up Internet Service Providers in the U.S.A. • Mimic typical home-user behaviour of the late 1990s–early 2000s • Real-time streaming differs from TCP because: • TCP's rate is driven by congestion control • TCP uses ACKs for retransmission; real-time applications send NACKs instead • TCP relies on window-based flow control; real-time applications use rate-based flow control

  6. Setup • Unix video server connected to the UUNET backbone with a T1 • ISPs: AT&T WorldNet, Earthlink, IBM Global Network • 56 kbps, V.90 modems • All clients were in NY state, but dialed long-distance access numbers in all 50 states (in various major cities across the U.S.A.) to connect to the ISP via PPP • Each session issues a parallel traceroute to the server and then requests a 10-minute video stream

  7. Setup (cont'd) • Phone database of all numbers to dial • Dialer • Parallel traceroute (sketched below) • Implemented using ICMP (instead of UDP) • Sends all probes in parallel • Records the IP Time-to-Live (TTL) of each returned message
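The authors built their own tool; a minimal sketch of the same idea, assuming the scapy library and root privileges, is shown below. All ICMP probes for TTLs 1..max_ttl are sent at once, and each router on the path identifies itself in the ICMP Time Exceeded reply to the probe whose TTL expired at it. The function name and parameters are illustrative, not the study's implementation.

```python
# Parallel-traceroute sketch (assumes scapy and root privileges).
from scapy.all import IP, ICMP, sr

def parallel_traceroute(dst, max_ttl=30, timeout=3):
    # Emit probes for every TTL in one batch instead of hop by hop.
    answered, _ = sr(IP(dst=dst, ttl=(1, max_ttl)) / ICMP(),
                     timeout=timeout, verbose=0)
    hops = {}
    for sent, received in answered:
        hops[sent.ttl] = received.src      # hop index -> responding router IP
    if not hops:
        return []
    return [hops.get(ttl) for ttl in range(1, max(hops) + 1)]

if __name__ == "__main__":
    for i, hop in enumerate(parallel_traceroute("example.com"), start=1):
        print(i, hop or "*")
```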

  8. What is a success? • Sustain the transmission of the 10-minute video sequence at the stream's target IP rate • Aggregate packet loss is less than a specific threshold • Aggregate incoming bit rate is above a specific bit rate • Experimentally found that this filters out modem-related issues
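The success test therefore reduces to two aggregate checks per session. A minimal sketch follows; the threshold values are placeholders for illustration, not the figures used in the study.

```python
# Sketch of the success criterion; LOSS_THRESHOLD and MIN_BITRATE are
# illustrative placeholders, not the thresholds from the study.
LOSS_THRESHOLD = 0.15      # max tolerated aggregate packet-loss fraction (assumed)
MIN_BITRATE = 70_000       # min aggregate incoming bit rate in bits/s (assumed)

def session_succeeded(packets_sent, packets_received, bytes_received, duration_s):
    loss = 1.0 - packets_received / packets_sent
    bitrate = 8 * bytes_received / duration_s
    return loss < LOSS_THRESHOLD and bitrate > MIN_BITRATE
```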

  9. When does the experiment end? • 50 states (including AK and HI) • Each day separated into 8 chunks of 3 hours each • One week • 50 × 8 × 7 = 2800 successful sessions per ISP

  10. Streaming Sequences • 5 frames per second, encoded using MPEG-4 • 576-byte IP packets; a packet always starts at the beginning of a frame • Startup delay: network-independent portion 1300 ms + delay-jitter budget 2700 ms = 4000 ms total [Multimedia over IP and Wireless Networks, Table 8.1, page 246]
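A small sketch of that packetization rule: each frame is split independently so no packet ever spans a frame boundary. The 40-byte per-packet header overhead is an assumption for illustration, not a figure from the study.

```python
# Packetization sketch: 576-byte IP packets, each starting at a frame boundary.
PACKET_BYTES = 576
HEADER_BYTES = 40                 # assumed IP/transport overhead, not from the study
PAYLOAD_BYTES = PACKET_BYTES - HEADER_BYTES

def packetize(frames):
    """frames: list of bytes objects, one encoded frame each."""
    packets = []
    for frame_id, frame in enumerate(frames):
        # Restart packing at every frame so no packet spans two frames.
        for offset in range(0, len(frame), PAYLOAD_BYTES):
            packets.append((frame_id, frame[offset:offset + PAYLOAD_BYTES]))
    return packets
```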

  11. Client-Server Architecture • Multi-threaded server, good for handling NACK requests • Packets sent in bursts of between 340 and 500 ms to keep server overhead low • Client uses NACKs for lost packets • Client collects statistics about received packets and decoded frames

  12. Client-Server Architecture (cont'd) • Example: RTT measurement. The client sends a NACK; the server responds with a retransmission carrying the requested sequence number; the client measures the time difference • If too few real NACKs are generated, the client issues simulated ones so it still collects data; this happens every 30 seconds when packet loss is below 1%
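A minimal sketch of the client-side bookkeeping for this NACK-based RTT measurement; the class and method names are hypothetical, for illustration only.

```python
import time

# Remember when each NACK was sent and match it to the retransmission that
# carries the same sequence number; the difference is one RTT sample.
class RttMeter:
    def __init__(self):
        self.pending = {}      # sequence number -> time the NACK was sent
        self.samples = []      # measured RTTs in seconds

    def on_nack_sent(self, seq):
        self.pending[seq] = time.monotonic()

    def on_retransmission(self, seq):
        sent_at = self.pending.pop(seq, None)
        if sent_at is not None:
            self.samples.append(time.monotonic() - sent_at)
```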

  13. Notation • Dxn denotes the dataset collected through ISPx (x = a, b, c) with stream Sn (n = 1, 2) • Dn denotes the combined set {Dan ∪ Dbn ∪ Dcn}

  14. Experimental Results • D1: • 3 clients performed 16,783 long-distance connections • 8429 successes • 37.7 million packets arrived at the clients • 9.4 GB of data • D2: • 17,465 connections • 8423 successes • 47.3 million packets arrived at the clients • 17.7 GB of data

  15. Experimental Results (cont'd) • Failure reasons: • PPP-layer connection problem • Can't reach the server (failed traceroute) • High bit-error rates • Low modem connection rate

  16. Experimental Results (cont'd) • Average time to trace an end-to-end path: 1731 ms • D1 encountered 3822 different Internet routers, D2 4449, and together 5266 • D1 encountered 11.3 hops on average (from 6 to 17), D2 11.9 (from 6 to 22)

  17. Experimental Results (cont'd) [Multimedia over IP and Wireless Networks, Fig. 8.9 (top), page 250]

  18. Purged Datasets • D1p and D2p are made up of the successful sessions • 16,852 successful sessions • Account for 90% of the bytes and packets • And 73% of the routers

  19. Packet Loss • Average packet loss was 0.53% in D1p and 0.58% in D2p • Much higher than what ISPs advertise for their backbones (0.01–0.1%) • Therefore, the loss is suspected to happen at the edges • 38% of all sessions had no packet loss; 75% had loss rates < 0.3% and 91% had loss rates < 2% • 2% of all sessions had packet loss > 6%

  20. Packet Loss – Time Factor [Multimedia over IP and Wireless Networks, Fig. 8.10 (top), page 252]

  21. Loss Burst Lengths • 207,384 loss bursts and 431,501 lost packets (see the sketch below for how bursts are extracted from a sequence-number trace) [Multimedia over IP and Wireless Networks, Fig. 8.11 (top), page 253]
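A loss burst here is a maximal run of consecutive missing sequence numbers. A minimal sketch of extracting burst lengths from the received sequence numbers of one session (illustrative, not the authors' analysis code):

```python
from collections import Counter

# Derive loss-burst lengths from the set of received sequence numbers.
# A burst is a maximal run of consecutive missing sequence numbers.
def loss_burst_lengths(received_seqs, first_seq, last_seq):
    received = set(received_seqs)
    bursts, current = [], 0
    for seq in range(first_seq, last_seq + 1):
        if seq in received:
            if current:
                bursts.append(current)
                current = 0
        else:
            current += 1
    if current:
        bursts.append(current)
    return bursts

# Example session: packets 3, 4, 7, 8 and 10 never arrive.
lengths = loss_burst_lengths([1, 2, 5, 6, 9], first_seq=1, last_seq=10)
print(Counter(lengths))   # Counter({2: 2, 1: 1}) -> bursts {3,4}, {7,8}, {10}
```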

  22. Loss Burst Lengths (cont'd) • Router queues overflowed at a rate smaller than the time to transmit a single IP packet over a T1 • Random Early Detection (RED): apparently disabled by the ISPs • When the burst length is >= 2, were the packets dropped at the same router or at different ones?

  23. Loss Burst Lengths (cont'd) • In each of D1p and D2p: • Single-packet bursts contained 36% of all lost packets • Bursts <= 2 contained 49% • Bursts <= 10 contained 68% • Bursts <= 30 contained 82% • Bursts >= 50 contained 13%

  24. Loss Burst Durations • If a router's queue is full and the packets within a burst are sent very close to one another, they might all be dropped • Loss-burst duration = time between the last packet received before the burst and the first packet received after it • 98% of loss-burst durations were < 1 second, which could be explained by data-link-layer retransmission

  25. Heavy Tails • Packet losses are dependent on one another; this can create a cascading effect • Future real-time protocols should account for bursty packet loss and heavy-tailed distributions • How to estimate it?

  26. Heavy Tails (cont'd) • Use a Pareto distribution • CDF: F(x) = 1 − (β/x)^α • PDF: f(x) = αβ^α x^(−α−1) • In this case, α = 1.34 and β = 0.65 [Multimedia over IP and Wireless Networks, Fig. 8.12 (top), page 256]
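For illustration, a small sketch that evaluates this Pareto model and estimates α by maximum likelihood when β (the minimum value) is fixed, using the standard estimator α̂ = n / Σ ln(xᵢ/β). The sample data in the example is made up.

```python
import math

ALPHA, BETA = 1.34, 0.65   # parameters reported for the heavy-tail fit

def pareto_cdf(x, alpha=ALPHA, beta=BETA):
    return 1.0 - (beta / x) ** alpha if x >= beta else 0.0

def pareto_pdf(x, alpha=ALPHA, beta=BETA):
    return alpha * beta ** alpha * x ** (-alpha - 1) if x >= beta else 0.0

def fit_alpha(samples, beta=BETA):
    """Maximum-likelihood estimate of alpha with beta fixed."""
    return len(samples) / sum(math.log(x / beta) for x in samples)

# Made-up observations, just to show the call:
print(fit_alpha([0.7, 0.9, 1.3, 2.4, 5.0]))
```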

  27. Underflow Events • Packet loss: 431,501 packets • 159,713 (37%) were discovered missing when it was already too late => no NACK sent • 431,501 − 159,713 = 271,788 packets left • Of those, 257,065 (94.6%) were recovered before their deadline, 9013 (3.3%) arrived late, and 5710 (2.1%) were never recovered

  28. Underflow Events (cont'd) • 2 types of late retransmissions: • Packets that arrive after the last frame of their GoP is decoded => completely useless • Packets that are late but can still be used for predicting frames within their GoP => partially late • Of the 9013 late retransmissions, 4042 (49%) were partially late
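A hedged sketch of this classification on the client side, assuming the receiver knows each packet's own decode deadline and the decode time of the last frame of its GoP; the function and argument names are hypothetical.

```python
# Hypothetical classification of a retransmitted packet relative to its GoP.
# arrival, frame_deadline and gop_end_deadline are timestamps in seconds.
def classify_retransmission(arrival, frame_deadline, gop_end_deadline):
    if arrival <= frame_deadline:
        return "on time"            # usable for its own frame
    if arrival <= gop_end_deadline:
        return "partially late"     # still helps predict later frames in the GoP
    return "completely late"        # the whole GoP is already decoded -> useless
```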

  29. Underflow Events (cont'd) • Total underflows caused by packet loss: 174,436 • 1,167,979 underflows in data packets that were not retransmitted • 1.7% of all packets caused underflows • Frame freezes of 10.5 s on average in D1p and 8.6 s in D2p

  30. Round-Trip Delay • 660,439 RTT samples in each of D1p and D2p • 75% < 600 ms, 90% < 1 s, 99.5% < 10 s, and 20 samples exceeded 75 s [Multimedia over IP and Wireless Networks, Fig. 8.13 (top), page 259]

  31. Round-Trip Delay (cont'd) • Varies according to the time of day • Correlated with the length of the end-to-end path (measured in hops with traceroute) • Very little correlation with geographical location

  32. Delay Jitter • One-way delay jitter = difference between the one-way delays of 2 consecutive packets • Looking at positive values of one-way delay jitter, the highest value was 45 s; 97.5% were < 140 ms and 99.9% < 1 s • Cascading effect: many packets can then be delayed, causing many underflows
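A minimal sketch of this jitter definition, computed from per-packet send and receive timestamps; any constant clock offset between sender and receiver cancels in the difference. The timestamps in the example are illustrative.

```python
# One-way delay jitter = difference between the one-way delays of two
# consecutive packets. A constant sender/receiver clock offset cancels out.
def one_way_delay_jitter(send_times, recv_times):
    delays = [r - s for s, r in zip(send_times, recv_times)]
    return [b - a for a, b in zip(delays, delays[1:])]

# Illustrative timestamps (seconds):
jitter = one_way_delay_jitter([0.00, 0.02, 0.04], [0.10, 0.13, 0.14])
positive = [j for j in jitter if j > 0]   # the study reports stats on positive jitter
print(jitter, positive)
```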

  33. Packet Reordering • In Da1p, 1/3 of the missing packets were actually reordered • Frequency of reordering = number of reordered packets / total number of missing packets (see the sketch below) • In the experiment, this was 6.5% of missing packets, or 0.04% of all sent packets • 9.5% of sessions experienced at least one reordering • Independent of time of day and state
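A sketch of one reasonable reading of this metric, stated as an assumption rather than the authors' exact procedure: a packet counts as missing when a gap is detected (a higher sequence number arrives first), and as reordered if it later shows up anyway.

```python
# A packet is "missing" if a gap is detected (a higher sequence number arrives
# before it); it is "reordered" if it later arrives anyway.
def reordering_frequency(arrival_order):
    highest_seen = -1
    missing = set()
    reordered = 0
    for seq in arrival_order:
        if seq > highest_seen:
            missing.update(range(highest_seen + 1, seq))   # gap detected
            highest_seen = seq
        elif seq in missing:
            reordered += 1                                  # late arrival -> reordered
    return reordered, len(missing)

r, m = reordering_frequency([0, 1, 3, 2, 4, 7, 5])   # packet 6 never arrives
print(r, m, r / m)   # 2 reordered (2 and 5) out of 3 missing (2, 5, 6)
```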

  34. Packet Reordering (cont'd) • Largest reordering delay was 20 s (interestingly, the reordering distance was only one packet) [Multimedia over IP and Wireless Networks, Fig. 8.16, page 265]

  35. Asymmetric Paths • Using traceroute and the TTL of received packets, the number of hops in each direction between sender and receiver can be established • If the two hop counts differ, the path is definitely asymmetric • If they are the same, we cannot tell, and the path is called potentially symmetric
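A sketch of the reverse-hop-count side of this test: guess the sender's initial TTL as the smallest common default (32, 64, 128, 255) not below the received TTL and take the difference; the forward count comes from traceroute as above. This illustrates the idea under that assumption, not the authors' exact procedure.

```python
# Estimate the reverse-path hop count from the TTL observed in packets arriving
# from the server, assuming the sender used a common default initial TTL.
COMMON_INITIAL_TTLS = (32, 64, 128, 255)

def reverse_hop_count(received_ttl):
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= received_ttl)
    return initial - received_ttl

def classify_path(forward_hops, received_ttl):
    reverse_hops = reverse_hop_count(received_ttl)
    return ("definitely asymmetric" if forward_hops != reverse_hops
            else "potentially symmetric")

print(classify_path(forward_hops=12, received_ttl=53))   # 64 - 53 = 11 hops back
```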

  36. Asymmetric Paths (cont'd) • 72% of sessions were definitely asymmetric • This can happen because paths cross Autonomous System (AS) boundaries, where a "hot-potato" routing policy is enforced • 95% of all sessions that had at least one reordering had asymmetric paths • Of 12,057 asymmetric-path sessions, 1522 had a reordering; of 4795 possibly symmetric paths, only 77 had a reordering

  37. Conclusion • Internet study of real-time streaming • Used various tools such as traceroute to identify the routers along a path • Analysed the percentage of requests that fail • Packet loss and loss-burst durations • Underflow events • Round-trip delay • Delay jitter • Reordering and asymmetric paths

  38. Questions? Thank you!
