A Measurement Study of a Peer-to-Peer Video-on-Demand System
Bin Cheng1, Xuezheng Liu2, Zheng Zhang2 and Hai Jin1
1Huazhong University of Science and Technology
2Microsoft Research Asia
IPTPS 2007, Feb. 28, 2007
Motivation
• VoD is every couch potato's dream
  • Select anything, start at any time, jump to anywhere
• Centralized VoD is costly
  • Servers, bandwidth, content
• P2P VoD is attractive, but challenging:
  • Harder than streaming: no single stream; unpredictable, multiple "swarms"
  • Harder than file downloading: a globally optimal (e.g. "rarest first") policy is inapplicable
• VoD is a superset of file downloading and streaming
Main Contribution
• Detailed measurement of a real, deployed P2P VoD system
• What do we measure?
  • E.g. what does it mean for a system to deliver good UX?
  • How far off are we from an ideal system?
  • How do users behave?
  • Etc.
• Problems spotted
  • There is great tension between scalability and UX
  • Network heterogeneity is an issue
  • Is P2P VoD a luxury that poor peers cannot afford?
Outline
• Motivation
• System background: GridCast
• Measurement methodology
• Evaluation
  • Overall performance
  • User behavior and user experience (UX)
• Conclusions
GridCast Overview
• Tracker server
  • Indexes all joined peers (sketched below)
• Source server
  • Stores a copy of every video file
• Web portal
  • Provides the channel list
• Peer
  • Feeds data to the player
  • Caches all fetched data of the current file
  • Exchanges data with others
(Architecture figure: the peer gets the channel list from the web portal, an initial neighbor list from the tracker, and falls back to the source server.)
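The paper does not publish GridCast's code, so the following is a minimal, hypothetical Python sketch of the tracker's role as the slide describes it: index joined peers per channel and hand a joining peer an initial neighbor list. The class name, method signatures, and the 10-peer cap are assumptions, not the real implementation.

```python
class Tracker:
    """Hypothetical sketch: index all joined peers, per channel."""

    def __init__(self):
        self.index = {}  # channel_id -> {peer_id: playhead offset in seconds}

    def join(self, channel_id, peer_id, playhead):
        """Register a peer and return its initial neighbor list:
        peers of the same channel whose playheads are closest."""
        peers = self.index.setdefault(channel_id, {})
        peers[peer_id] = playhead
        others = [p for p in peers if p != peer_id]
        # content-close first: nearby playheads make the best partners
        others.sort(key=lambda p: abs(peers[p] - playhead))
        return others[:10]  # 10 is an illustrative cap, not from the paper

    def refresh(self, channel_id, peer_id, playhead):
        """Peers refresh their entry periodically (every 5 minutes)."""
        self.index.setdefault(channel_id, {})[peer_id] = playhead

    def leave(self, channel_id, peer_id):
        self.index.get(channel_id, {}).pop(peer_id, None)
```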
One Overlay per Channel
• Finding the partners (a selection sketch follows below)
  • Get the initial content-close set from the tracker when joining
  • Periodically gossip with some near- and far-neighbors (every 30 s)
  • Look up new near-neighbors from the current neighbors when seeking
  • Refresh the tracker every 5 minutes
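A hedged sketch of the neighbor-selection rule implied by the slide: keep peers whose playheads are content-close, plus a few random far peers for overlay connectivity. Only the 30 s gossip period comes from the slide; the caps and the `candidates` dictionary (peer id to playhead) are assumptions.

```python
import random

GOSSIP_PERIOD = 30          # seconds, from the slide
NEAR_CAP, FAR_CAP = 8, 2    # illustrative caps, not from the paper

def select_neighbors(my_playhead, candidates):
    """candidates: peer_id -> playhead, learned from the tracker at
    join time and merged in from partners' gossip every GOSSIP_PERIOD
    seconds.  Keep the content-closest peers as near-neighbors plus
    a few random far-neighbors so the overlay stays connected."""
    ranked = sorted(candidates, key=lambda p: abs(candidates[p] - my_playhead))
    near = ranked[:NEAR_CAP]
    far_pool = ranked[NEAR_CAP:]
    far = random.sample(far_pool, min(FAR_CAP, len(far_pool)))
    return near + far

# After a seek, the peer reruns this around the new playhead, seeding
# `candidates` with the neighbor lists of its current neighbors.
```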
Scheduling (every 10 s)
• Feed data at the current position to the player
• Fetch the next 200 seconds from partners (if they have them)
• Fetch the next 10 seconds from the source server if no partners have them
• If the bandwidth budget allows, fetch the rarest anchor from the source server or partners
(A sketch of this loop follows below.)
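One scheduler round can be sketched as below. Only the 10 s period, the 200 s partner window, the 10 s urgent range, and the rarest-anchor rule come from the slide; the `has`/`request` partner and source interfaces and the anchor attributes are hypothetical.

```python
import random

WINDOW = 200   # seconds fetched ahead from partners
URGENT = 10    # seconds the source server backstops

def schedule_round(playhead, partners, source, anchors, budget_left):
    """One 10 s scheduling round (hedged sketch, assumed interfaces)."""
    for sec in range(playhead, playhead + WINDOW):
        holders = [p for p in partners if p.has(sec)]
        if holders:
            random.choice(holders).request(sec)   # partners first
        elif sec < playhead + URGENT:
            source.request(sec)   # urgent data that no partner has
        # non-urgent missing data simply waits for a later round
    if budget_left > 0 and anchors:
        # spare bandwidth: prefetch the rarest anchor, from a partner
        # if one holds it, otherwise from the source server
        rarest = min(anchors, key=lambda a: a.replica_count)
        holder = next((p for p in partners if p.has(rarest.position)), source)
        holder.request(rarest.position)
```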
Anchor Prefetching
• Anchors are used to improve seek latency
• Each anchor is a 10-second segment
• Anchors are 5 minutes apart
• The playhead is adjusted to the nearest anchor (if present); see the snapping sketch below
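The anchor geometry above pins down the seek-snapping rule; a tiny sketch, where `cached_anchors` (a set of anchor start offsets already prefetched) is an assumed representation:

```python
ANCHOR_SPACING = 300   # anchors every 5 minutes, in seconds
ANCHOR_LENGTH = 10     # each anchor is a 10-second segment

def nearest_anchor(seek_target):
    """Start offset of the anchor closest to the seek target."""
    return round(seek_target / ANCHOR_SPACING) * ANCHOR_SPACING

def adjusted_playhead(seek_target, cached_anchors):
    """Snap the playhead to the nearest anchor if it was prefetched,
    otherwise start from the raw seek target."""
    a = nearest_anchor(seek_target)
    return a if a in cached_anchors else seek_target

# e.g. a seek to 1060 s snaps to the anchor at 1200 s if cached:
assert adjusted_playhead(1060, {900, 1200}) == 1200
assert adjusted_playhead(1060, set()) == 1060
```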
System Setup
• GridCast has been deployed since May 2006
• The tracker server and the Web server share one machine
• One source server with a 100 Mbps link, 2 GB of memory and a 1 TB disk
• Popularity keeps climbing; in Dec 2006:
  • Users: 91K; sessions: 290K; total bytes from the server: 22 TB
• Peer logs collected at the tracker (every 30 s)
  • Latency, jitter, buffer map and anchor usage
  • Sep-log and Oct-log: collected without and with anchor prefetching, respectively
  • Just a matter of switching the codepath as the peer joins
• The source server keeps other statistics (e.g. total bytes served)
Strong Diurnal Pattern
• Hot time vs. cold time
  • Hot time (10:00–24:00)
  • Cold time (0:00–10:00)
• Two peaks
  • After lunchtime and before midnight
• Higher on weekends and holidays
Scalability
• Ideal model: only the lead peer fetches from the source server
• cs model: all data comes from the source server
• GridCast significantly decreases the source server load (vs. the cs model), especially in hot time, and follows the ideal curve quite closely
• The # of active channels increases 3x from cold to hot time – the long tail effect!
(Both reference models are sketched below.)
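The two reference curves can be restated as a hedged sketch over per-session logs. The `sessions` objects with `.channel` and `.bytes_played` fields are an assumed schema, and approximating the ideal model's per-channel server load by the longest session is my simplification, not the paper's method.

```python
def server_bytes_cs(sessions):
    """cs model: every byte every peer plays is served by the source."""
    return sum(s.bytes_played for s in sessions)

def server_bytes_ideal(sessions):
    """Ideal model: per channel, only the lead peer fetches from the
    source, so the server uploads each channel's data roughly once."""
    per_channel = {}
    for s in sessions:
        per_channel[s.channel] = max(per_channel.get(s.channel, 0),
                                     s.bytes_played)
    return sum(per_channel.values())
```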
Why? Understand the Ceiling
• Utilization = data from peers / total fetched data
  • Calculated from the snapshots
• For the ideal model, utilization = (n-1)/n
  • n is the # of users in a session, i.e. the concurrency
• GridCast achieves the ideal when n is large (the bound is worked out below)
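Spelling the bound out (this just restates the slide): in the ideal model only the lead peer ever touches the source, so of the n concurrent users' worth of data, n-1 shares can come from peers.

```latex
\[
\text{utilization} \;=\; \frac{\text{data from peers}}{\text{total fetched data}},
\qquad
\text{utilization}_{\text{ideal}} \;=\; \frac{n-1}{n}.
\]
% e.g. n = 2 already gives 50%, n = 10 gives 90%, and the bound
% tends to 1 as n grows -- hence "GridCast achieves the ideal
% when n is large".
```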
Why Do We Fall Short (when n is small)?
• The peer cannot get the content if:
  • It is only available from the server (missing content), caused by random seeks
  • It exists only on disconnected peers, caused by NATs
  • Its partners do not have enough bandwidth
• Missing content dominates for unpopular files
UX: Latency
• Startup latency (70% < 5 s, 90% < 10 s)
• Seek latency (70% < 3.5 s, 90% < 8 s)
• Seek latency is smaller because:
  • Startup pays a 2-second delay to create TCP connections with the initial partners
  • Short seeks hit cached data
UX: Jitter
• For 5-minute sessions, 72.3% have no jitter at all
• For 40-minute sessions, 40.6% have no jitter
• Avg. delayed data: 3–4%
Reasons for Bad UX
• Network capacity
  • CERNET to CERNET: >100 KB/s
  • Non-CERNET to Non-CERNET: 20–50 KB/s
  • CERNET to Non-CERNET: 4–5 KB/s
• Bad UX in the non-CERNET region might have prevented swarms from forming
Reasons for Bad UX (cont.)
• Server stress and UX are inversely correlated
  • Hot time → lots of active channels → long tail → high server stress → bad UX
• Most pronounced for movies at the tail (next slide)
UX Correlation with Concurrency
• Higher concurrency:
  • Reduces both startup and seek latencies
  • Reduces the amount of jitter
  • Brings UX close to that of cold time
User Seek Behavior
• Seek behavior (without anchors)
  • BACKWARD : FORWARD ≈ 3 : 7
  • Short seeks dominate (80% within 500 seconds)
Seek Behavior vs. Popularity
• Fewer seeks in more popular channels
• More popular channels usually have longer sessions
• So: stop making bad movies
Benefit of Anchor Prefetching
• Significant reduction of seek latency
  • FORWARD seeks get more benefit (seeks < 1 s jump from 33% to 63%)
• "Next-anchor first" is statistically optimal from any one peer's point of view
• "Rarest first" is globally optimal in reducing the load of the source server (but sees 30% of prefetched data go unused)
(Both selection rules are sketched below.)
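A hedged sketch of the two selection rules being contrasted; anchor objects with `.position` and `.replica_count` attributes are assumptions, as in the scheduler sketch earlier.

```python
def next_anchor_first(anchors, playhead):
    """Locally optimal: prefetch the nearest anchor ahead of the
    playhead, since short forward seeks dominate."""
    ahead = [a for a in anchors if a.position > playhead]
    return min(ahead, key=lambda a: a.position) if ahead else None

def rarest_first(anchors):
    """Globally optimal for server load: prefetch the anchor with
    the fewest replicas in the swarm."""
    return min(anchors, key=lambda a: a.replica_count) if anchors else None
```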
Conclusions
• A few things are not new:
  • The diurnal pattern; the looooooooong tail of content
• A few things are new:
  • Seeking behaviors (e.g. the 7:3 forward/backward split; 80% of seeks are short)
  • The correlation of UX with source server stress and with concurrency
• A few things are good to know:
  • Even moderate concurrency improves system utilization and UX
  • Simple prefetching helps improve seeking performance
• A few things remain problematic:
  • The looooooong tail
  • Network heterogeneity
• A lot remains to be done (and is being done)
  • Multi-file caching and proactive replication
http://grid.hust.edu.cn/gridcast
http://www.gridcast.cn

Thank you! Q&A