Presentation Transcript

  1. P2P-VoD

  2. Other P2P services • P2P file downloading: BitTorrent and eMule. Users get different downloading rates depending on how much they are able to contribute. • P2P live streaming: Coolstreaming and PPLive, used when a live video is watched by many users. – Challenge: ensure that all peers can receive the streamed video at the playback rate.

  3. P2P-VoD Systems • P2P-VoD has less synchronicity among the users sharing video content. • Like P2P live streaming, P2P-VoD also uses streaming, but peers can watch different parts of a video at the same time. • To compensate, P2P-VoD requires each user to contribute around 1 GB of storage instead of just the playback buffer as in P2P streaming.

  4. PPLive VoD system • This study is based on a real-world P2P-VoD system built and deployed by PPLive in fall 2007. • By late November 2007, 2.2 million users had tried the system. • A total of 3900 movies were published in November and December 2007. • 500 movies were available online simultaneously. • January 2008: more than 150K users online at the same time.

  5. Major components of a P2P-VoD system: a) A set of servers as the source of content (e.g. movies). b) A set of trackers to help peers connect to other peers sharing the same content. c) A bootstrap server to help peers find a suitable tracker (based on location). d) Log servers for data measurement. e) Transit servers to help peers behind NAT boxes. f) Peers.

  6. Segment sizes • It is good to divide the content into as many pieces as possible for flexibility in scheduling. • It is also good to have big segments to minimize overhead. • Types of overhead: header overhead – each segment carries a header, so the larger the segment, the smaller the relative header overhead. • Advertising overhead – larger segment = smaller overhead; a bitmap represents the pieces a peer is holding. • Protocol overhead – larger segment = smaller overhead (request packets and other protocol packets).
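The advertising-overhead trade-off above can be made concrete with a small calculation. This is an illustrative sketch, not PPLive's actual numbers: it assumes a 1 GB movie and shows how the advertising bitmap shrinks as the segment size grows.

```python
# Illustrative sketch: advertising-bitmap size vs. segment size,
# assuming a 1 GB movie. One bit per segment a peer may hold.

MOVIE_BYTES = 1 << 30  # 1 GB

def bitmap_bits(segment_bytes: int) -> int:
    """Bits needed to advertise which segments of the movie a peer holds."""
    return -(-MOVIE_BYTES // segment_bytes)  # ceiling division

for seg in (16 * 1024, 1 << 20, 2 << 20):  # 16 KB piece vs. 1-2 MB chunk
    print(f"{seg:>8} B segments -> {bitmap_bits(seg):>6}-bit bitmap")
```

At 16 KB segments the bitmap has 65536 bits; at 1 MB it has only 1024, which is why a larger "chunk" unit is used for advertising.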

  7. Different units of a movie in PPLive's VoD system. Piece: the size is dictated by the media player; 16KB is chosen. Subpiece: if the size of a piece is too large for transmission, subpieces are used. Chunk: the unit for advertising to neighbors which parts of a movie a peer holds.

  8. Replication strategies • A peer is assumed to contribute a fixed amount of hard disk storage (e.g. 1GB). • When all pieces of a chunk are available, it is advertised to other peers. • Replication strategy goal: make chunks as available to the user population as possible, meeting users' viewing demand without excessive overhead. • MVC (Multiple Movie Cache) – a peer can watch one movie while providing upload for another movie. • No pre-fetching: only movies already viewed locally can be found in a peer's disk cache.

  9. Content Discovery and Peer Overlay Management • Objective: peers need to be able to discover the content they need and which peers hold it. • Challenge: accomplish this with minimum overhead. • Mechanisms used in PPLive: tracker (or super node), DHT, gossiping.

  10. Which chunk/movie to remove when the disk cache is full? PPLive VoD uses weight-based LRU (Least Recently Used). • Each movie is assigned a weight based on: • - How completely the movie is already cached locally. • - How needed a copy of the movie is (availability-to-demand ratio, ATD). • Weight-based LRU reduces the server load from 19% down to a range of 7% to 11%.
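The eviction idea above can be sketched in a few lines. The exact weight formula is an assumption for illustration (the slide only says the weight combines local completeness and ATD), and the movie names are hypothetical:

```python
# Sketch of weight-based cache eviction. The weight formula below is an
# illustrative assumption, not PPLive's actual one: a movie is a better
# eviction victim when it is barely cached locally (low completeness) and
# already well replicated relative to demand (high ATD).

def weight(completeness: float, atd: float) -> float:
    """Lower weight = better candidate for eviction."""
    return completeness / (1.0 + atd)

def pick_victim(cache: dict) -> str:
    """cache maps movie_id -> (local completeness, ATD ratio)."""
    return min(cache, key=lambda m: weight(*cache[m]))

cache = {
    "movie_a": (0.9, 0.5),  # nearly complete, scarce -> keep
    "movie_b": (0.2, 4.0),  # barely cached, over-replicated -> evict
}
print(pick_victim(cache))  # -> movie_b
```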

  11. Piece Selection • A peer downloads chunks from other peers using a pull method. • Which piece to download first? 1. Sequential: the piece closest to what is needed for video playback. 2. Rarest first: the piece that is the rarest among neighbors. 3. Anchor-based: based on anchor points throughout the movie. • Anchoring is not used in the current version of PPLive.
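The three policies can be sketched side by side. This is a minimal illustration under assumed inputs (numbered pieces, a rarity count per piece taken from neighbors' bitmaps, and anchor positions); it is not PPLive's implementation:

```python
# Minimal sketches of the three piece-selection policies, assuming
# pieces are numbered and `rarity[p]` counts how many neighbors hold p.

def sequential(missing, playhead):
    """Earliest missing piece at or after the playback point."""
    return min(p for p in missing if p >= playhead)

def rarest_first(missing, rarity):
    """Missing piece held by the fewest neighbors."""
    return min(missing, key=lambda p: rarity[p])

def anchor_based(missing, anchors, playhead):
    """Prefer the next anchor point; fall back to sequential."""
    upcoming = [a for a in anchors if a >= playhead and a in missing]
    return upcoming[0] if upcoming else sequential(missing, playhead)

missing = {3, 5, 8, 12}
rarity = {3: 4, 5: 1, 8: 2, 12: 9}
print(sequential(missing, playhead=4))                     # -> 5
print(rarest_first(missing, rarity))                       # -> 5
print(anchor_based(missing, anchors=[0, 8], playhead=4))   # -> 8
```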

  12. Transmission strategy • How to select which neighbor to download from? • How many neighbors to use for simultaneous download? • How to schedule requests and set timeouts to multiple neighbors for simultaneous download? • Goals: - Maximize downloading rate. - Minimize overhead due to duplicate transmissions and requests.

  13. PPLive VoD transmission algorithm • The algorithm sends proportionally more requests to a neighbor based on its response time. • Example: for a playback rate of around 500 Kbps, 8-20 neighbors is ideal. • When neighboring peers cannot supply a sufficient downloading rate, the content server can be used to supplement the need.
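One simple way to realize "proportionally more requests to faster neighbors" is to weight each neighbor by the inverse of its measured response time. This is an assumed scheme for illustration, not PPLive's exact algorithm, and the peer names are hypothetical:

```python
# Sketch: allocate a budget of piece requests across neighbors in
# proportion to inverse response time (faster neighbor -> more requests).
# The inverse-time weighting is an illustrative assumption.

def allocate_requests(response_ms: dict, total: int) -> dict:
    """response_ms maps neighbor -> measured response time in ms."""
    speed = {n: 1.0 / t for n, t in response_ms.items()}
    scale = total / sum(speed.values())
    return {n: round(s * scale) for n, s in speed.items()}

print(allocate_requests({"peer_a": 50, "peer_b": 100, "peer_c": 200},
                        total=14))
# -> {'peer_a': 8, 'peer_b': 4, 'peer_c': 2}
```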

  14. Other design issues • A client can't adjust its contribution level: in order for the software to deliver content for playback, the client must regularly advertise its chunk bitmap to the tracker. • PPLive VoD uses standard methods for peers to discover different types of NAT boxes and advertise their addresses accordingly. • PPLive paces the upload rate and request rate to make sure firewalls will not mistake the software for a malicious attacker.

  15. Performance Metrics and Measurement Methodology

  16. What to measure 1. User behavior: user arrival patterns and how long users stay watching a movie. User behavioral information can be used to improve the design of the replication system. 2. External performance metrics: user satisfaction and server load. These metrics measure the system performance as perceived externally. 3. Health of replication: an internal metric that measures how well a P2P-VoD system is replicating content.

  17. Measuring user behavior • Basic user activity is recorded in a movie viewing record (MVR). • Each user has a unique user ID (derived from hardware component serial numbers (HCSN) of the computer, e.g. the memory module). • Each movie has a unique movie ID (a hash of the movie content).
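The two IDs above can be sketched as hashes. The choice of SHA-1 and the exact serial-number concatenation are assumptions for illustration; the slide only says the user ID is derived from hardware serial numbers and the movie ID is a hash of the content:

```python
# Sketch of deriving stable IDs as described on the slide. SHA-1 and the
# input formats are illustrative assumptions, not PPLive's actual scheme.
import hashlib

def user_id(hcsn: str, memory_serial: str) -> str:
    """Stable per-machine ID from hardware component serial numbers."""
    return hashlib.sha1(f"{hcsn}:{memory_serial}".encode()).hexdigest()

def movie_id(content: bytes) -> str:
    """Content-addressed movie ID: hash of the movie bytes."""
    return hashlib.sha1(content).hexdigest()

print(user_id("HCSN-123", "MEM-456")[:12])  # same machine -> same ID
```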

  18. Example to show how MVRs are generated.

  19. User satisfaction • Total viewing time (TVT) can be determined using MVRs. • Fluency: the fraction of time a user spends watching a movie out of the total time he/she spends waiting for and watching that movie. • Further research could enable the P2P-VoD client software to infer/estimate user satisfaction for each MVR based on user actions. • For this research, only fluency is taken as the indicator of user satisfaction.
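The fluency definition above is a single ratio and can be written directly; the variable names and the sample numbers are illustrative:

```python
# Fluency as defined on the slide: watching time divided by the total
# time spent waiting for plus watching the movie.

def fluency(watch_s: float, wait_s: float) -> float:
    total = watch_s + wait_s
    return watch_s / total if total > 0 else 0.0

# e.g. 30 s of buffering during a 10-minute session:
print(fluency(watch_s=570.0, wait_s=30.0))  # -> 0.95
```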

  20. Health of replication • The health index for replication is defined at 3 levels: a) Movie level: the number of active peers who have advertised storing chunks of a given movie. b) Weighted movie level: takes the fraction of chunks a peer has into account in computing the index. c) Chunk bitmap level: the average number of copies of each chunk in a movie, the minimum number of copies, the variance of the number of copies, and so on.
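The chunk-bitmap-level index can be computed directly from the peers' advertised bitmaps. The input format below (one 0/1 list per peer, one entry per chunk) is an assumption for illustration:

```python
# Sketch of the chunk-bitmap-level health index: from each peer's
# advertised chunk bitmap, count copies per chunk, then report the
# average and minimum number of copies across the movie.

def chunk_health(bitmaps):
    """bitmaps: one 0/1 list per peer, one entry per chunk of the movie."""
    copies = [sum(col) for col in zip(*bitmaps)]  # copies of each chunk
    return sum(copies) / len(copies), min(copies)

bitmaps = [
    [1, 1, 0, 1],  # peer 1
    [1, 0, 1, 1],  # peer 2
    [0, 1, 0, 1],  # peer 3
]
avg, minimum = chunk_health(bitmaps)
print(avg, minimum)  # -> 2.0 1  (chunk 3 is the scarcest, with 1 copy)
```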

  21. Measurement methodology • Log server that collects various sorts of measurement data from peers. • Tracker passes some information to the log server. • Peers collect data and send it to the log server after a Stop event. • Peers send a report of the MVRs and the fluency. • Peer reports the chunk bitmap to the log server.

  22. Measurement results and analysis

  23. Statistics on video objects • Overall statistics of 3 typical movies.

  24. Observations: 1. Movie 2 is the smallest video object, with a viewing duration of about 45 minutes; Movie 3 is the longest, with a viewing duration of 110 minutes. 2. Movie 2 is the most popular movie with 95005 users, and Movie 3 is the least popular with 8423 users. 3. Movie 2 has the highest average number of jumps; Movie 1 has the lowest. 4. Movie 1 has the largest viewing duration – consistent with Movie 1 having the least average number of jumps. 5. Movie 1 has the largest average viewing length.

  25. Statistics on user behavior

  26. Conclusions • The P2P-VoD streaming service is an up-and-coming application for the Internet. • In this paper, the authors present a general architecture and the important building blocks for realizing a P2P-VoD system. • One can use this general framework and taxonomy to further study various design choices.