
Video Delivery Techniques



    1. 1 Video Delivery Techniques My name is Ying Cai. I am a student in the Computer Science Department at the University of Central Florida. Today's presentation is "Patching: A Multicast Technique for True Video-on-Demand Services".

    2. 2 Server Channels Videos are delivered to clients as a continuous stream. Server bandwidth determines the number of video streams that can be supported simultaneously. Server bandwidth can be organized and managed as a collection of logical channels. These channels can be scheduled to deliver various videos.

    3. 3 Using Dedicated Channel Video-on-Demand is a critical technology for many important multimedia applications. The easiest way to implement a VOD system is to use a dedicated channel for each video request. Because each client has its own video channel, it can play back any video at any time. Thus, true VOD can be achieved. However, the maximum number of video channels that can be sustained simultaneously by today's video servers is very limited. Considering that a typical movie lasts more than one hour, the throughput of a VOD system would be miserable if we dedicated one channel to just one client. Therefore, true VOD is achieved, but it's truly expensive.

    4. 4 “Video on Demand” Quiz Video-on-demand technology has many applications:

    5. 5 Push Technologies Broadcast technologies can deliver videos on demand. Requirement on server bandwidth is independent of the number of users the system is designed to support.

    6. 6 Simple Periodic Broadcast Staggered Broadcast Protocol A new stream is started every interval for each video. The worst service latency is the broadcast period.

    7. 7 Simple Periodic Broadcast A new stream is started every interval for each video. The worst service latency is the broadcast interval.
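Since the latency claim here is simple arithmetic, a minimal sketch (the function name and the 120-minute/12-channel numbers are illustrative, not from the slides):

```python
# Worst-case latency under staggered (simple periodic) broadcast:
# a new stream starts every (video_length / channels_per_video)
# minutes, so a client who just missed one waits a full interval.
def staggered_worst_latency(video_minutes: float, channels_per_video: int) -> float:
    return video_minutes / channels_per_video

# e.g., a 120-minute video on 12 dedicated channels starts a new
# stream every 10 minutes; worst-case wait is 10 minutes.
print(staggered_worst_latency(120, 12))  # 10.0
```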

    8. 8 Limitation of Simple Periodic Broadcast Access latency can be improved only linearly with increases to the server bandwidth. Substantial improvement can be achieved if we allow the client to preload data.

    9. 9 Pyramid Broadcasting – Segmentation [Viswanathan95] Each data segment Di is made α times the size of Di−1, for all i. α = B / (M · K), where B is the system bandwidth, M is the number of videos, and K is the number of server channels. αopt = e ≈ 2.72 (Euler's number).
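A small sketch of this geometric segmentation, using the slide's optimal α (the function name and the 120-minute/4-channel example are illustrative):

```python
# Pyramid Broadcasting segmentation: segment i is alpha times the
# size of segment i-1, so sizes grow geometrically; sizes are
# normalized here so they sum to the full video length.
def pyramid_segments(video_minutes: float, k: int, alpha: float = 2.72):
    raw = [alpha ** i for i in range(k)]      # 1, a, a^2, ...
    scale = video_minutes / sum(raw)
    return [r * scale for r in raw]

print(pyramid_segments(120, 4))  # first segment is tiny, last is huge
```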

    10. 10 Pyramid Broadcasting Download & Playback Strategy Server bandwidth is evenly divided among the channels, each much faster than the playback rate. Client software has two loaders: Begin downloading the first data segment at the first occurrence, and start consuming it concurrently. Download the next data segment at the earliest possible time after beginning to consume the current data segment.

    11. 11 Disadvantages of Pyramid Broadcasting The channel bandwidth is substantially larger than the playback rate. Huge storage space is required to buffer the preloaded data. It requires substantial client bandwidth.

    12. 12 Permutation-Based Pyramid Broadcasting (PPB) [Aggarwal96] PPB further partitions each logical channel in PB scheme into P subchannels. A replica of each video fragment is broadcast on P different subchannels with a uniform phase delay.

    13. 13 Advantages and Disadvantages of PPB The requirement on client bandwidth is substantially less than in PB. The storage requirement is also reduced significantly (to about 50% of the video size). Synchronization is difficult to implement, since the client needs to tune to an appropriate point within a broadcast.

    14. 14 Skyscraper Broadcasting [Hua97] Each video is fragmented into K segments, each repeatedly broadcast on a dedicated channel at the playback rate. The sizes of the K segments follow the pattern: [1, 2, 2, 5, 5, 12, 12, 25, 25, …, W, W, …, W]

    15. 15 Generating Function The broadcast series is generated using the following recursive function:
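The slide's formula was an image; the following is a reconstruction of the recurrence that reproduces the series [1, 2, 2, 5, 5, 12, 12, 25, 25, …] from the previous slide, with segment sizes capped at W:

```python
# Skyscraper broadcast series [Hua97]: 1, 2, 2, 5, 5, 12, 12, 25, 25, ...
# capped at W.
def skyscraper_series(k: int, w: int):
    f = []
    for n in range(1, k + 1):
        if n == 1:
            v = 1
        elif n in (2, 3):
            v = 2
        elif n % 4 == 0:
            v = 2 * f[-1] + 1
        elif n % 4 == 2:
            v = 2 * f[-1] + 2
        else:                      # n % 4 in (1, 3): repeat previous
            v = f[-1]
        f.append(min(v, w))        # cap segment sizes at W
    return f

print(skyscraper_series(10, 52))   # [1, 2, 2, 5, 5, 12, 12, 25, 25, 52]
```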

    16. 16 Skyscraper Broadcasting Playback Procedure The Odd Loader and the Even Loader download the odd groups and the even groups, respectively. The W-segments are downloaded sequentially using only one loader. As the loaders fill the buffer, the Video Player consumes the data in the buffer.

    17. 17 Advantages of Skyscraper Broadcasting Since the first segment is very short, service latency is excellent. Since the W-segments are downloaded sequentially, buffer requirement is minimal.

    18. 18 SB Example The blue clients share the 2nd and 3rd fragments, and the 6th, 7th, and 8th fragments, with the red clients.

    19. 19

    20. 20 CCA Broadcasting Server broadcasts each segment at the playback rate. Clients use c loaders. Each loader downloads its streams sequentially, e.g., the i-th loader is responsible for segments i, i+c, i+2c, i+3c, … Only one loader is used to download all the equal-size W-segments sequentially.
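A minimal sketch of the loader-to-segment assignment described above (the function name and numbers are illustrative):

```python
# CCA loader assignment: with c loaders and k segments, loader i
# fetches segments i, i+c, i+2c, ... sequentially (1-based).
def cca_assignment(k: int, c: int):
    return {i: list(range(i, k + 1, c)) for i in range(1, c + 1)}

print(cca_assignment(10, 3))
# {1: [1, 4, 7, 10], 2: [2, 5, 8], 3: [3, 6, 9]}
```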

    21. 21 Advantages of CCA It has the advantages of Skyscraper Broadcasting. It can leverage client bandwidth to improve performance.

    22. 22 Cautious Harmonic Broadcasting (Segmentation Design) A video is partitioned into n equally-sized segments. The first channel repeatedly broadcasts the first segment S1 at the playback rate. The second channel alternately broadcasts S2 and S3 repeatedly at the playback rate. Each remaining segment Si is repeatedly broadcast on its dedicated channel at 1/(i–1) of the playback rate.
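Following this segmentation, the total server bandwidth (in units of the playback rate) is 1 for channel 1, 1 for channel 2, and 1/(i−1) for each remaining segment Si. A sketch, evaluated on the 240-segment example (2-hour video, 30-second latency) cited on a later slide:

```python
# Server bandwidth of Cautious Harmonic Broadcasting, in units of
# the playback rate, per the segmentation above.
def chb_bandwidth(n: int) -> float:
    return 2.0 + sum(1.0 / (i - 1) for i in range(4, n + 1))

# 240 segments (2-hour video, 30-second latency) need roughly
# 6.5 times the playback rate in server bandwidth.
print(round(chb_bandwidth(240), 2))
```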

    23. 23 Cautious Harmonic Broadcasting (Playback Strategy) The client can start the playback as soon as it can download the first segment. Once the client starts receiving the first segment, it also starts receiving all of the other segments.

    24. 24 Cautious Harmonic Broadcasting Advantage: Better than SB in terms of service latency. Disadvantage: Requires about three times more receiving bandwidth than SB. Implementation Problem: The client must receive data from many channels simultaneously (e.g., 240 channels are required for a 2-hour video if the desired latency is 30 seconds). No practical storage subsystem can move its read heads fast enough to multiplex among so many concurrent streams.

    25. 25 Pagoda Broadcasting Download and Playback Strategy Each channel broadcasts data at the playback rate. The client receives data from all channels simultaneously. It starts the playback as soon as it can download the first segment.

    26. 26 Pagoda Broadcasting Advantage & Disadvantage Advantage: Required server bandwidth is low compared to Skyscraper Broadcasting. Disadvantage: Required client bandwidth is many times higher than in Skyscraper Broadcasting. Achieving a maximum delay of 138 seconds for a 2-hour video requires each client to have a bandwidth five times the playback rate, e.g., approximately 20 Mbps for MPEG-2. System cost is significantly higher.

    27. 27 New Pagoda Broadcasting [Paris99] New Pagoda Broadcasting improves on the original Pagoda Broadcasting, but the required client bandwidth remains very high. Example: achieving a maximum delay of 110 seconds for a 2-hour video requires each client to have a bandwidth five times the playback rate, approximately 20 Mbps for MPEG-2. System cost is very high.

    28. 28 Limitations of Periodic Broadcast Periodic broadcast is only good for very popular videos. It is not suitable for a changing workload. It can only offer near-on-demand services.

    29. 29 Batching FCFS MQL (Maximum Queue Length First) MFQ (Maximum Factored Queue Length)

    30. 30 Current Hybrid Approaches FCFS-n: First Come First Served for unpopular videos, with n channels reserved for popular videos. MQL-n: Maximum Queue Length policy for unpopular videos, with n channels reserved for popular videos. Performance is limited.

    31. 31 New Hybrid Approach

    32. 32 LAW (Largest Aggregated Waiting Time First) MFQ compares q1/√f1, q2/√f2, q3/√f3, q4/√f4, … for queue lengths q1, q2, q3, q4, … and access frequencies f1 ≥ f2 ≥ f3 ≥ f4 ≥ …; it tends toward MQL, losing fairness. LAW: whenever a stream becomes available, schedule the video with the maximum value of Si: Si = c · m − (ai1 + ai2 + … + aim), where c is the current time, m is the total number of requests for video i, and aij is the arrival time of the jth request for video i. (Si is the sum of each request's waiting time in the queue.)

    33. 33 LAW (Example) By MFQ: q1 · Δt1 = 5 · (128 − 106) = 110 and q2 · Δt2 = 4 · (128 − 100) = 112, so video 2 is selected, even though the average waiting times are 12 and 8 time units, respectively. By LAW: S1 = 128 · 5 − (107 + 111 + 115 + 121 + 126) = 60 and S2 = 128 · 4 − (112 + 119 + 122 + 127) = 32, so video 1 is selected.
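A minimal sketch of LAW selection that reproduces the numbers above (function and variable names are illustrative):

```python
# LAW: at current time c = 128, pick the video maximizing
# S_i = c*m - sum(arrival times), i.e., the total queued waiting time.
def law_score(now: int, arrivals: list) -> int:
    return now * len(arrivals) - sum(arrivals)

queues = {1: [107, 111, 115, 121, 126], 2: [112, 119, 122, 127]}
scores = {v: law_score(128, a) for v, a in queues.items()}
print(scores)                       # {1: 60, 2: 32}
print(max(scores, key=scores.get))  # LAW selects video 1
```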

    34. 34 AHA (Adaptive Hybrid Approach) Popularity is re-evaluated periodically. If a video is popular and is therefore currently broadcast by SB, go to Case 1. Otherwise, go to Case 2.

    35. 35 Performance Model 100 videos (120 min. each). Client behavior follows the Zipf distribution (z = 0.1 ~ 0.9) for the choice of videos and the Poisson distribution for arrival times; popularity changes gradually every 5 min. in the dynamic environment; for waiting-time tolerance, μ = 5 min., σ = 1 min. Performance Metrics: defection rate, average access latency, fairness, and throughput.

    36. 36 LAW vs. MFQ

    37. 37 AHA vs. MFQ-SB-n

    38. 38 Challenges – Conflicting Goals Low latency: requests must be served immediately. To achieve these two goals, the challenges are: First, we should not ask early customers to wait for latecomers. Second, the system must still be highly efficient; that is, each video stream must still be able to serve a large number of users. These two goals appear to conflict with each other. However, both are achievable in our new approach, Patching.

    39. 39 Some Solutions Application level: Piggybacking Patching Chaining Network level: Caching Multicast Protocol (Range Multicast)

    40. 40 Piggybacking is another related approach. Piggybacking is a policy that alters the playback rates of video streams in order to merge them into a single stream. For example, the playback rate for client C can be sped up 5% above the normal rate while that of client B is slowed down 5%. Once they are playing back the same frame, one stream can be released. The playback speed adjustment, however, must stay within a certain range, say 5%, so that the speed change is not perceivable to the users. This limits the efficiency of the Piggybacking approach. For example, if two streams are started 6 minutes apart, it will take almost 2 hours to merge them if their playback rates differ by 5%. Before they merge, the video is likely to have finished. Another problem is the complexity of implementation. Dynamically changing the playback rate on the fly is not an easy issue and requires specialized hardware, which is usually more expensive than the regular disk space used in Patching.
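The merge-time arithmetic in this example, as a one-line sketch (names are illustrative):

```python
# Piggybacking: a temporal gap g between two streams closes at the
# relative rate difference r, so merging takes g / r.
def merge_minutes(gap_minutes: float, rate_diff: float) -> float:
    return gap_minutes / rate_diff

print(merge_minutes(6, 0.05))  # 120.0 minutes: almost the whole 2-hour video
```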

    41. 41 Patching Let's start with an example. Here is the length of a video. When client A comes for the video, we schedule a video stream to multicast the video data to this client. This is the same as what the dedicated channel approach does.

    42. 42 Proposed Technique: Patching A time t later, client A is playing back at this skew point when another client, say B, comes for the same video. Similar to batching, we can bundle A and B together so that they can share the same multicast data. However, client B has missed the leading video segment, so we schedule a new video channel to deliver just that portion to client B. We call this video stream a patching stream. By contrast, the video stream for client A is called a regular stream because it sends the video in its entirety. While the data from the patching stream are played back as soon as they are downloaded, the data from the regular multicast are buffered temporarily on B's local disk. The concatenation of the patched segment and the buffered data constitutes the entire video for client B.

    43. 43 Proposed Technique: Patching After another time t, the skew point has been absorbed by the client buffer, and the patching stream for client B can be released. Here a small disk buffer is used to absorb the temporal distance between the two requests. The cost, however, is minimal. Disk space is very cheap today: 100 Mbytes of disk space can buffer 10 minutes of MPEG-1 video and costs less than $5. In fact, the high cost of a VOD system is mostly due to networking. For example, networking contributes more than 90% of the hardware cost of Time Warner's Full Service Network project in Orlando. Therefore, it is essential for a VOD system to take full advantage of the aggregate bandwidth of the network. In practice, this additional disk space can be provided freely by the content provider because the significant increase in the number of subscribers can easily make up for this nominal cost. The client buffer is also required to implement VCR functions; in that case, the buffer can be used to support patching at no additional cost.

    44. 44 Client Design Under Patching, a client might be serviced by a single regular stream, or by a regular stream plus a patching stream. In the first case, the client receives the entire video from the regular stream; as soon as the data is downloaded, it is piped to the video player directly. A client that comes a little later for the same video is asked to share the regular stream of the first client. While the data from the patching stream is piped to the video player directly, the data from the regular stream is buffered temporarily on its local disk. After the patching clip is finished, the client starts playing the buffered data while its data loader keeps downloading until the video ends.

    45. 45 Server Design Server must decide when to schedule a regular stream or a patching stream

    46. 46 Two Simple Approaches If no regular stream for the same video exists, a new regular stream is scheduled. Otherwise, two policies can be used to make the decision: Greedy Patching and Grace Patching.

    47. 47 Greedy Patching A patching stream is always scheduled. Greedy Patching tries to share an existing multicast whenever possible. Say the client buffer can hold a video segment of a given size. If a client comes early enough, it can share all the remaining data; if it comes later, it can share only as much of the tail portion as the client buffer can hold. An advantage of this strategy is that it can always share something. The disadvantage is that, if the buffer size is very small, only a small tail portion can be shared.

    48. 48 Grace Patching If the client buffer is large enough to absorb the skew, a patching stream is scheduled; otherwise, a new regular stream is scheduled. In other words, we issue a new regular stream whenever the temporal skew is too large for the client buffer to bridge. We call this strategy Grace Patching. Under this approach, regular streams are initiated more frequently than under Greedy Patching, but the patching clips delivered by patching streams are never larger than the client buffer.
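A sketch of the server-side decision for both policies (the function and the simple skew-versus-buffer test are illustrative, not the authors' code):

```python
# Decide which stream type to start for a new request.
def schedule(skew_minutes, buffer_minutes, policy, regular_exists=True):
    if not regular_exists:
        return "regular"                 # no multicast to patch onto
    if policy == "greedy":
        return "patching"                # always share whatever remains
    # grace: share only if the client buffer can absorb the skew;
    # otherwise start a fresh regular multicast
    return "patching" if skew_minutes <= buffer_minutes else "regular"

print(schedule(3, 5, "grace"))   # patching
print(schedule(8, 5, "grace"))   # regular
print(schedule(8, 5, "greedy"))  # patching (shares only the tail)
```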

    49. 49

    50. 50 Performance Study Compared with conventional batching; Maximum Factored Queue (MFQ) is used. Two scenarios are studied: no defection (average latency) and defection allowed (average latency, defection rate, and unfairness). In our performance study, we compare the new technique with conventional batching, using Maximum Factored Queue as the scheduling policy. There are other good scheduling strategies, such as first-come-first-served or maximum-queue-length, but MFQ achieves good system throughput while maintaining good fairness. The studies are conducted under two scenarios. One does not allow defection; that is, we assume clients cannot cancel their requests, and we use average latency as the performance metric. The other allows defection, and we study three performance metrics: average latency, defection rate, and unfairness. Due to the time constraint, we will discuss only the no-defection scenario.

    51. 51 Simulation Parameters Here are the simulation parameters. We assume there are 100 videos with a fixed length of 90 minutes. The access skew factor is 0.7, as it was reported to be a typical factor observed in video rental stores. For each run, we simulate 200,000 requests.

    52. 52 Effect of Server Bandwidth First we study the effect of server bandwidth on the system latency. We assume there is no defection and the client buffer can hold at most 5 minutes of video. We observe that the performance of conventional batching is always the worst. However, the schemes perform similarly when the server bandwidth is less than 600 streams, because the server is overloaded: in that situation, the client buffer is too small to bridge the temporal distance, as the average latency exceeds 5 minutes. As the server bandwidth increases, Grace Patching distinguishes itself. In particular, when the server bandwidth reaches 1,400 streams, Grace Patching is able to provide true video-on-demand service for this kind of workload. In contrast, the latency under conventional batching is still more than 3 minutes.

    53. 53 Effect of Client Buffer Here is the effect of client buffer size on the average latency. We varied the client buffer from 0 to 10 minutes. The curve for conventional batching is flat because it does not take advantage of the client buffer. The average latency under both Patching schemes decreases as the client buffer grows. However, the latency decreases much more sharply under Grace Patching. In particular, when clients are equipped with 10 minutes of disk space, Grace Patching achieves zero latency, that is, true VOD service.

    54. 54 Effect of Request Rate Finally, we study the effect of the request rate on the average latency. Again, the client buffer is fixed at 5 minutes. The average latency under both conventional batching and Greedy Patching increases sharply with the request rate. Grace Patching, however, enjoys almost zero latency even when the request rate is as high as 30 requests per minute.

    55. 55 Optimal Patching

    56. 56 Optimal Patching Window

    57. 57 Optimal Patching Window Compute D, the mean amount of data transmitted for each multicast group. Determine Δ, the average time duration of a multicast group. The server bandwidth requirement is D/Δ, which is a function of the patching period. Find the patching period that minimizes the bandwidth requirement.
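A numeric sketch of this minimization, under an assumed Poisson arrival model: with arrival rate λ (requests/min) and video length L (min), roughly λw clients arrive in a patching window w, each needing an average patch of w/2, so D = L + λw²/2 and Δ = w + 1/λ. The model and the names are assumptions for illustration:

```python
# Grid-search the patching window w that minimizes D/Delta.
def optimal_window(L: float, lam: float, step: float = 0.01) -> float:
    best_w, best_bw = 0.0, float("inf")
    w = step
    while w <= L:
        bw = (L + lam * w * w / 2) / (w + 1 / lam)  # D / Delta
        if bw < best_bw:
            best_w, best_bw = w, bw
        w += step
    return best_w

print(optimal_window(L=90, lam=10))  # optimal window, in minutes
```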

    58. 58 Candidates for Optimal Patching Window

    59. 59

    60. 60

    61. 61

    62. 62

    63. 63

    64. 64 Slow Incoming Stream

    65. 65 Downward Reconnection

    66. 66 Limitation of Patching The performance of Patching is limited by the server bandwidth. Can we scale the application beyond the physical limitation of the server?

    67. 67 Chaining Using a hierarchy of multicasts, clients multicast data to other clients downstream. The server-bandwidth requirement is substantially reduced.

    68. 68 Let's discuss some related techniques before we reach the conclusion. In our previous research, we explored a Chaining technique. The basic idea of Chaining is to make each client act like a mini-server that is willing to forward cached data to other clients. For example, when client A is admitted for service, it receives data from the video server. The data are played back as soon as they arrive, but the client does not discard the used data; instead, the data are buffered on its local disk. Say it is able to buffer 5 minutes of video data. If, within those 5 minutes, another client B comes for the same video, client A can forward the cached data to client B. Thus, the server does not need to schedule any resources for client B. The same holds for client C, which is served by client B. Chaining is highly efficient and scalable because each client contributes its bandwidth and disk buffer to the community, instead of being just a burden on the video server. The implementation of this idea, however, is a great challenge. If any client in the chain decides to quit, the server must detect it immediately and either assign the affected clients to another multicast path or schedule a new stream for them. Such a control mechanism is very complicated.

    69. 69 Scheduling Multicasts Conventional Multicast I State: The video has no pending requests. Q State: The video has at least one pending request. Chaining C State: Until the first frame is dropped from the multicast tree, the tree continues to grow and the video stays in the C state.

    70. 70 Enhancement When resources become available, service begins for all pending requests except the "youngest" one. As long as new requests continue to arrive, the video remains in the E state. If the arrival of requests stops for an extended period of time, the video transitions into the C state after initiating service for the last pending request.

    71. 71 Advantages of Chaining Requests do not have to wait for the next multicast (better service latency). Clients can receive data from the expanding multicast hierarchy instead of the server (less demand on server bandwidth). Every client that uses the service contributes its resources to the distributed environment (scalable).

    72. 72 Chaining is Expensive? Each receiving end must have caching space: 56 Mbytes can cache five minutes of MPEG-1 video. The additional cost can easily pay for itself in a short time.

    73. 73 Limitation of Chaining It only works in a collaborative environment, i.e., where the receiving nodes are on all the time. It conserves server bandwidth, but not network bandwidth.

    74. 74 Another Challenge Can a multicast deliver the entire video to all receivers, who may subscribe to the multicast at different times? If we can achieve this capability, we would not need to multicast as frequently.

    75. 75 Range Multicast [Hua02] Deploy an overlay of software routers on the Internet. Video data are transmitted to clients through these software routers. Each router caches a prefix of the video streams passing through it. This buffer may be used to provide the entire video content to subsequent clients arriving within a buffer-size period.

    76. 76 Range Multicast Group Caching Multicast Protocol (CMP)

    77. 77 Multicast Range All members of a conventional multicast group share the same play point at all times; they must join at the multicast time. Members of a range multicast group can have a range of different play points; they can join at their own time.

    78. 78 Network Cache Management Initially, a cache chunk is free. When a free chunk is dispatched for a new stream, the chunk becomes busy. A busy chunk becomes hot if its content matches a new service request.
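A minimal state sketch of this chunk life cycle (the class is illustrative; the state names follow the slide):

```python
from enum import Enum

class ChunkState(Enum):
    FREE = "free"    # not holding any stream data
    BUSY = "busy"    # dispatched to cache the prefix of a new stream
    HOT = "hot"      # its cached content matches a new service request

class Chunk:
    def __init__(self):
        self.state = ChunkState.FREE

    def dispatch(self):       # free -> busy when a new stream arrives
        assert self.state is ChunkState.FREE
        self.state = ChunkState.BUSY

    def match_request(self):  # busy -> hot when the content serves a new client
        assert self.state is ChunkState.BUSY
        self.state = ChunkState.HOT
```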

    79. 79 CMP vs. Chaining

    80. 80 CMP vs. Proxy Servers

    81. 81 CMP vs. Proxy Servers

    82. 82 VCR-Like Interactivity Continuous Interactive functions Fast forward Fast rewind Pause Discontinuous Interactive functions Jump forward Jump backward Useful for many VoD applications

    83. VCR Interaction Using Client Buffer

    84. 84 Interaction Using Batching [Almeroth96] Requests arriving during a time slot form a multicast group Jump operations can be realized by switching to an appropriate multicast group Use an emergency stream if a destination multicast group does not exist

    85. Continuous Interactivity under Batching Pause: stop the display; return to normal play as in Jump. Fast Forward: fast forward the video frames in the buffer; when the buffer is exhausted, return to normal play as in Jump. Fast Rewind: same as fast forward, but in the reverse direction.

    86. SAM (Split and Merge) Protocol [Liao97] Uses 2 types of streams: S streams for normal multicast and I streams for interactivity. When a user initiates an interactive operation: use an I channel to interact with the video; when done, use the I channel as a patching stream to join an existing multicast; then return the I channel.

    87. 87 Resuming Normal Play in SAM

    88. 88 Interaction with Broadcast Video The interactive techniques developed for Batching can also be used for Staggered Broadcast. However, Staggered Broadcast does not perform well.

    89. 89 Client Centric Approach (CCA) Server broadcasts each segment at the playback rate. Clients use c loaders. Each loader downloads its streams sequentially, e.g., the i-th loader is responsible for segments i, i+c, i+2c, i+3c, … Only one loader is used to download all the equal-size W-segments sequentially.

    90. 90 CCA is Good for Interactivity Segments in the same group are downloaded at the same time, which facilitates fast forward. The last segment of a group is the same size as the first segment of the next group, which ensures smooth continuous playback after interactivity.

    91. 91 Broadcast-based Interactive Technique (BIT) [Hua02]

    92. 92 BIT

    93. 93 BIT – Resume-Play Operation

    94. 94 BIT - User Behavior Model m_x: duration of action x. P_x: probability of issuing action x. P_i: probability of issuing an interaction. m_i: duration of the interaction. m_ff = m_fr = m_pause = m_jf = m_jb; P_pause = P_ff = P_fr = P_jf = P_jb = P_i/5. d_r = m_i/m_p: the interaction ratio.
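A sketch of sampling one user action under this model (the sampler itself is illustrative):

```python
import random

# The five interactive operations from the model; each is issued
# with probability P_i / 5 when an interaction occurs.
ACTIONS = ["pause", "fast_forward", "fast_rewind", "jump_forward", "jump_backward"]

def next_action(p_i: float) -> str:
    if random.random() < p_i:          # interaction with probability P_i
        return random.choice(ACTIONS)  # each type gets P_i / 5
    return "normal_play"

print(next_action(0.5))
```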

    95. Performance Metrics Percentage of unsuccessful actions: an interaction fails if the buffer cannot accommodate the operation, e.g., a long-duration fast forward pushes the play point off the Interactive Buffer. Average Percentage of Completion: measures the degree of incompleteness, e.g., if a 20-second fast forward is forced to resume normal play after 15 seconds, the Percentage of Completion is 15/20, or 75%.

    96. 96 BIT - Simulation Results

    97. 97 Support Client Heterogeneity Using multi-resolution encoding Bandwidth Adaptor HeRO Broadcasting

    98. 98 Multi-resolution Encoding Encode the video data as a series of layers. A user can individually mould its service to fit its capacity: a user keeps adding layers until congested, then drops the highest layer.

    99. 99 Bandwidth Adaptors

    100. 100 Requirements for an Adaptor An adaptor dynamically transforms a given broadcast into another, less demanding one. The segmentation scheme must allow easy transformation of one broadcast into another. The CCA segmentation technique has this property.

    101. 101 Two Segmentation Examples

    102. 102 Adaptation (1)

    103. 103 Adaptation (2)

    104. 104 Buffer Management insertChunk implements an As-Late-As-Possible policy, i.e., if another occurrence of this chunk will be available from the server before it is needed, then ignore this one; else buffer it. deleteChunk implements an As-Soon-As-Possible policy, i.e., determine the next time the chunk will need to be broadcast downstream; if this moment comes before the chunk is next available at the server, then keep it in storage; else delete it.
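The two policies as a sketch; next_broadcast is an assumed helper that returns when the server next broadcasts the chunk after a given time:

```python
def insert_chunk(now, need_time, next_broadcast):
    """As-Late-As-Possible: skip buffering if the server will
    rebroadcast the chunk before the downstream needs it."""
    return not (next_broadcast(now) <= need_time)   # True = buffer it

def delete_chunk(now, rebroadcast_time, next_broadcast):
    """As-Soon-As-Possible: keep the chunk only if the downstream
    needs it before the server makes it available again."""
    return not (rebroadcast_time < next_broadcast(now))  # True = delete it
```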

    105. 105 The Adaptor Buffer Computation is not intensive; it is only performed for the first chunk of the segment, i.e., if this initial chunk is marked for caching, so is the rest of the segment. The same goes for deletion.

    106. 106 The start-up delay The start-up delay is the broadcast period of the first segment on the server

    107. 107 HeRO – Heterogeneous Receiver-Oriented Broadcasting Allows receivers of various communication capabilities to share the same periodic broadcast. All receivers enjoy the same video quality. Bandwidth adaptors are not used.

    108. 108 HeRO – Data Segmentation The size of the i-th segment is 2^(i−1) times the size of the first segment

    109. 109 HeRO – Download Strategy The number of channels needed depends on the time slot in which the service request arrives. Loader i downloads segments i, i+C, i+2C, i+3C, etc. sequentially, where C is the number of loaders available.

    110. 110 HeRO – Regular Channels The first user can download from six channels simultaneously

    111. 111 HeRO – Regular Channels The second user can download from two channels simultaneously

    112. 112 Worst-Case for Clients with 2 Loaders Worst-case latency is 11 time units. The worst cases appear because the broadcast periods coincide at the end of the global period.

    113. 113 Worst-Case for Clients with 3 Loaders Worst-case latency is 5 time units. The worst cases appear because the broadcast periods coincide at the end of the global period.

    114. 114 Observations of Worst Cases For a client with a given bandwidth, the time slots at which it can start the video are not uniformly distributed over the global period. The non-uniformity varies over the global period depending on the degree of coincidence among the broadcast periods of the various segments.

    115. 115 Observations of Worst Cases (cont.) The worst non-uniformity occurs at the end of each global period, when the broadcast periods of all segments coincide. The non-uniformity causes long service delays for clients with less bandwidth. We need to minimize this coincidence to improve the worst case.

    116. 116 Adding One More Channel We broadcast the last segment on one more channel, but with a time shift of half its size. We now offer more possibilities to download the last segment; above all, we eliminate every coincidence with the previous segments.

    117. 117 HeRO To reduce service latency for less capable clients, broadcast the longest segments on a second channel with a phase offset of half their size.

    118. 118 HeRO – Experimental Results Under a homogeneous environment, HeRO is very competitive in service latency compared with the best protocols to date, and it is the most efficient protocol at saving client buffer space. HeRO is the first periodic broadcast technique designed to address heterogeneity in receiver bandwidth; less capable clients enjoy the same playback quality.

    119. 119 2-Phase Service Model (2PSM) Browsing Videos in a Low Bandwidth Environment

    120. 120 Search Model

    121. 121 Conventional Approach

    122. 122 Search Techniques

    123. 123 Challenges

    124. 124 2PSM – Preview Phase

    125. 125 2PSM – Playback Phase

    126. 126 Remarks
