
Hierarchical Caching and Prefetching for Continuous Media Servers with Smart Disks






Presentation Transcript


  1. Hierarchical Caching and Prefetching for Continuous Media Servers with Smart Disks By: Amandeep Singh, Parth Kushwaha

  2. Index: • Introduction • Media Servers • Proposed Algorithms - Sweep and Prefetch (S&P) - Gradual Prefetching - Grouped Periodic Multi-Round Prefetching (GPMP) • Performance Evaluation • The Experiment • Results • Conclusion • Future Use • References

  3. Introduction • Due to the increase in CPU performance, I/O systems have become the performance bottleneck. • The reason for this bottleneck is the mechanical movement of the disk head. • Several algorithms have been introduced that exploit emerging smart-disk technologies and increase data throughput on media servers.

  4. Media Server • A media server is a device that stores and shares audio and video media files. • It offers clients access to media types such as video, audio, and so on. • To avoid glitches, a media server must retrieve data from secondary storage at a specific rate.
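To make the rate requirement concrete, here is a minimal back-of-the-envelope sketch; the bitrate, round length, and block size are illustrative assumptions, not figures from the presentation.

```python
# Illustrative numbers only; none of these values come from the slides.
BITRATE_BPS = 4_000_000   # assumed 4 Mbit/s video stream
ROUND_SEC = 1.0           # assumed round length
BLOCK_BYTES = 256 * 1024  # assumed disk block size

bytes_per_round = BITRATE_BPS / 8 * ROUND_SEC
blocks_per_round = bytes_per_round / BLOCK_BYTES
print(f"{bytes_per_round / 1e6:.1f} MB per round "
      f"~ {blocks_per_round:.1f} blocks per stream per round")
# If the disk cannot sustain this rate for every admitted stream,
# playback glitches occur.
```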

  5. Working of a Media Server • To avoid glitches, the media server requires a double buffer in main memory to ensure a data stream's continuous playback. • Data read from the disk fills one half of the buffer while the other half is used to play the video. • When the data in the first buffer is consumed, a switch occurs: the media server uses the second buffer to play the video and uses the now-empty buffer for storage.
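A minimal sketch of this double-buffering switch (the class and method names are hypothetical, not from the slides):

```python
class DoubleBuffer:
    """Two half-buffers per stream: one is filled from disk while the
    other is drained by the player, and they swap roles on exhaustion."""

    def __init__(self, block_size):
        self.buffers = [bytearray(block_size), bytearray(block_size)]
        self.play_idx = 0   # index of the buffer currently being played

    def fill(self, data):
        # The disk always writes into the buffer NOT being played.
        self.buffers[1 - self.play_idx][:] = data

    def on_play_buffer_empty(self):
        # Switch: the freshly filled buffer becomes the play buffer,
        # and the drained buffer becomes the next fill target.
        self.play_idx = 1 - self.play_idx

db = DoubleBuffer(block_size=4)
db.fill(b"abcd")                # disk fills the idle buffer during the round
db.on_play_buffer_empty()       # the play buffer ran dry; swap roles
print(db.buffers[db.play_idx])  # bytearray(b'abcd') is now being played
```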

  6. Contd.. • A media server serves multiple requests concurrently. • Therefore the media server serves streams in rounds of a specific length. • During a round, the system reads one block for each stream.

  7. Contd..

  8. Contd.. • Each request to be served in a round is added to a service list. • A resolution mechanism maps each requested video block to a disk block. • The I/O controller routes this requested disk-block list to the disk drives. • Then the low-level scheduler schedules the corresponding disk blocks for each drive, reducing disk-head positioning overhead.
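The slides describe the pipeline but not the scheduling policy itself. Below is a hedged sketch of a simple SCAN (elevator) ordering, one common way a low-level scheduler reduces head-positioning overhead; the function name, arguments, and block addresses are assumptions.

```python
def schedule_sweep(requested_blocks, head_pos):
    """Toy SCAN/elevator pass: serve the round's disk blocks in a single
    sweep of the head instead of in arrival order, cutting seek overhead.
    `requested_blocks` holds logical block addresses for one round."""
    ahead = sorted(b for b in requested_blocks if b >= head_pos)
    behind = sorted((b for b in requested_blocks if b < head_pos), reverse=True)
    return ahead + behind   # sweep outward first, then pick up the rest

# Blocks resolved for one round's service list (made-up addresses):
print(schedule_sweep([900, 120, 530, 47, 610], head_pos=300))
# -> [530, 610, 900, 120, 47]
```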

  9. Contd.. • The disk head then transfers the requested data from the disk surface to the disk's buffer cache. • Over the I/O bus, this buffer-cache data is transferred to the server's RAM.

  10. Contd.. • Several algorithms have been introduced to achieve maximum throughput, i.e., the maximum number of streams served by a disk. • Two important factors that limit maximum throughput are: • the time required to transfer data from the disk to main memory, and • the cost of cache memory.

  11. Proposed Algorithms • The proposed algorithms increase maximum throughput while keeping the round duration constant, with relatively low memory requirements. • To improve a drive's maximum stream throughput, the algorithms use caching and prefetching techniques. • Throughput actually increases because prefetched requests are served from the disk's cache, without head-positioning overhead.

  12. Sweep and Prefetch Algorithm • Data blocks that are retrieved individually from the disk are called randomly retrieved blocks; these incur head-positioning overhead. • In this algorithm, the disk head prefetches blocks while it reads adjacent blocks and stores them in a cache; these prefetched blocks incur no head-positioning overhead.

  13. Contd.. • In this algorithm, at most 25 randomly retrieved blocks fit in one round. • The exchange ratio of randomly retrieved blocks to prefetched blocks is 5/8: retrieving 8 prefetched blocks takes as much time as retrieving 5 random blocks.
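A small sketch of this round-time accounting, using the slides' numbers (25 random blocks per round, 5/8 exchange ratio); the function name is hypothetical.

```python
# Round time budget expressed in "random-block units", using the slides'
# numbers: at most 25 randomly retrieved blocks fit in one round, and
# 8 sequentially prefetched blocks cost as much disk time as 5 random ones.
MAX_RANDOM = 25
PREFETCH_COST = 5 / 8   # random-block units per prefetched block

def round_fits(v, p):
    """True if v random retrievals plus p prefetches fit in one round."""
    return v + p * PREFETCH_COST <= MAX_RANDOM

print(round_fits(20, 8))   # True: trading 5 random slots buys 8 prefetches
print(round_fits(25, 1))   # False: a fully random round leaves no slack
```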

  14. Contd.. • During round 1 and round 2 of fig. (a), all the blocks are randomly retrieved. • During round 1 of fig. (b), the last five blocks are served through the high-level cache buffer. • Since the exchange ratio of randomly retrieved to prefetched blocks is 5/8, we can exchange the last 5 blocks by prefetching 8 blocks from the disk.

  15. Contd.. • During round 2 of fig. (b), the first 8 blocks are already prefetched, so we don't have to retrieve them from disk. • In place of these 8 prefetched blocks we can add 3 randomly accessed blocks and 8 more prefetched blocks.

  16. Issues With Sweep & Prefetch: • Until maximum throughput is achieved, S&P services streams without prefetching. • It requires 3 cache buffers (blocks) for each stream that is cached at the higher level: • 2 for the double buffer, and • 1 for the multi-disk controller.

  17. Thus each stream's block will be skipped in some round, • creating extra start-up latency.

  18. Gradual Prefetching: • Forces the server to work under S&P all the time, regardless of the number of concurrent streams. • At any time the disk head prefetches for half of all supported streams. • For every 2 newly admitted streams, 1 will have its next block prefetched in its first round.

  19. The server works with the maximum number of supported streams, so there is always time to prefetch an additional block for half of all new streams. • Hence no stream incurs extra start-up latency, and triple buffering is no longer required.
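A hedged sketch of the admission rule from the previous slide (one of every two newly admitted streams gets its next block prefetched); the names are illustrative.

```python
def admit_streams(new_streams):
    """Gradual-prefetching admission sketch: one of every two newly
    admitted streams has its next block prefetched in its first round,
    so the prefetched share ramps toward half of all supported streams."""
    prefetch_now, serve_plain = [], []
    for i, stream in enumerate(new_streams):
        (prefetch_now if i % 2 == 0 else serve_plain).append(stream)
    return prefetch_now, serve_plain

pf, plain = admit_streams(["s1", "s2", "s3", "s4"])
print(pf, plain)   # ['s1', 's3'] ['s2', 's4']
```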

  20. Gradual Prefetching, Round 1 • Random retrievals (v) = 19 • Prefetched blocks (p) = 9 • Equivalent cost: v + (5/8)p ≈ 19 + 5.6 ≈ 25 • Streams serviced = 19

  21. Gradual Prefetching, Round 2 • v = 17 • p = 12 • Equivalent cost: v + (5/8)p ≈ 17 + 7.5 ≈ 25 • Streams serviced = 26

  22. Gradual Prefetching, Round 3 • v = 16 • p = 14 • Equivalent cost: v + (5/8)p ≈ 16 + 8.7 ≈ 25 • Streams serviced = 28

  23. Gradual Prefetching, Round 4 • v = 16 • p = 14 • Equivalent cost: v + (5/8)p ≈ 16 + 8.7 ≈ 25 • Streams serviced = 30
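As a quick check, the following sketch recomputes the per-round equivalent cost v + (5/8)p for the four rounds above.

```python
# (v, p) pairs from slides 20-23; equivalent cost is v + (5/8) * p and
# must stay within the 25-random-block round budget.
rounds = [(19, 9), (17, 12), (16, 14), (16, 14)]
for n, (v, p) in enumerate(rounds, start=1):
    cost = v + p * 5 / 8
    print(f"round {n}: v={v}, p={p}, equivalent cost = {cost:.2f}")
# round 1: 24.62, round 2: 24.50, rounds 3-4: 24.75 -- all under 25.
```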

  24. Grouped Periodic Multi-Round Prefetching: • This algorithm temporarily stores prefetched blocks in the host's cache. • Epoch: an epoch, or virtual round, is the total duration of a fixed number of actual rounds. • At the system level, GPMP offers finer-grained disk requests per stream, more flexible configuration, and higher service quality because of lower start-up latencies.

  25. During a GPMP round, the media server serves all streams. • It delivers to the host all blocks sustaining playback of the supported streams. • These blocks don't have to be retrieved from disk in each round.

  26. During a round, one group containing a fraction of the supported streams is randomly retrieved from disk. • Time remains in the round for u prefetches for each of these streams (u = number of blocks prefetched per stream). • In the next round, the blocks sustaining playback of the next group are read from disk, again with u prefetched blocks for each stream.

  27. All streams are served in each round, since the N − v streams not retrieved from disk are served from the cache, having been prefetched in previous rounds. • The u prefetched blocks sustain playback for u rounds, after which the same streams read from disk in a given round are read from disk again.

  28. The epoch length is k = u + 1. • The total number of supported streams is N = v(u + 1). • "Played" blocks are immediately discarded.
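A minimal sketch of GPMP's epoch structure under these formulas; the values v = 10 and u = 3 are arbitrary example parameters, not figures from the paper.

```python
def gpmp_schedule(v, u):
    """GPMP sketch: streams are split into k = u + 1 groups of v streams
    each. In every round, one group's blocks are read from disk together
    with u prefetched blocks per stream of that group; the remaining
    groups play from blocks prefetched in earlier rounds."""
    k = u + 1          # epoch (virtual round) length in actual rounds
    N = v * k          # total number of supported streams: N = v(u + 1)
    for r in range(k):
        print(f"round {r}: group {r} read from disk, "
              f"+{u} prefetches per stream; other groups served from cache")
    return N, k

# Arbitrary example parameters:
N, k = gpmp_schedule(v=10, u=3)
print(f"N = {N} streams supported over an epoch of k = {k} rounds")
```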

  29. Performance Evaluation • To evaluate the performance of the algorithms analytically, the continuous media server is simulated with a pregenerated workload similar to typical server use. • DiskSim is used. By itself it does not address onboard-cache issues such as the ability to enable or disable caching, to switch prefetching on and off, and to change disk-cache segmentation. • The DiskSim code is modified to add these capabilities.

  30. The Experiment • Uses a video library holding videos 90 to 120 minutes long, ordered according to popularity following a Zipfian distribution. • New video requests arrive following a Poisson distribution. Disk partitioning is assumed. • The S&P algorithms are implemented using a trace generator to pass hints and evaluate the algorithms.
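A hedged sketch of such a workload generator (Zipfian popularity over the library, Poisson arrivals); every parameter value here is an assumption, not taken from the experiment.

```python
import random

def zipf_weights(n_videos, s=1.0):
    """Zipfian popularity: the video ranked r gets weight 1 / r**s."""
    w = [1.0 / (r ** s) for r in range(1, n_videos + 1)]
    total = sum(w)
    return [x / total for x in w]

def generate_requests(n_videos=100, rate_per_min=2.0, duration_min=60, seed=7):
    """Poisson arrivals (exponential inter-arrival times) over a video
    library ordered by popularity."""
    rng = random.Random(seed)
    weights = zipf_weights(n_videos)
    t, requests = 0.0, []
    while True:
        t += rng.expovariate(rate_per_min)   # next Poisson arrival
        if t >= duration_min:
            break
        video = rng.choices(range(n_videos), weights=weights)[0]
        requests.append((round(t, 2), video))
    return requests

print(generate_requests()[:5])   # first few (arrival_minute, video_rank) pairs
```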

  31. Results

  32. Testing Sweep and S&P for throughput with: • disk cache sizes (d): 2, 4, 8, 12 MB • round lengths (r): 0.5, 1, 1.5, 2, 3, 4 sec • 20 to 70% improvement in throughput for r = 0.5 to 1.5 sec and d = 2 to 12 MB. • Longer r values explode memory requirements. • GPMP outperforms Sweep, getting higher throughput but poorer start-up latency.

  33. This figure shows the evaluation of a single-disk media server, with r = 0.25 to 1 sec. • A low request arrival rate => a low value of k => lower start-up latencies. • As the request arrival rate increases, queuing delays occur and GPMP configurations become more beneficial for higher throughput.

  34. Conclusion • The techniques presented achieve higher (60 to 70%) throughput compared with Sweep strategies for retrieving contiguous blocks. • Sweep does not exploit onboard buffers. • Disk manufacturers offer no prefetching techniques for media retrieval that account for the concept of rounds. • The techniques exploit parallel transfer of I/O requests and other disk-to-buffer transfers.

  35. Expected Future Use: • Current technology trends suggest that these techniques will show even better results on future disk products, because transfer rates will improve and more powerful controllers with bigger embedded caches are certain to follow.

  36. References: • S. Harizopoulos (Carnegie Mellon University), C. Harizakis, and P. Triantafillou (Technical University of Crete), "Hierarchical Caching and Prefetching for Continuous Media Servers with Smart Disks." • http://en.wikipedia.org • http://www.howstuffworks.com

  37. Thank You !!! Queries ???
