
The Impact of Replacement Granularity on Video Caching


Presentation Transcript


  1. The Impact of Replacement Granularity on Video Caching Elias Balafoutis, Antonis Panagakis, Nikolaos Laoutaris and Ioannis Stavrakakis Computer Networks Laboratory Department of Informatics & Telecommunications University of Athens, Greece

  2. Overview • Introduction • An Intuitive (or simplistic) Approach • System Architecture • Proposed Scheme • Simulation • Future Work

  3. Introduction • Network Caching: Bring content close to demand points • Caching benefits: • Reduce access latency • Reduce network traffic • Reduce server load • Increase content availability • Rapid growth of multimedia applications over the Internet • Video caching becomes particularly important

  4. Video Caching • Video Peculiarities • Large Size • Rate Variability • Structure • Large Size and Structure inspired Partial Video Caching • Partial Caching • Prefix Caching • Sliding Window • No existing partial caching scheme dynamically updates the cache contents based on demand

  5. An Intuitive (Simplistic?) Approach • Assumptions • Two objects, One cache • p1, p2 : request probabilities for objects 1 and 2 respectively • Request interarrival time > download time (i.e. replacement procedure is completed before the arrival of a new request) • Two Scenarios • Scenario 1 : • Partial Caching is not allowed • Replacement unit: A complete object • Scenario 2: • Partial Caching is allowed • Replacement unit: Half an object

  6. An Intuitive (Simplistic?) Approach • Scenario 1: Two possible states • Object 1 is in the cache • Object 2 is in the cache • Cost on Cache Miss: 1 • Total Cost: C1 • Scenario 2: Three possible states • Object 1 is in the cache • Object 2 is in the cache • Half of each object is in the cache • Cost on Cache Miss: 1 • Cost on partial Cache Hit: 0.5 • Total Cost: C2 • Comparison: C1 ≥ C2
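A minimal sketch of the two-object example above, assuming the same model as the slide (IRM requests, cost 1 per full miss, 0.5 per partial hit, replacement finishing before the next arrival); the function name, parameter values and the simulation approach are illustrative, not taken from the paper:

```python
import random

def average_cost(p1, half_object_unit=False, num_requests=200_000, seed=0):
    """Estimate the mean per-request cost for the two-object, one-cache example.

    half_object_unit=False -> Scenario 1 (replacement unit: a complete object)
    half_object_unit=True  -> Scenario 2 (replacement unit: half an object)
    Cost of a request = fraction of the object fetched remotely (1 on a miss,
    0.5 on a partial hit, 0 on a full hit).
    """
    rng = random.Random(seed)
    cached = {1: 1.0, 2: 0.0}            # fraction of each object held; cache = 1 object
    step = 0.5 if half_object_unit else 1.0
    total_cost = 0.0
    for _ in range(num_requests):
        req = 1 if rng.random() < p1 else 2
        other = 2 if req == 1 else 1
        missing = 1.0 - cached[req]
        total_cost += missing            # pay for whatever is not in the cache
        if missing > 0.0:
            fetched = min(step, missing) # cache one replacement unit of the request,
            cached[req] += fetched       # evicting the same amount of the other object
            cached[other] -= fetched     # (download completes before the next request)
    return total_cost / num_requests

if __name__ == "__main__":
    c1 = average_cost(0.7, half_object_unit=False)
    c2 = average_cost(0.7, half_object_unit=True)
    print(f"C1 ~ {c1:.3f}  C2 ~ {c2:.3f}")   # C1 >= C2, as claimed on the slide
```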

  7. System Description • Clients request videos from a group of servers • All servers and all videos have equal access costs • A proxy server is installed in a Local Area Network • There is abundant bandwidth in the LAN to support video streaming • Client requests are directed to the proxy • The proxy caches the most popular videos trying to reduce the requests that reach the server.

  8. Internal Proxy Architecture • Request manager • Schedules the transmission of the prefix (if any) • Forwards the request for the suffix to the origin server • Cache manager (the focus of this work) • Allocates the storage resources to the requested videos • Note • The request pattern at the cache manager (and at the server) is, in general, different from the request pattern at the request manager (for example, batching of requests could be used)

  9. Cache Manager • Receives the missing segments of the requested video from the server and decides: • How much space to dedicate to the video • Which of the missing parts to cache • Which data to remove from the cache to make room for the new data. (Replacement Algorithm) • In traditional web-like caching schemes, only the replacement algorithm needs to be determined.

  10. Optimal Static Policy • Assumptions • N videos • Known popularity ranking (i.e. p1 > p2 > … > pN) • All videos have the same size V • Cache Size: S • Cache portion for video i: ci • Static Policy: No replacement is performed • Performance metric: Byte Hit Ratio • Partial Knapsack problem • Maximize Σi pi·ci given that Σi ci ≤ S and 0 ≤ ci ≤ V • Optimal solution: Highest Popularity First (HPF) • Highest Popularity First • The most popular objects are cached in full until the capacity is reached • Only the last video may be partially cached (the least popular one that fits in the cache)
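As a concrete reading of the HPF allocation described above, here is a short sketch (function names and the example parameters are mine, not from the slides): the cache is filled with whole videos in decreasing popularity, and only the last video that fits is cached partially.

```python
def hpf_allocation(popularities, video_size, cache_size):
    """Highest Popularity First: walk the videos in decreasing popularity and give
    each one as much space as possible; only the last video that fits may end up
    partially cached. `popularities` is assumed sorted, p1 >= p2 >= ... >= pN."""
    allocation, remaining = [], cache_size
    for _ in popularities:
        portion = min(video_size, remaining)
        allocation.append(portion)
        remaining -= portion
    return allocation

def static_byte_hit_ratio(popularities, allocation, video_size):
    """Byte hit ratio of a static allocation: sum_i p_i * c_i / V (equal-size videos)."""
    return sum(p * c for p, c in zip(popularities, allocation)) / video_size

if __name__ == "__main__":
    weights = [1.0 / i for i in range(1, 6)]            # Zipf-like popularities, 5 videos
    probs = [w / sum(weights) for w in weights]
    alloc = hpf_allocation(probs, video_size=100, cache_size=250)
    print(alloc, static_byte_hit_ratio(probs, alloc, 100))
```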

  11. Proposed Scheme • The Cache Manager uses a fixed Replacement Unit (Chunk) smaller than a complete video • When the requested video is not in the cache (Cache Miss) • Its initial segment, one chunk in size, is stored in the cache • The replacement algorithm selects a video for removal; the last cached segment of the selected video, one chunk in size, is removed • For each additional request for the same video (Cache Hit or Partial Hit) • An additional (consecutive) chunk is cached • In this way, only prefixes (initial consecutive parts) of each video are cached • The size of the chunk determines the replacement granularity and is a design parameter
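The following is a minimal sketch of the chunk-based cache manager described on this slide. Everything is measured in chunks, and the victim-selection rule (evict a chunk from the video with the fewest accumulated requests) is an illustrative assumption; the slide leaves the replacement algorithm unspecified.

```python
from collections import defaultdict

class ChunkCacheManager:
    """Per request, grow the cached prefix of the requested video by one chunk,
    evicting one chunk from the tail of a victim video when the cache is full."""

    def __init__(self, capacity_chunks):
        self.capacity = capacity_chunks
        self.prefix = defaultdict(int)       # video id -> cached prefix length, in chunks
        self.requests = defaultdict(int)     # video id -> number of requests seen

    def handle_request(self, video_id, video_length_chunks):
        self.requests[video_id] += 1
        served = self.prefix[video_id]       # chunks of this request served from the cache
        if served < video_length_chunks:     # miss or partial hit: cache one more chunk
            if self._used() >= self.capacity:
                self._evict_one_chunk(exclude=video_id)
            if self._used() < self.capacity:
                self.prefix[video_id] += 1
        return served

    def _used(self):
        return sum(self.prefix.values())

    def _evict_one_chunk(self, exclude):
        victims = [v for v, c in self.prefix.items() if c > 0 and v != exclude]
        if victims:                          # drop the last chunk of the least requested video
            victim = min(victims, key=lambda v: self.requests[v])
            self.prefix[victim] -= 1
```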

  12. Simulation Model (1/3) • Performance Metrics • Byte Hit Ratio • For each request i: BHRi = bytes served from the cache / bytes requested • BHR(x): Average BHRi over an interval x (x: a number of requests or a time interval) • Steady State BHR (ss-BHR) = BHR(x) for large x, assuming that no popularity changes occur in x • Responsiveness • The ability of the system to adapt to changes in popularity • The time needed for BHR to reach 90% of its steady-state value, starting with an empty cache
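A small helper sketch for the metrics defined above; the per-request BHRi is spelled out here as the fraction of requested bytes served from the cache, and the helper names are mine, not from the paper:

```python
def per_request_bhr(bytes_from_cache, bytes_requested):
    """BHR_i: fraction of the bytes of request i that were served from the cache."""
    return bytes_from_cache / bytes_requested

def windowed_bhr(bhr_samples):
    """BHR(x): average of the per-request BHR_i over a window of x requests; for large x
    and unchanged popularities this approaches the steady-state BHR (ss-BHR)."""
    return sum(bhr_samples) / len(bhr_samples)

def response_time(bhr_per_window, ss_bhr, fraction=0.9):
    """Responsiveness: index of the first window whose BHR reaches 90% of ss-BHR,
    starting from an empty cache; returns None if the target is never reached."""
    target = fraction * ss_bhr
    for i, value in enumerate(bhr_per_window):
        if value >= target:
            return i
    return None
```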

  13. Simulation Model (2/3) • Request Pattern • Independent Reference Model: each video i is requested with probability pi, independently of previous requests • Request Distribution • Zipf-like distribution: pi = K / i^a, where K normalizes the probabilities so that Σi pi = 1 • Arrival Pattern • Poisson with mean rate λ • Modeling of popularity changes • Transposition of the Zipf-like request distribution (the probabilities keep their values but are reassigned to different videos)
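A sketch of the workload generator implied by this slide: Zipf-like popularities, Poisson arrivals and IRM sampling. The exact transposition used to model popularity changes is not spelled out in the transcript, so the cyclic shift below is only an illustrative choice.

```python
import random

def zipf_probabilities(num_videos, alpha):
    """Zipf-like distribution: p_i proportional to 1 / i^alpha, normalized to sum to 1."""
    weights = [1.0 / (i ** alpha) for i in range(1, num_videos + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def generate_trace(num_requests, probs, rate, seed=0):
    """Independent Reference Model: each request picks video i with probability p_i,
    independently of the past; inter-arrival times are exponential (Poisson, rate λ)."""
    rng = random.Random(seed)
    videos = list(range(1, len(probs) + 1))
    t, trace = 0.0, []
    for _ in range(num_requests):
        t += rng.expovariate(rate)
        trace.append((t, rng.choices(videos, weights=probs)[0]))
    return trace

def transpose_popularities(probs, shift=1):
    """Model a popularity change by reassigning the Zipf-like probabilities to
    different videos (here a cyclic shift; the paper's exact transposition may differ)."""
    return probs[-shift:] + probs[:-shift]
```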

  14. Simulation Model (3/3)

  15. Simulation Results • Cache State • The effect of Cache Size • The effect of Popularity Distribution • The effect of Chunk Size on Steady State BHR • The effect of Chunk Size on Response Time • BHR under changes in popularities • Dynamic Regulation of the Chunk Size

  16. Cache State • Assumptions • 5 videos • Zipf popularities • The cache is initially empty • [Figure: cache state evolution versus the Optimal Static Allocation (HPF), for Chunk Size = 2 units and Chunk Size = 100 units] • Small Chunk Size • Long transient period • Steady state similar to the optimal • Large Chunk Size • Short transient period • Large oscillations around the optimal state

  17. The effect of Cache Size

  18. The effect of Popularity Distribution

  19. The effect of Chunk Size on Steady State BHR • Small Chunk Size => higher BHR

  20. The effect of Chunk Size on Response Time • Small Chunk Size => higher BHR => very large response time

  21. BHR under changes in popularities • Rare popularity changes • Small Chunk Size => Higher Average BHR • Frequent popularity changes • Large Chunk Size => Higher Average BHR

  22. Dynamic Regulation of the Chunk Size • Main Idea • Use a small chunk size in periods where BHR is stable (popularity remains the same) • Switch to a larger chunk size when BHR drops (a change in popularity has occurred) • Switch back to a small chunk size when the steady state has been reached
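A minimal sketch of the dynamic regulation rule described above; the drop threshold and the two chunk sizes are illustrative assumptions, since the slide gives the idea but no concrete parameters:

```python
def choose_chunk_size(current_bhr, previous_bhr, current_chunk,
                      small_chunk=2, large_chunk=100, drop_threshold=0.05):
    """Use a small chunk while BHR is stable; switch to a large chunk when BHR drops
    noticeably (a likely popularity change); switch back once BHR stops falling."""
    if previous_bhr - current_bhr > drop_threshold:
        return large_chunk          # BHR is dropping: coarse replacement adapts faster
    if current_bhr >= previous_bhr:
        return small_chunk          # BHR stable or recovering: fine granularity again
    return current_chunk            # small fluctuation: keep the current setting

# Example: called once per measurement window with the windowed BHR values
# chunk = choose_chunk_size(bhr_now, bhr_prev, chunk)
```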

  23. BHR under changes in popularities (revisited) • Rare popularity changes • Dynamic Chunk Selection => Higher Average BHR • Frequent popularity changes • Dynamic Chunk Selection => Higher Average BHR

  24. Future work • Study alternative segmentation schemes of video into chunks • Different chunk size for each video • Variable chunk sizes • Exploit segment based caching to differentiate based on: • Server-Proxy distance • Link costs • Content provider requests
