This document explores the design and implementation of scalable continuous media servers, focusing on audio and video handling. It discusses concepts such as bandwidth requirements, data display techniques, prefetching methods, and the role of memory and CPU in minimizing display disruptions. The paper emphasizes efficient disk service time management, pipelining strategies, and the impact of block sizes on media display. Understanding these principles is critical for developing multimedia systems that can handle the increasing demands of continuous media content seamlessly.
Multimedia Information Systems • Shahram Ghandeharizadeh, Computer Science Department, University of Southern California
Reading • First 11 pages (until Section 3.2) of: S. Ghandeharizadeh and R. Muntz, “Design and Implementation of Scalable Continuous Media Servers,” Parallel Computing, Elsevier, 1998.
MULTIMEDIA • Multimedia now: • Multimedia in a few years from now: • Remaining:
Continuous Media: Audio & Video • Display of a clip as a function of time. [Figure: cumulative Bytes vs. Time curves for a Constant Bit Rate clip and a Variable Bit Rate clip]
Continuous Media: Audio & Video • A clip has a fixed display time. [Figure: Bytes vs. Time for Constant Bit Rate and Variable Bit Rate; both curves span the same clip display time]
Continuous Media: Audio & Video • A clip has a fixed size. [Figure: Bytes vs. Time for Constant Bit Rate and Variable Bit Rate; both curves reach the same clip size]
Continuous Media: Audio & Video • Average bandwidth for continuous display is the clip size divided by the clip display time. [Figure: display bandwidth requirement as the slope of the Bytes vs. Time curve (BW = line slope), for Constant Bit Rate and Variable Bit Rate]
Time and space • One may manipulate the bandwidth required to display a clip by prefetching a portion of the clip. [Figure: prefetched portion and startup latency marked on the Bytes vs. Time curve of a Constant Bit Rate clip]
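The time/space trade-off above can be sketched numerically. This is a minimal illustration, not from the paper: the function name and signature are hypothetical, and it assumes a Constant Bit Rate clip whose remaining bytes are delivered evenly while the clip plays.

```python
def steady_bandwidth(clip_size_bits, display_time_s, prefetch_bits=0.0):
    """Bandwidth needed after prefetching `prefetch_bits` during startup.

    The remaining (clip size - prefetch) bits must arrive while the clip
    plays, so the required rate drops linearly with the prefetched amount.
    """
    return (clip_size_bits - prefetch_bits) / display_time_s

# A 2-hour CBR clip encoded at 1.5 Mbps (rate taken from a later slide):
size = 1.5e6 * 7200                              # clip size in bits
full = steady_bandwidth(size, 7200)              # no prefetch: 1.5 Mbps
half = steady_bandwidth(size, 7200, size / 2)    # prefetch half: 0.75 Mbps
```

The cost of the lower bandwidth is the startup latency spent staging the prefetched portion, plus the memory to hold it.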
Continuous display from magnetic disk • Target architecture: [Figure: disk attached via a System Bus to Memory, CPU, and Display]
Continuous display • Once display is initiated, it should not starve for data; otherwise, the display suffers from frequent disruptions and delays, termed hiccups.
Continuous display: using memory • Given the low latency between memory and the display, stage the entire clip from disk into memory and then initiate its display.
Continuous display: using memory • Limitations: • Forces the user to wait unnecessarily. • Requires a large memory module, on the order of gigabytes for a 2-hour movie.
Continuous display: pipelining • Partition a clip X into n fixed-size blocks: X1, X2, X3, …, Xn. • Stage Xi in memory and initiate its display. • Stage Xi+1 in memory prior to completion of the display of Xi. [Figure: the disk retrieves X1 then X2 while the display consumes X1 then X2, one block per time period]
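The pipelining condition can be checked with a few lines of arithmetic. A minimal sketch, assuming worst-case seek overhead and hypothetical function/parameter names: display is hiccup-free if the next block can be staged before the current one finishes displaying.

```python
def hiccup_free(block_bits, display_rate_bps, t_seek_s, disk_rate_bps):
    """Simple pipelining check: block Xi+1 must be staged in memory
    before the display of block Xi completes."""
    time_period = block_bits / display_rate_bps      # display time of one block
    t_disk = t_seek_s + block_bits / disk_rate_bps   # time to fetch one block
    return t_disk <= time_period

# 1 Mbit blocks, 1.5 Mbps display rate, 17 ms worst-case seek, 68.6 Mbps disk:
ok = hiccup_free(1e6, 1.5e6, 0.017, 68.6e6)   # True: fetch fits in the period
```

Note how the fixed seek overhead dominates for very small blocks, which is why block size matters in the slides that follow.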
Pipelining: multiple displays • With multiple displays, the disk is multiplexed among multiple requests, incurring seek and rotational delays. [Figure: the disk alternates among blocks Xi, Wj, Zk, Xi+1, Wj+1, Zk+1 while each display consumes its own blocks, one per time period]
How to manage disk seeks? • Live with it: • Assume the worst-case seek time in order to guarantee hiccup-free display. • Assume the average seek time if hiccups are acceptable. • Use the elevator algorithm, delaying the display of a block to the end of its time period, termed the Group Sweeping Scheme (GSS). [Figure: with GSS the disk sweeps across X1, Wj, Zk in one pass per time period; the display of X1 is deferred to the end of the period]
Impact of block size • Disk service time with transfer rate tfr and block size B: Tdisk = Tseek + TRotLatency + (B / tfr) • Number of simultaneous displays supported by a single disk: N = Tp / Tdisk • Simple pipelining requires (N+1)B memory; GSS requires 2NB. • The observed transfer rate of a disk drive is a function of B and its physical characteristics: tfr_obs = tfr * B / (B + (Tseek + TRotLatency) * tfr) • Percentage of wasted disk bandwidth: 100 * (tfr - tfr_obs) / tfr
Impact of block size • MPEG-1 clips with a 1.5 Mbps bandwidth requirement. • Target disk characteristics: seek: max = 17 msec, min = 2 msec; rotational latency: max = 8.3 msec, min = 4.17 msec; disk tfr = 68.6 Mbps. • Throughput and startup latency as a function of block size: [Figure: throughput and startup latency curves vs. block size]
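The formulas from the previous slide can be applied directly to these disk parameters. A minimal sketch with hypothetical function names, using the worst-case seek and rotational latency to guarantee hiccup-free service:

```python
def disk_service_time(block_bits, tfr_bps, t_seek_s, t_rot_s):
    # Tdisk = Tseek + TRotLatency + B / tfr
    return t_seek_s + t_rot_s + block_bits / tfr_bps

def displays_per_disk(block_bits, display_rate_bps, tfr_bps, t_seek_s, t_rot_s):
    # N = Tp / Tdisk, where Tp is the display time of one block
    tp = block_bits / display_rate_bps
    return int(tp // disk_service_time(block_bits, tfr_bps, t_seek_s, t_rot_s))

def wasted_bandwidth_pct(block_bits, tfr_bps, t_seek_s, t_rot_s):
    # tfr_obs = tfr * B / (B + (Tseek + TRotLatency) * tfr)
    tfr_obs = tfr_bps * block_bits / (block_bits + (t_seek_s + t_rot_s) * tfr_bps)
    return 100.0 * (tfr_bps - tfr_obs) / tfr_bps

# Worst-case figures from the slide: 17 ms seek, 8.3 ms rotational latency,
# 68.6 Mbps transfer rate, 1.5 Mbps MPEG-1 display rate, 1 Mbit blocks.
n = displays_per_disk(1e6, 1.5e6, 68.6e6, 0.017, 0.0083)   # 16 displays
w = wasted_bandwidth_pct(1e6, 68.6e6, 0.017, 0.0083)       # roughly 63%
```

Increasing B raises tfr_obs (less bandwidth wasted on seeks) but also increases memory requirements and startup latency, which is the trade-off the omitted figure plots.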
Modern disks are multi-zoned • Each zone provides a different storage capacity (number of tracks and sectors per track) and transfer rate. • The outermost zone is typically twice as fast as the innermost zone.
Seagate ST31200W • Consists of 2697 cylinders. One may model its seek characteristics as follows:
IBM’s Logical Track • Let Zmin denote the zone with the fewest tracks, Tmin. • A disk with Z zones is collapsed into a logical disk consisting of one zone with Tmin tracks; the size of each logical track is Z * Tavg. • The size of a block must be a multiple of the logical track size. [Figure: logical tracks 1 through 3, each spanning one track per zone] • Disadvantage: Z+1 seeks to retrieve a logical track.
HP’s Track Pairing • Let Zmin denote the zone with the fewest tracks, Tmin. • Pair the outermost track with the innermost one and continue inward. • A disk with Z zones is collapsed into a logical disk consisting of one zone with (Z*Tmin)/2 tracks. • The size of a block must be a multiple of a track pair. [Figure: logical tracks 1 through 8 formed by pairing outer and inner tracks] • Disadvantage: 2 seeks to retrieve a logical track.
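The pairing rule can be sketched in a few lines. This is an illustration only (the function name and the toy track sizes are invented): it assumes track capacity shrinks roughly linearly from the outer to the inner edge, in which case every pair has about the same capacity.

```python
def track_pairs(track_sizes):
    """HP-style track pairing: pair track i (outer) with track T-1-i (inner)
    so the resulting logical tracks have near-uniform capacity."""
    t = len(track_sizes)
    return [track_sizes[i] + track_sizes[t - 1 - i] for i in range(t // 2)]

# Toy disk: 10 tracks whose capacity shrinks linearly toward the center.
sizes = list(range(10, 0, -1))
pairs = track_pairs(sizes)   # every logical track holds 11 units
```

With a linear capacity profile the pairs come out exactly equal; on a real multi-zone disk the profile is a step function, so the pairs are only approximately uniform.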
USC’s region-based approach • Partition the Z zones into R regions. A region may consist of one or more consecutive zones; the slowest participating zone dictates the transfer rate of its assigned region. • Assign the blocks of a clip to regions in a round-robin manner. • Display of a clip requires visiting the regions one at a time, multiplexing each region’s bandwidth among the N active requests. Both fixed-size and variable-size blocks are supported. [Figure: blocks of a clip assigned across Region 1 and Region 2]
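The round-robin placement can be sketched as follows. A minimal illustration with a hypothetical function name; block i of a clip simply lands in region i mod R.

```python
def assign_blocks(num_blocks, num_regions):
    """USC region-based placement sketch: assign a clip's blocks to
    regions in a round-robin manner (block i goes to region i mod R)."""
    return [i % num_regions for i in range(num_blocks)]

# A 7-block clip striped across 2 regions:
placement = assign_blocks(7, 2)   # [0, 1, 0, 1, 0, 1, 0]
```

Because consecutive blocks sit in consecutive regions, a display that visits regions in order finds its next block in the region currently being serviced.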
Multi-zone disk drives • With all three techniques, one may selectively drop zones: sacrifice storage for bandwidth! • Example: USC’s region-based approach. [Figure: Region 1 and Region 2 after dropping the slow innermost zones]
FIXB • Partition a clip into fixed-size blocks and assign them to the regions in a round-robin manner. • During a time period, retrieve blocks from one region at a time. • Display starts when sufficient data is in main memory.
FIXB • Amount of data produced during (1 maximum seek + TScan) is identical to the amount of data displayed during TScan.
VARB • Variable size blocks dictated by the transfer rate of each zone. • Amount of data produced during one TMUX is identical to the amount of data displayed during TMUX. • Limitation: complex to implement due to variable block sizes.
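The VARB sizing rule can be illustrated with a short sketch. This is an assumption-laden simplification (hypothetical function name, seek overheads ignored): each zone's block size is made proportional to its transfer rate, so every zone stays busy for the same fraction of a TMUX.

```python
def varb_block_sizes(bits_per_tmux, zone_rates_bps):
    """VARB sketch: block sizes proportional to each zone's transfer rate,
    so each zone's retrieval occupies an equal share of a TMUX."""
    total_rate = sum(zone_rates_bps)
    return [bits_per_tmux * r / total_rate for r in zone_rates_bps]

# Three zones with relative rates 2 : 3 : 5, 100 units consumed per TMUX:
sizes = varb_block_sizes(100.0, [2.0, 3.0, 5.0])   # [20.0, 30.0, 50.0]
```

The differing block sizes per zone are exactly what makes VARB more complex to implement than FIXB, as the slide's limitation notes.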
Comparison • FIXB and VARB waste space because: • blocks are assigned to zones in a round-robin manner, while • different zones offer different storage capacities, so the excess capacity of the larger zones goes unused.