
Presentation Transcript


1. Outline
• Introduction
• Characteristics of multimedia data
• Quality of service management
• Resource management
• Stream adaptation
• Case study: Tiger video file server

2. Distributed Multimedia Systems

3. Distributed Multimedia Systems
Applications:
• non-interactive: net radio and TV, video-on-demand, e-learning, ...
• interactive: voice & video conference, interactive TV, tele-medicine, multi-user games, live music, ...

4. Characteristics of multimedia applications
• Large quantities of continuous data
• Timely and smooth delivery is critical: data must arrive at the right time and in the right quantities
  • deadlines
  • throughput and response time guarantees
• Interactive MM applications require low round-trip delays
• Need to co-exist with other applications
  • must not hog resources
• Reconfiguration is a common occurrence
  • varying resource requirements
• Resources required:
  • processor cycles in workstations and servers
  • network bandwidth (+ latency)
  • dedicated memory
  • disk bandwidth (for stored media)

5. Application requirements
• Network phone and audio conferencing
  • relatively low bandwidth (~64 kbps), but delay must be short (< 250 ms round trip)
• Video-on-demand services
  • high bandwidth (~10 Mbps), critical deadlines, latency not critical
• Simple video conference
  • many high-bandwidth streams to each node (~1.5 Mbps each), low latency (< 100 ms round trip), synchronised states
• Music rehearsal and performance facility
  • high bandwidth (~1.4 Mbps), very low latency (< 100 ms round trip), highly synchronised media (sound and video within 50 ms)

6. System support issues and requirements
• Scheduling and resource allocation in most current OSs divide the resources equally amongst all comers (processes)
  • no limit on load
  • therefore throughput and response time cannot be guaranteed
• MM and other time-critical applications require resource allocation and scheduling that meet deadlines
• Quality of Service (QoS) management:
  • Admission control: controls demand
  • QoS negotiation: enables applications to negotiate admission and reconfigurations
  • Resource management: guarantees availability of resources for admitted applications
    • real-time processor and other resource scheduling

7. Characteristics of typical multimedia streams (data rates are approximate)
• Telephone speech: 64 kbps, 8-bit samples, 8000 samples/sec
• CD-quality sound: 1.4 Mbps, 16-bit samples, 44,000 samples/sec
• Standard TV video (uncompressed): 120 Mbps, up to 640 x 480 pixels x 16 bits, 24 frames/sec
• Standard TV video (MPEG-1 compressed): 1.5 Mbps, variable frame size, 24 frames/sec
• HDTV video (uncompressed): 1000–3000 Mbps, up to 1920 x 1080 pixels x 24 bits, 24–60 frames/sec
• HDTV video (MPEG-2 compressed): 10–30 Mbps, variable frame size, 24–60 frames/sec
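As a rough sanity check on the uncompressed figures (the arithmetic is ours, not the slide's): standard TV video is 640 x 480 pixels x 16 bits x 24 frames/sec ≈ 118 Mbps, which matches the ~120 Mbps quoted above; HDTV at its upper limit is 1920 x 1080 pixels x 24 bits x 60 frames/sec ≈ 2986 Mbps, the top of the 1000–3000 Mbps range.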

8. Typical infrastructure components for multimedia applications (Figures 15.4 and 15.5)
[Figure 15.4: a video-conference configuration. Each PC/workstation connects a camera, microphones, screen and window system to the network; a video file system draws on a video store. White boxes represent media processing components, many of which are implemented in software, including: codec: coding/decoding filter; mixer: sound-mixing component. Arrows denote multimedia streams.]
QoS specifications of the components (Figure 15.5):
• Camera: out 10 frames/sec, raw video, 640 x 480 x 16 bits
• A Codec: in 10 frames/sec raw video, out MPEG-1 stream; latency interactive; loss rate low; resources 10 ms CPU each 100 ms, 10 Mbytes RAM
• B Mixer: in 2 x 44 kbps audio, out 1 x 44 kbps audio; latency interactive; loss rate very low; resources 1 ms CPU each 100 ms, 1 Mbyte RAM
• H Window system: in various, out 50 frames/sec framebuffer; latency interactive; loss rate low; resources 5 ms CPU each 100 ms, 5 Mbytes RAM
• K Network connection: in/out MPEG-1 stream, approx. 1.5 Mbps; latency interactive; loss rate low; resources 1.5 Mbps, low-loss stream protocol
• L Network connection: in/out audio 44 kbps; latency interactive; loss rate very low; resources 44 kbps, very low-loss stream protocol
• This application involves multiple concurrent processes in the PCs
• Other applications may also be running concurrently on the same computers
• They all share processing and network resources

9. Quality of service management
• Allocate resources to application processes according to their needs, in order to achieve the desired quality of multimedia delivery
• Scheduling and resource allocation in most current OSs divide the resources equally amongst all processes
  • no limit on load
  • therefore throughput and response time cannot be guaranteed
• Elements of Quality of Service (QoS) management:
  • Admission control: controls demand
  • QoS negotiation: enables applications to negotiate admission and reconfigurations
  • Resource management: guarantees availability of resources for admitted applications
    • real-time processor and other resource scheduling

10. The QoS manager's task

11. QoS parameters
• Bandwidth: rate of flow of multimedia data
• Latency: time required for the end-to-end transmission of a single data element
• Jitter: variation in latency (dL/dt)
• Loss rate: the proportion of data elements that can be dropped or delivered late
The RFC 1363 flow spec expresses these as fields: protocol version; maximum transmission unit; token bucket rate and token bucket size (bandwidth: burstiness); maximum transmission rate (bandwidth: maximum rate); minimum delay noticed (delay: acceptable latency); maximum delay variation (delay: acceptable jitter); loss sensitivity (loss: percentage per interval T); burst loss sensitivity (loss: maximum consecutive loss); loss interval (the value of T); quality of guarantee

12. Managing the flow of multimedia data
• Flows are variable
  • video compression methods such as MPEG (1-4) are based on similarities between consecutive frames
  • this can produce large variations in data rate
• Burstiness
  • Linear bounded arrival process (LBAP) model: maximum flow in any interval of length t = Rt + B (R = average rate, B = maximum burst); a conformance check is sketched below
  • buffer requirements are determined by burstiness
  • latency and jitter are affected (buffers introduce additional delays)
• Traffic shaping
  • a method for scheduling the way a buffer is emptied
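To make the LBAP bound concrete, here is a minimal conformance check, sketched under the assumption that the stream is given as a trace of (time, bytes) frame arrivals; the function name and the example figures are illustrative, not from the chapter.

# LBAP conformance check (illustrative sketch).
# arrivals: list of (time_in_seconds, bytes); R: average rate in bytes/sec;
# B: maximum burst in bytes. Every window must satisfy total <= R*t + B.
def conforms_to_lbap(arrivals, R, B):
    for i, (t_start, _) in enumerate(arrivals):
        total = 0
        for t, size in arrivals[i:]:
            total += size
            if total > R * (t - t_start) + B:
                return False
    return True

# A steady 25 frames/sec stream of 8 KB frames (R = 200,000 bytes/sec) with one
# larger I-frame; the burst allowance B absorbs it.
frames = [(k / 25.0, 8_000) for k in range(100)]
frames[40] = (40 / 25.0, 40_000)
print(conforms_to_lbap(frames, R=200_000, B=48_000))   # True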

13. Traffic shaping algorithms: the leaky bucket
By analogy with a leaky bucket (figure a):
• process 1 places data into a buffer in bursts
• process 2 is scheduled to remove data regularly in smaller amounts
• the size of the buffer, B, determines:
  • the maximum permissible burst without loss
  • the maximum delay
A code sketch of the idea follows.
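The analogy can be shown in a few lines of Python; this is an assumed illustration of the buffering behaviour (names and numbers invented for the example), not code from the book.

# Leaky bucket sketch: process 1 puts bursty data into a bounded buffer;
# process 2 drains it at a constant rate. Data that would overflow the buffer
# is lost, and the buffer size bounds the extra delay introduced.
class LeakyBucket:
    def __init__(self, capacity_bytes, drain_per_tick):
        self.capacity = capacity_bytes
        self.drain_per_tick = drain_per_tick
        self.level = 0        # bytes currently buffered
        self.dropped = 0      # bytes lost to overflow

    def put(self, nbytes):    # process 1: bursty producer
        accepted = min(nbytes, self.capacity - self.level)
        self.level += accepted
        self.dropped += nbytes - accepted

    def tick(self):           # process 2: regular, smooth drain
        sent = min(self.level, self.drain_per_tick)
        self.level -= sent
        return sent

bucket = LeakyBucket(capacity_bytes=64_000, drain_per_tick=8_000)
for burst in [30_000, 0, 80_000, 0, 0, 10_000]:
    bucket.put(burst)
    print(bucket.tick(), bucket.level, bucket.dropped)   # 80 KB burst overflows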

14. Traffic shaping algorithms: the token bucket
Implements the LBAP model (figure b):
• process 1 delivers data in bursts
• process 2 (the token generator) generates tokens at a fixed rate; a token is a permit to place x bytes into the output buffer
• process 3 receives tokens and uses them to deliver output as quickly as it receives data from process 1
Result: bursts in the output can occur when some tokens have accumulated. A corresponding code sketch follows.
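A matching sketch of the token bucket, again with invented names and rates: idle ticks let tokens accumulate, so the output can burst by up to the bucket size while the long-term rate stays bounded (an LBAP with R = token_rate and B = bucket_size).

# Token bucket sketch (illustrative).
class TokenBucket:
    def __init__(self, token_rate, bucket_size):
        self.token_rate = token_rate      # tokens (bytes) added per tick by process 2
        self.bucket_size = bucket_size    # maximum tokens that may accumulate
        self.tokens = 0

    def tick(self, backlog_bytes):
        self.tokens = min(self.tokens + self.token_rate, self.bucket_size)
        sent = min(backlog_bytes, self.tokens)   # process 3: send as tokens permit
        self.tokens -= sent
        return sent

tb = TokenBucket(token_rate=8_000, bucket_size=32_000)
backlog = 0
for arriving in [0, 0, 0, 50_000, 20_000, 0, 5_000]:   # bursty process 1
    backlog += arriving
    sent = tb.tick(backlog)
    backlog -= sent
    print(sent, backlog)   # three idle ticks allow a 32,000-byte burst on the fourth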

15. Admission control
• Admission control delivers a contract to the application guaranteeing:
  • for each computer: CPU time available at specific intervals, and memory
  • for each network connection: bandwidth and latency
  • for disks, etc.: bandwidth and latency
• Before admission, it must assess resource requirements and reserve them for the application
  • flow specs provide some information for admission control, but not all; assessment procedures are needed
• There is an optimisation problem:
  • clients don't use all of the resources that they requested
  • flow specs may permit a range of qualities
• The admission controller must negotiate with applications to produce an acceptable result (a toy sketch of the bookkeeping follows)
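To illustrate the bookkeeping involved (not a mechanism described in the slides), a toy admission controller might track reservations against per-resource capacities and refuse any request that would over-commit them; all resource names and figures below are illustrative assumptions.

# Toy admission controller: reserve CPU share, memory and bandwidth for a flow,
# rejecting the request if any resource would be over-committed.
capacity = {"cpu_ms_per_100ms": 100, "memory_mb": 512, "net_mbps": 100}
reserved = {"cpu_ms_per_100ms": 0,   "memory_mb": 0,   "net_mbps": 0}

def admit(flow_spec):
    # flow_spec: dict with the same keys, e.g. derived from a negotiated flow spec
    for resource, needed in flow_spec.items():
        if reserved[resource] + needed > capacity[resource]:
            return False                      # would violate an existing guarantee
    for resource, needed in flow_spec.items():
        reserved[resource] += needed          # commit the reservation
    return True

print(admit({"cpu_ms_per_100ms": 10, "memory_mb": 10, "net_mbps": 1.5}))   # admitted
print(admit({"cpu_ms_per_100ms": 95, "memory_mb": 10, "net_mbps": 1.5}))   # rejected: CPU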

16. Resource management
• Scheduling of resources to meet the existing guarantees, e.g. for each computer: CPU time available at specific intervals, and memory
• Fair scheduling gives all processes some portion of the resources based on fairness
  • e.g. round-robin scheduling (equal turns), fair queueing (keep queue lengths equal)
  • not appropriate for real-time MM because there are deadlines for the delivery of data
• Real-time scheduling is traditionally used in special OSs for system control applications, e.g. avionics; RT schedulers must ensure that tasks are completed by a scheduled time
• Real-time MM requires real-time scheduling with very frequent deadlines; suitable types of scheduling are:
  • Earliest deadline first (EDF): each task specifies a deadline T and CPU time S to the scheduler for each work item (e.g. a video frame); the EDF scheduler schedules the task to run at least S seconds before T (and pre-empts it after S if it hasn't yielded); EDF has been shown to find a schedule that meets the deadlines if one exists (but for MM, S is likely to be a millisecond or so, and there is a danger that the scheduler runs so frequently that it hogs the CPU)
  • Rate-monotonic scheduling: assigns priorities to tasks according to their rate of data throughput (or workload); uses less CPU for scheduling decisions; has been shown to work well where the total workload is < 69% of the CPU
An EDF dispatch sketch follows.
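The EDF idea can be sketched with a priority queue ordered by deadline. This is an illustrative, non-pre-emptive, single-CPU simulation with invented task names and timings; it is not the scheduler described in the text.

# Earliest-deadline-first dispatch sketch.
import heapq

def edf_run(work_items):
    # work_items: (release_time, deadline, cpu_seconds, name)
    ready, now, missed = [], 0.0, []
    pending = sorted(work_items)                      # by release time
    i = 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][0] <= now:
            r, d, s, name = pending[i]
            heapq.heappush(ready, (d, s, name))       # order runnable items by deadline
            i += 1
        if not ready:                                  # idle until the next release
            now = pending[i][0]
            continue
        d, s, name = heapq.heappop(ready)
        now += s                                       # run the item for S seconds
        if now > d:
            missed.append(name)
    return missed

# Two periodic streams: 40 ms video frames (10 ms CPU) and 20 ms audio blocks (2 ms CPU).
items  = [(k * 0.040, (k + 1) * 0.040, 0.010, f"video{k}") for k in range(5)]
items += [(k * 0.020, (k + 1) * 0.020, 0.002, f"audio{k}") for k in range(10)]
print(edf_run(items))   # [] : EDF meets every deadline, the load is well under 100%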

17. Scaling and filtering
[Figure: a source feeding high-, medium- and low-bandwidth targets through intermediate filters.]
• Scaling reduces the flow rate at the source
  • temporal: skip frames or audio samples (a small sketch follows this slide)
  • spatial: reduce frame size or audio sample quality
• Filtering reduces the flow at intermediate points
  • RSVP is a QoS negotiation protocol that negotiates the rate at each intermediate node, working from the targets back to the source
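As a concrete (and assumed) illustration of temporal scaling only, a source can thin a stream to a lower frame rate by forwarding every n-th frame; the function and values are invented for the example.

# Temporal scaling sketch: thin a stream from source_fps to roughly target_fps.
def temporal_scale(frames, source_fps, target_fps):
    step = max(1, round(source_fps / target_fps))
    return frames[::step]                    # keep every step-th frame

one_second = list(range(30))                 # frame numbers of a 30 fps stream
print(temporal_scale(one_second, 30, 10))    # 10 frames survive: [0, 3, 6, ..., 27]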

18. QoS and the Internet
• Very little QoS in the Internet at present
• New protocols to support QoS have been developed, but their implementation raises some difficult issues about the management of resources in the Internet
  • RSVP: network resource reservation (doesn't ensure enforcement of reservations)
  • RTP: real-time data transmission over IP
• There is a need to avoid adding undesirable complexity to the Internet
• IPv6 has some hooks for QoS in its header layout

19. Case study: the Tiger video file server
[Figure: the Tiger server and its clients connected by a network.]
• Microsoft NetShow video file server
• Design goals:
  • video-on-demand for a large number of users
  • quality of service
  • scalable and distributed
  • low-cost hardware
  • fault tolerant

20. Tiger architecture
• Storage organization
  • striping
  • mirroring
• Distributed schedule
• Tolerates failure of any single computer or disk
• Network support
• Other functions: pause, stop, start

21. Tiger hardware configuration
[Figure: a controller and cubs 0 to n, all standard PCs, on a low-bandwidth network carrying start/stop requests from clients; each cub holds several numbered disks (0, 1, 2, ..., n, n+1, ..., 2n+1) and delivers video to clients over a high-bandwidth ATM switching network.]
• Each movie is stored in 0.5 Mbyte blocks (~7000 for a 2-hour movie) across all disks, in the order of the disk numbers, wrapping around after n+1 blocks
• Block i is mirrored in smaller blocks on disks i+1 to i+d, where d is the decluster factor (typically 4-8)
A placement sketch follows.
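The striping and declustered-mirroring rule can be sketched as follows; the modular numbering is our assumption about how the wrap-around and "disks i+1 to i+d" combine, since the slide does not give an exact formula.

# Tiger-style block placement sketch (numbering details assumed).
def placement(block, num_disks, decluster=4):
    primary = block % num_disks                   # striping wraps around the disks
    mirrors = [(primary + k) % num_disks for k in range(1, decluster + 1)]
    return primary, mirrors

num_disks = 8                                     # e.g. 8 disks spread over the cubs
for block in range(6):
    print(block, placement(block, num_disks))
# If the disk holding a primary copy fails, each of its d mirror fragments is read
# from a different disk, so the failed disk's load is spread over d neighbours.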

22. Distributed schedule
• Schedules the workload of the cubs
• Slots
  • the work required to serve one block of a movie
  • one slot for each viewer (client)
• Viewer state
  • address of the client computer
  • identity of the file being played
  • viewer's position in the file
  • viewer's play sequence number

23. The Tiger schedule
[Figure: a fragment of the schedule held by cubs 0-2; e.g. slot 0: viewer 4 state, slot 3: viewer 0 state, slot 4: viewer 3 state, slot 5: viewer 2 state, slot 7: viewer 1 state, slots 1, 2 and 6 free. Each viewer corresponds to one client.]
• Viewer state (one per occupied slot): network address of the client, FileID of the current movie, number of the next block, viewer's next play slot
• With block service time t and block play time T, the stream capacity of a disk is T/t (typically ~5); the stream capacity of a cub with n disks is n x T/t
• Cub algorithm for an occupied slot:
  • read the next block into buffer storage at the cub
  • packetize the block and deliver it to the cub's ATM network controller with the address of the client computer
  • update the viewer state in the schedule to show the new next block and play sequence number, and pass the updated slot to the next cub
  • clients buffer blocks and schedule their display on screen
A sketch of the per-slot work follows.
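The per-slot work of a cub, following the steps above, might look like this; the class, field names and stub I/O callables are illustrative assumptions standing in for Tiger's disk reads and ATM delivery.

# Per-slot cub work sketch.
from dataclasses import dataclass

@dataclass
class ViewerState:
    client_addr: str    # network address of the client
    file_id: str        # FileID of the current movie
    next_block: int     # number of the next block to deliver
    play_seq: int       # viewer's play sequence number

def service_slot(state, read_block, send_to_client):
    data = read_block(state.file_id, state.next_block)   # 1. read block into a cub buffer
    send_to_client(state.client_addr, data)              # 2. packetize and deliver via ATM
    state.next_block += 1                                 # 3. update the viewer state ...
    state.play_seq += 1
    return state                                          # ... and pass the slot to the next cub

# Usage with stub I/O; the client (step 4) buffers blocks and schedules their display.
state = ViewerState("client-42", "movie-7", next_block=120, play_seq=120)
state = service_slot(state,
                     read_block=lambda f, b: f"{f}:block{b}".encode(),
                     send_to_client=lambda addr, data: None)
print(state.next_block, state.play_seq)   # 121 121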

24. Tiger performance and scalability
1994 measurements:
• 5 cubs (133 MHz Pentium, Windows NT, 3 x 2 Gbyte disks each), ATM network
• supported streaming movies to 68 clients simultaneously without lost frames
• with one cub down, frame loss rate 0.02%
1997 measurements:
• 14 cubs (4 disks each), ATM network
• supported streaming 2 Mbps movies to 602 clients simultaneously with a loss rate of < 0.01%
• with one cub failed, loss rate < 0.04%
The designers suggested that Tiger could be scaled to 1000 cubs supporting 30,000 clients.

25. Summary
• MM applications and systems require new system mechanisms to handle large volumes of time-dependent data in real time (media streams)
• The most important mechanism is QoS management, which includes resource negotiation, admission control, resource reservation and resource management
• Negotiation and admission control ensure that resources are not over-allocated; resource management ensures that admitted tasks receive the resources they were allocated
• The Tiger file server is a case study in the scalable design of a stream-oriented service with QoS
