
DiffServ Aware Video Streaming over 3G Wireless Networks


Presentation Transcript


  1. DiffServ Aware Video Streaming over 3G Wireless Networks Julio Orozco, David Ros. Novembre Project, Sophia Antipolis, 26/11/2004

  2. Agenda • Context • The DiffServ-aware streaming approach • Quality Assessment • Performance Evaluation

  3. Agenda • Context • Overview • Technical challenges • Requirements • The DiffServ-aware streaming approach • Quality Assessment • Performance Evaluation

  4. Example Scenario • I’m at the airport and have a two-hour wait ahead … • Real Madrid faces Milan in the Champions League final … • After a simple procedure, I start watching the match … • Transmitted from an Internet server … • On my mobile terminal … • With decent quality and an affordable price.

  5. Example Scenario Internet UMTS Video Server Video Client (mobile terminal)

  6. Technical Challenges • UMTS radio link • High and variable delay/jitter • Variable and limited bandwidth • Heterogeneous architecture • Internet + UMTS • Business/billing models

  7. Requirements • Video compression • Highly efficient • Error resilient • Network architecture • Affordable QoS • Integration • Video information – Network • Internet - UMTS

  8. Agenda • Context • The DiffServ-aware streaming approach • Concept • H.264 • DiffServ architecture • Semantic mapping • Quality Assessment • Performance Evaluation

  9. Our Approach • DiffServ-aware streaming of H.264 video • Pseudo-subjective quality assessment • Goals • Reduce visual distortion caused by network losses (induced by variable bandwidth and delay) • Validate performance in terms of visual quality

  10. DiffServ Aware Streaming of H.264 Video • H.264 • State-of-the-art standard codec • High efficiency • Improved network adaptation and error resilience • DiffServ IP network • Simple, scalable QoS at the IP level • Mapping • Video semantics <-> DiffServ packet priorities

  11. H.264 Video Codec • State of the art (May 2003) • High compression efficiency • 50% rate gain against MPEG-2 • 30% against MPEG-4 • Designed for network applications • Network adaptation layer (NAL) • Novel error resilience features

  12. DiffServ Architecture • AF prioritized discard • Three packet priorities per class • Green • Yellow • Red • Under congestion: red packets are discarded first, then yellow packets, and finally green packets.

  13. AF Prioritized Discard • RIO algorithm
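The RIO behavior behind this slide can be sketched in a few lines. This is an illustrative, simplified model only, not the deck's actual simulation code: the threshold values, `MAX_P`, and the class names are hypothetical, and a real RIO implementation also distinguishes which colors are counted in each average. It shows the core idea: each drop-precedence color gets its own RED-style (min, max) thresholds over an averaged queue size, so red packets start being dropped long before green ones.

```python
import random

# Hypothetical per-color thresholds (packets); red is dropped first, green last.
THRESHOLDS = {
    "red":    (5, 15),
    "yellow": (15, 30),
    "green":  (30, 45),
}
MAX_P = 0.1  # maximum drop probability at max_th

def drop_probability(avg_queue, color):
    """RED-style drop probability for one drop-precedence color."""
    min_th, max_th = THRESHOLDS[color]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return MAX_P * (avg_queue - min_th) / (max_th - min_th)

class RIOQueue:
    """Toy RIO queue: one EWMA average, three drop-precedence colors."""
    def __init__(self, wq=0.002):
        self.wq = wq        # EWMA weight (the deck's Wq parameter)
        self.avg = 0.0
        self.queue = []

    def enqueue(self, packet, color):
        # avg <- (1 - wq) * avg + wq * q
        self.avg = (1 - self.wq) * self.avg + self.wq * len(self.queue)
        if random.random() < drop_probability(self.avg, color):
            return False    # packet dropped
        self.queue.append(packet)
        return True
```

With the thresholds above, an average queue of 10 packets already drops red packets with probability 0.05 while yellow and green are still untouched.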

  14. Semantic Mapping • Original idea (MPEG-2) • Video is transported in a single AF class • AF packet priorities <-> MPEG frame types • Reduces visual distortion caused by losses

  15. DiffServ Mapping of H.264 • General strategy • Map coarse syntax elements in a single AF class • Take advantage of H.264 advanced network adaptation and error resilience features • Slices • Flexible Macroblock Ordering • Data Partitioning
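The "video semantics <-> DiffServ priorities" mapping can be illustrated with a small lookup. The assignment below is a hypothetical example of the general strategy, not the deck's actual mapping: which H.264 elements (parameter sets, IDR slices, data partitions A/B/C) go to which AF drop precedence is exactly what the mapping strategies under study vary.

```python
# AF drop precedences within one class: lower = more protected.
AF_GREEN, AF_YELLOW, AF_RED = 0, 1, 2

def priority(nal_unit_type, data_partition=None):
    """Example mapping of H.264 syntax elements to AF drop precedences."""
    if nal_unit_type in ("SPS", "PPS", "IDR"):
        return AF_GREEN          # parameter sets and key pictures: protect most
    if data_partition == "A":
        return AF_GREEN          # partition A: headers and motion vectors
    if data_partition == "B":
        return AF_YELLOW         # partition B: intra residuals
    return AF_RED                # partition C / other slices: drop first
```

Under congestion, RIO then discards the red-marked residual data first, so losses concentrate on the packets whose absence distorts the decoded video least.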

  16. Agenda • Context • The DiffServ-aware streaming approach • Quality Assessment • Motivation • Classical methods • Pseudo-subjective assessment • Performance Evaluation

  17. Motivation • Streaming in Internet/UMTS • Distortion = f(compression, network losses) • Network losses = f(congestion, rate, delay, jitter)

  18. Motivation • We need to measure the quality of the streamed signals • Does DiffServ-Aware Streaming yield a better quality? • Which mapping strategy is better? (from a perceived-quality point of view)

  19. Classical Quality Assessment • Subjective: reflects human perception; difficult; expensive; not feasible in real time • Objective: automated; repeatable; does not necessarily reflect human perception; requires access to the original signal; can be compute-intensive

  20. Pseudo-Subjective Assessment • Novel Approach • Based on Neural Networks • Link network and coding parameters to human perception • MOS = f (network, coding)

  21. Pseudo-Subjective Assessment • Methodology • Identification of the quality-affecting parameters • Generation of distorted samples and recording of parameter values • Subjective assessment of distorted samples • NN training and validation
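The end result of this methodology is a trained model that predicts MOS = f(network, coding). The deck uses a Random Neural Network for this; as a minimal stand-in to show the learning/prediction split, the sketch below fits a plain linear model by stochastic gradient descent on (parameters, grade) pairs. The function names, the learning rate, and the training data are all invented for illustration.

```python
def fit_mos(samples, lr=0.05, epochs=3000):
    """samples: list of (feature tuple, MOS grade). Returns weights + bias."""
    n = len(samples[0][0]) + 1
    w = [0.0] * n
    for _ in range(epochs):
        for x, y in samples:
            xb = list(x) + [1.0]                       # append bias input
            err = sum(wi * xi for wi, xi in zip(w, xb)) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, xb)]
    return w

def predict_mos(w, x):
    """Predict a MOS grade for unseen parameter values."""
    xb = list(x) + [1.0]
    return sum(wi * xi for wi, xi in zip(w, xb))
```

The real assessment replaces this linear model with a Random Neural Network trained on the subjectively graded clips, but the workflow is the same: learn from rated samples, then predict grades for new network conditions.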

  22. Performance Evaluation (roadmap) • Specification • Preliminary evaluation: NS-2 simulation, wired scenario, objective quality assessment • Development of UMTS simulation models/tools • UMTS scenario: NS-2 simulation • Pseudo-subjective quality assessment 1: Markov-chain loss simulator, subjective assessment, neural network training • Pseudo-subjective quality assessment 2: prediction with trained neural network

  23. Preliminary Evaluation (roadmap slide; same workflow as slide 22, highlighting the "Preliminary evaluation" phase)

  24. Preliminary Evaluation • Goal • Verify that our proposal effectively yields a better visual quality than plain best-effort • NS-2 simulation • Wireline scenario • Objective quality assessment • ITS impairment metric (ANSI standard)

  25. Preliminary Evaluation • Results: visual impairment

  26. Development of UMTS Simulation Models & Tools (roadmap slide; same workflow as slide 22, highlighting this phase)

  27. Development of UMTS Simulation Models & Tools • Goal • NS link object with variable bandwidth and delay • Tradeoff between simplicity and realism

  28. Target Abstraction • Real topology: video server → Internet → UMTS backbone (GGSN, SGSN) → RAN → mobiles • Simulated abstraction: video source + background sources → rtr0 (DiffServ queue) → UMTS link → mobile terminal • UMTS link • Low multiplexing (1-5 flows) • Variable bandwidth • Variable delay

  29. Bandwidth Oscillation • A single mobile in HS-DSCH (plot of bandwidth, bit/s, vs. time, s)

  30. Approach • Markov-chain model • One state per bandwidth level • Transitions possible between all states (state diagram: states 0, 1, …, n with transition probabilities P(i,j))

  31. Model Definition • We need to define: • Number and values of bandwidth levels • Transition period • Transition probability matrix [P(i,j)], i,j = 0…n (illustrated with a bandwidth-vs-time trace)
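Once those three elements are defined, running the model is straightforward: every transition period, draw the next state from the current row of the matrix. The sketch below uses made-up levels and a made-up 4×4 transition matrix purely for illustration; the deck's real values are measured from Eurane traces (slide 32) and use 12 levels with a 20 ms period (slide 33).

```python
import random

# Hypothetical bandwidth levels (bit/s) and transition matrix; rows sum to 1.
LEVELS = [0, 208_000, 318_000, 400_000]
P = [
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.2, 0.5, 0.2],
    [0.1, 0.1, 0.3, 0.5],
]
PERIOD = 0.020  # transition period: 20 ms

def simulate(steps, state=0, rng=random.random):
    """Yield (time, bandwidth) samples, one per transition period."""
    for k in range(steps):
        yield k * PERIOD, LEVELS[state]
        r, acc = rng(), 0.0
        for j, p in enumerate(P[state]):
            acc += p
            if r < acc:
                state = j
                break
```

Feeding the sampled bandwidth into the simulated link's capacity every 20 ms reproduces the oscillation pattern of slide 29, including the zero-bandwidth state that makes packet scheduling tricky (slide 34).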

  32. Model Solution • Trace-based measurement • Run simulations with the Eurane UMTS extensions • Measure transitions • Three "variability" scenarios • Low, medium, high • Combination of number of users, speed and distance • One transition matrix per scenario
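The "measure transitions" step amounts to counting, in each measured bandwidth trace, how often the link moves from level i to level j, then normalizing each row. A minimal sketch of that estimation (function name and trace format are assumptions):

```python
from collections import Counter

def estimate_matrix(trace, levels):
    """Estimate P(i,j) by counting observed transitions in a bandwidth trace."""
    idx = {bw: i for i, bw in enumerate(levels)}
    counts = Counter(zip(trace, trace[1:]))   # consecutive (from, to) pairs
    n = len(levels)
    P = [[0.0] * n for _ in range(n)]
    row_tot = [0] * n
    for (a, b), c in counts.items():
        P[idx[a]][idx[b]] += c
        row_tot[idx[a]] += c
    for i in range(n):
        if row_tot[i]:
            P[i] = [p / row_tot[i] for p in P[i]]
    return P
```

Running this once per variability scenario (low, medium, high) yields the one-matrix-per-scenario setup described above.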

  33. First Model • Transition period: 20 ms • 12 bandwidth levels (kbit/s): 0, 208, 318, 400, 680, 920, 1,272, 1,848, 2,136, 2,632, 3,040, 3,600 • Mean bandwidth: 290 kbit/s

  34. Model Implementation • Main issue in NS-2: • Packet scheduling when bandwidth goes to zero • Solved!

  35. Pseudo-Subjective Quality Assessment 1 (roadmap slide; same workflow as slide 22, highlighting this phase)

  36. Quality-Affecting Parameters • Per-color packet loss rate • Green • Yellow • Red • Mean green loss burst size • Coding/mapping strategy

  37. Example Generation • 100 distorted clips • Each clip is associated with a combination of parameter values • Markov-chain loss simulator
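A common way to realize such a Markov-chain loss simulator is the two-state Gilbert model, parameterized directly by the two statistics listed among the quality-affecting parameters: a target loss rate and a mean loss-burst size. The sketch below is an assumed form, not necessarily the deck's exact simulator; the derivation uses the stationary probability of the "loss" state and the geometric burst length.

```python
import random

def gilbert_losses(n_packets, loss_rate, mean_burst, rng=random.random):
    """Two-state Markov loss model; returns one bool per packet (True = lost)."""
    # mean burst = 1 / P(bad -> good); stationary P(bad) = loss_rate
    p_bad_to_good = 1.0 / mean_burst
    p_good_to_bad = loss_rate * p_bad_to_good / (1.0 - loss_rate)
    lost, out = False, []
    for _ in range(n_packets):
        out.append(lost)
        if lost:
            lost = rng() >= p_bad_to_good   # stay in the loss burst
        else:
            lost = rng() < p_good_to_bad    # start a new burst
    return out
```

Sweeping (loss_rate, mean_burst) per color over a grid of values is one way to obtain the 100 parameter combinations used to distort the clips.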

  38. Subjective Assessment • 20 assessors rated the 100 clips

  39. Learning: Training the Neural Network • Inputs: examples (network and source parameters) with their subjective grades • Training algorithm → Random Neural Network

  40. UMTS Scenario (roadmap slide; same workflow as slide 22, highlighting this phase)

  41. Topology • Clip: 318 kbit/s, 10 s, CIF, 15 fps, 60-byte packets • Path: video source → DiffServ RIO queue → UMTS link → mobile terminal • Wired links: 10 Mbit/s, delays of 5, 10 and 15 ms • Background sources: Pareto/TCP, mean rate 1 Mbit/s • UMTS link: varying bandwidth, downlink delay 200 ms, uplink delay 200 ms

  42. Scenarios • Best Effort • AF • Two threshold models • Overlapped (G-RIO) • Scattered (RIO) • Three values of Wq • 0.0017 ("normal") • 0.5 • 1 (maximum reactiveness)
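Why Wq matters here: RIO drops based on an exponentially weighted moving average of the queue size, avg ← (1 − Wq)·avg + Wq·q, so Wq sets how fast the average tracks the instantaneous queue. A minimal sketch of that recursion (function name assumed):

```python
def avg_queue(samples, wq):
    """RIO's averaged queue size: avg <- (1 - wq) * avg + wq * q."""
    avg = 0.0
    history = []
    for q in samples:
        avg = (1 - wq) * avg + wq * q
        history.append(avg)
    return history
```

On a step from an empty queue to 100 packets, Wq = 1 tracks the jump immediately, while the "normal" wired-Internet value Wq = 0.0017 barely moves after several samples, which is why a highly reactive Wq is needed to follow the abrupt bandwidth drops of the UMTS link.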

  43. Pseudo-Subjective Quality Assessment 2 (roadmap slide; same workflow as slide 22, highlighting this phase)

  44. Prediction: Using the Neural Network • Input: new network data (simulation output) and source data → trained neural network → new subjective score

  45. Results

  46. Results

  47. Conclusions • DiffServ-aware streaming can help reduce the visual distortion of H.264 video caused by drastic bandwidth variations in UMTS • RIO thresholds affect visual quality • RIO must be highly reactive • Increasing Wq makes RIO react more to the instantaneous than to the average queue size

  48. Perspectives • Introduce delay oscillations in the UMTS link model • Detailed study of RIO parameters • Introduce losses due to excessive delay

  49. Questions • Thank you!

  50. Quality assessment • There is a need to measure the quality of the streamed signals • Does DiffServ-Aware Streaming yield a better quality? • Which mapping strategy is better? (from a perceived-quality point of view)
