

  1. University of Palestine Interactive Multimedia Application Development I.101 Presentation name: Video compression techniques Prepared by: Mohammed J. el-masre, Mahmoud elqedra, Mo3taz najy Supervision: Mr. Nael A. Aburas

  2. Agenda • Introduction • Achieving Compression • Video quality • Theory • Intraframe versus interframe compression • Video Compression Standards

  3. Introduction • A video consists of a time-ordered sequence of frames, i.e., images. • Video Compression: Digital video compression is the enabling technology in many multimedia applications. These compression algorithms reduce the bit-rate requirements for transmitting digital video and reduce delivery costs. With these appealing properties, digital video is rapidly becoming an experience of everyday life.

  4. Problem Raw video contains an immense amount of data, while communication and storage capacity are limited and expensive. Example: a raw HDTV video signal (see the calculation below).
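
A rough back-of-the-envelope calculation shows the scale of the problem. The parameters below (1920x1080 frame, 24 bits per pixel, 30 frames per second) are typical HDTV figures assumed for illustration, not numbers taken from the slide:

```python
# Rough size of an uncompressed HDTV stream (assumed parameters).
width, height = 1920, 1080        # pixels per frame
bits_per_pixel = 24               # 8 bits each for R, G, B
frames_per_second = 30

bits_per_frame = width * height * bits_per_pixel
bits_per_second = bits_per_frame * frames_per_second

print(f"{bits_per_frame / 1e6:.1f} Mb per frame")             # ~49.8 Mb
print(f"{bits_per_second / 1e9:.2f} Gb/s uncompressed")       # ~1.49 Gb/s
print(f"{bits_per_second * 3600 / 8 / 1e9:.0f} GB per hour")  # ~672 GB
```

Even a modest compression ratio brings this down to a few Mb/s, which is what makes broadcast and streaming practical.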

  5. Video Compression: Why? Bandwidth Reduction

  6. Achieving Compression Reduce redundancy and irrelevancy. Sources of redundancy: Temporal – Adjacent frames highly correlated. Spatial – Nearby pixels are often correlated with each other. Color space – RGB components are correlated among themselves.
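
A minimal sketch (on synthetic data, since no real footage accompanies the slides) of how the three kinds of redundancy show up numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "video": a smooth image that changes only slightly in the next frame.
frame0 = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)
frame1 = frame0 + rng.normal(scale=0.1, size=frame0.shape)

# Temporal redundancy: adjacent frames are highly correlated.
print(np.corrcoef(frame0.ravel(), frame1.ravel())[0, 1])

# Spatial redundancy: each pixel is highly correlated with its right neighbor.
print(np.corrcoef(frame0[:, :-1].ravel(), frame0[:, 1:].ravel())[0, 1])

# Color-space redundancy: R, G, B of natural images all track the brightness.
luma = np.abs(frame0)
r, b = luma + rng.normal(scale=0.05, size=luma.shape), luma * 0.9
print(np.corrcoef(r.ravel(), b.ravel())[0, 1])
```

Each printed correlation is close to 1, which is exactly the redundancy a compressor exploits.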

  7. Examples: Uncompressed (262 KB); Compressed (50): 22 KB, 12:1; Compressed (1): 6 KB, 43:1

  8. Video quality Most video compression is lossy: it operates on the premise that much of the data present before compression is not necessary for achieving good perceptual quality. For example, DVDs use a video coding standard called MPEG-2 that can compress around two hours of video data by 15 to 30 times, while still producing a picture quality that is generally considered high-quality for standard-definition video. Video compression is a tradeoff between disk space, video quality, and the cost of hardware required to decompress the video in a reasonable time. However, if the video is over-compressed in a lossy manner, visible (and sometimes distracting) artifacts can appear.
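
As a sanity check on the 15-30x figure, assuming standard-definition 720x480 frames at 24 bits per pixel and 30 fps (typical DVD-era parameters, not stated on the slide):

```python
# Does a 15-30x reduction really fit two hours of SD video on a DVD?
width, height, bpp, fps = 720, 480, 24, 30
raw_bps = width * height * bpp * fps          # ~249 Mb/s uncompressed
two_hours_bytes = raw_bps * 2 * 3600 / 8      # ~224 GB raw

for ratio in (15, 30):
    print(f"{ratio}:1 -> {two_hours_bytes / ratio / 1e9:.1f} GB")
# 15:1 -> ~14.9 GB, 30:1 -> ~7.5 GB; the higher ratios are what bring two
# hours into the range of a dual-layer DVD (8.5 GB).
```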

  9. Video quality Video compression typically operates on square-shaped groups of neighboring pixels, often called macroblocks. These blocks of pixels are compared from one frame to the next, and the video compression codec (encode/decode scheme) sends only the differences within those blocks. This works extremely well if the video has no motion. A still frame of text, for example, can be repeated with very little transmitted data. In areas of video with more motion, more pixels change from one frame to the next. When more pixels change, the video compression scheme must send more data to keep up with the larger number of pixels that are changing. If the video content includes an explosion, flames, a flock of thousands of birds, or any other image with a great deal of high-frequency detail, the quality will decrease, or the variable bit rate must be increased to render this added information with the same level of detail.
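
A minimal sketch of the block-comparison idea, assuming 16x16 macroblocks and a simple changed/unchanged threshold (real codecs add motion compensation and transform coding on top of this):

```python
import numpy as np

BLOCK = 16  # macroblock size used by most standards

def changed_blocks(prev, curr, threshold=2.0):
    """Return (row, col) indices of macroblocks whose content changed enough
    that the encoder would have to send new data for them."""
    h, w = curr.shape
    out = []
    for by in range(0, h, BLOCK):
        for bx in range(0, w, BLOCK):
            diff = curr[by:by+BLOCK, bx:bx+BLOCK] - prev[by:by+BLOCK, bx:bx+BLOCK]
            if np.abs(diff).mean() > threshold:   # block differs -> must be coded
                out.append((by // BLOCK, bx // BLOCK))
    return out

# A static frame with one small changed patch: only one block needs new data.
prev = np.zeros((64, 64))
curr = prev.copy()
curr[20:30, 20:30] = 255
print(changed_blocks(prev, curr))   # [(1, 1)]
```

For a still frame of text the list would be empty, which is why static content compresses so well.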

  10. Theory Video is basically a three-dimensional array of color pixels. Two dimensions serve as spatial (horizontal and vertical) directions of the moving pictures, and one dimension represents the time domain. A data frame is a set of all pixels that correspond to a single time moment. Basically, a frame is the same as a still picture.
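
In array terms, a sketch using numpy (shape conventions vary between libraries):

```python
import numpy as np

# 90 frames of 480x640 RGB video: (time, height, width, color channel).
video = np.zeros((90, 480, 640, 3), dtype=np.uint8)

frame_at_t30 = video[30]      # one "still picture": a 480x640x3 array
pixel = video[30, 100, 200]   # one pixel's R, G, B values at that moment
```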

  11. Theory Video data contains spatial and temporal redundancy. Similarities can thus be encoded by merely registering differences within a frame (spatial) and/or between frames (temporal). Spatial encoding takes advantage of the fact that the human eye is unable to distinguish small differences in color as easily as it can perceive changes in brightness, so very similar areas of color can be "averaged out" in much the same way as in JPEG images. With temporal compression, only the changes from one frame to the next are encoded, since a large number of pixels will often be the same across a series of frames.
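
A sketch of the spatial idea: convert RGB into a brightness/color representation (the standard BT.601 weights are used below) and keep the color planes at reduced resolution, which is what 4:2:0 subsampling in JPEG and MPEG does. The exact pipeline in any given codec is more involved than this:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """BT.601 full-range RGB -> Y (brightness), Cb, Cr (color) planes."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =        0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 -  0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 +  0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

rgb = np.random.default_rng(0).integers(0, 256, (64, 64, 3)).astype(float)
y, cb, cr = rgb_to_ycbcr(rgb)

# 4:2:0 subsampling: keep brightness at full resolution, but average each
# 2x2 block of the color planes; the eye barely notices the difference.
cb_sub = cb.reshape(32, 2, 32, 2).mean(axis=(1, 3))
cr_sub = cr.reshape(32, 2, 32, 2).mean(axis=(1, 3))
print(y.shape, cb_sub.shape)   # (64, 64) (32, 32): a quarter of the color samples
```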

  12. Intraframe versus interframe compression One of the most powerful techniques for compressing video is interframe compression. Interframe compression uses one or more earlier or later frames in a sequence to compress the current frame, while intraframe compression uses only the current frame, which is effectively image compression.

  13. Intraframe versus interframe compression The most commonly used method works by comparing each frame in the video with the previous one. If the frame contains areas where nothing has moved, the system simply issues a short command that copies that part of the previous frame, bit-for-bit, into the next one. If sections of the frame move in a simple manner, the compressor emits a (slightly longer) command that tells the decompressor to shift, rotate, lighten, or darken the copy: a longer command, but still much shorter than intraframe compression. Interframe compression works well for programs that will simply be played back by the viewer, but can cause problems if the video sequence needs to be edited.
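
A minimal sketch of how an encoder can find the "shift" command for a block: exhaustive block matching, i.e. searching a small window in the previous frame for the best-matching block and transmitting only the offset (motion vector) plus whatever small residual remains. Real encoders use much faster search strategies; the block size and search range below are just illustrative:

```python
import numpy as np

def best_motion_vector(prev, curr, by, bx, block=16, search=7):
    """Find the offset (dy, dx) into `prev` that best predicts the block of
    `curr` at (by, bx), by exhaustive search over a +/- `search` pixel window."""
    target = curr[by:by+block, bx:bx+block]
    best, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
                continue
            cost = np.abs(prev[y:y+block, x:x+block] - target).sum()  # SAD metric
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best, best_cost

# A block that simply moved 3 pixels to the right is found exactly.
prev = np.random.default_rng(1).random((64, 64))
curr = np.roll(prev, shift=3, axis=1)
print(best_motion_vector(prev, curr, 16, 16))   # ((0, -3), 0.0)
```

The decoder only needs the motion vector and the (here zero) residual, which is far cheaper than re-sending the block.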

  14. Intraframe versus interframe compression (cont.) Since interframe compression copies data from one frame to another, if the original frame is simply cut out (or lost in transmission), the following frames cannot be reconstructed properly. Some video formats, such as DV, compress each frame independently using intraframe compression. Making 'cuts' in intraframe-compressed video is almost as easy as editing uncompressed video — one finds the beginning and ending of each frame, and simply copies bit-for-bit each frame that one wants to keep, and discards the frames one doesn't want. Another difference between intraframe and interframe compression is that with intraframe systems, each frame uses a similar amount of data. In most interframe systems, certain frames aren't allowed to copy data from other frames, and so require much more data than other frames nearby.
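
A small illustration of the editing constraint (the frame pattern below is a hypothetical group-of-pictures layout, not one taken from the slides): a cut is only clean where the first kept frame does not depend on frames that were thrown away.

```python
# Each P-frame is decoded by copying from the frame before it,
# so a sequence can only be cut cleanly at an I-frame.
frames = ["I", "P", "P", "P", "I", "P", "P", "P", "I", "P", "P", "P"]

def clean_cut_points(frames):
    """Indices where a cut keeps every remaining frame decodable."""
    return [i for i, f in enumerate(frames) if f == "I"]

print(clean_cut_points(frames))   # [0, 4, 8]; cuts elsewhere break playback
```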

  15. Video Compression Standards

  16. Motivation for Standards • Goal of standards: • Ensuring interoperability – Enabling communication between devices made by different manufacturers. • Promoting a technology or industry. • Reducing costs.

  17. H.261 (1990) • Goal: real-time, two-way video communication • Key features • Low delay (150 ms) • Low bit rates (p x 64 kb/s) • Technical details • Uses I- and P-frames (no B-frames) • Full-pixel motion estimation • Search range +/- 15 pixels • Low-pass filter in the feedback loop

  18. H.263 (1995) • Goal: communication over conventional analog telephone lines (< 33.6 kb/s) • Enhancements to H.261 • Reduced overhead information • Improved error resilience features • Algorithmic enhancements • Half-pixel motion estimation with larger motion search range • Four advanced coding modes • Unrestricted motion vector mode • Advanced prediction mode (median MV predictor using 3 neighbors) • PB-frame mode • OBMC (overlapped block motion compensation)
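
A sketch of the median motion-vector predictor mentioned for the advanced prediction mode: the current block's motion vector is predicted component-wise from the median of three neighboring blocks' vectors (left, above, above-right), and only the difference from that prediction is coded. Neighbor selection at picture edges involves extra rules not shown here, and the neighbor values below are made up for illustration:

```python
def median_mv_predictor(left, above, above_right):
    """Component-wise median of three neighboring motion vectors (dx, dy)."""
    def median3(a, b, c):
        return sorted((a, b, c))[1]
    return (median3(left[0], above[0], above_right[0]),
            median3(left[1], above[1], above_right[1]))

# The current block's vector is then coded as a difference from this prediction.
print(median_mv_predictor((2, 0), (3, -1), (10, 0)))   # (3, 0)
```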

  19. MPEG-1 and MPEG-2 • MPEG-1 (1991) • Goal is compression for digital storage media, CD-ROM • Achieves VHS quality video and audio at ~1.5 Mb/sec • MPEG-2 (1993) • Superset of MPEG-1 to support higher bit rates, higher resolutions, and interlaced pictures • Original goal to support interlaced video from conventional television. Eventually extended to support HDTV • Provides field-based coding and scalability tools

  20. MPEG-4 (1993) • Primary goals: new functionalities, not better compression • Object-based or content-based representation – • Separate coding of individual visual objects • Content-based access and manipulation • Integration of natural and synthetic objects • Interactivity • Communication over error-prone environments • Includes frame-based coding techniques from earlier standards

  21. Comparing MPEG-1/2 and H.261/3 With MPEG-4 • MPEG-1/2 and H.261/H.263 – Algorithms for compression – • Basically describe a pipe for storage or transmission • Frame-based • Emphasis on hardware implementation • MPEG-4 – Set of tools for a variety of applications – • Define tools and glue to put them together • Object-based and frame-based • Emphasis on software • Downloadable algorithms, not encoders or decoders

  22. References • Digital Video Compression: Featuring the JVT/H.264 Standard - Peter D. Symes • H.264 and MPEG-4 Video Compression - Iain E. G. Richardson • The VC-1 and H.264 Video Compression Standards - Hari Kalva, Jae Beom Lee • External links • Videsignline - Intro to Video Compression • Data Compression Basics (Video) • MPEG-1 & 2 video compression intro (PDF format) • HD Greetings - 1080p uncompressed source material for compression testing and research • Wiley - Introduction to Compression Theory • Video compression technology - H.264 explained

  23. THANK YOU Hope It was Fun !!!!
