
Variable Bit Rate:


Presentation Transcript


  1. Variable Bit Rate: [Figure: bit rate plotted against time (frames); the entropy of each picture varies beneath the full bit rate ceiling, and the remainder is redundancy.] There are two variations on MPEG-2 encoding: variable bit rate and constant bit rate. Variable Bit Rate (VBR) attempts to maintain a constant level of picture quality by letting the bit rate increase for pictures of increased coding complexity, while still meeting set constraints like the average and peak bit rates. Variable bit rate becomes important when consistent quality of service is an issue, storage space is at a premium, or when multiple data streams must be multiplexed into a single, constant bit rate stream. VBR requires multiple passes to perform the encoding.

  2. Constant Bit Rate: [Figure: bit rate plotted against time (frames); CBR holds a fixed rate while the entropy of the pictures varies, shown alongside the VBR curve for comparison.] With Constant Bit Rate encoding, the encoder attempts to maintain a fixed bit rate. In doing so, it sometimes produces visible compression artifacts when the information content of the pictures is high compared to the established bit rate. Constant Bit Rate encoding can be done in a single pass.
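
The two slides above describe VBR as a multi-pass scheme that spends more bits on complex pictures while respecting average and peak constraints, and CBR as a single-pass fixed budget. The following is a minimal bit-allocation sketch of that contrast; it assumes per-frame complexity scores from a hypothetical first pass, and all function names and numbers are illustrative rather than any real encoder's behavior.

```python
# Sketch: two-pass VBR allocation vs. single-pass CBR allocation (illustrative only).
def vbr_allocate(complexity, avg_bits, peak_bits):
    # give each frame bits in proportion to its measured coding complexity,
    # clipped at the peak rate and scaled so the average rate is met
    total_budget = avg_bits * len(complexity)
    total_complexity = sum(complexity)
    alloc = [min(peak_bits, total_budget * c / total_complexity) for c in complexity]
    # clipping at the peak can leave budget unused; hand it back to unclipped frames
    leftover = total_budget - sum(alloc)
    unclipped = [i for i, a in enumerate(alloc) if a < peak_bits]
    if unclipped:
        for i in unclipped:
            alloc[i] += leftover / len(unclipped)
    return alloc

def cbr_allocate(complexity, avg_bits):
    # every frame gets the same budget, regardless of how hard it is to code
    return [avg_bits] * len(complexity)

complexity = [2, 1, 1, 8, 9, 2, 1, 1]   # e.g. a burst of hard-to-code pictures
print([round(b) for b in vbr_allocate(complexity, avg_bits=4000, peak_bits=8000)])
print(cbr_allocate(complexity, avg_bits=4000))
```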

  3. NRZ: [Figure: an NRZ waveform; a high signal level represents 1 and a low level represents 0.] Digital data transferred or stored by using the signal level to represent the bits is called NRZ (Non Return to Zero). Attenuation of the levels can make the data ambiguous. As the pulses move closer to that point of ambiguity, performance ends abruptly, hence the name Cliff Effect. Loss of high frequencies rounds off the square pulses, leading to timing errors and jitter.
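
A minimal sketch of the level-based coding and the cliff effect described above, assuming a crude channel model with uniform noise and a simple mid-level decision threshold; the model and its parameters are purely illustrative.

```python
# Sketch: NRZ levels survive attenuation perfectly, right up until they suddenly don't.
import random

def nrz_encode(bits, high=1.0, low=0.0):
    return [high if b else low for b in bits]

def nrz_decode(levels, threshold=0.5):
    return [1 if v > threshold else 0 for v in levels]

def channel(levels, attenuation, noise=0.2):
    # attenuate the pulses and add a little uniform noise (a toy channel model)
    return [v * attenuation + random.uniform(-noise, noise) for v in levels]

random.seed(1)
bits = [random.randint(0, 1) for _ in range(1000)]
for attenuation in (1.0, 0.8, 0.6, 0.4):
    received = nrz_decode(channel(nrz_encode(bits), attenuation))
    errors = sum(a != b for a, b in zip(bits, received))
    print(attenuation, errors)   # error count stays at 0, then rises abruptly
```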

  4. Reading Optical Discs: [Figure: the electromagnetic spectrum on a frequency axis from about 10^21 Hz down to 10^9 Hz, spanning cosmic rays, gamma rays, X-rays and visible light; the visible band runs from roughly 380nm at the UV end through violet, blue, green, yellow, orange and red to about 750nm at the IR end, with the 650nm DVD and 780nm CD laser wavelengths marked.]
  • lasers produce single-wavelength, coherent light (all waves are synchronous)
  • CDs are read with a laser at 780nm wavelength
  • DVDs are read with a laser at 650nm wavelength

  5. CDs: [Figure: the read path; the laser passes through a half-silvered mirror to the disc and reflects back to a transducer. Pit depth is 1/4 wavelength, about 195nm.]
  • CDs were jointly developed in the early 80s by Sony and Philips
  • digital data is stored as pits in an aluminum or gold disk encased in a clear plastic shell, storing up to 650MB
  • as the disc spins, a laser shoots through a half-silvered mirror, reflects off of the pitted, metallic surface and bounces to the transducer
  • at the edge of a pit, half the beam reflects off the flat area and half off the pit floor
  • the pit-floor portion of the beam is delayed by the extra distance traveled and cancels out the flat-reflected half of the beam through destructive interference
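
A quick arithmetic check of the quarter-wavelength claim above, using the slide's simplified figure of 780nm / 4 = 195nm (in a real disc the wavelength inside the plastic is shorter, so the physical pit depth is smaller; the sketch follows the slide's simpler treatment).

```python
# Sketch: why a quarter-wavelength pit cancels the reflected beam at its edge.
wavelength_nm = 780.0                # CD laser wavelength
pit_depth_nm = wavelength_nm / 4     # ~195nm, as noted on the slide

# light reflected from the pit floor travels down and back up,
# so the extra path is twice the pit depth, i.e. half a wavelength
extra_path_nm = 2 * pit_depth_nm
phase_shift_deg = 360.0 * extra_path_nm / wavelength_nm

print(pit_depth_nm)     # 195.0
print(phase_shift_deg)  # 180.0 -> the two half-beams arrive out of phase and cancel
```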

  6. Digital Errors:
  • sometimes things happen to the digital data
  • even though digital is relatively immune to outside interference, it is stored and transferred in an analog world as analog properties of high and low levels
  • magnetic disks and tapes will develop dropouts, and foreign substances seek out and contaminate data
  • even voltages on cable and RF transmissions are susceptible to losses and interference; as levels fall, the “cliff effect” causes total loss of data
  • digital designs take some of this into account and utilize redundancy, checksums and data shuffling to protect data integrity
  • these are methods of digital Error Correction and Concealment

  7. Redundancy:
  • redundancy means data is stored twice
  • twice as much storage space is required
  • writing the data takes longer because twice as much is written
  • reading the data takes more than twice as long because both records must be read and compared
  • if one of the records is damaged, some processing power is used to determine which data is correct and which should be ignored
  • the data output is correct
  • when the data is copied to a new storage medium, the error is permanently corrected
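
A minimal sketch of that read-compare-and-correct flow. The slide does not say how the damaged record is identified, so this sketch assumes each stored copy carries its own CRC-32 checksum; that mechanism, and the function names, are assumptions made for illustration.

```python
# Sketch: store two copies, read both, and output the one that is still intact.
import zlib

def write_redundant(data: bytes):
    # store the same record twice, each with its own checksum
    return [(data, zlib.crc32(data)), (data, zlib.crc32(data))]

def read_redundant(copies):
    # read the records; return the first whose checksum still matches
    for data, stored_crc in copies:
        if zlib.crc32(data) == stored_crc:
            return data                 # output is correct even if the other copy is damaged
    raise IOError("both copies damaged")

copies = write_redundant(b"video frame data")
copies[0] = (b"video frame d@ta", copies[0][1])   # simulate a dropout in copy 1
print(read_redundant(copies))                     # b'video frame data'
```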

  8. Checksums:
  • if data is lost, it can be recreated using the checksums to recalculate the missing value
  • the output is correct; copies are permanently corrected
  [Worked example from the slide: each checksum minus the surviving values recreates the lost value, e.g. 212 - 210 = 2, 703 - 574 - 2 - 90 = 37, and 428 - 338 = 90.]
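
A minimal sketch of that additive recalculation, assuming the checksum is simply the sum of the stored values, consistent with the slide's worked example; the function name is illustrative.

```python
# Sketch: recreate one lost value exactly from the stored sum checksum.
def recover_missing(values, checksum):
    """values: a list with exactly one None marking the lost item."""
    missing_index = values.index(None)
    known_sum = sum(v for v in values if v is not None)
    values[missing_index] = checksum - known_sum   # the recalculated value
    return values

# mirrors the slide's worked example: 703 - (574 + 2 + 90) = 37
print(recover_missing([574, 2, None, 90], checksum=703))   # [574, 2, 37, 90]
```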

  9. Data Shuffling:
  • if the errors are spread out over time due to the shuffling of the data, the effect will not be as noticeable
  • missing data can be extrapolated, or guessed at, based on the surrounding valid data
  • the error is not corrected, because the extrapolated data is not an exact duplicate of the original
  • the error has been concealed
  • copies of the data will treat the extrapolated data as valid
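
A minimal sketch of shuffling and concealment, assuming a simple stride interleave and averaging of the neighboring samples; real systems use far more elaborate shuffle patterns and interpolation, so treat the names and the pattern here as illustrative.

```python
# Sketch: a burst error hits shuffled data, so after de-shuffling the losses are
# isolated and can be concealed from the surrounding valid samples.
def shuffle(samples, stride=4):
    return [samples[i] for s in range(stride) for i in range(s, len(samples), stride)]

def unshuffle(samples, stride=4):
    out = [None] * len(samples)
    order = [i for s in range(stride) for i in range(s, len(samples), stride)]
    for pos, i in enumerate(order):
        out[i] = samples[pos]
    return out

samples = list(range(0, 160, 10))     # 16 stored samples
stored = shuffle(samples)
for i in range(4, 8):                 # a burst error wipes out 4 adjacent stored values
    stored[i] = None

received = unshuffle(stored)          # after de-shuffling, the losses are spread out
# conceal each isolated loss by averaging the surrounding valid data
# (assumes the neighbors of every lost sample survived, as they do here)
concealed = [(received[i - 1] + received[i + 1]) // 2 if v is None else v
             for i, v in enumerate(received)]
print(concealed == samples)           # True here, but only because the signal is smooth
```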

  10. Bit Rate Reduction!
  • information theory states the intelligence of a message is contained in its least predictable parts; everything else is redundant
  • bit rate reduction, sometimes erroneously called compression, is a mathematical means of reducing the number of bits required to move or store data by reducing the amount of predictable information
  • computer users have used BRR for many years to shrink files for easier storage or transport; PK-ZIP for Intel machines and StuffIt for Macs are the most common computer BRR methods
  • both use a technique called Huffman or Variable-Length Coding, where common patterns are detected in the files to be reduced and short-hand notations are substituted
  • Morse Code, used by telegraphers, is a VLC scheme - the most common letters in the English language are given the shortest codes
  • well reduced data appears to be totally random - like noise - as all patterns are replaced by shorter notations
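
A minimal Variable-Length (Huffman) Coding sketch of the idea above: symbol frequencies are counted, a code tree is built, and the most common symbols end up with the shortest codes. This is a from-scratch illustration, not PK-ZIP's or StuffIt's actual format.

```python
# Sketch: build Huffman codes so frequent symbols get short codes.
import heapq
from collections import Counter

def huffman_codes(text):
    # each heap entry is [count, tie-breaker id, symbol or [child, child]]
    heap = [[count, i, sym] for i, (sym, count) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)          # merge the two least frequent nodes
        hi = heapq.heappop(heap)
        heapq.heappush(heap, [lo[0] + hi[0], next_id, [lo, hi]])
        next_id += 1
    codes = {}
    def walk(node, prefix):
        payload = node[2]
        if isinstance(payload, list):     # internal node: descend both branches
            walk(payload[0], prefix + "0")
            walk(payload[1], prefix + "1")
        else:                             # leaf: record the code for this symbol
            codes[payload] = prefix or "0"
    walk(heap[0], "")
    return codes

codes = huffman_codes("seashells she sells by the seashore")
print(codes["s"], codes["b"])   # frequent 's' gets a shorter code than rare 'b'
```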

  11. Lossless Reduction:
  • a reduction method is considered “lossless” if the reconstituted data is identical to the original data
  • Run Length Encoding is another “lossless” bit rate reduction method; common patterns are identified and their numbers and locations noted
  • an RLE coding of an American flag would code a blue square, a red rectangle, a white rectangle, a white star, and instructions indicating the number and locations of each
  • lossless reduction is often a slow process, requiring time to analyze the data, find the patterns, count them, make the short-hand substitutions AND to undo the process, but can be very efficient depending on the type of data
  • fast (real-time) lossless reduction is not very efficient; perhaps 2:1 at best (50% reduction of size)
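
A minimal Run Length Encoding sketch in the spirit of the flag example above: runs of a repeated value collapse to (value, count) pairs, and decoding restores the original exactly. The scanline string is an invented example.

```python
# Sketch: lossless run-length encoding and decoding.
from itertools import groupby

def rle_encode(data):
    return [(value, len(list(run))) for value, run in groupby(data)]

def rle_decode(pairs):
    return "".join(value * count for value, count in pairs)

scanline = "BBBBRRRRRRWWWWWW"            # e.g. one line of a flag-like image
packed = rle_encode(scanline)
print(packed)                             # [('B', 4), ('R', 6), ('W', 6)]
assert rle_decode(packed) == scanline     # identical to the original: lossless
```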

  12. Lossy Reduction?
  • often, in order to be fast enough for meaningful audio or moving video, “lossy” reduction is used
  • lossy reduction does NOT return the exact data originally reduced
  • lossy reductions may lose detail, accuracy of reproduction and the robustness required to withstand further processing

  13. For Image BRR, The PEGs:
  • the Joint Photographic Experts Group devised a method of lossless and lossy still picture compression
  • J-PEG image reduction looks at the data in 8 X 8 pixel blocks, and assigns a numeric value to each pixel

  14. J-PEG:
  • the amplitude values are reduced to an average level of the block - recorded in the upper left corner - plus numbers representing increasing frequency of change from that average in the block, both horizontally and vertically - the Discrete Cosine Transform or DCT
  • the data in the block are then read out in a zig-zag pattern (a sketch follows the figure below)
  • this is still lossless and completely reversible

  15. DCT Block coding: [Figure: an 8 X 8 DCT block with the average (DC) value in the upper-left corner, horizontal changes increasing to the right and vertical changes increasing downward; image from Simon Fraser University, School of Computing Science, http://www.cs.sfu.ca/undergrad/CourseMaterials/CMPT479/material/notes/Chap4/Chap4.2/Chap4.2.html]
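
A minimal sketch of the 8 X 8 DCT and zig-zag readout described on slide 14: a direct, slow textbook DCT-II followed by an anti-diagonal readout, purely for illustration (a flat block is used so the result is easy to check by eye).

```python
# Sketch: DCT an 8 x 8 block, then read the coefficients out in zig-zag order.
import math

N = 8

def dct_2d(block):
    def c(k):
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

def zigzag(coeffs):
    # read the coefficients along anti-diagonals, alternating direction
    order = sorted(((u, v) for u in range(N) for v in range(N)),
                   key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [coeffs[u][v] for u, v in order]

flat_block = [[100] * N for _ in range(N)]                  # a perfectly flat block
coeffs = dct_2d(flat_block)
print(round(coeffs[0][0], 1))                               # 800.0: the average (DC) term
print(round(sum(abs(c) for c in zigzag(coeffs)[1:]), 6))    # 0.0: no "change" terms at all
```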

  16. MPEG:
  • the Moving Picture Experts Group - MPEG - took M-JPEG one step further; MPEG reduces redundancy over time
  • MPEG is a group of tools designed to provide a range of quality, data bit rates and frames per second for video and audio storage and delivery by many different media
  • in video, sequential frames have many redundant elements of shape, color, patterns of light and dark, etc.
  • MPEG uses DCT for intra-frame data reduction and Motion-Compensated Prediction for inter-frame data reduction
  • MPEG data reduction ranges from 2:1 to 200:1 (reconstruction of an image from 50% to 0.5% of the original data)
  • MPEG 1 is a progressive-scan, video-only toolkit designed specifically for 1X CD-ROM delivery; MPEG 2 provides for both progressive and interlaced video scan formats via any medium

  17. The “I” Frame: [Figure: 8 X 8 blocks grouped into a 16 X 16 macroblock, macroblocks grouped into a slice, and slices making up a frame.]
  • like JPEG, every video frame is broken into blocks of 8 X 8 pixels of Y, R-Y and B-Y
  • each block is processed by the DCT formula and may have varying amounts of data after reduction
  • blocks are grouped two vertically by two horizontally into macroblocks of 16 X 16 pixels
  • macroblocks are grouped horizontally into slices which have similar average block levels
  • multiple slices form a frame
  • frames reduced and encoded in this fashion are Intra-coded or “I” frames and average 7:1 reduction

  18. P Frames: [Figure: frames N, N+1, N+2, N+3; each predicted frame stores only what differs from the one before it (“same as I except…”, “same as P1 except…”): I frame @ 7:1 = 39Mb, P1 frame @ 18:1 = 15Mb, P2 frame @ 20:1 = 13.5Mb, P3 frame @ 28:1 = 9.6Mb.]
  • if there is a difference between consecutive frames, an I frame is created and only the differences in the subsequent frames are encoded
  • P frames are Predicted based on prior I or P frames plus the addition of data for changed macroblocks
  • P frames average about 20:1 reduction, or about half the size of I frames
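
A minimal sketch of that difference-only idea, representing macroblocks abstractly as small tuples; this shows the concept of "same as before, except...", not MPEG's actual bitstream syntax.

```python
# Sketch: a P-frame-style encoder keeps data only for the macroblocks that changed.
def encode_p_frame(prev_macroblocks, cur_macroblocks):
    changed = {i: mb for i, (old, mb) in enumerate(zip(prev_macroblocks, cur_macroblocks))
               if mb != old}
    return changed                      # only the differences are stored

def decode_p_frame(prev_macroblocks, changed):
    # rebuild the current frame from the previous one plus the stored differences
    return [changed.get(i, mb) for i, mb in enumerate(prev_macroblocks)]

prev = [("sky",), ("sky",), ("tree",), ("road",)]
cur  = [("sky",), ("bird",), ("tree",), ("road",)]   # one macroblock changed
diff = encode_p_frame(prev, cur)
print(diff)                                          # {1: ('bird',)}
assert decode_p_frame(prev, diff) == cur
```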

  19. Motion Vectors:
  • macroblocks that move from frame to frame but do not change internally are not reprocessed
  • motion vectors - horizontal and vertical movements - reposition macroblocks from past frames to new positions in the current frame
  • the time required for the analysis of each frame to determine which macroblocks can be simply repeated, which should be moved, and which must be reprocessed for use in following frames is a primary reason for the asymmetrical nature of MPEG encoding
  • real-time MPEG encoding is less efficient or more damaging (or both)
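
A minimal block-matching sketch of how a motion vector can be found for one macroblock: a brute-force sum-of-absolute-differences search over a small window, which also illustrates why the per-frame analysis is so costly. Real encoders use much smarter searches; the frame sizes and positions below are invented.

```python
# Sketch: find the (dx, dy) that moves a macroblock from the past frame into place.
def sad(prev, cur, px, py, cx, cy, size=16):
    # sum of absolute differences between a block in the previous frame and the
    # macroblock at (cx, cy) in the current frame
    return sum(abs(prev[py + y][px + x] - cur[cy + y][cx + x])
               for y in range(size) for x in range(size))

def motion_vector(prev, cur, cx, cy, size=16, search=8):
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            px, py = cx + dx, cy + dy
            if 0 <= px <= len(prev[0]) - size and 0 <= py <= len(prev) - size:
                cost = sad(prev, cur, px, py, cx, cy, size)
                if best is None or cost < best[0]:
                    best = (cost, dx, dy)
    return best   # (residual cost, horizontal shift, vertical shift)

W = H = 48
prev = [[0] * W for _ in range(H)]
cur  = [[0] * W for _ in range(H)]
for y in range(8, 24):
    for x in range(8, 24):
        prev[y][x] = 200              # a bright 16 x 16 patch...
        cur[y + 2][x + 3] = 200       # ...that moves right 3 and down 2

# the macroblock at (11, 10) in the current frame is found unchanged in the previous
# frame, 3 pixels to the left and 2 pixels up, so it can simply be repositioned
print(motion_vector(prev, cur, cx=11, cy=10))   # (0, -3, -2)
```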

  20. B Frames: [Figure: I frame @ 7:1 = 38.6Mb, B frame @ 40:1 = 6.75Mb based on both the I and P1 frames, P1 frame @ 18:1 = 15Mb, P2 frame @ 28:1 = 9.6Mb.]
  • B frames are Bidirectionally Predicted frames based on the appearance and positions of past and future frames’ macroblocks
  • B frames require less data than P frames, averaging about 50:1
  • B frames require more decoder buffer memory because two frames are compared during the reconstruction process
  • B frames also require manipulation of the coding order - frames moving from the coder to the decoder are not in presentation sequence

  21. Coding Order:
  • order of frames at the time of image acquisition (display order): 1-I, 2-B, 3-B, 4-P, 5-B, 6-B, 7-P, 8-I, 9-B, 10-B, 11-P, 12-P
  • order of frames after MPEG coding (as delivered to the decoder): 1-I, 4-P, 2-B, 3-B, 7-P, 5-B, 6-B, 8-I, 11-P, 9-B, 10-B, 12-P
  image based on “Fast Forward”, TV Broadcast & Engineering, March, 1996
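
A minimal reordering sketch consistent with the sequence above, assuming the simple rule implied by the slides: every B frame is sent after the later I or P reference it depends on. The function name and frame list are illustrative.

```python
# Sketch: convert display order to coding (transmission) order.
def coding_order(display_frames):
    """display_frames: list of (frame_number, frame_type) in display order."""
    out, pending_b = [], []
    for frame in display_frames:
        if frame[1] == "B":
            pending_b.append(frame)   # hold B frames until their next anchor is sent
        else:
            out.append(frame)         # send the I or P anchor first...
            out.extend(pending_b)     # ...then the B frames that depend on it
            pending_b = []
    return out + pending_b

display = list(zip(range(1, 13), "IBBPBBPIBBPP"))
print([n for n, _ in coding_order(display)])
# [1, 4, 2, 3, 7, 5, 6, 8, 11, 9, 10, 12]  -- matches the slide's coding order
```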

  22. GOP: [Figure: the same 12-frame sequence divided into Groups of Pictures, each beginning at an I frame.]
  • a Group of Pictures, or GOP, begins with an I frame, followed by a number of P and/or B frames
  • each GOP is independent - all frames needed for predictions are contained within the GOP (except when they are not, as in open GOPs whose first B frames reference the previous GOP)
  • GOPs can be as small as a single I frame, or as large as desired, but usually no more than 15 frames in length
  • the longer the GOP, the more efficient but less robust the coding
  image based on “Fast Forward”, TV Broadcast & Engineering, March, 1996
