
Presentation Transcript


  1. Eyeblaster University Video Basics Presented by Anant Joshi, Sales Engineering Director, EMEA

  2. Agenda • Digital Video • Compressing Video • Audio • Video Encoding Tools

  3. Digital Video Stage I – Sampling Video Converting real-life video into a sequence of static images (frames)

  4. Stage II – Sampling Pictures Frame rate: The video sampling speed, or the number of frames per second (FPS) used to represent the video. Standard frame rates: cinema film 24 FPS, PAL TV 25 FPS (Europe), NTSC TV 30 FPS (USA/Japan). More frames = better quality ("smoother" motion)

  5. Stage II – Sampling Pictures Converting analog pictures to digital pictures.

  6. Stage II – Sampling Pictures Color – Colors are represented digitally as a unique numeric combination of the base colors RGB (Red, Green and Blue). For example: (0, 0, 255) = blue, (255, 0, 255) = purple, (255, 255, 255) = white, (0, 0, 0) = black. Color depth – The number of bits used to describe the color range

  7. Stage II – Sampling Pictures Examples: 1 bit – monochrome; 4 bit – 16 colors (EGA / VGA in high resolution); 8 bit – 256 colors (VGA / Super VGA); 24/32 bit – over 16M colors (true color). More colors = better quality
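  A quick arithmetic check of the figures above (a plain-Python sketch, not part of the original deck): the number of representable colors at each color depth is 2 raised to the number of bits.

  # Number of distinct colors representable at each color depth (2 ** bits).
  for bits, label in [(1, "monochrome"), (4, "EGA / VGA"), (8, "VGA / Super VGA"), (24, "true color")]:
      print(f"{bits:>2}-bit ({label}): {2 ** bits:,} colors")
  # 24-bit prints 16,777,216 colors - the "over 16M colors" quoted above.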

  8. Stage II – Sampling Pictures Pixel – the smallest element in a digital image; it holds the digital color representation of a specific image location. Resolution – The number of pixels used in each dimension (width X height) to represent the picture. Examples: Standard TV 640 X 480 (4:3 ratio) = 307,200 pixels; Full Screen 800 X 600 (4:3 ratio) = 480,000 pixels; HDTV 1920 X 1080 / 1280 X 720 (16:9 ratio) ~ 1-2M pixels; 3M Digital Camera 2048 X 1536 (4:3 ratio) ~ 3M pixels. More pixels = more details = better quality

  9. Digital Video – Size Issues I Digital video file size = Video duration (seconds) X Frame rate (FPS) X Resolution (height X width) X Pixel size (color depth). Example: 30 seconds X 30 FPS X (320 X 240) X 24 bit = 1,658,880,000 bits = 207,360,000 bytes ≈ 207 MB, without including audio!
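  A minimal sketch of the size formula above in Python (the function name and parameters are illustrative, not from the deck):

  def raw_video_size_bytes(duration_s, fps, width, height, bits_per_pixel):
      """Uncompressed size = duration x frame rate x resolution x color depth."""
      total_bits = duration_s * fps * width * height * bits_per_pixel
      return total_bits / 8  # 8 bits per byte

  size = raw_video_size_bytes(duration_s=30, fps=30, width=320, height=240, bits_per_pixel=24)
  print(f"{size:,.0f} bytes = {size / 1_000_000:.0f} MB")  # 207,360,000 bytes = 207 MB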

  10. Digital Video – Size Issues II Video bit rate – How much data (bits) is used to store one second of video, and therefore the minimum dedicated bandwidth required to guarantee smooth video playback when streaming. Video bit rate = (video file size / video duration) = Frame rate (FPS) X Resolution (height X width) X Pixel size (color depth). In our example: 30 FPS X (320 X 240) X 24 bit ≈ 55 Mbit/sec (vs. 1 to 5 Mbit/sec for a broadband connection)
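  The same numbers expressed as a bit rate (again an illustrative sketch, not from the deck):

  def raw_video_bitrate_bps(fps, width, height, bits_per_pixel):
      """Bits needed to store one second of uncompressed video."""
      return fps * width * height * bits_per_pixel

  bps = raw_video_bitrate_bps(fps=30, width=320, height=240, bits_per_pixel=24)
  print(f"{bps / 1_000_000:.1f} Mbit/sec")  # ~55.3 Mbit/sec, vs. 1-5 Mbit/sec broadband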

  11. Size Issues – Summary Storage – Storing significant amounts of uncompressed digital video content is not only expensive, it is impractical: a CD-R would store less than 2 minutes. Remote access – Even in LAN conditions bandwidth is limited to 100 Mbit/sec; serving uncompressed video over the Internet would be out of the question. Solution: compression

  12. Compressing Video Types of compression "Lossless" compression – no data is lost; the content can be restored to its original "decompressed" form perfectly. Mostly used on documents and other textual data (for example ZIP, but also GIF). RLE (Run-Length Encoding) – encodes/stores each run of repeated data as a single value-count pair. Most useful on data containing many long runs (e.g. simple graphic images such as icons and line drawings). For example: ABBBBBBBBBCDEEEEF = A *9B C D *4E F. Huffman coding – encodes often-repeated symbols with a few bits and rare ones with more bits. LZW (Lempel-Ziv-Welch) – a similar idea, but applied to groups of symbols (e.g. words)
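  A minimal run-length-encoding sketch in Python, applied to the example string above (itertools.groupby collapses each run into a value-count pair):

  from itertools import groupby

  def rle_encode(data: str) -> list[tuple[str, int]]:
      """Collapse each run of repeated symbols into a (symbol, count) pair."""
      return [(symbol, len(list(run))) for symbol, run in groupby(data)]

  print(rle_encode("ABBBBBBBBBCDEEEEF"))
  # [('A', 1), ('B', 9), ('C', 1), ('D', 1), ('E', 4), ('F', 1)]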

  13. Compressing Video Types of compression "Lossy" compression – some data is lost, in the hope that it is insignificant and will not be noticed (quality loss). Mostly used on visual data and sound, such as images, music and video (for example JPEG, MPEG, MP3, etc.). DCT (Discrete Cosine Transform) and vector quantization are techniques used to eliminate "insignificant" information that is not expected to be detected by the human eye. The efficiency of these algorithms decreases with the level of changes within the image (frame)
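  To make the DCT-plus-quantization idea concrete, here is a rough sketch on a single 8 X 8 block, assuming NumPy and SciPy are available (the quantization step of 32 is an arbitrary illustrative choice):

  import numpy as np
  from scipy.fft import dctn, idctn

  block = np.random.randint(0, 256, (8, 8)).astype(float)  # one 8x8 block of pixel values

  coeffs = dctn(block, norm="ortho")         # frequency-domain representation of the block
  q = 32                                     # quantization step: coarser = lossier
  quantized = np.round(coeffs / q) * q       # small (high-frequency) coefficients collapse to 0
  restored = idctn(quantized, norm="ortho")  # decoded block is close to, but not exactly, the original

  print(np.abs(block - restored).max())      # small per-pixel error the eye is unlikely to notice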

  14. Compressing Video Video compression – Reduction of the size of files containing video images that are stored in digital form. Encoding – The process of converting data from one format to another. Video encoding is used when analog video is converted to digital video, and then again when digital video is compressed. Decoding – The process of restoring the original format of "encoded" data. Codec – Software or hardware technology for encoding and decoding digital video (short for compressor/decompressor or coder/decoder)

  15. Compressing Video • Micro blocks – Each frame is divided into small micro blocks, small matrices of pixels representing part of the frame. Most compression techniques work on these blocks. • Some video compression algorithm types: • Motion detection – an algorithm to detect movement of micro blocks from one position in a given frame to a different position in the next frame. • Motion vector – the key data by which a given micro block in the current frame can be predicted from a micro block in the previous frame. In some cases, the old position of the predicted micro block is stored within the same vector. The efficiency of these algorithms decreases with the level of changes between frames
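  A minimal, unoptimized sketch of the motion-detection idea (exhaustive block matching with a sum-of-absolute-differences criterion; NumPy assumed, and the function name, block size and search range are illustrative):

  import numpy as np

  def find_motion_vector(prev_frame, block, top, left, search=4):
      """Find where a square block from the current frame best matches the previous
      frame, searching +/- `search` pixels around its original (top, left) position.
      Returns the (dy, dx) offset with the smallest sum of absolute differences."""
      n = block.shape[0]
      best, best_offset = None, (0, 0)
      for dy in range(-search, search + 1):
          for dx in range(-search, search + 1):
              y, x = top + dy, left + dx
              if y < 0 or x < 0 or y + n > prev_frame.shape[0] or x + n > prev_frame.shape[1]:
                  continue  # candidate block falls outside the previous frame
              sad = np.abs(prev_frame[y:y + n, x:x + n] - block).sum()
              if best is None or sad < best:
                  best, best_offset = sad, (dy, dx)
      return best_offset  # the motion vector: how far the block moved between frames

  The returned (dy, dx) offset is the motion vector: instead of storing the block again, the encoder can store just this offset (plus a small residual).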

  16. Encoding Workflow / Decoding Workflow • Each micro block is decompressed in the exact opposite process to the compression • "I" (Intra) frames are "key" frames that were compressed directly from a real source frame • "P" (Predicted) frames are created from an "I" frame + a motion vector • "B" (bi-directionally interpolated) frames are artificially created to smooth the motion and increase the frame rate
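  A toy illustration of the frame-type dependencies described above (the group-of-pictures layout here is made up for the example and does not come from the deck):

  display_order = ["I0", "B1", "B2", "P3", "B4", "B5", "P6"]

  def references(frame: str) -> list[str]:
      """Return the frames a given frame is predicted from (toy rules)."""
      if frame.startswith("I"):
          return []                      # I frames are self-contained key frames
      idx = display_order.index(frame)
      past = next(f for f in reversed(display_order[:idx]) if f[0] in "IP")
      if frame.startswith("P"):
          return [past]                  # P frames predict from the previous I/P frame
      future = next(f for f in display_order[idx + 1:] if f[0] in "IP")
      return [past, future]              # B frames interpolate between both neighbours

  for f in display_order:
      print(f, "depends on", references(f) or "nothing (key frame)")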

  17. Compressing – Data Rate Control Quality vs. bit rate – higher quality needs a higher bit rate, and a lower bit rate produces lower quality. Two kinds of data rate control: Bit rate control – limiting the maximum bit rate at any given time; the encoder cannot exceed this value. Pros: no big variations in bit rate during the entire video. Cons: video quality can decrease in certain sections (fast-changing, complex scenes). Quality limited – limiting the minimum quality for each frame (percentage); the encoder cannot go below this value. Pros: same quality during the entire video. Cons: the video bit rate can have very high peaks

  18. Compressing – Data Rate Control [Chart: bit rate over time] CBR (Constant Bit Rate encoding) – is not really constant! (There is fixed-bit-rate encoding, such as DV.) CBR means that the average bit rate of the video over a certain time interval (e.g. 5 seconds) cannot exceed the requested bit rate.
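  A small sketch of that windowed-average constraint (the per-second sizes below are made-up numbers, purely for illustration):

  # Per-second bit counts for a short clip, a 600 kbit/s CBR target, and a 5-second window.
  per_second_bits = [400_000, 900_000, 700_000, 300_000, 200_000, 1_100_000, 400_000]
  target_bps, window = 600_000, 5

  for start in range(len(per_second_bits) - window + 1):
      avg = sum(per_second_bits[start:start + window]) / window
      ok = "OK" if avg <= target_bps else "exceeds target"
      print(f"seconds {start}-{start + window - 1}: average {avg:,.0f} bps ({ok})")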

  19. Compressing – Data Rate Control [Chart: bit rate over time] • VBR (Variable Bit Rate encoding) – there is a lot of confusion about this definition, but it basically means one of two things: • Either the encoder needs to maintain an average bit rate over the whole video, • Or the encoder needs to maintain a minimum quality for each frame.

  20. Compressing – Encoding Passes The number of times the encoder needs to "go over" the input data. 1-pass encoding – the encoder produces the compressed video file during a single read of the source file. This method is fast and is used mainly for real-time encoding (Internet broadcast, video conferencing, etc.). 2-pass encoding – the encoder reads the source file twice. On the first read it gathers information about the motion and the best key frames, in order to optimize the compression during the second read of the source.

  21. Digital Audio Stage I – Sampling Audio Converting analog audio to a series of digital samples. Sample rate – The number of samples per second (Hz); it determines the frequency range that can be captured. • Standards: • 8,000 Hz – speech quality (telephone) • 11,025 Hz • 22,050 Hz – radio quality (minimum for music) • 44,100 Hz – CD quality (commonly used with audio in VCD & MP3) • 48,000 Hz – Digital TV • 96,000 or 192,000 Hz – DVD Audio. More samples = better sound

  22. Digital Audio Stage II – Storing Samples Sample bit size – The number of bits used to represent a single sample; 16 bit is the commonly used size. Channels – The number of audio channels (1 = mono, 2 = stereo, and more for high quality, as on DVD). Bit rate – The number of bits used to store one second of audio data. Bit rate control – similar to video (bit rate / quality)
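  Putting the three parameters together gives the raw (uncompressed) audio bit rate; a quick sketch for CD-quality stereo (the function name is illustrative):

  def raw_audio_bitrate_bps(sample_rate_hz, bits_per_sample, channels):
      """Uncompressed audio bit rate = sample rate x bits per sample x channels."""
      return sample_rate_hz * bits_per_sample * channels

  cd = raw_audio_bitrate_bps(sample_rate_hz=44_100, bits_per_sample=16, channels=2)
  print(f"CD quality: {cd / 1_000_000:.2f} Mbit/sec")  # ~1.41 Mbit/sec before compression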

  23. Video Encoding Tools • Flash encoding: On2 Flix 8 Pro • Adobe Flash CS3 Video Encoder • Sorenson Squeeze • WMV encoding: Windows Media Encoder 9 • Raw changes: Adobe Premiere Pro
