
Presentation Transcript


  1. WELCOME

  2. to the lecture on SATELLITE COMMUNICATION K.R.Radhakrishnan, Asst. Engineer, Doordarshan

  3. INTRODUCTION Television Transmission & Doordarshan • Public television broadcaster of India • One of the largest broadcasting organizations in the world in terms of its infrastructure of studios and transmitters • Digital terrestrial transmitters • On September 15th, 2009, Doordarshan celebrated its 50th anniversary

  4. Our Milestones • Experimental Telecast: 1959 • Regular Daily Transmission: 1965 • National Telecasts (Colour): 1982 • DD Direct Plus (DTH Service): 2004

  5. Today’s Topic Satellite Communication

  6. TRANSMISSION Terrestrial & Satellite

  7. Terrestrial Vs Satellite TERRESTRIAL: • Less coverage area • Lower BW • More power SATELLITE: • More coverage area • Higher BW • Less power

  8. Frequency Bands • The up-link is a highly directional, point-to-point link • The down-link can have a footprint providing coverage of a substantial area (a "spot beam").

  9. History of Satellite Communication • 1945: Arthur C. Clarke publishes the essay "Extra-Terrestrial Relays" • 1957: First satellite, SPUTNIK, launched by the Soviet Union • 1960: First reflecting communication satellite, ECHO, by NASA • 1963: First geostationary satellite, SYNCOM, by NASA • 1965: First commercial geostationary satellite, "Early Bird" (INTELSAT I): 240 duplex telephone channels or 1 TV channel

  10. ECHO SYNCOM

  11. Factors in Satellite Communication Elevation Angle: The angle between the horizontal at the earth's surface and the center line of the satellite transmission beam.

  12. Factors in Satellite Communication …… • Coverage angle • A measure of the portion of the earth's surface visible to a satellite, taking the minimum elevation angle into consideration • R/(R+h) = sin(π/2 - β - θ)/sin(θ + π/2) = cos(β + θ)/cos(θ), where R = 6370 km (earth's radius), h = satellite orbit height, β = coverage angle, θ = minimum elevation angle >>>>
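
As a rough illustration (not part of the original slides), the relation above can be solved for the coverage angle, β = arccos((R/(R+h))·cos θ) - θ. A minimal Python sketch, with a geostationary orbit height assumed purely for the example:

import math

def coverage_angle(h_km, theta_deg, R_km=6370.0):
    # Solve the slide's relation R/(R+h) = cos(beta + theta)/cos(theta) for beta
    theta = math.radians(theta_deg)
    beta = math.acos((R_km / (R_km + h_km)) * math.cos(theta)) - theta
    return math.degrees(beta)

# Example: geostationary height (~35786 km) with a 5-degree minimum elevation angle
print(round(coverage_angle(35786, 5), 2))   # about 76.3 degrees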

  13. Factors in Satellite Communication …… • Other impairments to satellite communication: • The distance between an earth station and the satellite, which causes free-space loss. • Satellite Footprint: The satellite transmission's strength is strongest in the center of the transmission and decreases farther from the center as free-space loss increases. • Atmospheric attenuation caused by air and water can impair the transmission. It is particularly bad during rain and fog. >>>>
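
A hedged example (not from the slides): the standard free-space path loss formula, FSPL(dB) = 92.45 + 20·log10(d in km) + 20·log10(f in GHz). The 38,000 km slant range and 4 GHz downlink below are illustrative assumptions:

import math

def free_space_loss_db(distance_km, freq_ghz):
    # Standard free-space path loss: 92.45 + 20*log10(d_km) + 20*log10(f_GHz)
    return 92.45 + 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz)

# Illustrative: a geostationary slant range of ~38,000 km at a 4 GHz C-band downlink
print(round(free_space_loss_db(38000, 4.0), 1))   # roughly 196 dB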

  14. Satellite Transmission - Analog Vs Digital DIGITAL: • More programs per channel/transponder, i.e. spectrum efficient • Noise-free reception • CD-quality sound and better-than-DVD-quality picture • Reduced transmission power • Flexibility in service planning (quality/bandwidth trade-off) • Terrestrial-free network ANALOG: • One program per channel/transponder • Comparatively noisy • Lower quality compared with VCD/DVD digital media • Fixed reception • Limited coverage

  15. SIGNALS -Analog Vs Digital

  16. Satellite Transponders

  17. Block schematic of a transponder

  18. Polarization and Frequency Reuse • Most communications satellites transmit using two orthogonal (i.e., at right angles) senses of polarization in order to utilize the available satellite frequency spectrum twice. • Transponders with one sense of polarization are totally transparent to the second set of transponders using the opposite sense. • Twice the number of transponders can therefore occupy the same amount of frequency spectrum. This is called frequency reuse.

  19. EARTH STATION An earth station is an uplink center from which signals are fed to the satellite for distribution over the specified area covered by the satellite. In TV broadcasting, the signal is up-linked from the earth station and received by many downlink centers. It is a very important part of the satellite communication system for broadcasting signals.

  20. Digital Earth Station Major Components of Digital Earth Station • PDA (Parabolic Dish Antenna) • FEED • LNA / LNBC • Wave Guide / Low Loss Cable • HPA (TWTA, SSPA, Klystrons) • Up converter • Modulator • Encoder • Multiplexer • IRD (Integrated Receiver Decoder)

  21. DVB - Digital Video Broadcasting Digital Video Broadcasting (DVB) is being adopted as the standard for digital television. Main forms of DVB: >>>>

  22. Main differences between DVB-S/DSNG and DVB-S2 DVB-S/DSNG: • Meant for broadcast only • Fixed 188-byte packets • One TS per carrier • RS and Viterbi coding • Needs a high Rx margin • QPSK / QPSK-8PSK-16QAM • Consumer LNBs work in QPSK only DVB-S2: • Fully transparent to all data • Baseband in 16 or 64 kb/s • Can work within the noise floor • QPSK-8PSK-16APSK-32APSK • Pilot tones for extra sync in 8PSK

  23. Signal Processing -Video Compression

  24. Need of VIDEO COMPRESSION • Uncompressed video (and audio) data are huge. • This is a big problem for storage and communications. • Multimedia files are large and consume a lot of hard disk space. • The file sizes make it time-consuming to move them from place to place. Compression shrinks files, making them smaller and more practical to store and share.

  25. Definitions • Bitrate • Information stored/transmitted per unit time • Usually measured in Mbps (Megabits per second) • Ranges from < 1 Mbps to > 40 Mbps • Resolution • Number of pixels per frame • Ranges from 160x120 to 1920x1080 • FPS (frames per second) • Usually 24, 25, 30, or 60 • Don’t need more because of limitations of the human eye
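
To make these definitions concrete, a small back-of-the-envelope sketch (the 720x576, 25 fps, 8-bit figures are assumptions, chosen to match a PAL studio signal):

def raw_bitrate_mbps(width, height, fps, bits_per_sample=8, samples_per_pixel=3):
    # Uncompressed bitrate in Mbps, assuming no chroma subsampling (4:4:4)
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e6

# 720x576 at 25 fps works out to roughly 249 Mbps uncompressed,
# which is why a broadcast channel of a few Mbps needs compression.
print(round(raw_bitrate_mbps(720, 576, 25), 1))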

  26. COMPARISON of DIGITAL FORMATS

  27. Video Compression….. • The main goal of the MPEG-2 standard is to define the format of the video data to be transmitted • This data format is the result of compression and encoding • The compression technique in MPEG-2 is based on human perception of vision. >>>>

  28. Video Compression….. • Images are described and structured in digital equipment using color spaces • RGB : Computer environments • YUV/YCrCb : related to TV world

  29. Video Compression….. • The Y,Cr,Cb color space splits color information into Y, Cr and Cb components. • Y, Cr and Cb are generated from R, G and B. • Each pixel carries color information in the form of the color component values Y, Cr and Cb. >>>>
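
As an illustration of generating Y, Cr and Cb from R, G and B, here is one commonly used conversion (full-range ITU-R BT.601 coefficients; broadcast equipment may use studio-range or BT.709 variants, so treat this as a sketch):

def rgb_to_ycbcr(r, g, b):
    # Full-range BT.601 conversion of 8-bit R,G,B to Y,Cb,Cr
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)

# Pure white keeps maximum luminance and neutral chrominance
print(rgb_to_ycbcr(255, 255, 255))   # (255, 128, 128)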

  30. Video Compression….. SAMPLING OF CHROMINANCE AND LUMINANCE Sampling ratios: • 4:4:4 - Y, Cr and Cb are present for every pixel • 4:2:2 - Y is present for every pixel; Cr and Cb are present for every second pixel • 4:2:0 - Y is present for every pixel; Cr and Cb are present for every fourth pixel >>>>
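
A minimal sketch of 4:2:0 subsampling of a chroma plane (averaging each 2x2 block is only one possible filter; real encoders differ):

import numpy as np

def subsample_420(chroma):
    # Keep one chroma sample per 2x2 pixel block, here by simple averaging
    h, w = chroma.shape
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

cb = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 chroma plane
print(subsample_420(cb).shape)                  # (2, 2): every fourth sample kept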

  31. Video Compression….. The MPEG-2 Video part deals with the basic objects used to structure video information. >>>>

  32. Video Compression….. Video sequence: a group of video pictures. Frame or picture: contains the color and brightness information required to display a picture on the screen. >>>>

  33. Video Compression….. • PICTURE • A very important object of MPEG-2 video • A picture is divided into blocks • Blocks are grouped into macroblocks • One block contains 64 chrominance or luminance samples • Each block contains 8 lines • Each line holds 8 samples of luminance or chrominance • The number of chrominance blocks in a macroblock depends on the sampling format used to digitize the video material. >>>>

  34. Video Compression….. Blocks per macroblock: • 4:2:0 - 4 blocks of luminance and 2 blocks of chrominance • 4:2:2 - 4 blocks of luminance and 4 blocks of chrominance • 4:4:4 - 4 blocks of luminance and 8 blocks of chrominance >>>>
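
A small sketch tying these block counts to a whole frame, assuming the usual 16x16-pixel macroblock of MPEG-2 (the 720x576 PAL frame is an illustrative choice):

def macroblock_layout(width, height, sampling="4:2:0"):
    # Macroblocks per frame plus 8x8 luminance/chrominance blocks per macroblock
    chroma_blocks = {"4:2:0": 2, "4:2:2": 4, "4:4:4": 8}[sampling]
    macroblocks = (width // 16) * (height // 16)   # one macroblock covers 16x16 pixels
    return macroblocks, 4, chroma_blocks           # (macroblocks, Y blocks, chroma blocks)

# Example: a 720x576 PAL frame in 4:2:0
print(macroblock_layout(720, 576))                 # (1620, 4, 2)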

  35. Video Compression….. • There are three types of coded pictures • Intra-coded pictures (I pictures) • Predictive-coded pictures (P pictures) • Bidirectionally-coded pictures (B pictures) >>>>

  36. Video Compression….. • Intra-coded pictures • These are coded in such a way that they can be decoded without knowing anything about other pictures in the video sequence. • Blocks or macroblocks forming I pictures are called intra blocks or intra-coded macroblocks. >>>>

  37. Video Compression….. • Predictive-coded pictures • These pictures are decoded using information from previous pictures (reference pictures) displayed earlier. • The information used from earlier pictures (I or P) is determined by motion estimation and is coded in what are called inter-coded macroblocks. • Information that cannot be borrowed is coded as intra-coded (I) macroblocks. • P pictures are typically 30-50% of the size of an I picture. >>>>

  38. Video Compression….. • Bidirectionally predicted pictures • Use information both from pictures that occurred before and from pictures that come after. • At encoding time the encoder therefore needs access to the following pictures. • Information that is not available from preceding or following pictures is intra-coded.

  39. Group of pictures
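
The group-of-pictures diagram is not reproduced here. As a hedged sketch of the idea, the snippet below reorders a display-order GOP (I B B P B B P) into the order in which a closed GOP is typically transmitted, since each B picture needs its following anchor picture before it can be decoded:

def coded_order(display_order):
    # Send each anchor (I or P) before the B pictures that reference it
    out, pending_b = [], []
    for pic in display_order:
        if pic in ("I", "P"):
            out.append(pic)
            out.extend(pending_b)
            pending_b = []
        else:
            pending_b.append(pic)
    return out + pending_b

print(coded_order(["I", "B", "B", "P", "B", "B", "P"]))
# ['I', 'P', 'B', 'B', 'P', 'B', 'B']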

  40. Video Compression….. • Data compression used in MPEG-2 video • Achieved by combining three techniques: • Removing picture information that is invisible to the human eye. • Using variable-length coding tables. • Motion estimation.

  41. The human eye is less sensitive to high frequencies in color changes. MPEG-2 uses the DCT (Discrete Cosine Transform) to approximate the original chrominance and luminance in each block.

  42. Video Compression….. After the DCT, the coefficients are arranged in zig-zag order of increasing frequency. This zig-zag order is matched by a quantization matrix; the quantization process delivers a large number of zeroes in the high-frequency range.
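
A minimal numeric sketch of this step (a uniform quantization step of 16 is assumed purely for illustration; MPEG-2 actually uses weighted quantization matrices):

import numpy as np

def dct2(block):
    # Orthonormal 8x8 2-D DCT-II built from the cosine basis matrix
    n = 8
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos((2 * i + 1) * k * np.pi / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c @ block @ c.T

def zigzag_indices(n=8):
    # Zig-zag scan: coefficients ordered by increasing spatial frequency
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

block = np.full((8, 8), 128.0)           # a flat, low-detail block
coeffs = np.round(dct2(block) / 16)      # crude uniform quantization, step 16
scan = [int(coeffs[r, c]) for r, c in zigzag_indices()]
print(scan[:8], "...")                   # only the DC term survives; the rest are zeroes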

  43. Motion Estimation Areas of two successive pictures are compared in order to determine the direction and distance of relative motion between the frames.
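
A hedged sketch of exhaustive block matching with a sum-of-absolute-differences criterion (the block size, search range and toy shifted frame are assumptions for illustration):

import numpy as np

def best_motion_vector(ref, cur, top, left, size=16, search=8):
    # Find the (dy, dx) shift in the reference picture that best matches
    # the current block, using the sum of absolute differences (SAD)
    block = cur[top:top + size, left:left + size]
    best, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > ref.shape[0] or x + size > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y + size, x:x + size] - block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

# Toy example: the reference frame is the current frame shifted right by 3 pixels
cur = np.random.rand(64, 64)
ref = np.roll(cur, 3, axis=1)
print(best_motion_vector(ref, cur, 16, 16))   # expected (0, 3)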

  44. Video Compression….. • MPEG-2 SYSTEM PART • This specifies how the encoded audio and video bit streams should be multiplexed together to form actual programs, and how they can be made suitable for different media and network applications. • It is self-consistent, containing all the information necessary to decode the audio and video bit streams belonging to a specific program.

  45. Video Compression….. • It is independent of the network's physical implementation and should be suitable for both error-prone and error-free environments. • The key functionality addressed by MPEG-2 Systems is multiplexing • It is referred to as the MPEG-2 multiplex.

  46. Video Compression….. • MPEG-2 Systems uses data structures called packets • A packet consists of a packet header and a packet payload. • Packets can be of fixed or variable size. • The packet concept creates a flexible mechanism to transport data • The packet header contains the information necessary to process the data in the packet payload.
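
As a concrete illustration of the header/payload idea, here is a minimal parser for the fixed 4-byte header of a 188-byte MPEG-2 transport stream packet (PES packets, by contrast, are the variable-size case; the null packet below is just a handy test input):

def parse_ts_header(packet: bytes):
    # Every transport stream packet is 188 bytes and starts with the sync byte 0x47
    if len(packet) != 188 or packet[0] != 0x47:
        raise ValueError("not a valid transport stream packet")
    pid = ((packet[1] & 0x1F) << 8) | packet[2]   # 13-bit packet identifier
    return {
        "payload_unit_start": bool(packet[1] & 0x40),
        "pid": pid,
        "continuity_counter": packet[3] & 0x0F,
    }

# Minimal example: a null packet (PID 0x1FFF) followed by 184 stuffing bytes
null_packet = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes(184)
print(parse_ts_header(null_packet))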

  47. Video Compression….. The MPEG-2 standard defines two basic tools to support media and network delivery systems. • The Program Stream: CD-ROM and hardware media • The Transport Stream: network environments
