
5th Meeting. “BITS AND BYTES”. BIT.

tareq



Presentation Transcript


  1. 5th Meeting “BITS AND BYTES”

  2. BIT A bit, or binary digit, is the basic unit of information in computing and telecommunications. It is the amount of information that can be stored by a digital device or other physical system that can exist in only one of two distinct states. These may be the two stable positions of an electrical switch, two distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two directions of magnetization or polarization, etc. The term "bit" is a contraction of "binary digit". In information theory, one bit is defined as the uncertainty of a binary random variable that is 0 or 1 with equal probability, or equivalently the information gained when the value of such a variable becomes known.
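The information-theoretic definition above can be illustrated with a short sketch. The function below (a name chosen here for illustration, not from the slides) computes the Shannon entropy, in bits, of a binary variable; a fair 50/50 variable carries exactly one bit, matching the definition in the slide.

```python
import math

def binary_entropy(p):
    """Shannon entropy (in bits) of a binary variable that is 1 with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no information
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(binary_entropy(0.5))  # equal probability: exactly 1 bit
print(binary_entropy(0.9))  # a biased variable carries less than 1 bit
```

Note how the entropy peaks at p = 0.5: learning the value of an equally likely 0-or-1 variable gains the observer the most information, which is precisely the one-bit case the slide describes.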

  3. Multiples of bits

  4. There are several units of information defined as multiples of bits, such as the byte (8 bits), the kilobit (either 1,000 or 2^10 = 1,024 bits), and the megabyte (either 8,000,000 or 8×2^20 = 8,388,608 bits). The standard symbol for binary digit is "bit", and this should be used in all multiples, such as "kbit" (for kilobit). However, the lower-case letter "b" is also widely used. The upper-case letter "B" is both the standard and customary symbol for byte.
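The two competing interpretations of these prefixes (decimal SI versus binary powers of two) can be checked with a few lines of arithmetic; the constant names here are illustrative only:

```python
# Decimal (SI) vs binary interpretations of the multiples named in the slide.
KILOBIT_SI = 1000            # 1 kbit under the SI prefix
KILOBIT_BINARY = 2 ** 10     # 1,024 bits under the binary convention

MEGABYTE_SI = 8 * 10 ** 6    # 8,000,000 bits
MEGABYTE_BINARY = 8 * 2 ** 20  # 8,388,608 bits

print(KILOBIT_BINARY)   # 1024
print(MEGABYTE_BINARY)  # 8388608
```

The roughly 2.4% gap between 10^3 and 2^10 grows with each prefix step, which is why the two conventions diverge noticeably at megabyte scale and beyond.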

  5. BYTE The byte, coined from "bite", but respelled to avoid accidental mutation to "bit", is a unit of digital information in computing and telecommunications. It is an ordered collection of bits, in which each bit denotes the binary value of 1 or 0. The size of a byte is typically hardware dependent, but the modern de facto standard is eight bits, as this is a convenient power of two. Many types of applications use variables representable in eight or fewer bits, and processor designers optimize for this common usage. The byte size and byte addressing are often used in place of longer integers for size or speed optimizations in microcontrollers and CPUs.
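A quick sketch of the eight-bit byte described above: with 8 bits, a byte can represent 2^8 = 256 distinct values, and each bit position contributes a power of two to the value.

```python
# An 8-bit byte holds 2**8 = 256 distinct values (0..255).
BITS_PER_BYTE = 8
print(2 ** BITS_PER_BYTE)  # 256

# Each bit denotes a binary 1 or 0 at a power-of-two position.
value = 0b01000001
bits = format(value, "08b")
print(bits)   # '01000001'
print(value)  # 65

# Reconstruct the value from its bits to confirm the positional weights.
reconstructed = sum(2 ** i for i, b in enumerate(reversed(bits)) if b == "1")
print(reconstructed)  # 65
```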

  6. The byte is also defined as a data type in certain programming languages. The C and C++ programming languages, for example, define a byte as an "addressable unit of data large enough to hold any member of the basic character set of the execution environment".

  7. Prefixes for bit and byte multiples (table of unit symbols and abbreviations)

  8. Group Discussion
  1. How many digits does a binary system use?
  2. What is the difference between binary notation and the decimal system? Give some examples.
  3. What is a collection of eight bits called?
  4. One kilobyte (1K) equals 1,024 bytes. Can you work out the value of these units?
     1 megabyte = ........ bytes / 1,024 kilobytes (mega-: one million)
     1 gigabyte = ........ bytes / 1,024 megabytes (giga-: one thousand million)
  5. What does the acronym 'ASCII' stand for? What is the purpose of this code?
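For checking answers to question 4 after discussion, the arithmetic follows directly from the 1K = 1,024 rule given in the question (binary convention):

```python
# Unit values under the binary convention stated in question 4 (1 K = 1,024).
KILOBYTE = 1024
MEGABYTE = 1024 * KILOBYTE  # 1,024 kilobytes
GIGABYTE = 1024 * MEGABYTE  # 1,024 megabytes

print(MEGABYTE)  # 1048576
print(GIGABYTE)  # 1073741824
```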
