How Computers Work: Bits and Bytes

  1. How Computers Work: Bits and Bytes Chapter 1 CSC 180 Dr. Adam Anthony

  2. Overview • How do computers work? • Understanding bits • Electronic Logic • Memory and Storage • Data Representation • Text • Numbers • Images • Sound

  3. The World’s Simplest Computer • This computer “knows” when button B1 is pressed or not (input) • It lights up, or stays dark, to let us know what it computed (output) • Amount of information: 1 bit • Discussion: • What types of problems can we solve with this computer? • Is this actually a computer?

  4. About Bits • Bits are great because they are easy to represent mechanically • A black line on paper (or lack of one) • Water flowing through a tube (or not) • Electricity flowing over a wire (or not) • Electricity flowing over a wire at a high/low voltage level • Now we just need some coding theory and some physics to get a “real” computer

  5. Coding Theory • Examples of Bit-wise or Binary code: • Alarms or buzzers • “Light one if by land, two if by sea” • Paul Revere • Morse Code: • A = .- B = -… C = -.-. • Boolean Logic: • AND: 1 AND 1 = 1; 1 AND 0 = 0; 0 AND 0 = 0 • OR: 1 OR 1 = 1; 1 OR 0 = 1; 0 OR 0 = 0 • NOT: NOT 1 = 0; NOT 0 = 1
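These three operations behave exactly like the Boolean operators found in programming languages. A quick Python sketch (an illustration, not part of the slides) reproduces the truth values listed above:

```python
# Boolean logic on 1/0 values, matching the slide's notation.
def AND(a, b):
    return a & b        # 1 only when both inputs are 1

def OR(a, b):
    return a | b        # 1 when at least one input is 1

def NOT(a):
    return 1 - a        # flips 0 <-> 1

print(AND(1, 1), AND(1, 0), AND(0, 0))  # 1 0 0
print(OR(1, 1), OR(1, 0), OR(0, 0))     # 1 1 1
print(NOT(1), NOT(0))                   # 0 1
```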

  6. Physics • Basics of Electricity: • Once generated, electricity is either transmitted or absorbed • Electricity that is partially absorbed is said to be meeting resistance • Generates heat • Voltage is a ratio between the power (watts) and current flow (amps) of an electrical source • Measurable • Can be altered with very little resistance • Since Voltage = Watts/Amps, reducing the voltage also reduces the power

  7. Logic Gates • Mechanized Boolean logic, usually electric • A simple, inefficient, and outdated AND gate: • Four basic types of logic gates in computers • AND (symbol ∧) means both inputs are true • OR (∨) means one OR the other is true (could be both) • NOT (¬) provides the opposite of the single input • XOR (⊕) means EXCLUSIVELY (X) one input is true OR the other is true • Diodes allow electricity to pass in one direction, and only if there is capacity on the line for it to pass

  8. Truth Tables and Diagram Primitives

  9. Logic Diagrams • One if by land, two if by sea (circuit diagram with outputs labeled LAND!!! and SEA!!!)

  10. Truth Tables Revisited • What would a truth table look like for the expression: (A ∧ B) ∨ ¬(B ∧ C)

  11. Boolean Expressions • What would a truth table look like for the expression: (A ∧ B) ∨ ¬(B ∧ C)

  12. Boolean Expressions • What would a truth table look like for the expression: (A ∧ B) ∨ ¬(B ∧ C)
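One way to answer is to enumerate all eight input combinations mechanically. Here is a short Python sketch (my illustration, assuming the expression is (A ∧ B) ∨ ¬(B ∧ C)) that prints the complete truth table:

```python
from itertools import product

# Truth table for (A AND B) OR NOT(B AND C)
print(" A  B  C | result")
for a, b, c in product([0, 1], repeat=3):
    result = (a and b) or not (b and c)
    print(f" {a}  {b}  {c} |   {int(result)}")
```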

  13. Creating Chips • What would a computer chip look like for the expression: (A ∧ B) ∨ ¬(B ∧ C) • Each operator will get a gate (4 total here) • Respect order of operations • ( ), ¬, ∧, ⊕, ∨ • Often, () will denote exactly one order • Operations that are “earlier” will be “closer” to the input; “later” operations closer to output

  14. Creating Chips • What would a computer chip look like for the expression: X = (A ∧ B) ∨ ¬(B ∧ C) • (Circuit diagram with inputs A, B, C and output X)

  15. Chips Summary • Anything we can reason with logic can be implemented as a circuit, such as: • Controllers for devices (keyboard, mouse, screen, etc.) • Addition/Subtraction • Multiplication/Division • Along with a few other operations, we have a whole microprocessor made just with logic gates! • But wait, there’s more! • What about memory?

  16. The Point of Memory • Most processors have the power to add pairs of numbers • How do we compute 9 + 8 + 4? • Answer: • Compute 9 + 8 and store the result in memory • Add 4 to the result stored in memory • Memory is also implemented using gates! • There are many types/levels of memory

  17. Circuits that Remember • Behold, the Flip-Flop!

  18. Setting the Output to 1

  19. Setting the Output to 1 (cont.)

  20. Setting the Output to 1 (cont.)

  21. What Else Can We Do? What happens if we put a zero on both inputs? …a one on the upper input and a zero on the lower input? …a zero on the upper input and a one on the lower input? …a one on both inputs?
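The slides show the flip-flop as a circuit diagram, but its behavior can also be explored in code. Below is a minimal Python simulation sketch, assuming one common construction (a cross-coupled NOR latch), which may be wired differently from the exact circuit drawn in class:

```python
def sr_latch(set_in, reset_in, q=0, steps=4):
    """Simulate a cross-coupled NOR latch until its feedback settles.
    q is the remembered bit; set_in/reset_in are the two inputs."""
    q_bar = 1 - q
    for _ in range(steps):                      # let the feedback loop settle
        q = int(not (reset_in or q_bar))        # Q  = NOR(R, Q-bar)
        q_bar = int(not (set_in or q))          # Q' = NOR(S, Q)
    return q

q = sr_latch(set_in=1, reset_in=0)        # "set": output becomes 1
q = sr_latch(set_in=0, reset_in=0, q=q)   # both inputs 0: the latch remembers (still 1)
q = sr_latch(set_in=0, reset_in=1, q=q)   # "reset": output becomes 0
print(q)                                  # 0
```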

  22. Storing Data with Flip Flops • One flip-flop stores a bit, which can be abstracted to mean 0/1, ‘yes/no’ or ‘true/false’ • Two flip-flops store two bits • Using a predetermined code, we can now represent 4 things instead of 2 • 00 10 01 11 • We could make up a code for numbers: • 00 = 0, 10 = 1, 01 = 2, 11 = 3 • But really, we don’t need a code at all!

  23. Binary Integers • Number systems are completely arbitrary • Our favorite is base 10: • First place = 1’s (10^0), Second place = 10’s (10^1) • Third place = 100’s (10^2), Fourth place = 1000’s (10^3) • 17 base 10 = 1 × 10^1 + 7 × 10^0 • Need 10 digits (0 – 9) to count correctly in each place • Why do we like base 10 so much? • How about base 2?: • First place = 1’s (2^0) • Second place = 2’s (2^1) • Third place = ??? Fourth place = ??? • How many digits do we need? • Base 2 is convenient for computers

  24. Binary Integer Practice • How do we write 0 in base-2? • 00 • 1? • 01 • 2? • 10 • 3? • 11 • Wait…isn’t that familiar? • How many different numbers can we store with 3 flip flops? • 000 – 111 on board • What about 8? • What’s the biggest number we can count to? • What about 32? • 64??

  25. Base 10 to Base 2 conversion • Divide the number by 2 and write down the remainder • As long as the quotient obtained is not zero, continue to divide by 2 and write down the remainder to the LEFT of the last number • 78 • 78/2 = 39 R 0 → 0 • 39/2 = 19 R 1 → 10 • 19/2 = 9 R 1 → 110 • 9/2 = 4 R 1 → 1110 • 4/2 = 2 R 0 → 01110 • 2/2 = 1 R 0 → 001110 • 1/2 = 0 R 1 → 1001110
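The repeated-division recipe above translates directly into a short loop. A minimal Python sketch (mine, not from the slides) builds the bit string for 78 the same way, prepending each remainder:

```python
def to_binary(n):
    """Convert a non-negative integer to a binary string by
    repeatedly dividing by 2 and writing remainders right to left."""
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits   # remainder goes to the LEFT of what we have so far
        n //= 2                    # continue with the quotient
    return bits

print(to_binary(78))       # 1001110
print(int("1001110", 2))   # 78 -- checking the conversion in the other direction
```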

  26. Memory & Abstraction • Flip-flops can be used to implement memory • These are sometimes called SRAM (Static Random Access Memory) • …meaning that once the circuit is “set” to 1 or 0, it will stay that way until a new signal is used to re-set it • DRAM (Dynamic Random Access Memory): • Uses a capacitor to store the charge (has to be refreshed periodically) • Cheaper and denser than SRAM • BUT slower… • Abstraction tells us that (for most purposes) it really doesn’t matter how we implement memory -- we just know that we can store (and retrieve) “a bit” at a time

  27. Scaling Up • One Flip-Flop: 1 bit (3 gates on chip) • 8 bits = 1 Byte (24 gates on chip) • 1024 Bytes = 1 Kilobyte (24,000 gates on chip) • 1024 Kilobytes = 1 Megabyte (24,000,000 gates on chip) • 1024 Megabytes = 1 Gigabyte (24,000,000,000 gates on chip!) • Cheapest laptop on BestBuy.com in 2010: 2 GB RAM (48,000,000,000 gates!)

  28. Accessing Memory • Memory is stored in rows of bytes (8 bits) • Each row has an address represented by a unique binary number • e.g. memory row number 23 is addressed as 10111 (1×2^4 + 0×2^3 + 1×2^2 + 1×2^1 + 1×2^0) • Amount of memory in a machine is limited by the biggest number the processor can represent: • 32-bit processors (still most common): 4,294,967,295 ~ 4 GB MAX • 64-bit processors (growing in popularity): 18,446,744,073,709,551,615 ~ 18.45 million TB
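These limits are easy to sanity-check. A tiny Python sketch (illustration only) prints address 23 in binary and the two maximum address values quoted above:

```python
print(format(23, "b"))   # 10111 -- the address of memory row 23
print(2**32 - 1)         # 4294967295            (~4 GB of addressable bytes)
print(2**64 - 1)         # 18446744073709551615  (an enormous address space)
```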

  29. Hexadecimal • It would be very inconvenient to read and write a 64-bit address in binary: 0010100111010110111110001001011000010001110011011110000011100000 • Instead, we group each set of 4 bits together into a hexadecimal (base 16) digit: • The digits are 0, 1, 2, …, 9, A (10), B (11), …, E (14), F (15) • 0010 1001 1101 0110 1111 1000 1001 0110 0001 0001 1100 1101 1110 0000 1110 0000 → 2 9 D 6 F 8 9 6 1 1 C D E 0 E 0 • …which we write, by convention, with a “0x” preceding the number to indicate it’s heXadecimal: • 0x29D6F89611CDE0E0
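The same grouping can be done in a couple of lines of Python (an illustration, not part of the slides): interpret the bit string as an integer and print it in hex, or convert each 4-bit group by hand:

```python
bits = "0010100111010110111110001001011000010001110011011110000011100000"

value = int(bits, 2)        # read the 64-bit string as an integer
print(hex(value))           # 0x29d6f89611cde0e0

# Group by hand: every 4 bits becomes one hexadecimal digit
groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
print(" ".join(f"{int(g, 2):X}" for g in groups))   # 2 9 D 6 F 8 9 6 1 1 C D E 0 E 0
```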

  30. Other Memory Concepts (read the book!!) • Mass storage: hard disks, CDs, USB/flash drives… • Stores information without a constant supply of electricity • Larger than RAM • Slower than RAM • Often removable • Physically often more fragile than RAM • CDs, hard drives, etc. actually spin and have tracks divided into sectors, read by a read/write head • Seek time: Time to move head to the proper track • Latency: Time to wait for the disk to rotate into place • Access time: Seek + latency • Transfer rate: How many bits/second can be read/written once you’ve found the right spot • Flash memory: high capacity, no moving parts, but less reliable for long-term storage

  31. Representing Data • We’ve seen how to (basically) process different forms of data • We’ve seen how to store it • But how do we represent something that is not a number?

  32. Representing Text • Use a simple code: 0 = A, 1 = B, 2 = C … • ASCII encoding translates this idea to binary, using a byte (actually 7 bits, for 128 total values) • Unicode uses two or more bytes and allows for many more characters, like é, ö and Щ
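Most languages expose these codes directly. A small Python sketch (illustration only) shows the ASCII value and bit pattern of a few characters, and that a non-ASCII character takes more than one byte when encoded:

```python
# ASCII: each character is stored as a small number (shown here in binary too)
for ch in "Hi!":
    print(ch, ord(ch), format(ord(ch), "08b"))
# H 72 01001000
# i 105 01101001
# ! 33 00100001

# Characters outside ASCII need more than one byte in a Unicode encoding such as UTF-8
print("é".encode("utf-8"))   # b'\xc3\xa9'  -- two bytes
```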

  33. Representing Images • A computer screen is technically a dense grid of tiny multi-colored light bulbs • An image on the screen is a setting of each light bulb to a specific color • A data representation of such an image is called a bit-map • It is just a row-by-row list of color values: • red red red red • blue blue blue blue • red red red red • green green green green • If this seems like it takes up a lot of memory, you are right • But memory-saving techniques must still be converted to a bitmap for display

  34. Representing Colors • Three primary colors: Red, Green, Blue • RGB: “Mixing” system, similar to mixing paints • Choose the ‘amount’ of each color to mix in • Use 1 byte for each primary (24 bits total) • Minimum amount? Maximum? • Pure Red = (255, 0, 0) = 11111111 00000000 00000000 • “Cleveland” Brown = (56, 43, 38) = 00111000 00101011 00100110 • Remember hexadecimal? • Pure Red = 0xFF0000 • “Cleveland” Brown = 0x382B26 • Common in HTML (as we’ll learn this year!)
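Packing the three one-byte values into the 0xRRGGBB form is a simple formatting exercise; a short Python sketch (the helper name is my own) reproduces the two examples above:

```python
def rgb_to_hex(r, g, b):
    """Pack three 0-255 color values into the 0xRRGGBB form used in HTML."""
    return f"0x{r:02X}{g:02X}{b:02X}"

print(rgb_to_hex(255, 0, 0))    # 0xFF0000  (pure red)
print(rgb_to_hex(56, 43, 38))   # 0x382B26  ("Cleveland" brown)
```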

  35. Representing Sound The sound wave represented by the sequence 0, 1.5, 2.0, 1.5, 2.0, 3.0, 4.0, 3.0, 0

  36. In-Class Exercise • Learn how to add two binary integers by hand • Learn how to add two one-bit integers using logic gates • Combine one-bit adders into an 8-bit adder
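For reference, here is one possible answer to the exercise written in Python rather than as a circuit: a one-bit full adder built from AND/OR/XOR, chained eight times. This is the standard ripple-carry construction, not necessarily the exact diagram used in class:

```python
def full_adder(a, b, carry_in):
    """Add two 1-bit values plus a carry using only AND, OR and XOR."""
    s = a ^ b ^ carry_in                          # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))    # carry bit
    return s, carry_out

def add_8bit(x, y):
    """Chain eight full adders, least significant bit first."""
    result, carry = 0, 0
    for i in range(8):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result, carry    # carry is the overflow out of the top bit

print(add_8bit(0b00001001, 0b00001000))   # (17, 0)  -> 9 + 8 = 17
```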

  37. Representing Integers Revisited • So we learned a really slick way to count on a computer from 0 to 2^k − 1, where k is the number of bits we can use • And we learned that we can use logic gates to add them • But that’s only non-negative numbers! • We can do better

  38. Two’s Complement • Two’s complement is a clever representation that allows binary addition to be performed in an elegant way Left-most bit is called the ‘sign bit’

  39. Two’s Complement cont. This works in both directions!

  40. Adding in Two’s Complement • Turns out, we can do this with one easy modification to our original algorithm: • If the left-most addition results in a carry, throw that bit away to get the correct result • Work with a friend to: • Convert the numbers to 4-bit two’s complement • Add them • Verify the result • Solve: (1 + 5), (6 + -2), (3 + -5)
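A short Python sketch (my own helpers, not from the slides) of the same exercise: convert to 4-bit two's complement, add, drop any carry out of the left-most bit, and read the result back:

```python
BITS = 4

def to_twos(n):
    """n as a 4-bit two's-complement bit string."""
    return format(n & (2**BITS - 1), f"0{BITS}b")

def add_twos(a, b):
    """Add in 4-bit two's complement, throwing away the carry out of the sign bit."""
    raw = (int(to_twos(a), 2) + int(to_twos(b), 2)) % (2**BITS)
    return raw - 2**BITS if raw >= 2**(BITS - 1) else raw   # re-interpret the sign bit

for a, b in [(1, 5), (6, -2), (3, -5)]:
    print(f"{to_twos(a)} + {to_twos(b)} -> {a} + {b} = {add_twos(a, b)}")
# 0001 + 0101 -> 1 + 5 = 6
# 0110 + 1110 -> 6 + -2 = 4
# 0011 + 1011 -> 3 + -5 = -2
```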

  41. Floating Point Numbers • Non-integers are a problem… • Remember that any rational number can be represented as a fraction • …but we probably don’t want to do this, since • (a) we’d need to use two words for each number (i.e., the numerator and the denominator) • (b) fractions are hard to manipulate (add, subtract, etc.) • Irrational numbers can’t be written down at all, of course • Notice that any representation we choose will by definition have limited precision, since we can only represent 2^32 different values in a 32-bit word • 1/3 isn’t exactly 1/3 (let’s try it on a calculator!) • In general, we also lose precision (introduce errors) when we operate on floating point numbers • You don’t need to know the details of how “floating point” numbers are represented, but you should know they are represented differently
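You can see the limited precision directly in Python (a quick illustration, not part of the slides):

```python
print(1 / 3)             # 0.3333333333333333 -- the nearest representable value, not exactly 1/3
print(0.1 + 0.2)         # 0.30000000000000004 -- small errors creep in when we operate
print(0.1 + 0.2 == 0.3)  # False
```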
