
Lecture 7: Midterm Review and Floating Point

This lecture covers the midterm format, computer abstractions and technology, performance metrics, instruction set architecture, arithmetic for computers, and floating-point representation.


Presentation Transcript


  1. Lecture 7: Midterm Review and Floating Point Professor Mike Schulte Computer Architecture ECE 201

  2. Midterm Format • Open book, open note • Know the material well enough that you do not need your book or notes • Bring a calculator and scratch paper • About five problems (with multiple parts) • Short answers • Explanations • Problem solving • Based on lecture notes, book, and homeworks (Lectures 1 - 6, Chapters 1 - 4, Homeworks 1 & 2). Division and floating point are not covered until after the midterm. • 75 minutes for the midterm, in class on February 21st • Try the sample test and 1998 midterm on the course homepage

  3. Chapter 1: Computer Abstractions and Technology • Instruction Set Architecture and Machine Organization • Levels of abstraction • Interface (outside view) • Implementation (inside view) • Current trends in capacity and performance • processor, memory, and I/O • Predicting improvements in performance • Processor performance improves by 50% per year • How much faster will the processor be in 5 years? • Types of computer components • datapath, control, memory, input, and output

  4. Chapter 2: The Role of Performance • Execution time (seconds) vs. performance (tasks per second) • "X is n times faster than Y" means n = performance(X) / performance(Y) = execution_time(Y) / execution_time(X) • Calculating CPU execution time: CPU time = instruction count x CPI x clock cycle time = instruction count x CPI / clock rate. What affects each of these factors? • Computer benchmarks (SPEC benchmarks) • Summarizing performance: arithmetic mean vs. geometric mean • Poor performance metrics: MIPS and MFLOPS • Amdahl's law: Speedup = ExTime_old / ExTime_new = 1 / ((1 - Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced)
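As a quick check on the two formulas above, here is a small Python sketch (the function names are illustrative, not from the lecture):

```python
def cpu_time(instruction_count, cpi, clock_rate_hz):
    """CPU time = instruction count x CPI / clock rate."""
    return instruction_count * cpi / clock_rate_hz

def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    """Amdahl's law: speedup = 1 / ((1 - F) + F / S)."""
    return 1.0 / ((1.0 - fraction_enhanced)
                  + fraction_enhanced / speedup_enhanced)

# 10^9 instructions at CPI 2 on a 500 MHz clock take 4 seconds.
print(cpu_time(1e9, 2.0, 500e6))            # 4.0
# Enhancing 50% of the work by a factor of 10 speeds things up only ~1.82x.
print(round(amdahl_speedup(0.5, 10.0), 2))  # 1.82
```

Note how Amdahl's law caps the benefit: even an infinite `speedup_enhanced` cannot beat 1 / (1 - F).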

  5. Chapter 3: Instruction Set Architecture • The MIPS architecture • registers and memory addressing • instruction formats and fields • instructions supported • Differences between x86 and MIPS • Using MIPS instructions to accomplish tasks • Evaluating instruction set alternatives • Going from C to MIPS assembly and from MIPS assembly to machine code • Pseudo-instructions

  6. Chapter 4: Arithmetic for Computers • Unsigned and two's complement number systems • Negating two's complement numbers • Binary addition and subtraction • MIPS logical operations • ALU design • Full adders, multiplexors, 1-bit ALUs • Addition/subtraction, logic operations, set less than • Overflow and zero detection • Carry-lookahead addition • Binary multiplication and Booth's algorithm
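The full-adder and ripple-carry bullets above can be illustrated with a short Python sketch (my own illustration, not code from the course):

```python
def full_adder(a, b, cin):
    """1-bit full adder: sum = a XOR b XOR cin, carry-out = majority(a, b, cin)."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_add(x, y, n=4):
    """n-bit ripple-carry addition built by chaining full adders."""
    carry, result = 0, 0
    for i in range(n):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry  # carry out of the top bit signals unsigned overflow

print(ripple_add(0b0101, 0b0011))  # (8, 0): 5 + 3 = 8, no carry out
```

The chained carry is exactly what carry-lookahead addition speeds up: it computes the carries directly instead of waiting for them to ripple.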

  7. Floating-Point • What can be represented in N bits? • Unsigned: 0 to 2^N - 1 • Two's complement: -2^(N-1) to 2^(N-1) - 1 • But what about • very large numbers? 9,349,398,989,787,762,244,859,087,678 or 1.23 x 10^67 • very small numbers? 0.0000000000000000000000045691 or 2.98 x 10^-32 • fractional values? 0.35 • mixed numbers? 10.28 • irrationals? π
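The first two bullets can be checked directly (illustrative Python, not from the slides):

```python
def unsigned_range(n):
    """Values representable by n unsigned bits: 0 to 2^n - 1."""
    return 0, 2**n - 1

def twos_complement_range(n):
    """Values representable by n two's complement bits: -2^(n-1) to 2^(n-1) - 1."""
    return -2**(n - 1), 2**(n - 1) - 1

print(unsigned_range(8))          # (0, 255)
print(twos_complement_range(8))   # (-128, 127)
print(twos_complement_range(32))  # (-2147483648, 2147483647)
```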

  8. Recall Scientific Notation • Examples: 6.02 x 10^23 and 1.673 x 10^-24 • Each number has a sign-magnitude mantissa (with a decimal point), a radix (base), and a sign-magnitude exponent; IEEE floating point takes the form ±1.M x 2^(e - 127) • Issues: • Representation • Arithmetic operations (+, -, *, /) • Range and precision • Rounding • Exceptions (e.g., divide by zero, overflow, underflow) • Errors • On most general-purpose computers, these issues are addressed by the IEEE 754 floating-point standard.

  9. IEEE-754 Single-Precision Floating-Point Numbers • Single precision (32 bits, float in C) has a 1-bit sign S, an 8-bit exponent E, and a 23-bit mantissa (significand) M • The significand is sign + magnitude, normalized binary with a hidden one bit: 1.M • The exponent is an excess-127 binary integer: the actual exponent is e = E - 127, with 0 < E < 255 • X = (-1)^S x 2^(E - 127) x (1.M) • The magnitude of representable numbers ranges from 2^-126 x (1.0) to 2^127 x (2 - 2^-23), which is approximately 1.2 x 10^-38 to 3.4 x 10^38 • Why use a biased exponent? Why are floating-point numbers normalized?
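The range formula above can be evaluated numerically (illustrative Python; `struct` is used to reinterpret each value as a 32-bit float and confirm both extremes are representable):

```python
import struct

# Extremes of the normalized single-precision range, from the slide's formula.
smallest = 2.0 ** -126                    # ~1.2e-38
largest = (2 - 2.0 ** -23) * 2.0 ** 127   # ~3.4e38
print(smallest, largest)

# Both values round-trip through a 32-bit float without overflow
# or underflow to zero.
for x in (smallest, largest):
    assert struct.unpack('>f', struct.pack('>f', x))[0] == x
```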

  10. IEEE-754 Double-Precision Floating-Point Numbers • Double precision (64 bits, double in C) has a 1-bit sign S, an 11-bit exponent E, and a 52-bit mantissa (significand) M • The significand is sign + magnitude, normalized binary with a hidden one bit: 1.M • The exponent is an excess-1023 binary integer: the actual exponent is e = E - 1023, with 0 < E < 2047 • X = (-1)^S x 2^(E - 1023) x (1.M) • The magnitude of representable numbers ranges from 2^-1022 x (1.0) to 2^1023 x (2 - 2^-52), which is approximately 2.2 x 10^-308 to 1.8 x 10^308 • The IEEE 754 standard also supports extended single precision (more than 32 bits) and extended double precision (more than 64 bits). Special values of the exponent and mantissa are used to indicate other values, such as zero and infinity.

  11. Converting from Binary to Decimal Floating Point • What is the decimal single-precision floating-point number that corresponds to the bit pattern 01000100010010010000000000000000? • Use the equation X = (-1)^S x 2^(E - 127) x (1.M), where S = 0, E = 10001000_2 = 136_10, and 1.M = 1.10010010000000000000000_2 = 1 + 2^-1 + 2^-4 + 2^-7 = 1.5703125, so X = (-1)^0 x 2^(136 - 127) x 1.5703125 = 804 = 8.04 x 10^2
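This conversion can be verified in Python by reinterpreting the bit pattern as a single-precision float with `struct` (my sketch, not part of the lecture):

```python
import struct

bits = 0b01000100010010010000000000000000
# Reinterpret the 32-bit pattern as an IEEE-754 single-precision float.
value = struct.unpack('>f', struct.pack('>I', bits))[0]
print(value)  # 804.0

# The same result from the slide's equation X = (-1)^S * 2^(E-127) * 1.M:
s, e = 0, 0b10001000                  # E = 136, so the true exponent is 9
frac = 1 + 2**-1 + 2**-4 + 2**-7      # 1.M = 1.5703125
print((-1)**s * 2**(e - 127) * frac)  # 804.0
```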

  12. Converting from Decimal to Binary Floating Point • What is the binary representation of the single-precision floating-point number that corresponds to X = -12.25_10? • What is the normalized binary representation of the number? -12.25_10 = -1100.01_2 = -1.10001_2 x 2^3 • What are the sign, stored exponent, and normalized mantissa? S = 1 (since the number is negative), E = 3 + 127 = 130 = 128 + 2 = 10000010_2, M = 10001000000000000000000_2, so X = 11000001010001000000000000000000_2 • What is the binary representation of the double-precision floating-point number that corresponds to X = -12.25_10? Now E = 3 + 1023 = 1026 = 10000000010_2, so X = 1100000000101000100000000000000000000000000000000000000000000000_2
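Both answers can be checked by packing -12.25 with `struct` and printing the resulting bit patterns (illustrative Python, not from the lecture):

```python
import struct

# Single precision: expect 1 | 10000010 | 10001 followed by 18 zeros.
single_bits = struct.unpack('>I', struct.pack('>f', -12.25))[0]
print(format(single_bits, '032b'))

# Double precision: expect 1 | 10000000010 | 10001 followed by 47 zeros.
double_bits = struct.unpack('>Q', struct.pack('>d', -12.25))[0]
print(format(double_bits, '064b'))
```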

  13. Denormalized Numbers and Zero • With only normalized numbers (hidden bit, B = 2, p = 4), the gap between 0 and the next representable number, 2^-bias, is much larger than the gaps between nearby representable numbers. The IEEE standard uses denormalized numbers to fill in this gap, making the distances between numbers near 0 more alike; denormalized numbers provide p - 1 bits of precision, versus p bits for normalized numbers. • Denormalized numbers have an exponent field of zero and a value of X = (-1)^S x 2^(-bias + 1) x (0.M) • NOTE: Zero is represented using 0 for the exponent and 0 for the mantissa. Either +0 or -0 can be represented, based on the sign bit.
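The denormal encodings can be inspected in Python (illustrative sketch; `single_bits` is my helper name, not from the course):

```python
import struct

def single_bits(x):
    """Return the 32-bit IEEE-754 single-precision pattern of x."""
    return struct.unpack('>I', struct.pack('>f', x))[0]

smallest_normal = 2.0 ** -126     # exponent field E = 1, mantissa all zeros
smallest_denormal = 2.0 ** -149   # exponent field E = 0, mantissa 0...01

print(format(single_bits(smallest_normal), '032b'))
print(format(single_bits(smallest_denormal), '032b'))
# Without denormals nothing would lie between 0 and 2**-126; with them the
# gap is filled in steps of 2**-149, at the cost of lost precision.
```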

  14. Infinity and NaNs • If the result of an operation overflows (i.e., is larger than the largest number that can be represented), the result is +/- infinity, encoded with an exponent field of all ones and a mantissa of all zeros. Overflow is not the same as divide by zero (which raises a different exception). • It may make sense to do further computations with infinity; e.g., X/0 > Y may be a valid comparison. • NaN is not a number, but not infinity either (e.g., sqrt(-4)); it is encoded with an exponent field of all ones and a nonzero mantissa (the hardware decides what goes in the mantissa). Operations on NaN raise an invalid operation exception (unless the operation is = or ≠). NaNs propagate: f(NaN) = NaN.
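Python floats follow IEEE 754, so the infinity and NaN behavior above can be observed directly (illustrative sketch):

```python
import math

inf = float('inf')
nan = float('nan')

print(1.0 / inf)              # 0.0 -- arithmetic with infinity can be meaningful
print(inf > 1e308)            # True -- e.g., X/0 > Y can be a valid comparison
print(math.isnan(inf - inf))  # True -- an invalid operation produces NaN
print(nan == nan)             # False -- NaN compares unequal even to itself
print(math.isnan(nan + 1.0))  # True -- NaNs propagate: f(NaN) = NaN
```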

  15. Basic Addition Algorithm • For addition (or subtraction) this translates into the following steps: (1) compute Ye - Xe (getting ready to align the binary points) (2) right shift Xm that many positions to form Xm x 2^(Xe - Ye) (3) compute Xm x 2^(Xe - Ye) + Ym • If the representation demands normalization, a normalization step follows: (4) left shift the result and decrement the result exponent (e.g., 0.001xx...), or right shift the result and increment the result exponent (e.g., 11.1xx...); continue until the MSB of the data is 1 (NOTE: hidden bit in the IEEE standard) (5) if the result has a 0 mantissa, a special step may be needed to zero the exponent • Note: The book also gives an algorithm for floating-point multiplication - look it over - and see http://www.ecs.umass.edu/ece/koren/arith/simulator/.
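The steps above can be sketched in Python under a simplified model of my own: each operand is an integer significand with the hidden bit explicit, guard/sticky bits and rounding are ignored, and only the align-add-normalize sequence from the slide is shown:

```python
def fp_add(xm, xe, ym, ye, p=8):
    """Add x = xm * 2**xe and y = ym * 2**ye, where xm and ym are p-bit
    significands with the hidden leading 1 stored explicitly.
    Steps follow the slide: align exponents, add, then normalize."""
    # Steps 1-2: make x the larger-exponent operand, then right shift ym
    # by the exponent difference to align the binary points.
    if xe < ye:
        xm, xe, ym, ye = ym, ye, xm, xe
    ym >>= xe - ye
    # Step 3: add the aligned significands.
    rm, re = xm + ym, xe
    # Step 4: renormalize until the MSB of the significand is bit p-1.
    while rm >= 1 << p:          # e.g., 11.1xx... -> shift right
        rm >>= 1
        re += 1
    while rm and rm < 1 << (p - 1):  # e.g., 0.001xx... -> shift left
        rm <<= 1
        re -= 1
    return rm, re

# 1.5 * 2**1 + 1.0 * 2**0 = 4.0; with p = 8 the significands are
# 0b11000000 (1.5) and 0b10000000 (1.0), stored exponent = true exp - 7.
m, e = fp_add(0b11000000, 1 - 7, 0b10000000, 0 - 7)
print(m * 2.0 ** e)  # 4.0
```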

  16. Rounding Digits • If a result is normalized but has some nonzero digits to the right of the significand, the number should be rounded. E.g., with base = 10 and p = 3: 1.69 x 10^2 = 1.6900 x 10^2; 7.85 x 10^0 = 0.0785 x 10^2; 1.6900 x 10^2 - 0.0785 x 10^2 = 1.6115 x 10^2, which rounds to 1.61 x 10^2 • IEEE Standard 754 defines four rounding modes: round to nearest (default), round towards plus infinity, round towards minus infinity, and round towards 0 • See http://www.ecs.umass.edu/ece/koren/arith/simulator/.
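The base-10, p = 3 example can be reproduced with Python's `decimal` module, whose context precision and rounding mode mirror the same idea (illustrative sketch, not from the lecture):

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

# Three significant decimal digits, round-to-nearest (the IEEE default mode).
getcontext().prec = 3
getcontext().rounding = ROUND_HALF_EVEN

result = Decimal('1.69E2') - Decimal('7.85E0')
print(result)  # 161, i.e., the exact 1.6115 x 10^2 rounded back to 3 digits
```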
