
Floating Point vs. Fixed Point for FPGA




Presentation Transcript


  1. Floating Point vs. Fixed Point for FPGA

  2. Applications
  Digital Signal Processing
  • Encoders/Decoders
  • Compression
  • Encryption
  Control
  • Automotive/Aerospace
  • Industrial
  • Space

  3. Floating Point?
  Data Structure
  • Mantissa: numerical portion of the number
  • Exponent: signed exponent that scales the range of the mantissa
  • Sign: sign of the mantissa
  Simple representation: (-1)^sign × mantissa × base^exponent
  IEEE 754 single-precision representation (23-bit mantissa, about 7 decimal digits):
  (-1)^sign × 1.mantissa × 2^(exponent − 127)
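The field layout above can be illustrated by unpacking a value in software. A minimal Python sketch (illustration only, not FPGA code) that extracts the sign, biased exponent, and stored mantissa bits of an IEEE 754 single-precision number:

```python
import struct

def decode_float32(x):
    """Unpack an IEEE 754 single-precision value into its three fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # biased by 127
    mantissa = bits & 0x7FFFFF       # 23 stored bits; leading 1 is implicit
    return sign, exponent, mantissa

# -6.5 = (-1)^1 * 1.625 * 2^2, so the exponent field holds 2 + 127 = 129
sign, exp, man = decode_float32(-6.5)
```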

  4. What About Fixed Point?
  Fixed point assumes a constant scaling factor (fixed radix point)
  • No standard format
  • Smaller range of representable numbers
  • Generally base 2, so radix conversion is a fast bit shift
  • The programmer must determine number ranges offline
  Classic fixed-point bit representation (Savich, 2007)
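As a concrete illustration of a constant base-2 scaling, here is a minimal Python sketch of a Q-format encoding, where the programmer fixes the number of fractional bits up front (the 8-bit split below is an arbitrary example):

```python
def to_fixed(x, frac_bits):
    """Quantize a real value to a base-2 fixed-point integer (Q-format)."""
    return int(round(x * (1 << frac_bits)))

def from_fixed(v, frac_bits):
    """Recover the real value from its fixed-point integer."""
    return v / (1 << frac_bits)

a = to_fixed(3.25, 8)   # 3.25 * 2^8 = 832; the radix point is implicit
```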

  5. Fixed Point Operations
  Addition/Subtraction: Sum = A + B
  Multiplication: Product = A × B
  Notes:
  - Operands must have the same radix point
  - With base-2 scaling, radix conversion is a << or >> shift
  - The programmer must account for radix differences
  Precision implications
  • If the result falls outside the expected format, overflow can occur
  • The programmer must account for the potential ranges of operands to avoid precision problems
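These rules can be sketched in Python: addition works directly on operands sharing a radix point, while multiplication doubles the fractional bits, so a right shift restores the format (the 8-fractional-bit choice is illustrative):

```python
FRAC = 8  # all values use 8 fractional bits (same radix point)

def fx(x):
    """Encode a real value in the shared fixed-point format."""
    return int(round(x * (1 << FRAC)))

def fx_add(a, b):
    # valid only because both operands share the same radix point
    return a + b

def fx_mul(a, b):
    # the raw product carries 2*FRAC fractional bits; shift to restore FRAC
    return (a * b) >> FRAC

s = fx_add(fx(1.5), fx(2.25))  # encodes 3.75
p = fx_mul(fx(1.5), fx(2.0))   # encodes 3.0
```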

  6. Floating Point Operations
  Addition/Subtraction (by steps):
  • Align the exponents
  • Fixed-point add/subtract the mantissas
  • Round and renormalize
  Multiplication:
  • Add the exponents
  • Multiply the mantissas
  Precision implications
  • If the result exceeds the digits of the mantissa, it must be rounded
  - Dynamic range means the programmer has less control, but unknown number ranges are easier to handle
  • Different rounding options are available
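A toy Python sketch of the addition steps, with values held as (mantissa, exponent) pairs representing m × 2^e; real hardware would additionally round the result to a fixed mantissa width:

```python
def fp_add(m1, e1, m2, e2):
    """Toy floating-point add: align exponents, then fixed-point add."""
    if e1 < e2:  # make the first operand the one with the larger exponent
        m1, e1, m2, e2 = m2, e2, m1, e1
    m2 >>= (e1 - e2)  # align the smaller operand; low bits may be lost here
    return m1 + m2, e1

# 5*2^1 + 6*2^0 = 10 + 6 = 16 = 8*2^1
m, e = fp_add(5, 1, 6, 0)
```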

  7. Fixed Point or Floating Point?
  Fixed Point
  • Very fast when base 2
  • No complicated logic
  • Radix point not encoded
  • Fixed accuracy
  • Can represent only a small set of numbers
  Floating Point
  • Slower
  • Accuracy varies
  • Represents a very large set of numbers
  • Radix point encoded
  • Complex logic required

  8. FPGA Floating Point
  Parallel implementation
  • HDTV needs 20 GFLOPS
  • Current DSPs cannot achieve this (Dido, 2002)
  Optimized format for the application
  • Different bit formats optimize operation speed and accuracy
  • If the FPGA targets a single application, the IEEE standard does not need to be followed (Connors, 1999)
  Size vs. speed issue
  • Full-featured, parallelized FPGA floating-point units require many resources

  9. FPGA Floating Point vs. CPU FPGA versus CPU Performance for 32 bit FP Addition Over Time (Underwood, 2004)

  10. FPGA Architectures Standard and 2-Path Floating Point FPGA Adders (Liang, 2003)

  11. FPGA Architectures LOP Floating Point FPGA Adder (Liang, 2003)

  12. Resources and Performance
  FP adder area and latency versus mantissa size (Liang, 2003)
  Spartan-3 resource table (Xilinx, 2009)

  13. Fixed vs. Floating Application
  Neural networks
  • Use of the log-sigmoid function
  • Calculation of small error values
  • Known number ranges
  • Two inputs, two hidden neurons, one output
  MLP Neural Network (Savich, 2007); Parallel Neuron (Savich, 2007)

  14. Size Utilization
  MLP-BP 2,2,1 NN with parallel neurons: design size vs. mantissa size (Savich, 2007)

  15. Fixed Point Performance MLP-BP 2,2,1 NN with Parallel Neurons Fixed Point Training Performance (Savich, 2007)

  16. Floating Point Performance MLP-BP 2,2,1 NN with Parallel Neurons Floating Point Training Performance (Savich, 2007)

  17. When Do We NEED Floating Point?
  • Accuracy is paramount
  • Accuracy at small numbers is needed while operating on large numbers
  • The range of numbers is unpredictable
  - Fixed-point programs must anticipate number ranges or errors will occur
  • Development time is very short
  - Otherwise, time must be spent analyzing the algorithm at a low level to determine number ranges
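The risk of unanticipated ranges can be illustrated with a saturating fixed-point quantizer in Python; the Q4.12 split below is an arbitrary example of a range chosen offline:

```python
FRAC, WIDTH = 12, 16  # Q4.12 in a 16-bit register: range roughly [-8, 8)

def fx_saturate(x):
    """Quantize x, clamping (saturating) values outside the planned range."""
    v = int(round(x * (1 << FRAC)))
    lo, hi = -(1 << (WIDTH - 1)), (1 << (WIDTH - 1)) - 1
    return max(lo, min(hi, v))  # clamp instead of silently wrapping

inside  = fx_saturate(3.0)    # fits the anticipated range
outside = fx_saturate(100.0)  # saturates near +8.0: information is lost
```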

  18. Floating Point Application
  Military radar
  • Computes a complex integral at high speed
  - Accuracy is required due to obvious safety implications
  - Floating point lowers the noise introduced while executing the FFT
  High-performance DSP
  • More favorable signal-to-noise ratio due to high accuracy at low values
  • Signal-to-noise for floating point is 30×10^6 to 1, versus 30,000 to 1 for fixed point
  • High-resolution ADCs (20 bits and up) require floating point; fixed-point registers are too small for the needed accuracy
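Taking the quoted ratios at face value, the dynamic-range gap can be expressed in decibels with a quick Python check:

```python
import math

# signal-to-noise ratios quoted above, converted to decibels (20*log10)
float_db = 20 * math.log10(30e6)    # about 149.5 dB
fixed_db = 20 * math.log10(30_000)  # about 89.5 dB
```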

  19. Conclusions
  Fixed point is preferable for most applications
  • Low resource usage
  • Low gate delays
  • Simple implementation of hardware components
  Floating point is useful when:
  • Accuracy over a large range of numbers is required
  • It is impossible or too hard to estimate number ranges
  • Programming time is severely limited
  The floating-point architecture is best customized via FPGA

  20. References
  Dido, J., Geraudie, N., Loiseau, L., Payeur, O., Savaria, Y., & Poirier, D. (2002). A flexible floating-point format for optimizing data-paths and operators in FPGA based DSPs. FPGA '02: Proceedings of the 2002 ACM/SIGDA Tenth International Symposium on Field-Programmable Gate Arrays, Monterey, California, USA, 50-55.
  Liang, J., Tessier, R., & Mencer, O. (2003). Floating point unit generation and evaluation for FPGAs. FCCM '03: Proceedings of the 11th Annual IEEE Symposium on Field-Programmable Custom Computing Machines, 185.
  Savich, A. W., Moussa, M., & Areibi, S. (2007). The impact of arithmetic representation on implementing MLP-BP on FPGAs: A study. IEEE Transactions on Neural Networks, 18(1), 240-252.
  Underwood, K. (2004). FPGAs vs. CPUs: Trends in peak floating-point performance. FPGA '04: Proceedings of the 2004 ACM/SIGDA 12th International Symposium on Field Programmable Gate Arrays, Monterey, California, USA, 171-180.
  Xilinx. (2009). Xilinx DS099 Spartan-3 FPGA family data sheet. Retrieved 02/20, 2010, from www.xilinx.com/support/documentation/data_sheets/ds099.pdf
  Yoji, D. C., Connors, D. A., Yamada, Y., & Hwu, W. W. (1998). A software-oriented floating-point format for enhancing automotive control systems. Workshop on Compiler and Architecture Support for Embedded Computing Systems.

  21. Thank You Questions?
