
Conclusions on CS3014


Presentation Transcript


  1. Conclusions on CS3014
     David Gregg, Department of Computer Science, University of Dublin, Trinity College

  2. Interesting Times
     • In the mid-2000s, power became the main limiter of processor performance
     • No longer possible to keep scaling clock speed
     • Performance must come from other sources
     • After decades of rejecting parallel computing as too hard for most purposes, the industry switched to multicore almost overnight in 2005

  3. Moore's Law
     • Moore's law is still alive and well
       • At least for the moment
     • Feature densities will continue to double around every 18-24 months
     • More sophisticated, faster cores will still arrive
     • But the rate of single-core improvement has been much, much slower than it was between 1984 and 2005
     • We also have lots of parallelism on chip

  4. Parallel Architectures
     • Many parallel architectures have been proposed over the years
       • Instruction-level parallel
       • Vector
       • Multi-threaded
       • Shared-memory multiprocessor
       • Distributed-memory multiprocessor
       • GPU, FPGA, hardware accelerators, etc.

  5. Parallel Architectures
     • Existing types of architecture, most of which were originally designed for supercomputing, will be implemented on single chips
     • The same ideas that worked in 1970s supercomputers will work in 21st-century single-chip and stacked-chip processors

  6. Software
     • The big problem of parallel computing has always been software:
       “Any steps that are programmed by the operator, who sets up the machine, should be set up only in a serial fashion. It has been shown over and over again that any departure from this procedure results in a system that is much too complicated to use.”
       J. P. Eckert, 1946

  7. Software
     • The big unknown with parallel computing is how we will build software at an acceptable cost
     • Large diversity of proposed new programming models and languages
       • OpenMP (sketched below), Intel Array Building Blocks, X10, stream processing languages, etc.
     • Also domain-specific languages
       • E.g. Halide for image processing
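
To make one of these models concrete, here is a minimal OpenMP sketch in C; the array size, the saxpy-style loop and the print at the end are arbitrary choices for illustration rather than anything prescribed by the course.

/* Minimal OpenMP sketch: split an independent loop across the cores
 * of a shared-memory multicore. Compile with e.g. gcc -fopenmp.     */
#include <stdio.h>
#include <omp.h>

#define N 1000000

static float x[N], y[N];

int main(void) {
    const float a = 2.0f;

    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* Each iteration is independent, so a single pragma lets the
     * runtime divide the iteration space among the available threads. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %.1f, up to %d threads\n", y[0], omp_get_max_threads());
    return 0;
}

Higher-level approaches such as Halide go further by separating what is computed from how it is scheduled across cores and vector units, which is exactly the cost-of-software question the slide raises.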

  8. Software
     • Over time the successful languages and programming models will emerge
     • The most influential languages are hardly ever the most widely used ones
       • Historical examples such as Lisp, Algol 60, Simula 67, Occam
     • This will probably leave a large legacy of:
       • Programs in languages that are forgotten
       • Single-threaded COBOL code

  9. This time it’s different
     • There are some differences this time
     • Industry sees no alternative to parallelism
     • We must build parallel software if we want improved performance
       • For all kinds of applications
       • Even if we don’t want to

  10. But how different?
     • Moore’s law continues to provide a doubling of transistors every 24 months or so
     • But the number of cores tends not to double
       • Companies like Intel still make lots of processors with 4 cores
       • Some with 18 cores, hardly any with more
     • Predictions of a doubling of cores every two years have not come true

  11. But how different?
     • The big difference has been the move to mobile, battery-powered computing
     • Demand for low-power computing
       • Lower energy leads to real improvements in battery life
     • It’s not clear that people upgrade their desktop machine very often
       • If they have one

  12. More differences
     • Hardware trade-offs are also different
       • Relative costs of communication, shared memory, etc. are very different on a single chip compared to multiple chips
     • Memory locality is really important (see the sketch after this list)
       • And almost entirely application dependent
     • Energy is often a key limit
       • And influenced by memory locality
     • 3D stacking will produce some surprises
     • Embedded computing is ever more important
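
As a small illustration of the locality point, the sketch below (in C, with an arbitrarily chosen 2048 x 2048 array) sums the same row-major array twice: the row-order walk reuses each cache line it fetches, while the column-order walk strides across memory and, on typical hardware, misses in cache far more often, costing both time and memory-system energy.

#include <stdio.h>

#define N 2048
static double a[N][N];

/* Row-order walk: consecutive accesses fall in the same cache line,
 * so most loads are served from cache.                               */
static double sum_row_order(void) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Column-order walk: each access is N * sizeof(double) bytes from the
 * previous one, so on typical hardware most loads miss in cache and
 * the same arithmetic costs far more time and energy.                */
static double sum_column_order(void) {
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = 1.0;
    /* Both calls return the same value; they differ only in locality. */
    printf("%.0f %.0f\n", sum_row_order(), sum_column_order());
    return 0;
}

How much the gap matters depends on cache sizes and the access pattern of the particular program, which is the slide’s point about locality being application dependent.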

  13. Cheap Science Fiction
     • Never forget the wisdom of cheap sci-fi: “All this has happened before; all this will happen again.”
     • The same parallel architecture ideas appear again and again
     • We have been failing to build parallel software for decades
