
The Convergence of HPC and Deep Learning

Hear about the revolution in AI, the synergy of deep learning and HPC, NVIDIA's role, and more.



Presentation Transcript


  1. The Convergence of HPC and Deep Learning | Bill Dally, SC16

  2. AN OVERVIEW… 1. The Revolution in AI 2. Synergy of Deep Learning and HPC 3. Capabilities of Handling Deep Learning and HPC 4. NVIDIA’s work in HPC and Deep Learning 5. Concluding Thoughts

  3. The Revolution in AI 2006: Launched CUDA at Supercomputing 2008: NVIDIA's first Top 500 system 2009: Designed Fermi as a high-performance computing GPU 2013: Andrew Ng and Bryan Catanzaro work together on the deep brain project, running it on GPUs 2016: Created the NVIDIA SATURNV, showcasing NVIDIA's capability as a system vendor, and took the #1 spot on the Green 500 list Image Source: NVIDIA Content Source: Bill Dally, SC16 Talk

  4. TODAY, PEOPLE WHO DISCOVER THE BEST SCIENCE ARE THE PEOPLE WITH THE BIGGEST SUPERCOMPUTERS

  5. The Revolution in AI | Supercomputing Science is being enabled by supercomputing, whether it's climate science, combustion science, or understanding the fundamentals of how the human body works in order to develop new medications. What's exciting is that the same technology powering this science is also enabling the revolution in deep learning, and it is all enabled by GPUs. Image Source: NVIDIA Content Source: Bill Dally, SC16 Talk

  6. The Revolution in AI | Big Data Last year, a deep neural network defeated one of the best human players in a game of ‘Go.’ This is a game with an enormous optimization space. There’s no way to search over all possible combinations. The graph below, shown by Jeff Dean a year earlier, highlights the number of individual projects at Google that use Deep Learning. Content Source: Bill Dally, SC16 Talk

  7. Synergy of Deep Learning and HPC There is an interesting synergy between deep learning and HPC. The technology originally developed for HPC has enabled deep learning, and deep learning is enabling many uses in science. For example, it is good at recognizing and classifying images. Content Source: Bill Dally, SC16 Talk

  8. Synergy of Deep Learning and HPC Deep learning can also apply to more traditional HPC applications. These applications can train a deep network on a large set of cases that have already been simulated, so that the network learns how inputs map to outputs. Then take a new case, feed it into the deep network, and it will predict what the output will be. Content Source: Bill Dally, SC16 Talk
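As a concrete illustration of that idea, here is a minimal sketch (my own, not from the talk) of a neural-network surrogate: a small network is trained on previously simulated cases and then asked to predict the output of a new case. The data, network shape, and use of PyTorch are all placeholder assumptions.

```python
# Hypothetical sketch of a simulation surrogate, assuming PyTorch is available.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for an archive of simulated cases: each row of X holds the input
# parameters of one past simulation, each row of Y its simulated output.
X = torch.rand(1000, 8)                    # 1000 past simulations, 8 parameters each
Y = (X ** 2).sum(dim=1, keepdim=True)      # placeholder "simulation" result

# A small fully connected network learns the input -> output mapping.
surrogate = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):                       # train on the archived cases
    opt.zero_grad()
    loss = loss_fn(surrogate(X), Y)
    loss.backward()
    opt.step()

# A new case: feed its parameters to the trained network instead of running
# the full simulation, and get a predicted output almost instantly.
new_case = torch.rand(1, 8)
with torch.no_grad():
    print("predicted output:", surrogate(new_case).item())
```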

  9. Synergy of Deep Learning and HPC Both need arithmetic performance, and that performance tends to be judged in terms of performance per watt. All of our machines are constrained to a fixed number of watts, whether they are running deep learning or HPC. Content Source: Bill Dally, SC16 Talk

  10. Differences between HPC and Deep Learning There are some differences, but they're small. If the machines are built and provisioned in the right way, then one machine can serve both. For HPC, double-precision (64-bit) floating-point arithmetic is needed to get numerically stable solutions to a lot of problems; for deep learning training, you can get by with 32 bits. In addition, deep learning needs more memory per flop, but it's just a question of how to provision that memory. HPC is more demanding of network bandwidth; deep learning less so. Content Source: Bill Dally, SC16 Talk
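The precision point can be seen in a few lines. Below is a minimal sketch (not from the talk) showing a case where 32-bit floating point silently loses an update that 64-bit retains, plus the factor-of-two difference in memory per element that drives the provisioning trade-off; NumPy is assumed purely for illustration.

```python
import numpy as np

# Adding 1 to 1e8 in 32-bit float is lost to rounding (the significand has ~24 bits,
# so integers above ~1.7e7 are no longer all exactly representable); 64-bit keeps it.
print(np.float32(1e8) + np.float32(1.0) == np.float32(1e8))   # True: the update vanished
print(np.float64(1e8) + np.float64(1.0) == np.float64(1e8))   # False: 64 bits retain it

# Precision also doubles the memory footprint per element, which is part of
# why deep learning (32-bit) and HPC (64-bit) provision memory differently.
print(np.dtype(np.float32).itemsize, "bytes vs", np.dtype(np.float64).itemsize, "bytes")
```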

  11. Capabilities of Handling HPC and Deep Learning The HPC market is not big enough to fund the billion-dollar-a-year investment it takes to develop chips like Pascal, so it's not sustainable to build a chip just for HPC. What's great about GPUs is that they have many successful markets with convergent requirements. Content Source: Bill Dally, SC16 Talk

  12. Our Work in HPC and Deep Learning We are working in collaboration with a number of the national laboratories and with Stanford University on the Legion programming system, which is an example of what I call target-independent programming. With target-independent programming, the programmer does what they're good at: describing all of the parallelism in the program, not deciding how much of it to exploit. Content Source: Bill Dally, SC16 Talk

  13. Our Work in HPC and Deep Learning Legion's data model, which is what distinguishes it from a lot of the other task-based runtimes, lets the runtime map the program onto a system in a way that maximizes use of the memory hierarchy and of the compute resources, and lets it remap from one machine to another quickly. Content Source: Bill Dally, SC16 Talk
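To make the target-independent idea concrete, here is a tiny conceptual sketch in Python. It is emphatically not Legion's API (Legion is a C++ runtime with its own task and region abstractions); it only illustrates the split between describing all of the parallelism and separately deciding how much of it to exploit on a given machine.

```python
# Conceptual illustration only -- NOT the Legion API.
from concurrent.futures import ThreadPoolExecutor
import os

def smooth_region(region):
    """One task: a stand-in for a piece of simulation work on one data region."""
    return [0.5 * (region[i - 1] + region[i + 1]) for i in range(1, len(region) - 1)]

# 1) The program describes ALL of the available parallelism: one task per data
#    region, with the data each task touches stated explicitly.
data = [float(i) for i in range(1000)]
regions = [data[start:start + 102] for start in range(0, 901, 100)]  # overlapping chunks

# 2) A separate "mapper" step decides how much of that parallelism to exploit on
#    this particular machine; changing max_workers re-targets the same program
#    without touching the task or data descriptions above.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    results = list(pool.map(smooth_region, regions))

print(len(results), "regions processed")
```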

  14. CONCLUDING THOUGHTS

  15. Concluding Thoughts It's really exciting to watch this deep learning revolution because it is very synergistic with HPC. They need the same things, and solutions built for HPC map exactly right onto deep learning. The deep learning techniques then get turned around and applied as predictive methods that complement the simulation methods used in science, and also for automatically analyzing data sets. Content Source: Bill Dally, SC16 Talk

  16. Concluding Thoughts There are some gaps left, but I'm confident that if we continue plugging away at some of the research lines we're looking at, we will be able to build an exascale machine at something close to 20 megawatts in 2023, if not sooner. GPUs are viable not just for HPC but also for deep learning and graphics. We have an economic model that works, so we can sustain the engineering effort needed to bring you a new GPU every generation. Content Source: Bill Dally, SC16 Talk
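As a rough back-of-envelope check on that power target, using only the figures quoted above (an exaflop in roughly 20 megawatts):

```python
# Back-of-envelope: the efficiency an exaflop machine in a 20 MW budget implies.
exaflop = 1e18        # floating-point operations per second
power_w = 20e6        # 20 megawatts in watts
print(exaflop / power_w / 1e9, "GFLOPS per watt required")   # -> 50.0
```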

  17. About the Speaker: Bill Dally Bill Dally joined NVIDIA in January 2009 as chief scientist, after spending 12 years at Stanford University, where he was chairman of the computer science department. He has published over 200 papers, holds over 50 issued patents, and is an author of two textbooks. Dally received a bachelor's degree in Electrical Engineering from Virginia Tech, a master's in Electrical Engineering from Stanford University, and a Ph.D. in Computer Science from Caltech. He is a cofounder of Velio Communications and Stream Processors. FOR THE FULL RECORDING: WATCH HERE

  18. LEARN MORE ABOUT THE INTERSECTION OF AI AND HPC INSIDEBIGDATA GUIDE
