
HPC Opportunities in Deep Learning - Greg Diamos

Read how deep learning has impacted HPC, see some published results, and learn about future opportunities with this slideshare.


Presentation Transcript


  1. HPC Opportunities In Deep Learning Greg Diamos, SC16

  2. AN OVERVIEW … 1. Why is Deep Learning Important Now in HPC? 2. Published Results from ImageNet, Google DeepMind, and Baidu’s AI Lab. 3. Getting Started with Deep Learning in HPC. 4. Future Directions and Opportunities for Growth with HPC and Deep Learning. Source: Greg Diamos SC16 Talk

  3. Why is Deep Learning Important Now in HPC? Before, we had no idea how to train neural networks; the prevailing opinion at the time was that they were impossible to train. Now we have powerful tools that we can apply to problem after problem, making progress on problems that are inherently very difficult. Content Source: Greg Diamos SC16 Talk. Image Source: NVIDIA

  4. THE PUBLISHED EVIDENCE SPEAKS FOR ITSELF…

  5. The ImageNet Challenge We first found success with the ImageNet challenge, in which a system is given an image and must produce the corresponding label. The challenge covers a very large dataset of images classified into a thousand different categories. With deep learning algorithms, these systems have approached human-level accuracy on this task. Content Source: Greg Diamos SC16 Talk. Image Source: NVIDIA, Greg Diamos SC16 Talk

  6. THIS PROGRESS ONLY CONTINUED GROWING EXPONENTIALLY…

  7. DeepMind at Google Just last year, a deep neural network defeated one of the best human players in a game of ‘Go.’ This is a game with an absolutely enormous optimization space; there is no way to search over all possible combinations. Content Source: Greg Diamos SC16 Talk. Image Source: Greg Diamos SC16 Talk

  8. Baidu’s AI Lab At our lab, we can approach human-level accuracy on many test sets. For example, when you built a speech recognition system in the past, you would hand-design all of its components. You would not have one neural network; you would have five or six components, each hand-designed by linguists, speech and signal-processing experts, and mathematicians. We cut all of that out. Content Source: Greg Diamos SC16 Talk
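To make the end-to-end idea concrete, here is a minimal sketch, not Baidu’s actual Deep Speech architecture: a single network maps spectrogram frames directly to character probabilities and is trained with CTC loss, with no hand-designed pipeline stages in between. All layer sizes and the character set below are assumptions for illustration.

```python
# Minimal end-to-end speech sketch (illustrative; not Baidu's Deep Speech):
# one network maps spectrogram frames directly to character probabilities,
# trained with CTC loss instead of a hand-designed pipeline.
import torch
import torch.nn as nn

N_MELS, N_CHARS = 80, 29  # assumed: 80 mel bands; 28 characters + CTC blank

class SpeechNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(input_size=N_MELS, hidden_size=256,
                          num_layers=3, bidirectional=True)
        self.fc = nn.Linear(2 * 256, N_CHARS)

    def forward(self, x):                        # x: (time, batch, N_MELS)
        h, _ = self.rnn(x)
        return self.fc(h).log_softmax(dim=-1)    # (time, batch, N_CHARS)

model = SpeechNet()
ctc = nn.CTCLoss(blank=0)

# One illustrative training step on random data.
T, B, U = 200, 4, 30                             # frames, batch, target length
logp = model(torch.randn(T, B, N_MELS))
targets = torch.randint(1, N_CHARS, (B, U))      # labels 1..28; 0 is blank
loss = ctc(logp, targets,
           input_lengths=torch.full((B,), T),
           target_lengths=torch.full((B,), U))
loss.backward()
```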

  9. Baidu’s AI Lab Cont. We can now take a team of five people who don’t speak any Mandarin and produce a speech recognition system that beats all of our existing systems and actually does better than a human grader. But these systems are incredibly computationally intensive to train. Content Source: Greg Diamos SC16 Talk

  10. AND WE’RE NOW GETTING INTO THE RELATIONSHIP BETWEEN DEEP LEARNING AND HIGH PERFORMANCE COMPUTING SYSTEMS.

  11. GETTING STARTED WITH DEEP LEARNING IN HPC What do you need in order to get started solving a new problem that you want to apply deep learning to? There are three simple, but high-level factors: 1. Big Model 2. Big Data 3. Big Computer Source: Greg Diamos SC16 Talk

  12. 1. Big Model First, you need a big model. Your model has to be able to approximate the function that you’re trying to represent. For example, the function that maps images to text is complicated; many parameters are needed to actually represent it. The model must be big in order to capture an extremely intricate function. Content Source: Greg Diamos SC16 Talk
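A quick way to make "big" concrete is to count parameters. The sketch below defines a toy image-feature stack feeding a text-embedding layer and tallies its parameters; all layer shapes are assumptions for illustration, not the models discussed in the talk.

```python
# Illustrative only: counting the parameters of a small image-to-text-style
# model to make "big model" concrete. Layer sizes are assumptions.
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),   # image features
    nn.ReLU(),
    nn.Conv2d(64, 128, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, 512),                          # embedding for a text decoder
)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")  # ~140k here; production models use far more
```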

  13. 2. Big Data Deep learning doesn’t perform very well with small datasets. This is another reason why people might not have thought deep learning was important before: on smaller datasets, it would easily be beaten by simpler, more explicit methods. But as the datasets get larger, deep learning starts to surpass the other methods. Content Source: Greg Diamos SC16 Talk
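A hedged sketch of that scaling argument: train a simple linear model and a small neural network on growing slices of a synthetic dataset and compare held-out accuracy. The exact numbers depend on the data; the point is the trend, where the more expressive model benefits more from added data.

```python
# Sketch of the data-scaling trend: compare a linear model and a small
# neural network as the training set grows. Synthetic data; illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=20000, n_features=40,
                           n_informative=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for n in (100, 1000, 10000):
    linear = LogisticRegression(max_iter=1000).fit(X_tr[:n], y_tr[:n])
    net = MLPClassifier(hidden_layer_sizes=(64, 64),
                        max_iter=500, random_state=0).fit(X_tr[:n], y_tr[:n])
    print(f"n={n:6d}  linear={linear.score(X_te, y_te):.3f}"
          f"  neural net={net.score(X_te, y_te):.3f}")
```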

  14. 3. Big Computer And when you have a big network and big data, you need a powerful supercomputer to run it. If you don’t have a fast enough computer, you can be stuck waiting years or decades for a result. So we come to this need for speed, and this is really the most important point in the talk. Content Source: Greg Diamos SC16 Talk
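A back-of-envelope calculation shows why: divide the total compute a training run needs by the machine's sustained throughput. All numbers below are assumptions for illustration, not figures from the talk.

```python
# Back-of-envelope: training time = total FLOPs / sustained FLOP/s.
# All numbers are assumptions for illustration.
total_flops = 1e19  # assumed total compute for one training run

for name, sustained in [("1 CPU core,  ~10 GFLOP/s", 1e10),
                        ("1 GPU,        ~5 TFLOP/s", 5e12),
                        ("GPU cluster,~500 TFLOP/s", 5e14)]:
    days = total_flops / sustained / 86400
    print(f"{name}: {days:,.1f} days ({days / 365:.2f} years)")
# The slow machine takes ~32 years; the cluster takes hours. Hence the
# "need for speed": throughput decides whether an experiment is feasible.
```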

  15. WHAT ARE THE OPPORTUNITIES FOR GROWTH FOR HPC IN DEEP LEARNING?

  16. Opportunities For Growth First, we need to figure out a way of scaling up models. Currently, the biggest models run at high efficiency on only about 100 processors, which is large from a machine learning perspective but small from an HPC perspective. Second, we are far away from the power limit in CMOS. Right now, we’re around ten teraflops per processor; I think we can get to 20 petaflops before we hit the power limit. You can make progress on speech, vision, and language problems simply by making faster computers. Content Source: Greg Diamos SC16 Talk
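One way to see why scaling stalls around that processor count is a toy analytic model of data-parallel training: per-step compute time shrinks with the number of processors, but the gradient allreduce does not, so efficiency falls off. The constants below are assumptions for illustration.

```python
# Toy data-parallel scaling model: compute shrinks with processor count,
# the gradient allreduce roughly does not. Constants are assumptions.
compute_time_1 = 1.0   # seconds per step on one processor (assumed)
comm_time = 0.01       # allreduce cost per step, roughly flat (assumed)

for p in (1, 10, 100, 1000):
    step = compute_time_1 / p + comm_time
    speedup = compute_time_1 / step
    print(f"{p:5d} processors: speedup {speedup:7.1f}, "
          f"efficiency {speedup / p:5.1%}")
# Under these assumptions, ~100 processors still runs near 50% efficiency,
# while 1000 processors drops below 10%: the communication term dominates.
```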

  17. Future Directions of HPC and Deep Learning The two big directions that we see are speech-powered interfaces and self-driving cars. Speech-powered interfaces really come down to three things: recognition, human-level accuracy, and computer speech generation. Self-driving cars are also highly valuable, as they leverage a lot of vision technology that has already been developed. Both areas are significant directions going forward, but there are definitely even more applications beyond these that are close to becoming possible using deep learning. Content Source: Greg Diamos SC16 Talk

  18. About the Speaker: Greg Diamos Greg Diamos is a senior researcher at Baidu’s Silicon Valley AI Lab (SVAIL). Previously, he was on the research team at NVIDIA. Greg holds a PhD from the Georgia Institute of Technology, where he contributed to the development of the GPU-Ocelot dynamic compiler, which targeted CPUs and GPUs from the same program representation. FOR THE FULL RECORDING: WATCH HERE

  19. LEARN MORE ABOUT THE INTERSECTION OF AI AND HPC INSIDEBIGDATA GUIDE
