
HPC Top 5 Stories: September 22, 2017

Check out weekly insights into the world of HPC and AI with this week's HPC Top 5 Stories.



Presentation Transcript


  1. HPC TOP 5 STORIES Weekly Insights into the World of High Performance Computing

  2. HPC AND AI HAVE PAVED THE WAY FOR GROUNDBREAKING DISCOVERIES IN SCIENCE, MEDICINE, AND OTHER FIELDS…

  3. PROVING THAT AI IS THE FUTURE OF SUPERCOMPUTING…

  4. TOP 5 HERE ARE THE “TOP FIVE” STORIES HIGHLIGHTING WHAT’S HOT IN HPC AND AI

  5. TOP 5
     1. Introducing Faster GPUs for Google Compute Engine
     2. GPUs Accelerate Population Distribution Mapping Around the Globe
     3. Numba: High-Performance Python with CUDA Acceleration
     4. Object Detection for Visual Search in Bing
     5. The Astonishing Engineering Behind America’s Latest, Greatest Supercomputer

  6. 1 INTRODUCING FASTER GPUS FOR GOOGLE COMPUTE ENGINE Today, we're happy to make some massively parallel announcements for Cloud GPUs. First, Google Cloud Platform (GCP) gets another performance boost with the public launch of NVIDIA P100 GPUs in beta. Second, NVIDIA K80 GPUs are now generally available on Google Compute Engine. Third, we're happy to announce the introduction of sustained use discounts on both the K80 and P100 GPUs. Cloud GPUs can accelerate your workloads including machine learning training and inference, geophysical data processing, simulation, seismic analysis, molecular modeling, genomics and many more high performance compute use cases. GOOGLE BLOG

  7. 2 GPUS ACCELERATE POPULATION DISTRIBUTION MAPPING AROUND THE GLOBE Mapping settlements involves advanced algorithms capable of extracting, representing, modeling and interpreting satellite image features. A decade ago, automated feature extraction algorithms on CPU-based architectures helped speed the identification of settlements. But identifying quick shifts in population—such as migration or changes after a natural disaster—required more computing power. Using GPUs, AI technology and ORNL’s LandScan high-definition global population data, the ORNL team can now quickly process high-resolution satellite imagery to map human settlements and changing urban dynamics. The parallel-processing capability of NVIDIA Tesla GPUs allowed researchers to develop and use the computationally expensive feature descriptor algorithms to process imagery at dramatic speed-ups of up to 200x (an illustrative GPU kernel in this style is sketched after the transcript). ARTICLE

  8. 3 NUMBA: HIGH-PERFORMANCE PYTHON WITH CUDA ACCELERATION With Numba, it is now possible to write standard Python functions and run them on a CUDA-capable GPU. Numba is designed for array-oriented computing tasks, much like the widely used NumPy library. The data parallelism in array-oriented computing tasks is a natural fit for accelerators like GPUs. Numba understands NumPy array types, and uses them to generate efficient compiled code for execution on GPUs or multicore CPUs. The programming effort required can be as simple as adding a function decorator to instruct Numba to compile for the GPU. For example, the @vectorize decorator generates a compiled, vectorized version of a scalar Add function at run time so that it can be used to process arrays of data in parallel on the GPU (a sketch of this pattern appears after the transcript). BLOG

  9. 4 OBJECT DETECTION FOR VISUAL SEARCH IN BING Luckily for us, our partners at Azure were just testing new Azure NVIDIA GPU instances. We measured that the new Azure instances running NVIDIA cards accelerated inference on the detection network by 3x! Additionally, by analyzing traffic patterns we determined that a caching layer could help things even further. Our Microsoft Research friends had just the right tool for the job: a fast, scalable key-value store called ObjectStore. With a cache storing object-detection results in place, we were not only able to further decrease latency but also to save 75% of the GPU cost (a minimal caching sketch appears after the transcript). BLOG

  10. 5 THE ASTONISHING ENGINEERING BEHIND AMERICA’S LATEST, GREATEST SUPERCOMPUTER If you want to do big, serious science, you’ll need a serious machine. You know, like a giant water-cooled computer that’s 200,000 times more powerful than a top-of-the-line laptop and that sucks up enough energy to power 12,000 homes. You’ll need Summit, a supercomputer nearing completion at the Oak Ridge National Laboratory in Tennessee. When it opens for business next year, it'll be the United States’ most powerful supercomputer and perhaps the most powerful in the world. Because as science gets bigger, so too must its machines, requiring ever more awesome engineering, both for the computer itself and the building that has to house it without melting. ARTICLE

  11. HOW CAN HPC IMPACT YOUR BUSINESS? LEARN MORE
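
Slide 7 mentions computationally expensive feature-descriptor algorithms running on Tesla GPUs, but does not show what such a kernel looks like. Purely as an illustration of the data-parallel, one-thread-per-pixel pattern (not ORNL's actual algorithm), here is a simple gradient-magnitude kernel written with Numba's CUDA support; the tile size and image data are invented for the example.

    import math
    import numpy as np
    from numba import cuda

    @cuda.jit
    def gradient_magnitude(img, out):
        # One GPU thread per pixel; simple central-difference gradient.
        i, j = cuda.grid(2)
        if 0 < i < img.shape[0] - 1 and 0 < j < img.shape[1] - 1:
            dy = img[i + 1, j] - img[i - 1, j]
            dx = img[i, j + 1] - img[i, j - 1]
            out[i, j] = math.sqrt(dx * dx + dy * dy)

    # Stand-in for a grayscale satellite-image tile (synthetic data).
    tile = np.random.rand(2048, 2048).astype(np.float32)
    result = np.zeros_like(tile)

    threads_per_block = (16, 16)
    blocks = (math.ceil(tile.shape[0] / 16), math.ceil(tile.shape[1] / 16))
    # NumPy arrays passed to the kernel are copied to and from the GPU by Numba.
    gradient_magnitude[blocks, threads_per_block](tile, result)

Because every pixel is independent, the GPU can process millions of them concurrently, which is the property behind the speed-ups the slide reports.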
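
The Numba code that slide 8 refers to is not included in the transcript. A minimal sketch of the @vectorize pattern it describes, assuming the numba and numpy packages and a CUDA-capable GPU, looks like this:

    import numpy as np
    from numba import vectorize

    # Compile the scalar function into a GPU-parallel ufunc at run time.
    @vectorize(['float32(float32, float32)'], target='cuda')
    def Add(a, b):
        return a + b

    N = 1_000_000
    A = np.ones(N, dtype=np.float32)
    B = np.full(N, 2.0, dtype=np.float32)

    C = Add(A, B)   # element-wise addition runs in parallel on the GPU
    print(C[:5])    # [3. 3. 3. 3. 3.]

Dropping target='cuda' (or switching it to target='parallel') compiles the same function for the CPU instead, without changing the Python source.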
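
The caching layer described in slide 9 is straightforward to sketch. ObjectStore is an internal Microsoft system, so the snippet below stands in a plain Python dictionary for the key-value store; the detector stub and the hashing scheme are illustrative, not Bing's implementation.

    import hashlib

    cache = {}  # stand-in for a fast, scalable key-value store such as ObjectStore

    def image_key(image_bytes: bytes) -> str:
        # Content hash so identical images map to the same cache entry.
        return hashlib.sha256(image_bytes).hexdigest()

    def detect_objects(image_bytes: bytes) -> list:
        # Placeholder for the GPU-backed detection network; returns dummy boxes.
        return [{"label": "example", "box": (0, 0, 10, 10)}]

    def detect_with_cache(image_bytes: bytes) -> list:
        key = image_key(image_bytes)
        if key in cache:                      # cache hit: no GPU work at all
            return cache[key]
        boxes = detect_objects(image_bytes)   # cache miss: pay the GPU cost once
        cache[key] = boxes
        return boxes

Repeated queries for the same image then cost a key lookup instead of a forward pass through the network, which is the mechanism behind the latency and GPU-cost savings the slide describes.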
