
IDC HPC User Forum Conference Appro Product Update


Presentation Transcript


  1. IDC HPC User Forum Conference: Appro Product Update. Anthony Kenisky, VP of Sales

  2. Company Introduction
  • Innovative Technology: won the SC Online 2009 Product of the Year Award
  • Price/Performance Leadership: consistently offers the best price/performance solution in the marketplace
  • Brand & Reputation: an impeccable reputation among both customers and competitors
  • Flexibility: in-house engineering capability to tailor solutions to specific customer problems

  3. Innovative Solutions :: Based on GPUs & Flash
  • Hybrid solutions based on the latest CPU & GPU technologies
  • Flash solutions for I/O enhancement or global memory
  • GPU blade & rack-mount solutions

  4. Innovative Design Wins :: Recent Major Design Wins (DASH & Trestles)
  DASH, a winner of the SC09 Storage Challenge, is available on the TeraGrid network today: a 5.7 TF cluster with SSDs and 768 GB of global shared memory space per node.
  Trestles will have 10,368 processor cores, a peak speed of 100 teraflop/s, and 38 terabytes of flash memory. Like DASH, Trestles will be available to users of the TeraGrid, the nation's largest open-access scientific discovery infrastructure.
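As a quick back-of-the-envelope check (not taken from the slide), the quoted Trestles core count and peak speed imply roughly 9.6 GFLOPS per core, assuming the peak is spread evenly across cores:

```python
# Sanity check of the Trestles figures above; the even per-core split
# is an assumption for illustration, not a number from the slide.
cores = 10_368
peak_tflops = 100.0

gflops_per_core = peak_tflops * 1000.0 / cores   # ~9.6 GFLOPS per core
print(f"~{gflops_per_core:.1f} GFLOPS per core at peak")
# ~9.6 GFLOPS/core is what, e.g., a 2.4 GHz core retiring
# 4 flops per cycle would deliver.
```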

  5. Data Intensive Design Wins :: Recent Major Design Wins (Gordon)
  Gordon, a next-generation supercomputer intended to solve "data-intensive" scientific problems in 2011, will feature 245 TF of total compute power, 64 TB of DRAM, 256 TB of flash memory, and four petabytes of disk storage. A key feature of Gordon will be the availability of the supernode: each of the system's supernodes (a group of 32 nodes) has the potential of 7.7 TF of compute power and 10 TB of memory. Gordon will become a key part of a network of next-generation high-performance computing (HPC) systems being made available to users of the TeraGrid.
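The quoted per-supernode and system-wide figures are mutually consistent; a minimal sketch deriving the supernode and node counts (the counts themselves are derived, not stated on the slide):

```python
# Back-of-the-envelope check of the Gordon figures quoted above.
# All inputs come from the slide; the supernode count is derived.
TF_PER_SUPERNODE = 7.7       # TF of compute per supernode
TB_PER_SUPERNODE = 10.0      # TB of memory (DRAM + flash) per supernode
NODES_PER_SUPERNODE = 32

total_tf = 245.0             # total compute power
total_dram_tb = 64.0
total_flash_tb = 256.0

supernodes = round(total_tf / TF_PER_SUPERNODE)   # ~32 supernodes
nodes = supernodes * NODES_PER_SUPERNODE          # ~1024 nodes
print(f"supernodes: {supernodes}, nodes: {nodes}")

# Memory is consistent too: 32 supernodes x 10 TB = 320 TB,
# which matches 64 TB DRAM + 256 TB flash.
print(f"memory: {supernodes * TB_PER_SUPERNODE} TB "
      f"vs {total_dram_tb + total_flash_tb} TB")
```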

  6. Data Intensive Design Wins :: Recent Major Design Wins (LLNL Testbed Cluster)
  Over 40,800,000 IOPS and 320 GB/s of aggregate bandwidth in two racks. First, it is designed to give computational scientists I/O test beds for scalable parallel file systems such as Lustre and Ceph. Second, it will allow the evaluation of large-scale checkpoint/restart mechanisms that don't depend on globally scalable file systems. Third, it will facilitate investigation of cloud-based file systems and analysis tools such as Hadoop and MapReduce.

  7. Appro GPU Computing (Tetra :: GPU Solutions)
  • Supports 1:1, 1:2, and 1:4 CPU:GPU ratio combinations

  8. Appro 1U Tetra Solution (Tetra :: 2 CPU & 4 GPU in One Server)
  Platform specifications:
  • Integrated 2P x86 host server + 4x "Fermi" GPUs (M2050 or M2070)
  • Host-server agnostic: can support either Intel or AMD host boards
  • Supports 6x hot-swappable 2.5" HDDs
  • Supports 1 additional PCIe expansion slot
  • Intelligent power control: GPUs can be powered down independently of the host to save overall system power
  • Integrated IPMI 2.0 remote management
  First 1U server to achieve over 1 TeraFLOP on Linpack.
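For context on the 1 TeraFLOP Linpack claim, a rough sketch of the GPU-side arithmetic (the per-GPU figure is NVIDIA's published double-precision peak for the M2050, not a number from the slide, and the host CPUs also contribute to a real Linpack run):

```python
# Illustrative estimate: four Fermi M2050s at a published double-precision
# peak of ~515 GFLOPS each give ~2.06 TF of GPU peak in one 1U chassis.
gpus = 4
m2050_dp_peak_gf = 515                            # GFLOPS, DP (published spec)

gpu_peak_tf = gpus * m2050_dp_peak_gf / 1000.0    # ~2.06 TF
linpack_tf = 1.0                                  # the >1 TF result quoted above
efficiency = linpack_tf / gpu_peak_tf             # ~49% of GPU peak

print(f"GPU peak: {gpu_peak_tf:.2f} TF, "
      f"implied Linpack efficiency ~{efficiency:.0%} of GPU peak")
```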

  9. Appro GreenBlade System :: CPU/GPU Compute Blades
  • Direct PCIe bus slot-to-slot connection; no need for external PCIe cables
  • A host/GPU pair is a single GPU blade system
  • Each GPU system is hot-swappable & easily serviceable
  • All monitoring sensors and data are integrated between host and GPU
  • Host and GPU module can be upgraded independently
  [Diagram: Compute Host Blade + GPU Expansion Blade = GPU Compute Blade]

  10. Hybrid Computing :: Optimum Performance/Density
  In the same floor space and with the same power/cooling, 4x racks of hybrid servers (x86 host + GPUs) deliver up to 8x more performance than 4x racks of traditional 2P x86 servers.

  11. Do More with Less with Appro
