
IBM System Blue Gene®


Presentation Transcript


  1. IBM System Blue Gene® Ultrascaling and Power Efficiency for Computational Advantage in Financial Services

  2. IBM Systems Agenda: a strategic design for delivering innovative technology, resources and skills (Technology Innovation that Matters)
  • Collaborative Innovation: better integrate business processes with IT; introduce new applications and systems into existing IT infrastructure more easily
  • Openness: maintain a flexible infrastructure to improve IT systems utilization and productivity
  • Virtualization: provide better access to information; reduce or mitigate business operations risk

  3. IBM System Blue Gene®: optimized for scalability, bandwidth, and massive data handling while consuming a fraction of the power and floor space required by today’s high performance systems
  • Packaging hierarchy (peak performance, memory):
    - Chip: dual PowerPC® system-on-chip, 2 processors, 2.8/5.6 GF/s, 4 MB
    - Compute Card: 2 chips (1x2x1), 5.6/11.2 GF/s, 1.0 GB
    - Node Card: 32 chips (4x4x2), 16 compute cards and 0-2 I/O cards, 90/180 GF/s, 16 GB
    - Rack: 32 node cards (8x8x16), 2.8/5.6 TF/s, 512 GB
    - System: 64 racks (64x32x32), 180/360 TF/s, 32 TB
  • Up to 131,072 PowerPC® CPUs and up to 360 peak TeraFLOPS
  • Networks: 3D torus for point-to-point communications; global collective network for one-to-all communications; Gigabit Ethernet for file I/O, host interface, control and monitoring
  • Highest performance supercomputer (Top500, Nov. 2005); ultra scalable performance; ultra performance per kW of power; ultra floor space density; innovative architecture and system design; familiar programmer/user environments
  • Available on demand
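
A quick way to see how the packaging levels above compose is to multiply the compute-card figures up the hierarchy. The sketch below (Python, with helper names of our own choosing) reproduces the slide’s node-card, rack, and system numbers:

```python
# Minimal check of the Blue Gene packaging arithmetic, starting from the slide's
# compute-card figures (2 chips, 5.6/11.2 GF/s peak, 1.0 GB of memory).
compute_card = {"gflops": (5.6, 11.2), "mem_gb": 1.0}

def scale(level, factor):
    """Scale a (low, high) peak-GF/s pair and memory size by a packaging multiplier."""
    lo, hi = level["gflops"]
    return {"gflops": (lo * factor, hi * factor), "mem_gb": level["mem_gb"] * factor}

node_card = scale(compute_card, 16)   # 16 compute cards (32 chips) per node card
rack      = scale(node_card, 32)      # 32 node cards per rack
system    = scale(rack, 64)           # 64 racks per system

for name, lvl in [("node card", node_card), ("rack", rack), ("system", system)]:
    lo, hi = lvl["gflops"]
    print(f"{name:10s}: {lo:10.1f}/{hi:10.1f} GF/s, {lvl['mem_gb']:8.0f} GB")
# node card :       89.6/     179.2 GF/s,       16 GB   (slide: 90/180 GF/s, 16 GB)
# rack      :     2867.2/    5734.4 GF/s,      512 GB   (slide: 2.8/5.6 TF/s, 512 GB)
# system    :   183500.8/  367001.6 GF/s,    32768 GB   (slide: 180/360 TF/s, 32 TB)
```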

  4. Example: Value at Risk in near real time, using Monte Carlo simulation (including data pre-staging, computation, and data post-staging)
  • Value at Risk is a measure of the maximum potential change in the value of a financial portfolio with a given probability over a given time period
  • Value at Risk calculation: generate a set of simulated market scenarios from the joint distribution of market changes; price the portfolio for all scenarios in this simulated set; estimate the VaR from the empirical profit-loss distribution
  • Technique: Monte Carlo simulation; window tolerance: 4-5 hours
  • Previous SMP implementation: 100,000 scenarios, ~5,700 min. = 95 hours (data + compute)
  • IBM Blue Gene result (128 compute nodes): 100,000 scenarios, ~23 min. (15 min. data, 8 min. compute)
  • [Figure: smoothed expected profit-loss distribution over the time horizon, showing the current portfolio value, the expected portfolio mean, and the VaR quantile]
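
The three calculation steps above map directly onto a few lines of code. The following is an illustrative, self-contained Monte Carlo VaR sketch in Python with a made-up linear portfolio and an assumed normal return model; it is not the presentation’s production implementation:

```python
# Sketch of the slide's three VaR steps: scenario generation, portfolio repricing,
# quantile estimation. Portfolio, covariance, and confidence level are placeholders.
import numpy as np

rng = np.random.default_rng(0)

n_scenarios = 100_000                          # matches the slide's scenario count
positions   = np.array([1_000, -500, 250])     # hypothetical holdings in 3 risk factors
prices_0    = np.array([100.0, 50.0, 20.0])
cov         = np.diag([0.02, 0.03, 0.05]) ** 2 # assumed 1-day return covariance

# 1. Generate simulated market scenarios from the joint distribution of market changes.
returns = rng.multivariate_normal(np.zeros(3), cov, size=n_scenarios)

# 2. Price the portfolio for all scenarios (linear repricing for simplicity).
pnl = (positions * prices_0 * returns).sum(axis=1)

# 3. Estimate VaR from the empirical profit-loss distribution (99% confidence).
var_99 = -np.percentile(pnl, 1)
print(f"1-day 99% VaR: {var_99:,.0f}")
```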

  5. Highly parallelized and efficient I/O-intensive data file broadcasts
  • Two-phase algorithm for collective I/O from GPFS or a database over the Blue Gene torus network
  • Phase I: a subset of "access nodes" performs sequential reads of non-overlapping sections of the required file segment; the number and location of these compute nodes is customized to maximize parallelism and avoid I/O contention
  • Phase II: collective communication distributes the data to the remaining compute nodes (A2A: all-to-all exchange; MBC: multiple broadcasts)
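
A rough sketch of the two-phase idea, assuming mpi4py and a shared filesystem such as GPFS; the chunking scheme, the choice of access nodes, and the pickle-based allgather are simplifications of the optimized collective described above:

```python
# Phase I: a few "access nodes" read disjoint file sections; Phase II: a collective
# spreads those sections to every compute node. File name and ratios are assumptions.
from mpi4py import MPI
import os

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

PATH = "market_data.bin"              # hypothetical file to broadcast
N_ACCESS = max(1, size // 8)          # assumed: 1 access node per 8 compute nodes

# Phase I: only the access nodes touch the filesystem, each reading its own section.
section = None
if rank < N_ACCESS:
    length = os.path.getsize(PATH)
    chunk = (length + N_ACCESS - 1) // N_ACCESS
    offset = rank * chunk
    with open(PATH, "rb") as f:
        f.seek(offset)
        section = f.read(max(0, min(chunk, length - offset)))

# Phase II: collective communication; allgather stands in for the slide's
# all-to-all exchange / multiple-broadcast step.
sections = comm.allgather(section)
data = b"".join(s for s in sections if s)   # every rank now holds the whole file
if rank == 0:
    print(f"{size} ranks, {N_ACCESS} access nodes, {len(data)} bytes broadcast")
```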

  6. Example: Variable Annuities capital reserve calculation, using nested stochastics for 360 time steps
  • Calculate the approximate risk-neutral value of the VA portfolio: Monte Carlo simulation of asset valuations to the horizon, with VA cash flows contingent on account valuation and market returns
  • Each Blue Gene node performs a "simulation service" for the nested stochastics: receive the time-t state; calculate optional flows to the horizon along the simulation; return the result
  • Valuations feed the hedging model for portfolio cash flows across time and scenarios; Value at Risk for cash flows feeds the regulatory capital model
  • [Diagram: Blue Gene nodes BG 1-4 running simulation services at times t, t+1, t+2]
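
As a concrete illustration of one "simulation service" call, the sketch below takes an outer-path state at time t and runs an inner Monte Carlo to the horizon. The geometric Brownian motion dynamics, guarantee payoff, and parameters are assumptions for illustration only, not the presentation’s actual VA model:

```python
# One nested-stochastics "simulation service": receive a time-t account state,
# simulate to the horizon, return the discounted expected guarantee cash flow.
import numpy as np

def simulation_service(state_t, horizon_steps=360, n_inner=1_000,
                       mu=0.0, sigma=0.2, dt=1/12, rate=0.04, guarantee=100.0,
                       rng=np.random.default_rng(0)):
    """Inner Monte Carlo from state_t to the horizon under assumed GBM dynamics."""
    account = np.full(n_inner, float(state_t))
    for _ in range(horizon_steps):
        z = rng.standard_normal(n_inner)
        account *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    payoff = np.maximum(guarantee - account, 0.0)     # shortfall vs. the guarantee
    return np.exp(-rate * horizon_steps * dt) * payoff.mean()

# One outer-path state handed to a node; the hedging model gathers these results.
print(simulation_service(state_t=95.0))
```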

  7. Services implementation
  • Initialize: broadcast VA portfolio data; prime each Blue Gene node with one simulation; broadcast market-state data
  • Calculate Greeks: a single scatter-gather per outer timestep; the client scatters the VA portfolio and market states, each node runs its "simulation service", and the client gathers and stores the VA Greeks by outer timestep
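
The scatter-gather orchestration might look roughly like the following mpi4py sketch; the data layout, the stand-in simulation_service(), and the short timestep loop are placeholders rather than the presentation’s actual client/worker protocol:

```python
# Broadcast shared data once, then one scatter-gather per outer timestep.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Initialize: broadcast the (shared) VA portfolio and market data to every node.
portfolio = comm.bcast({"policies": "..."} if rank == 0 else None, root=0)
market    = comm.bcast({"curves": "..."}   if rank == 0 else None, root=0)

def simulation_service(state):
    return hash(str(state)) % 1000 / 1000.0   # stand-in for the node's valuation

greeks_by_timestep = []
for t in range(3):                            # the slide uses 360 outer timesteps
    # Client scatters one outer-path state to each node ...
    states = [{"t": t, "path": p} for p in range(size)] if rank == 0 else None
    my_state = comm.scatter(states, root=0)
    value = simulation_service(my_state)
    # ... and gathers the per-node valuations, stored by outer timestep.
    values = comm.gather(value, root=0)
    if rank == 0:
        greeks_by_timestep.append(values)

if rank == 0:
    print(len(greeks_by_timestep), "timesteps of gathered results")
```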

  8. Optimized Analytics Infrastructure
  • The client adapter requests a service
  • The Service Manager starts, stops, and queries Service Instances
  • The Service Instance connects back to the client application
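
A minimal sketch of that request flow, with hypothetical class and method names (this is not the Optimized Analytics Infrastructure API):

```python
# Client adapter -> Service Manager -> Service Instance -> back to the client app.
class ServiceInstance:
    def __init__(self, name):
        self.name = name
    def connect(self, client_app):
        return f"{self.name} connected to {client_app}"

class ServiceManager:
    def __init__(self):
        self.instances = {}
    def start(self, name):                       # start a service instance
        self.instances[name] = ServiceInstance(name)
        return self.instances[name]
    def query(self, name):                       # query a running instance
        return self.instances.get(name)
    def stop(self, name):                        # stop and discard an instance
        self.instances.pop(name, None)

class ClientAdapter:
    def __init__(self, manager, client_app):
        self.manager, self.client_app = manager, client_app
    def request_service(self, name):
        instance = self.manager.query(name) or self.manager.start(name)
        return instance.connect(self.client_app)

print(ClientAdapter(ServiceManager(), "risk-dashboard").request_service("var-service"))
```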

  9. IBM System Blue Gene® floor space efficiency advantage vs. a commodity Linux cluster: equivalent TFLOPs in roughly 1/4 the floor space with 1/4 the power consumption
  • Intel® Xeon® 20-rack system: 42 dual-processor 3.4 GHz nodes per rack (~570 GFlops per rack), 11.4 peak TFLOPs, 315 sq. ft., 210 KW peak power consumption
  • Blue Gene 2-rack system: 11.4 peak TFLOPs, 76.5 sq. ft., 56 KW peak power consumption
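
Plugging the quoted figures into a few lines confirms the roughly one-quarter floor space and power claim:

```python
# Ratios computed from the slide's own figures.
xeon = {"tflops": 11.4, "sqft": 315.0, "kw": 210.0}   # 20-rack Intel Xeon cluster
bg   = {"tflops": 11.4, "sqft": 76.5,  "kw": 56.0}    # 2-rack Blue Gene system

print(f"floor space ratio: {bg['sqft'] / xeon['sqft']:.2f}")   # ~0.24  (about 1/4)
print(f"power ratio      : {bg['kw']   / xeon['kw']:.2f}")     # ~0.27  (about 1/4)
print(f"TFLOPs per KW    : Xeon {xeon['tflops'] / xeon['kw']:.3f}, "
      f"Blue Gene {bg['tflops'] / bg['kw']:.3f}")              # 0.054 vs 0.204
```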

  10. IBM System Blue Gene Solution: conclusions
  • Blue Gene represents an innovative way to scale to multi-teraflops capability: leadership performance and price/performance, massive scalability, and efficient packaging that drives low power, cooling and floor space requirements
  • Unique in the market for its balance between massive scale-out capacity and preservation of familiar programming environments
  • Blue Gene is applicable to a wide range of computationally intensive workloads, such as financial instruments pricing, risk analytics, and Monte Carlo simulations
  • Programs are in place to ensure Blue Gene technology is easily accessible, including availability in IBM’s Deep Computing Capacity on Demand center
  • Ongoing Blue Gene R&D ensures the vitality of the offering

  11. IBM Virtualization Engine™ Platform: a service-oriented infrastructure leveraging open standards and partners
  • Standards and open interfaces for virtual access and virtual management of servers, networks and storage
  • IBM portfolio of differentiating virtualization capabilities
  • Unifying framework for building and managing a heterogeneous environment
  • Does not require "rip and replace" hardware and software upgrades
  • Built to be comprehensive, open, and heterogeneous, using common skills

  12. Grid computing: enabling an On Demand IT infrastructure through grid middleware
  • Before grid, a "siloed" architecture: higher capital and operational costs through limited pooling of IT assets across silos; challenging cross-organization collaboration; limited responsiveness due to more manual scheduling and provisioning
  • After grid, a "virtualized" infrastructure: creates a virtual application operating, storage and collaboration environment; virtualizes application services execution; dynamically fulfills requests over a virtual pool of system resources; offers an adaptive, self-managed, high-availability operating environment

  13. MetaCluster: dynamic, application-centric workload management, using IBM Director’s Predictive Failure Analysis (PFA) monitoring
  • Optimization of resource usage: servers can be consolidated dynamically based on usage criteria
  • Optimization of performance: applications can be "scaled up" (moved to a more powerful server)
  • Optimization of service availability: workloads can be moved proactively to a healthy resource upon detection of degradation of system resources
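
The proactive-availability policy in the last bullet can be sketched as a simple control loop: watch per-server health signals and move workloads off a degrading server. The health model and the migrate() hook below are hypothetical, not IBM Director’s PFA or MetaCluster APIs:

```python
# Rebalance workloads away from servers whose predictive-failure alerts or load
# indicate degradation.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    pfa_alerts: int = 0        # predictive-failure alerts raised for this server
    utilization: float = 0.0   # 0.0 .. 1.0

def healthy(server, max_alerts=0, max_util=0.85):
    return server.pfa_alerts <= max_alerts and server.utilization < max_util

def migrate(workload, src, dst):
    print(f"migrating {workload} from {src.name} to {dst.name}")

def rebalance(placement, servers):
    """Move every workload that sits on an unhealthy server to the healthiest one."""
    for workload, server in list(placement.items()):
        if not healthy(server):
            target = min((s for s in servers if healthy(s)),
                         key=lambda s: s.utilization, default=None)
            if target:
                migrate(workload, server, target)
                placement[workload] = target

a = Server("node-a", pfa_alerts=2, utilization=0.6)
b = Server("node-b", utilization=0.3)
rebalance({"pricing-app": a}, [a, b])   # -> migrating pricing-app from node-a to node-b
```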

  14. Why Linux™ is important to customers
  • Linux is an excellent path to On Demand Business
  • Integration: Linux can run alongside the existing operating system, and enables the same application to run on many different platforms
  • Virtualization: Linux supports one-to-many virtualization (software and microcode) and many-to-one virtualization (clusters, grids)
  • Automation: Linux delivers a robust, manageable distributed platform
  • Linux is about choice and flexibility; Linux is secure; Linux is reliable
  • Linux drives business goals: reduce costs, simplify, improve application service levels, and promote innovation
  • [Diagram: IBM platforms and services: xSeries, zSeries/z9, pSeries/p5, BladeCenter, iSeries/i5, OpenPower, IBM Global Services]
  • Source: IDC Directions 2005

  15. IBM Systems Agenda: a strategic design for delivering innovative technology, resources and skills (Technology Innovation that Matters)
  • Collaborative Innovation: Cell, with specific solutions for industries with high performance requirements; Linux, Apache, and Globus, engaging with the open-source community to deliver leading-edge business solutions
  • Openness: OAI, MetaCluster, and Xen, providing robust, reliable, flexible computing environments to respond to changing needs
  • Virtualization: GPFS, parallel filesystems for the fastest I/O; Blue Gene, supercomputer technologies for real business needs
