
Mathematical, Information and Computational Sciences Program – An Introduction. 3rd DOE/NSF Meeting on LHC and Global Computing "Infostructure". www.science.doe.gov/ASCR/. February 7, 2003. Walter M. Polansky, walt.polansky@science.doe.gov.



Presentation Transcript


1. Mathematical, Information and Computational Sciences Program – An Introduction
3rd DOE/NSF Meeting on LHC and Global Computing "Infostructure"
www.science.doe.gov/ASCR/
February 7, 2003
Walter M. Polansky, walt.polansky@science.doe.gov

2. Mathematical, Information and Computational Sciences – Features
• Supports fundamental research in applied mathematics, computer science, and computer networking;
• Integrates the results of fundamental research into software tools that scientists can adapt to meet high-performance computational and simulation needs;
• Establishes partnerships to field-test these software tools with users and to identify needs for future fundamental research;
• Operates High Performance Computing and Network Facilities: NERSC, ESnet, and the Advanced Computing Research Testbeds.

3. Distributed Resources, Distributed Expertise
Institutions supported by SC (map legend: Major User Facilities; DOE Multiprogram Laboratories; DOE Program-Dedicated Laboratories; DOE Specific-Mission Laboratories):
• Ames Laboratory
• Argonne National Laboratory
• Brookhaven National Laboratory
• Fermi National Accelerator Laboratory
• Idaho National Engineering and Environmental Laboratory
• Lawrence Berkeley National Laboratory
• Lawrence Livermore National Laboratory
• Los Alamos National Laboratory
• National Renewable Energy Laboratory
• Oak Ridge National Laboratory
• Pacific Northwest National Laboratory
• Princeton Plasma Physics Laboratory
• Sandia National Laboratories
• Stanford Linear Accelerator Center
• Thomas Jefferson National Accelerator Facility

4. Simulation Capability Needs, FY 2004–05 Timeframe
(Columns: Application | Simulation Need | Sustained Computational Capability Needed (Tflops) | Significance)

Climate Science | Calculate chemical balances in the atmosphere, including clouds, rivers, and vegetation. | > 50 | Provides U.S. policymakers with leadership data to support policy decisions; properly represent and predict extreme weather conditions in a changing climate.

Magnetic Fusion Energy | Optimize the balance between self-heating of the plasma and heat leakage caused by electromagnetic turbulence. | > 50 | Underpins U.S. decisions about future international fusion collaborations; integrated simulations of burning plasma are crucial for quantifying the prospects for commercial fusion.

Combustion Science | Understand interactions between combustion and turbulent fluctuations in the burning fluid. | > 50 | Understand detonation dynamics (e.g., engine knock) in combustion systems; solve the "soot" problem in diesel engines.

Environmental Molecular Science | Reliably predict chemical and physical properties of radioactive substances. | > 100 | Develop innovative technologies to remediate contaminated soils and groundwater.

Astrophysics | Realistically simulate the explosion of a supernova for the first time. | >> 100 | Measure the size, age, and expansion rate of the Universe; gain insight into inertial fusion processes.

5. High-Performance Computing and Networking… needs exceed commercial market capabilities
• Computing capabilities 10 to 100 times greater than those provided by commercial systems designed for business applications.
• Computing systems with more sophisticated architectures and higher-performance components than current commercial systems.
• Mathematical and computer science techniques that enable a scientific application to use thousands of processors simultaneously and to exploit sophisticated architectures effectively (see the sketch after this list).
• Networks and software to move hundreds to thousands of gigabytes of data between targeted science locations.
• Software "glue" to link computer and network components together at performance levels 1,000 to 1,000,000 times higher than commercial solutions.
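To make the "thousands of processors" point concrete, here is a minimal sketch of the kind of data-parallel decomposition such techniques support, written with the mpi4py bindings (an illustrative choice; the slide names no particular library or problem). Each rank sums its own slice of a global index space and a single collective combines the partial results.

```python
# Minimal data-parallel sketch using MPI (illustrative; not from the slides).
# Run with e.g.: mpiexec -n 4 python sum_domain.py  (scales to thousands of ranks)
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's index
size = comm.Get_size()      # total number of processes

N = 10_000_000              # global problem size (hypothetical)
# Each rank owns a contiguous slice of the global index space.
lo = rank * N // size
hi = (rank + 1) * N // size
local = np.arange(lo, hi, dtype=np.float64)

# Local work happens independently on each processor...
local_sum = local.sum()

# ...and one collective reduction combines the partial results.
total = comm.allreduce(local_sum, op=MPI.SUM)

if rank == 0:
    print(f"sum over {size} ranks: {total:.0f}")
```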

6. Research to enable… simulation of complex systems, distributed teams, and remote access to facilities
• Basic Research: Applied Mathematics; Computer Science; Networking; Collaboratory Pilots.
• Integrated Software Infrastructure Centers (SciDAC!): teams of mathematicians, computer scientists, application scientists, and software engineers; grid-enabling research; nanoscience; Scientific Application Pilots; Collaboratory Tools.
• High Performance Computing and Network Facilities for Science: National Energy Research Scientific Computing Center (NERSC); Advanced Computing Research Testbeds; Energy Sciences Network (ESnet).
• Program Execution with BES, BER, FES, HEP, NP: Materials, Chemistry, Combustion, Accelerator, HEP, Nuclear, Fusion, Climate, Astrophysics, Biology.

7. Budgets ($ in millions)
• FY 2003 Request: $163.557
• FY 2004 Request: $170.490

8. Applied Mathematical Sciences (a minimal worked example follows this list)
• Mathematical model: must be well-posed, accurate, computable, and have proper boundary conditions.
• Discretization: requires advanced meshing technology, numerical theory, robustness, and computability.
• Computational solution: requires high-performance computing and accurate, robust, modular, tunable, extensible, flexible, fast numerical algorithms.
• Optimization: requires advanced optimization theory, error estimation, ensemble runs, uncertainty quantification, and parameter estimation.
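As an illustration of the model → discretization → solution chain (a minimal sketch, not from the slides): take the model -u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0, discretize it with second-order finite differences on a uniform mesh, and solve the resulting tridiagonal linear system.

```python
# Minimal model -> discretization -> solution sketch (illustrative).
# Model: -u''(x) = f(x) on [0, 1], u(0) = u(1) = 0 (a well-posed BVP).
import numpy as np

n = 99                          # interior mesh points
h = 1.0 / (n + 1)               # uniform mesh spacing
x = np.linspace(h, 1.0 - h, n)  # interior nodes
f = np.sin(np.pi * x)           # sample right-hand side

# Discretization: second-order central differences give a tridiagonal system.
A = (np.diag(2.0 * np.ones(n)) +
     np.diag(-np.ones(n - 1), 1) +
     np.diag(-np.ones(n - 1), -1)) / h**2

# Computational solution: solve the linear system A u = f.
u = np.linalg.solve(A, f)

# Check against the exact solution u(x) = sin(pi x) / pi^2.
err = np.max(np.abs(u - np.sin(np.pi * x) / np.pi**2))
print(f"max error: {err:.2e}")   # decreases as O(h^2) with mesh refinement
```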

9. Applied Mathematical Sciences
Objective: Advance our understanding of science and technology by supporting research in basic applied mathematics and in computational research that facilitates the use of the latest high-performance computer systems.
Accomplishments:
• Robust, high-performance numerical libraries
• Adaptive Mesh Refinement (AMR)
• Sustained teraflop/s simulations
• Level Set / Fast Marching Methods
• Investment in education: Computational Science Graduate Fellowship
Ongoing Projects (Applied Mathematics Research): linear algebra, fluid dynamics, differential equations, optimization, grid generation, predictability analysis and uncertainty quantification, automated reasoning.
Advanced numerical algorithms and libraries: PETSc, Aztec, TAO, ADIFOR/ADIC, Hypre, CHOMBO, SuperLU, PICO (a usage sketch follows).
Research Opportunities:
• Ultrascalable algorithms (up to millions of PEs)
• Mathematical microscopy
These opportunities will be explored through Genomes to Life (with BER), Computational Nanoscience (with BES), and Fusion Energy (FESAC–ASCAC workshop).
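Several of the libraries named above survive in today's software stacks. As one hedged example, SciPy's sparse direct solver wraps SuperLU, so a sparse factor-and-solve can be sketched as follows (illustrative; the slide itself refers to the native C library):

```python
# Sparse LU factor/solve via SuperLU, as wrapped by SciPy (illustrative).
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import splu

n = 1000
# 1-D Laplacian: the same tridiagonal operator arises from many discretizations.
A = csc_matrix(diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)))
b = np.ones(n)

lu = splu(A)      # SuperLU factorization
x = lu.solve(b)   # fast triangular solves reuse the factorization

print("residual:", np.linalg.norm(A @ x - b))
```

Keeping the factorization object around amortizes the expensive step when the same operator must be solved against many right-hand sides.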

10. Computer Science
Challenge: HPC for science is (still, after fifteen years!) hard to use, inefficient, fragile, and an unimportant vendor market.
Vision: A comprehensive, integrated software environment that enables the effective application of high-performance systems to critical DOE problems.
Goal: Radical improvement in application performance, ease of use, and time to solution.
HPC system elements: scientific applications; software development; system administration; components including resource management, frameworks, problem-solving environments (PSEs), schedulers, compilers, visualization/data, checkpoint/restart (a sketch follows), debuggers, math libraries, file systems, performance tools, and runtime tools; user-space runtime support; OS bypass; OS kernel; node and system hardware architecture.
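Checkpoint/restart is one of the stack elements named above. A minimal application-level sketch (illustrative only; far simpler than the system-level facilities the slide has in mind) periodically saves solver state so a long run can resume after a failure:

```python
# Minimal application-level checkpoint/restart sketch (illustrative only;
# real HPC facilities checkpoint at the system level, not via pickle).
import os
import pickle

CKPT = "state.ckpt"  # hypothetical checkpoint file name

def load_state():
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as fh:
            return pickle.load(fh)       # resume where we left off
    return {"step": 0, "value": 0.0}     # fresh start

def save_state(state):
    tmp = CKPT + ".tmp"
    with open(tmp, "wb") as fh:
        pickle.dump(state, fh)
    os.replace(tmp, CKPT)                # atomic rename: no torn checkpoints

state = load_state()
for step in range(state["step"], 1000):
    state["value"] += step * 1e-3        # stand-in for one solver iteration
    state["step"] = step + 1
    if state["step"] % 100 == 0:         # checkpoint every 100 steps
        save_state(state)
print("final value:", state["value"])
```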

11. High-Performance Network Research
Goals and Objectives:
• Develop secure and scalable high-performance networks to support wide-area distributed high-end applications.
• Accelerate the adoption of emerging network technologies into production networks through testing and advanced deployment.
• Provide leadership in the research and development of advanced network services that have a direct impact on the DOE science mission.
Recent Accomplishments:
• High-performance TCP for high-speed (Gbit/s) data transfer, widely adopted in the Internet community.
• Scalable network performance monitoring toolkits for end-to-end performance prediction and network diagnosis (Net100, NetLogger, pathchar, NCDS, etc.) — a toy measurement sketch follows this list.
• HIPPI 64: high-speed interconnects for linking supercomputers and high-speed storage systems.
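In the spirit of the end-to-end measurement tools listed above (a toy sketch only, unrelated to how Net100 or NetLogger are actually implemented), a throughput probe can be as simple as timing a bulk TCP transfer between two hosts:

```python
# Toy end-to-end TCP throughput probe (illustrative).
# On host A: python probe.py server     On host B: python probe.py <hostA>
import socket
import sys
import time

CHUNK = 64 * 1024           # send buffer size
TOTAL = 100 * 1024 * 1024   # bytes to transfer

def serve(port=5001):
    with socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass        # sink the bytes

def probe(host, port=5001):
    buf = b"\0" * CHUNK
    sent = 0
    t0 = time.monotonic()
    with socket.create_connection((host, port)) as s:
        while sent < TOTAL:
            s.sendall(buf)
            sent += CHUNK
    dt = time.monotonic() - t0
    print(f"{sent * 8 / dt / 1e6:.1f} Mbit/s over {dt:.2f} s")

if __name__ == "__main__":
    serve() if sys.argv[1:] == ["server"] else probe(sys.argv[1])
```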

12. National Collaboratories
• The nature of how large-scale science is done is changing:
  • Distributed data, computing, people, and instruments
  • Instruments integrated with large-scale computing
  • Human resources are seldom collocated with the resources needed for their science
• Additional drivers:
  • Large and international collaborations
  • Management of unique national user facilities
  • Large multi-laboratory science and engineering projects

13. Scientific Discovery through Advanced Computing (SciDAC): An Integrated Program Throughout the Office of Science
[Diagram: scientific simulation codes supported by a layered software infrastructure — mathematics; computing systems software (operating systems, data grids); data analysis and visualization; programming environments; scientific data management; problem-solving environments; and collaboratories.]
• MICS: $45.4M
• BES, BER, FES, HENP: $20.7M

14. Genomes to Life: Computational & Systems Biology – A Partnership with Biological and Environmental Research
• Clean Energy: Increased biology-based energy sources; a major new bioenergy industry.
• Reduced Carbon Dioxide in the Atmosphere: Advance understanding of the Earth's carbon cycle; identify mechanisms to enhance carbon capture; stabilize atmospheric carbon dioxide to counter global warming.
• Cleanup of the Environment: Develop cost-effective ways for environmental cleanup; expected savings of billions in waste cleanup/disposal costs.

15. Computational Nanoscience: A Partnership with Basic Energy Sciences
• May 2002: "Theory and Modeling in Nanoscience" workshop convened by the BES and ASCR Advisory Committees.
• FY 2004: the ASCR–BES partnership will focus on providing the computational tools needed for nanoscale science – $3M (SciDAC activity).

16. UltraScale Simulation: Planning Activity
• Deliver leadership-class computers for science.
• Extend the SciDAC model to couple application scientists, mathematicians, and computational and computer scientists with computer architects, engineers, and semiconductor researchers.
• Structure partnerships with domestic computer vendors to ensure that leadership-class computers are produced with science needs as an explicit design criterion.
• Build the science case, e.g., http://www.ultrasim.info

17. National Energy Research Scientific Computing Center (NERSC)
NERSC provides capability resources, expert consulting, and professional, user-friendly services to computational scientists working on projects within the missions of the Department of Energy.
• Began in 1974 at LLNL as a computing resource for magnetic fusion researchers
• Transferred to LBNL in 1996; moved to Oakland in 2000
• Provides an open computing environment for nearly 2,400 users
• A nominal 5.0-teraflop/s MPP facility
• Allocates compute resources competitively

18. Energy Sciences Network (ESnet)
Advanced network capabilities and services to enable seamless collaborations for DOE and its researchers:
• Nationwide high-performance research network
• Advanced network services to support science in DOE
• Extensive structure of domestic and international interconnects
• Advanced technology research
• Coordination with other Federal agencies and Internet2
