U.S. Department of Energy’s Office of Science Washington Update, July 21, 2004 ESCC Meeting. Mary Anne Scott, Program Manager, scott@er.doe.gov
“The times, they are a-changin” Bob Dylan, 1963
ASCR and MICS Staff • ASCR • Ed Oliver, Associate Director for Advanced Scientific Computing Research • Dan Hitchcock, Senior Technical Advisor • Linda Twenty, Program Analyst • MICS • Michael Strayer, Acting Director, MICS • David Goodwin, NERSC • Fred Johnson, Computer Science, CS ISICs • Gary Johnson, ACRT, SAPP, Applied Math (acting) • Thomas Ndousse-Fetter, Network Research • Mary Anne Scott, Collaboratories, ESnet (acting) • George Seweryniak, HBCU • John van Rosendale, Visualization and Data Management, Math ISICs • Jane Hiegel, Secretary • Beverly Foltz, Secretary (temp) • Phone: 301-903-5800 • Fax: 301-903-7774
ASCR/MICS Mission Discover, develop, and deploy the computational and networking tools that enable researchers in the scientific disciplines to analyze, model, simulate, and predict complex physical, chemical, and biological phenomena important to the Department of Energy (DOE). Research: Foster and support fundamental research in advanced scientific computing – applied mathematics, computer science, and networking Facilities: Operate supercomputers, a high performance network, and related facilities
ASCR in Relationship to Office of Science (organization chart showing OneSC, Phase 1)
ASCR in Relationship to Federal IT Research (coordination chart)
• Oversight: White House, U.S. Congress, President’s Information Technology Advisory Committee (PITAC), OSTP/OMB, National Science & Technology Council (NSTC), Senior Principal’s Group for IT
• National Coordination Office for Computing, Information and Communications (NCO/CIC)
• Interagency Working Group on IT R&D (IWG/ITR&D); participating agencies: AHRQ, DARPA, DOE, EPA, NASA, NIST, NOAA, NSA, NSF, OSD/URI
• Coordinating groups: HECC (High End Computing and Communication), LSN (Large Scale Networking), HCI&IM (Human Computer Interface & Information Management), HCSS (High Confidence Systems & Software), SDP (Software Design & Productivity), SEW (Social, Economic & Workforce Implications of IT and IT Workforce Development)
• Teams: Network Research Team (NRT), Joint Engineering Team (JET), Middleware and Grid Infrastructure Coordination (MaGIC)
• “==” in the chart denotes direct DOE involvement
Planning Workshops • High Performance Network Planning Workshop, August 2002 http://www.doecollaboratory.org/meetings/hpnpw • DOE Workshop on Ultra High-Speed Transport Protocols and Network Provisioning for Large-Scale Science Applications, April 2003 http://www.csm.ornl.gov/ghpn/wk2003 • Science Case for Large Scale Simulation, June 2003 http://www.pnl.gov/scales/ • DOE Science Networking Roadmap Meeting, June 2003 http://www.es.net/hpertext/welcome/pr/Roadmap/index.html • Workshop on the Road Map for the Revitalization of High End Computing, June 2003 http://www.cra.org/Activities/workshops/nitrd http://www.sc.doe.gov/ascr/20040510_hecrtf.pdf • ASCR Strategic Planning Workshop, July 2003 http://www.fp-mcs.anl.gov/ascr-july03spw
Roadmap – Requirements/Business Case • Over 40% of federal support for the physical sciences comes from the Office of Science. • The Office supports over 15,000 PhDs, postdocs, and graduate students. • A similar number of PhDs, postdocs, and graduate students funded by other federal, state, and private agencies and by international institutions are users of and collaborators at DOE facilities. • Most of these users/collaborators, and many of the DOE-funded users, are at universities; many are at international locations. • Effective end-to-end (E2E) networking and middleware that reaches university researchers and international collaborators is critical to the success of the DOE science mission.
Roadmap – Requirements/Business Case • Achieving the DOE science mission over the next five years requires continuing advances in networking and middleware. • THE #1 DRIVER – Petabyte-scale experimental and simulation data systems will grow into exabyte-scale data systems. • Examples: bioinformatics, climate, LHC, etc. • The computational systems that process or produce the data continue to advance with Moore’s Law, further driving network requirements. • Together, the growth in data and computational power is projected to at least continue the historical trend of doubling network requirements every year (see the projection sketch below).
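A minimal Python sketch of the growth trend above: it compounds an assumed 2004 baseline of 10 Gbps (an illustrative figure, not one taken from the roadmap) under the doubling-per-year projection.

```python
# Projection of the "network requirements double every year" trend.
# The 10 Gbps baseline for 2004 is an assumed, illustrative figure.

baseline_gbps = 10  # assumed aggregate requirement in 2004

for years_out in range(6):
    projected = baseline_gbps * 2 ** years_out
    print(f"2004 + {years_out} yr: ~{projected:,} Gbps")

# Doubling annually from 10 Gbps reaches ~320 Gbps within five years,
# well beyond what a single 10 Gbps wavelength can carry.
```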
Roadmap – Requirements/Business Case • The sources of the data, the computational resources, and the scientists who consume the data are often not collocated, for two main reasons: • The experimental, data, and computational facilities are extremely expensive and consequently not replicated; they are also geographically distributed, so sharing them is often the only cost-effective solution. • The scientists themselves are highly distributed, with many located at universities. • Because experimental facilities, data facilities, computational facilities, and scientists are all distributed, networking and middleware are essential to accomplishing the science.
Network Issues – Technology • A single wavelength in the optical fiber transport can currently carry only 10 Gbps, and this limit is not anticipated to change within the next five years; at least 40 Gbps will be required by 2008. • The current transmission protocol, TCP, does not at present efficiently support speeds above a few Gbps per data stream (see the sketch below). • The technologies to concurrently control multiple multi-Gbps data streams do not yet exist. • The technologies to perform effective cybersecurity above several Gbps do not currently exist.
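The TCP limitation cited above is often explained through the bandwidth-delay product: the sender must keep a full window of data in flight, and standard loss recovery grows the window back only about one segment per round trip. The sketch below uses an assumed 10 Gbps path with an 80 ms round-trip time (illustrative values, not measurements) to show why a single Reno-style stream struggles at these rates.

```python
# Bandwidth-delay product (BDP) estimate illustrating why standard TCP has
# trouble sustaining multi-Gbps rates over long-haul science networks.
# The link rate and round-trip time below are illustrative assumptions.

def bdp_bytes(rate_bps: float, rtt_s: float) -> float:
    """Bytes that must be in flight to keep a link of the given rate full."""
    return rate_bps * rtt_s / 8.0

rate = 10e9   # assumed 10 Gbps path
rtt = 0.080   # assumed 80 ms cross-country round-trip time

window = bdp_bytes(rate, rtt)
print(f"Required TCP window: {window / 1e6:.0f} MB")  # ~100 MB

# Reno-style congestion control halves the window on a single loss and then
# grows it by about one maximum segment (~1500 bytes) per RTT, so recovering
# the full window takes roughly (window / 2) / MSS round trips.
mss = 1500
recovery_rtts = (window / 2) / mss
print(f"RTTs to recover from one loss: {recovery_rtts:,.0f} "
      f"(~{recovery_rtts * rtt / 60:.0f} minutes)")
```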
…what now??? VISION: A scalable, secure, integrated network environment for ultra-scale distributed science is being developed to make it possible to combine resources and expertise to address complex questions that no single institution could manage alone. It is creating the means for research teams to integrate unique and expensive DOE research facilities and resources for remote collaboration, experimentation, simulation, and analysis.
• Network Strategy
• Production network: base TCP/IP services; +99.9% reliable
• High-impact network: increments of 10 Gbps; switched lambdas (or other solutions); 99% reliable (see the downtime sketch below)
• Research network: interfaces with the production, high-impact, and other research networks; start with electronic switching and advance toward optical switching; very flexible
• Revisit governance model: SC-wide coordination; Advisory Committee involvement
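As a rough sanity check on the reliability targets above, the short sketch below converts them into allowable downtime, assuming the percentages refer to availability over a full year (that interpretation is an assumption; the slide does not say so explicitly).

```python
# Convert the availability targets quoted above into allowable downtime,
# assuming they are annual availability figures (an interpretation, not
# something stated on the slide).

HOURS_PER_YEAR = 365 * 24

for name, availability in [("Production network", 0.999),
                           ("High-impact network", 0.99)]:
    downtime_h = HOURS_PER_YEAR * (1 - availability)
    print(f"{name}: {availability:.1%} availability "
          f"=> about {downtime_h:.0f} hours of downtime per year")

# 99.9% allows roughly 9 hours/year of downtime; 99% allows roughly 88.
```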
What is ESCC today? • Standing committee • Members appointed by ESnet site organizations • Advisory to ESSC, providing a forum for the consideration of a broad range of technical issues • Forum for information interchange • ESnet-wide activities and plans • Site-specific requirements and plans • Forum for interactions with the ESnet manager and staff • Forum for interactions with SC programs that use or would like to use ESnet facilities Where to next?
Leadership-Class Computing • FY2004 $30M appropriation • for “the Department [of Energy] to acquire additional advanced computing capability to support existing users in the near term and to initiate longer-term research and development on next generation computer architectures.” • May 12, 2004 announcement • ORNL partnered with Cray Inc, IBM Corp, and Silicon Graphics Inc • $25M to begin to build a 50 teraflop science research supercomputer • End station concept proposed • FY2005 President’s budget requests additional $25M to continue
Other Recent HQ activities • Committee of Visitors • Validate the effectiveness of the way scientific research is managed in the DOE Office of Science • March ’04 considered CS, Math, NC • March ’05 will consider facilities, Net Research • Advisory Committee • New members coming • JET Roadmapping Workshop • April 13-15, JLAB (Newport News, VA)