
HENP Networks and Grids for Global VOs: from Vision to Reality



Presentation Transcript


  1. HENP Networks and Grids for Global VOs: from Vision to Reality. Harvey B. Newman, California Institute of Technology. GNEW2004, March 15, 2004

  2. ICFA and Global Networks for HENP • National and International Networks, with sufficient (rapidly increasing) capacity and seamless end-to-end capability, are essential for • The daily conduct of collaborative work in both experiment and theory • Detector development & construction on a global scale • Grid systems supporting analysis involving physicists in all world regions • The conception, design and implementation of next generation facilities as “global networks” • “Collaborations on this scale would never have been attempted, if they could not rely on excellent networks”

  3. The Challenges of Next Generation Science in the Information Age • Flagship Applications • High Energy & Nuclear Physics, AstroPhysics Sky Surveys: TByte to PByte “block” transfers at 1-10+ Gbps • eVLBI: Many real time data streams at 1-10 Gbps • BioInformatics, Clinical Imaging: GByte images on demand • HEP Data Example: Petabytes of complex data explored and analyzed by 1000s of globally dispersed scientists, in hundreds of teams • From Petabytes in 2004, ~100 Petabytes by 2008, to ~1 Exabyte by ~2013-15 • Provide results with rapid turnaround, coordinating large but limited computing and data handling resources, over networks of varying capability in different world regions • Advanced integrated applications, such as Data Grids, rely on seamless operation of our LANs and WANs, with reliable, quantifiable high performance

  4. Four LHC Experiments: The Petabyte to Exabyte Challenge • ATLAS, CMS, ALICE, LHCb: Higgs + new particles; Quark-Gluon Plasma; CP Violation • 6000+ Physicists & Engineers; 60+ Countries; 250 Institutions • Tens of PB in 2008; to 1 EB by ~2015 • Hundreds of TFlops, to PetaFlops

  5. LHC Data Grid Hierarchy [tiered diagram] • Emerging Vision: A Richly Structured, Global Dynamic System • CERN/Outside Resource Ratio ~1:2; Tier0 : (Sum of Tier1s) : (Sum of Tier2s) ~1:1:1 • Experiment Online System → CERN Center (Tier 0+1; PBs of Disk, Tape Robot) at ~PByte/sec from the detector, ~100-1500 MBytes/sec into the center • Tier 1 (~10 Gbps links): FNAL, IN2P3, INFN and RAL Centers • Tier 2 (2.5-10 Gbps links): Tier2 Centers • Tier 3 (~2.5-10 Gbps links): Institutes, with physics data caches • Tier 4 (0.1 to 10 Gbps): Workstations • Tens of Petabytes by 2007-8; an Exabyte ~5-7 years later

  6. ICFA Standing Committee on Interregional Connectivity (SCIC) • Created by ICFA in July 1998 in Vancouver • CHARGE: Make recommendations to ICFA concerning the connectivity between the Americas, Asia and Europe • As part of the process of developing these recommendations, the committee should • Monitor traffic • Keep track of technology developments • Periodically review forecasts of future bandwidth needs, and • Provide early warning of potential problems • Representatives: Major labs, ECFA, ACFA, North and South American Physics Community

  7. SCIC in 2003-2004 (http://cern.ch/icfa-scic) • Strong Focus on the Digital Divide Since 2002 • Three 2004 Reports, Presented to ICFA Feb. 13 • Main Report: “Networking for HENP” [H. Newman et al.] • Includes Brief Updates on Monitoring, the Digital Divide and Advanced Technologies [*] • A World Network Overview (with 27 Appendices): Status and Plans for the Next Few Years of National and Regional Networks, and Optical Network Initiatives • Monitoring Working Group Report [L. Cottrell] • Digital Divide in Russia [V. Ilyin] • [*] Also See the 2003 SCIC Reports of the Advanced Technologies and Digital Divide Working Groups

  8. ICFA Report: Networks for HENP, General Conclusions (1) • Bandwidth Usage Continues to Grow by 80-100% Per Year • The current generation of 2.5-10 Gbps backbones and major Int’l links used by HENP arrived in the last 2 Years [US+Europe+Japan+Korea] • Capability Increased by factors of ~4 to several hundred, i.e. much faster than Moore’s Law • This is a direct result of the continued precipitous fall of network prices for 2.5 or 10 Gbps links in these regions • Technological progress may drive BW higher, unit price lower • More wavelengths on a fiber; Cheap, widespread Gbit Ethernet • Grids may accelerate this growth, and the demand for seamless high performance • Some regions are moving to owned or leased dark fiber • The rapid rate of progress is confined mostly to the US, Europe, Japan, Korea, and the major TransAtlantic and Pacific routes • This may worsen the problem of the Digital Divide

  9. History of Bandwidth Usage – One Large Network; One Large Research Site • ESnet Accepted Traffic 1/90 – 1/04: Exponential Growth Since ’92; Annual Rate Increased from 1.7X to 2.0X Per Year in the Last 5 Years • SLAC Traffic ~300 Mbps (at the ESnet Limit); Growth in Steps of ~10X per 4 Years; Projected: ~2 Terabits/s by ~2014
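
A minimal sketch of the compound-growth extrapolation implied by these projections. The ~2 Gbps base value for 2004 is an illustrative assumption, not a figure taken from the slide.

```python
# Compound-growth traffic projection, as implied by the slide.
# The 2004 base value is an illustrative assumption, not taken from the slide.

def project_traffic_gbps(base_gbps, annual_factor, years):
    """Traffic after `years` of growth at a constant annual factor."""
    return base_gbps * annual_factor ** years

# Doubling every year (the ~2.0X rate quoted for ESnet's last 5 years),
# starting from an assumed ~2 Gbps in 2004, reaches ~2 Tbps around 2014.
for year in range(2004, 2015):
    print(f"{year}: ~{project_traffic_gbps(2.0, 2.0, year - 2004):,.0f} Gbps")
```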

  10. Internet Growth in the World At Large • Amsterdam Internet Exchange Point Example: 75-100% Growth Per Year [traffic chart: 5-minute maximum ~20 Gbps, average ~10 Gbps] • Some Growth Spurts, Typically in Summer-Fall • The Rate of HENP Network Usage Growth (~100% Per Year) is Similar to the World at Large

  11. Bandwidth Growth of Int’l HENP Networks (US-CERN Example) • Rate of Progress >> Moore’s Law • 9.6 kbps Analog (1985) • 64-256 kbps Digital (1989-1994) [X 7 – 27] • 1.5 Mbps Shared (1990-3; IBM) [X 160] • 2-4 Mbps (1996-1998) [X 200-400] • 12-20 Mbps (1999-2000) [X 1.2k-2k] • 155-310 Mbps (2001-2) [X 16k – 32k] • 622 Mbps (2002-3) [X 65k] • 2.5 Gbps (2003-4) [X 250k] • 10 Gbps (2005) [X 1M] • A factor of ~1M over the period 1985-2005 (a factor of ~5k during 1995-2005) • HENP has become a leading applications driver, and also a co-developer of global networks (see the sketch below)
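
The growth factors quoted in the list follow directly from the link speeds; a short sketch that reproduces them (upper values of each range transcribed from the list above):

```python
# US-CERN link bandwidth over time (upper values from the list above),
# expressed as growth factors relative to the 9.6 kbps analog link of 1985.
link_history_bps = {
    1985: 9.6e3,     # analog
    1994: 256e3,     # digital
    1998: 4e6,
    2000: 20e6,
    2002: 310e6,
    2003: 622e6,
    2004: 2.5e9,
    2005: 10e9,
}

baseline = link_history_bps[1985]
for year, bps in sorted(link_history_bps.items()):
    print(f"{year}: {bps / 1e6:>10.3f} Mbps  (x{bps / baseline:,.0f} vs 1985)")
# 10 Gbps / 9.6 kbps ~ 1 million: the "factor of ~1M" quoted on the slide.
```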

  12. Pan-European Multi-Gigabit Backbone (33 Countries), February 2004 • Note 10 Gbps Connections to Poland, the Czech Republic and Hungary • Planning Underway for “GEANT2” (GN2), a Multi-Lambda Backbone, to Start in 2005

  13. Core Capacity on Western European NRENs 2001-2003 [log-scale chart, 100 Mbps to 10 Gbps; source: TERENA Compendium, www.terena.nl] • 15 European NRENs have made a step up to 1, 2.5 or 10 Gbps core capacity in the last 3 years

  14. SuperSINET in Japan: Updated Map, Oct. 2003 [legend: SuperSINET 10 Gbps; Int’l circuits ~5-10 Gbps; domestic circuits 30-100 Mbps] • SuperSINET: 10 Gbps IP; Tagged VPNs • Additional 1 GbE Inter-University Waves For HEP • 4 X 2.5 Gb to NY; 10 GbE Peerings to Abilene, ESnet and GEANT

  15. SURFNet5 in the Netherlands Fully Optical 10G IP Network • Fully dual stack: IPv4 + IPv6 • 65% of Customer Base Connects with Gigabit Ethernet

  16. Germany: 2003, 2004, 2005 • GWIN Connects 550 Universities, Labs and Other Institutions [maps: GWIN as of Q4/03; GWIN plan for Q4/04; XWIN for Q4/05, with a Dark Fiber Option]

  17. Abilene - Upgrade Completed!

  18. PROGRESS in SE Europe (Sk, Pl, Cz, Hu, …) • 1660 km of Dark Fiber; CWDM Links up to 112 km, at 1 to 4 Gbps (GbE) • August 2002: First NREN in Europe to establish an Int’l GbE Dark Fiber Link, to Austria; April 2003: to the Czech Republic • Planning a 10 Gbps Backbone, and a dark fiber link to Poland this year

  19. Romania: Inter-City Links Improved to 34-155 Mbps in 2003 (they were only 2 to 6 Mbps in 2002); GEANT-Bucharest Link Improved from 155 to 622 Mbps [map: GEANT connection at Timişoara] • RoEduNet 2004 Plans for an Intercity Dark Fiber Backbone

  20. Australia (AARNet): SXTransport Project in 2004 • Connect Major Australian Universities to a 10 Gbps Backbone • Two 10 Gbps Research Links to the US • AARNet/USLIC Collaboration on Net R&D Starting Soon • Connect Telescopes in Australia and Hawaii (Mauna Kea)

  21. GLORIAD: Global Optical Ring (US-Russia-China) • “Little GLORIAD” (OC3) Launched January 12; to go to OC192 • ITER Distributed Operations; Fusion-HEP Cooperation • Also Important for Intra-Russia Connectivity, Education and Outreach

  22. National Lambda Rail (NLR) [map: fiber routes and 15808 Terminal, Regen or OADM sites in ~25 cities across the US] • Transition beginning now to optical, multi-wavelength, community owned or leased fiber networks for R&E • NLR Coming Up Now; Initially 4 10G Wavelengths • Full Footprint by ~4Q04 • Internet2 HOPI Initiative (w/HEP) • To 40 10G Waves in Future

  23. 18 State Dark Fiber Initiatives in the U.S. (As of 3/04): California (CALREN), Colorado (FRGP/BRAN), Connecticut Educ. Network, Florida Lambda Rail, Indiana (I-LIGHT), Illinois (I-WIRE), Md./DC/No. Virginia (MAX), Michigan, Minnesota, NY + New England (NEREN), N. Carolina (NC LambdaRail), Ohio (Third Frontier Net), Oregon, Rhode Island (OSHEAN), SURA Crossroads (SE U.S.), Texas, Utah, Wisconsin • The Move to Dark Fiber is Spreading • FiberCO

  24. CA*net4 (Canada): Two 10G Waves Vancouver-Halifax • Interconnects Regional Nets at 10G • Acts as Parallel Discipline-Oriented Nets; 650 Sites • Connects to US Nets at Seattle, Chicago and NYC • Third Nat’l Lambda Later in 2004 • User Controlled Light Path Software (UCLP) for “LambdaGrids”

  25. SURFNet6 in the Netherlands • 3000 km of Owned Dark Fiber • 40M Euro Project, Scheduled to Start Mid-2005 • Will Support Hybrid Grids

  26. HEP is Learning How to Use Gbps Networks Fully: Factor of ~500 Gain in Max. Sustained TCP Throughput in 4 Years, On Some US+Transoceanic Routes • 9/01: 105 Mbps in 30 Streams SLAC-IN2P3; 102 Mbps in 1 Stream CIT-CERN • 5/20/02: 450-600 Mbps SLAC-Manchester on OC12 with ~100 Streams • 6/1/02: 290 Mbps Chicago-CERN in One Stream on OC12 • 9/02: 850, 1350, 1900 Mbps Chicago-CERN with 1, 2, 3 GbE Streams on a 2.5G Link • 11/02 [LSR]: 930 Mbps in 1 Stream California-CERN and California-AMS; FAST TCP 9.4 Gbps in 10 Flows California-Chicago • 2/03 [LSR]: 2.38 Gbps in 1 Stream California-Geneva (99% Link Use) • 5/03 [LSR]: 0.94 Gbps IPv6 in 1 Stream Chicago-Geneva • TW & SC2003 [LSR]: 5.65 Gbps (IPv4), 4.0 Gbps (IPv6) GVA-PHX (11 kkm) • 3/04 [LSR]: 6.25 Gbps (IPv4) in 8 Streams LA-CERN

  27. Transatlantic Ultraspeed TCP Transfers: Throughput Achieved X50 in 2 Years • Terabyte Transfers by the Caltech-CERN Team, Across Abilene (Internet2) Chicago-LA, Sharing with Normal Network Traffic • Oct 15: 5.64 Gbps IPv4 Palexpo-L.A. (10.9 kkm); Peaceful Coexistence with a Joint Internet2-Telecom World VRVS Videoconference • Nov 18: 4.00 Gbps IPv6 Geneva-Phoenix (11.5 kkm) • Nov 19: 23+ Gbps TCP: Caltech, SLAC, CERN, LANL, UvA, Manchester • March 2004: 6.25 Gbps in 8 Streams (S2IO Interfaces) • Partners: Juniper, HP, Level(3), Telehouse

  28. HENP Major Links: Bandwidth Roadmap in Gbps • Continuing the Trend: ~1000 Times Bandwidth Growth Per Decade • A New DOE Science Network Roadmap is Compatible with this Trend

  29. HENP Lambda Grids: Fibers for Physics • Problem: Extract “Small” Data Subsets of 1 to 100 Terabytes from 1 to 1000 Petabyte Data Stores • Survivability of the HENP Global Grid System, with hundreds of such transactions per day (circa 2007-8), requires that each transaction be completed in a relatively short time • Example: Take 800 seconds to complete the transaction. Then: 1 TB requires 10 Gbps; 10 TB requires 100 Gbps; 100 TB requires 1000 Gbps (the capacity of a fiber today) • Summary: Providing switching of 10 Gbps wavelengths within ~2-4 years, and Terabit switching within 5-8 years, would enable “10G Lambda Grids with Terabyte transactions”, to fully realize the discovery potential of major HENP programs, as well as other data-intensive research
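
The size-to-throughput table above is simple arithmetic on the 800-second window; a minimal sketch that reproduces it:

```python
# Throughput needed to complete a transaction of a given size in a fixed
# 800-second window, reproducing the slide's table (1 TB -> 10 Gbps, etc.).

def required_gbps(size_terabytes, window_seconds=800.0):
    bits = size_terabytes * 1e12 * 8      # 1 TB = 10^12 bytes
    return bits / window_seconds / 1e9    # Gbps

for size_tb in (1, 10, 100):
    print(f"{size_tb:>4} TB in 800 s  ->  {required_gbps(size_tb):,.0f} Gbps")
```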

  30. ICFA Report: Networks for HENP, General Conclusions (2) • Reliable, high end-to-end performance of networked applications such as large file transfers and Data Grids is required. Achieving this requires: • Removing local, last mile, and national and international bottlenecks end-to-end, whether technical or political in origin. While national and international backbones have reached 2.5 to 10 Gbps speeds in many countries, the bandwidths across borders, the countryside or the city may be much less. This problem is very widespread in our community, with examples stretching from China to South America to the Northeastern U.S. Root causes vary, from lack of local infrastructure to unfavorable pricing policies. • Upgrading campus infrastructures. These are still not designed to support Gbps data transfers at most HEP centers. One reason for the under-utilization of national and international backbones is the lack of bandwidth to groups of end-users inside the campus. • End-to-end monitoring extending to all regions serving our community. A coherent approach to monitoring, allowing physicists throughout our community to extract clear, unambiguous and inclusive information, is a prerequisite.

  31. ICFA Report: Networks for HENPGeneral Conclusions (3) • We must Remove Firewall Bottlenecks [Also at some Major HEP Labs] • Firewall systems are so far behind the needs that they won’t match the data flow of Grid applications. The maximum throughput measured across available products is limited to a few X 100 Mbps ! • It is urgent to address this issue by designing new architectures that eliminate/alleviate the need for conventional firewalls. For example, Point-to-point provisioned high-speed circuits as proposed by emerging Light Path technologies could remove the bottleneck. • With endpoint authentication [as in Grid AAA systems], for example, the point-to-point paths are private, intrusion resistant circuits, so they should be able to bypass firewalls if the endpoint sites trust each other.

  32. HENP Data Grids, and Now Services-Oriented Grids • The original Computational and Data Grid concepts are largely stateless, open systems: known to be scalable • Analogous to the Web • The classical Grid architecture had a number of implicit assumptions • The ability to locate and schedule suitable resources, within a tolerably short time (i.e. resource richness) • Short transactions with relatively simple failure modes • HENP Grids are Data Intensive & Resource-Constrained • 1000s of users competing for resources at dozens of sites • Resource usage governed by local and global policies • Long transactions; some long queues • HENP Stateful, End-to-end Monitored and Tracked Paradigm • Adopted in OGSA [Now WS Resource Framework]

  33. The Move to OGSA and then Managed Integration Systems [evolution diagram: increasing functionality and standardization over time] • Custom solutions • De facto standards: X.509, LDAP, FTP, …; Globus Toolkit; GGF: GridFTP, GSI • Open Grid Services Architecture / Web Services Resource Framework: Web services + …; GGF: OGSI, … (+ OASIS, W3C); multiple implementations, including the Globus Toolkit • App-specific Services / ~Integrated Systems: Stateful; Managed

  34. Managing Global Systems: Dynamic Scalable Services Architecture • MonALISA: http://monalisa.cacr.caltech.edu • “Station Server” service engines at sites host many “Dynamic Services” • Scales to thousands of service instances • Servers autodiscover and interconnect dynamically to form a robust fabric • Autonomous agents
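
A self-contained sketch of the “station server” idea described above: service engines register themselves and discover their peers dynamically. The class and in-memory registry below are illustrative only; they are not the MonALISA API.

```python
# Illustrative sketch of autodiscovering service engines ("station servers").
# The in-memory registry stands in for a lookup/discovery service; none of
# these names are real MonALISA APIs.

class StationServer:
    registry = {}                        # shared lookup service (illustrative)

    def __init__(self, site):
        self.site = site
        self.services = {}
        StationServer.registry[site] = self   # announce ourselves on startup

    def host(self, name, service):
        """Host a dynamic service (any callable) under a given name."""
        self.services[name] = service

    def peers(self):
        """Discover the other station servers currently registered."""
        return [s for s in StationServer.registry if s != self.site]

caltech = StationServer("Caltech")
cern = StationServer("CERN")
caltech.host("load_monitor", lambda: {"cpu_load": 0.42})
print(cern.peers())                          # ['Caltech']
print(caltech.services["load_monitor"]())    # {'cpu_load': 0.42}
```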

  35. Grid Analysis Environment (CLARENS: Web Services Architecture) [diagram: Analysis Clients connect via HTTP, SOAP or XML/RPC to the Grid Services Web Server, which fronts a Scheduler; Metadata, Virtual Data and Replica Catalogs; Fully-Abstract, Partially-Abstract and Fully-Concrete Planners; Applications; Data Management; Monitoring; a Grid Execution Priority Manager; and a Grid-Wide Execution Service] • Analysis Clients talk standard protocols to the “Grid Services Web Server”, a.k.a. the Clarens data/services portal, with a simple Web service API • The secure Clarens portal hides the complexity of the Grid Services from the client • Key features: Global Scheduler, Catalogs, Monitoring, and Grid-wide Execution service; Clarens servers form a Global Peer-to-Peer Network
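
A minimal sketch of how an analysis client might talk to a Clarens-style portal over one of the listed protocols (XML-RPC here). The endpoint URL and method names are placeholders, not the actual Clarens API.

```python
# Hypothetical analysis client talking XML-RPC to a Clarens-style portal.
# The URL and method names below are placeholders, not the real Clarens API.
import xmlrpc.client

portal = xmlrpc.client.ServerProxy("https://clarens.example.org/portal")

try:
    # e.g. query a catalog service, then submit an analysis task via the portal
    datasets = portal.catalog.query("dataset LIKE 'CMS/%'")
    job_id = portal.scheduler.submit({"dataset": datasets[0], "task": "analysis"})
    print("submitted job", job_id)
except (xmlrpc.client.Fault, OSError) as err:
    print("portal unavailable in this sketch:", err)
```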

  36. UltraLight Collaboration: http://ultralight.caltech.edu [map: National Lambda Rail footprint across the US] • Caltech, UF, FIU, UMich, SLAC, FNAL, MIT/Haystack, CERN, UERJ (Rio), NLR, CENIC, UCAID, Translight, UKLight, Netherlight, UvA, UCLondon, KEK, Taiwan • Cisco, Level(3) • Integrated hybrid experimental network, leveraging Transatlantic R&D network partnerships; packet-switched + dynamic optical paths • 10 GbE across the US and the Atlantic: NLR, DataTAG, TransLight, NetherLight, UKLight, etc.; Extensions to Japan, Taiwan, Brazil • End-to-end monitoring; Realtime tracking and optimization; Dynamic bandwidth provisioning • Agent-based services spanning all layers of the system, from the optical cross-connects to the applications.

  37. GLIF: Global Lambda Integrated Facility “GLIF is a World Scale Lambda based Lab for Application & Middleware development, where Grid applications ride on dynamically configured networks based on optical wavelengths ... GLIF will use the Lambda network to support data transport for the most demanding e-Science applications, concurrent with the normal best effort Internet for commodity traffic.” 10 Gbps Wavelengths For R&E Network Development Are Proliferating, Across Continents and Oceans

  38. SCIC Report 2004: The Digital Divide • As the pace of network advances continues to accelerate, the gap between the economically “favored” regions and the rest of the world is in danger of widening. • We must therefore work to Close the Digital Divide • To make Physicists from All World Regions Full Partners in Their Experiments, and in the Process of Discovery • This is essential for the health of our global experimental collaborations, our plans for future projects, and our field.

  39. SCIC Monitoring WG: PingER (Also IEPM-BW) [map of monitoring sites] • Measurements from 33 monitors in 12 countries, to 850 remote hosts in 100 countries; 3700 monitor-remote site pairs • Measurements go back to ‘95 • Reports on link reliability and quality • Aggregation in “affinity groups” • Countries monitored contain 78% of the world population and 99% of Internet users • Affinity Groups (Countries): Anglo America (2), Latin America (14), Europe (24), S.E. Europe (9), Africa (21), Mid East (7), Caucasus (3), Central Asia (8), Russia incl. Belarus & Ukraine (3), S. Asia (7), China (1) and Australasia (2)
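
A minimal PingER-style probe, assuming a Unix-like `ping` that accepts `-c`; the real PingER system aggregates such RTT and loss measurements across its thousands of monitor-remote pairs and affinity groups.

```python
# Minimal PingER-style probe: round-trip time and packet loss to one host.
# Assumes a Unix-like `ping` that accepts -c.
import re
import subprocess

def probe(host, count=10):
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    loss = re.search(r"([\d.]+)% packet loss", out)
    rtt = re.search(r"= [\d.]+/([\d.]+)/", out)      # min/avg/max summary line
    return {"host": host,
            "loss_pct": float(loss.group(1)) if loss else None,
            "avg_rtt_ms": float(rtt.group(1)) if rtt else None}

print(probe("www.cern.ch"))
```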

  40. SCIC Monitoring WG: Throughput Improvements 1995-2004 • TCP Bandwidth < MSS / (RTT * Sqrt(Loss)) (1) • 60% annual improvement: a factor of ~100 per 10 years • Some Regions are ~5-10 Years Behind • SE Europe, Russia and Central Asia May be Catching Up (Slowly); India is Ever-Farther Behind • Progress: but the Digital Divide is Mostly Maintained • (1) Mathis et al., Computer Communication Review 27(3), July 1997
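
The Mathis et al. bound quoted above, as a quick calculation. The 1460-byte MSS, 180 ms RTT and 0.1% loss rate are illustrative values, not figures from the slide.

```python
# Mathis et al. bound: achievable TCP throughput <~ MSS / (RTT * sqrt(loss)).
from math import sqrt

def mathis_bound_mbps(mss_bytes, rtt_seconds, loss_fraction):
    return (mss_bytes * 8) / (rtt_seconds * sqrt(loss_fraction)) / 1e6

# Illustrative values: 1460-byte MSS, 180 ms transatlantic RTT, 0.1% loss
print(f"{mathis_bound_mbps(1460, 0.180, 0.001):.1f} Mbps")   # ~2 Mbps
```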

  42. Inhomogeneous Bandwidth Distribution in Latin America (CAESAR Report, 6/02) • Int’l Links: 0.071 Gbps Used; 4,236 Gbps Capacity to Latin America • Need to Pay Attention to End-point connections

  43. DAI (Digital Access Index): State of the World

  44. DAI: State of the World

  45. Digital Access Index: Top Ten Countries, Plus Pakistan [table of DAI component values: 0.03, 0.54, 0.41, 0.2, 0.01, 0.24]

  46. Work on the Digital Dividefrom Several Perspectives • Work on Policies and/or Pricing: pk, in, br, cn, SE Europe, … • Share Information: Comparative Performance and BW Pricing • Exploit Model Cases: e.g. Poland, Slovakia, Czech Republic • Find Ways to work with vendors, NRENs, and/or Gov’ts • Inter-Regional Projects • GLORIAD, Russia-China-US Optical Ring • South America: CHEPREO (US-Brazil); EU ALICE Project • Virtual SILK Highway Project (DESY): FSU satellite links • Help with Modernizing the Infrastructure • Design, Commissioning, Development • Provide Tools for Effective Use: Monitoring, Collaboration • Participate in Standards Development; Open Tools • Advanced TCP stacks; Grid systems • Workshops and Tutorials/Training Sessions • Example: ICFA Digital Divide and HEPGrid Workshop in Rio • Raise General Awareness of the Problem; Approaches to Solutions • WSIS/RSIS; Trieste Workshop

  47. Dai Davies, SERENATE Workshop, Feb. 2003 (www.serenate.org) [price-comparison chart] • Ratio rises to 114 if Turkey and Malta are included; correlated with the number of competing vendors

  48. Virtual Silk Highway: The SILK Highway Countries in Central Asia & the Caucasus • Hub Earth Station at DESY with access to the European NRENs and the Internet via GEANT, providing international Internet access directly • National Earth Station at each partner site, operated by DESY, providing international access • Individual uplinks, common down-link, using DVB • Currently 4 Mbps Up, 12 Mbps Down, for ~$350k/Year • Note: Satellite links are a boon to the region, but unit costs are very high compared to fiber. There is a continued need for fiber infrastructure.
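
The unit-cost point can be made with rough arithmetic from the figures above; the leased-fiber price used for comparison is an illustrative assumption, not a number from the slide.

```python
# Rough unit-cost arithmetic behind the satellite-vs-fiber remark.
# The fiber price below is an illustrative assumption, not from the slide.
silk_cost_per_year_usd = 350_000        # from the slide
silk_capacity_mbps = 4 + 12             # shared 4 Mbps up + 12 Mbps down

sat_unit = silk_cost_per_year_usd / silk_capacity_mbps / 12
print(f"Satellite: ~${sat_unit:,.0f} per Mbps per month")       # ~$1,800

fiber_unit = 100                        # assumed illustrative fiber lease price
print(f"Fiber (assumed): ~${fiber_unit} per Mbps per month")
```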
