
Grid Computing @ NIKHEF


Presentation Transcript


1. Grid Computing @ NIKHEF
• The (data) problem to solve
• Beyond meta-computing: the Grid
• Realizing the Grid at NIKHEF
• Towards a national infrastructure
David Groep, NIKHEF PDP, 2004.07.14

2. A Glimpse of the Problem in HEP
• Place event info on a 3D map
• Trace trajectories through the hits
• Assign a type to each track
• Find the particles you want
• A needle in a haystack! (a toy illustration follows below)
• And this is a “relatively easy” case
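The selection problem on this slide can be made concrete with a small sketch. Everything below is illustrative, not NIKHEF code: the event structure and predicate are toy stand-ins, and the signal fraction simply mirrors the slide's "a few out of a million are interesting".

```python
# Toy sketch of the "needle in a haystack" selection (illustrative only).
import random

SIGNAL_FRACTION = 1e-6  # assumed, per the slide's "few out of a million"

def generate_event(event_id):
    """Produce a toy event record; real events carry hits and tracks."""
    return {"id": event_id, "is_signal": random.random() < SIGNAL_FRACTION}

def passes_selection(event):
    """Stand-in for the real chain: map hits, trace tracks, assign types,
    then test whether the wanted particles appear."""
    return event["is_signal"]

selected = [e["id"]
            for e in (generate_event(i) for i in range(1_000_000))
            if passes_selection(e)]
print(f"kept {len(selected)} of 1,000,000 events")
```

The real cost is hidden in `passes_selection`: every one of the million events must be fully reconstructed before it can be judged, which is what drives the compute requirements on the next slides.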

3. The HEP reality

4. HEP Data Rates (the ATLAS experiment)
The trigger chain; each level cuts the event rate (a worked check follows below):
• 40 MHz (40 TB/sec): level 1, special hardware
• 75 kHz (75 GB/sec): level 2, embedded processors
• 5 kHz (5 GB/sec): level 3, PCs
• 100 Hz (100 MB/sec): data recording & offline analysis
• Reconstructing & analysing one event takes about 90 s
• Maybe only a few out of a million events are interesting, but we have to check them all!
• The analysis program needs a lot of calibration, determined by inspecting the results of the first pass ⇒ each event will be analysed several times!
• Raw data rate ~5 PByte/yr per experiment; total volume ~20 PByte/yr; per major centre ~2 PByte/yr
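The rates on this slide invite a quick consistency check: dividing each level's data rate by its event rate shows that the trigger reduces the event *rate*, while the implied event size stays roughly constant.

```python
# Consistency check of the trigger chain: data rate / event rate = event
# size at each level, using only the numbers quoted on the slide.
levels = [
    ("level 1",   40e6, 40e12),   # 40 MHz in, 40 TB/s out of the detector
    ("level 2",   75e3, 75e9),    # 75 kHz,    75 GB/s
    ("level 3",    5e3, 5e9),     # 5 kHz,     5 GB/s
    ("recording",  100, 100e6),   # 100 Hz,    100 MB/s to storage
]
for name, rate_hz, bytes_per_s in levels:
    print(f"{name:10s}: {bytes_per_s / rate_hz / 1e6:.1f} MB/event")
# every level prints 1.0 MB/event: the triggers discard events, not bytes
```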

5. Data Handling and Computation
The processing chain (a conceptual sketch follows below):
• detector → event filter (selection & reconstruction) → raw data
• raw data → reconstruction and event reprocessing → event summary data (processed data)
• event summary data → batch physics analysis → analysis objects (extracted by physics topic)
• analysis objects → interactive physics analysis
• event simulation feeds the same chain with generated events
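A minimal sketch of this flow, using assumed toy structures rather than any actual experiment framework, might look as follows; "reprocessing" corresponds to rerunning the filter/reconstruction step over the stored raw data once better calibrations exist.

```python
# Conceptual sketch of the data flow (toy structures, assumed names).

def event_filter(detector_readout):
    """Selection & reconstruction: keep the raw event, summarise it."""
    esd = {"n_tracks": len(detector_readout["hits"]) // 10}  # toy summary
    return detector_readout, esd   # raw data + event summary data (ESD)

def extract_analysis_objects(esd_store, topic):
    """Batch physics analysis: compact per-topic objects from the ESD."""
    return [e for e in esd_store if e["n_tracks"] >= topic["min_tracks"]]

raw_store, esd_store = [], []
for readout in ({"hits": list(range(n))} for n in (42, 7, 99)):
    raw, esd = event_filter(readout)   # simulation would feed this same
    raw_store.append(raw)              # chain with generated events;
    esd_store.append(esd)              # reprocessing reruns event_filter
                                       # over raw_store with new constants
print(extract_analysis_objects(esd_store, {"min_tracks": 5}))
```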

6. HEP Is Not Unique in Generating Data
• LOFAR: 200 MHz, 12 bits, 25k antennas: 60 Tbit/s (checked in the calculation below)
• Envisat GOME: ~5 TByte/year
• Materials analysis (mass spectrometry, etc.): ~2 GByte/10 min
• fMRI, PET/MEG, …
• The LHC data volume necessitates ‘provenance’ and meta-data
• The information/data ratio is even higher in other disciplines
• Both data and information ownership are distributed: access rights for valuable data, plus privacy for medical data
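The LOFAR figure follows directly from the parameters quoted on the slide:

```python
# One-line check of the LOFAR number: sample rate x bits per sample x
# antenna count = raw aggregate data rate.
rate_tbit_s = 200e6 * 12 * 25_000 / 1e12   # 200 MHz, 12 bits, 25k antennas
print(f"{rate_tbit_s:.0f} Tbit/s")         # -> 60 Tbit/s, as on the slide
```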

7. Beyond Meta-computing: the Grid
How can the Grid help? Via resource accessibility and via sharing.
A grid integrates resources that are:
• not owned or administered by one single organisation
• speaking a common, open protocol that is generic
• working as a coordinated, transparent system
And it can be used:
• by many people from multiple organisations
• who work together in one Virtual Organisation

8. Virtual Organisations
A VO is a temporary alliance of stakeholders:
• users
• service providers
• information providers
“A set of individuals or organisations, not under single hierarchical control, temporarily joining forces to solve a particular problem at hand, bringing to the collaboration a subset of their resources, sharing those at their discretion and each under their own conditions.” (A minimal illustration follows below.)
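A minimal illustration of this definition, with hypothetical names rather than any real grid middleware API, can model each provider contributing a chosen subset of resources under its own policy (the CPU counts are borrowed loosely from slide 17):

```python
# Toy model of a VO: no single hierarchical control; each provider shares
# a subset of its resources under its own conditions. Hypothetical names.
class Provider:
    def __init__(self, name, shared_cpus, condition):
        self.name = name
        self.shared_cpus = shared_cpus  # the subset this provider shares
        self.condition = condition      # this provider's own policy

class VirtualOrganisation:
    def __init__(self, name):
        self.name = name
        self.members = set()
        self.providers = []

    def usable_cpus(self, user):
        """Each provider decides independently whether this member may use
        its contribution; there is no central resource owner."""
        if user not in self.members:
            return 0
        return sum(p.shared_cpus for p in self.providers if p.condition(user))

vo = VirtualOrganisation("atlas")
vo.members.add("alice@nikhef.nl")                            # hypothetical user
vo.providers.append(Provider("NIKHEF", 140, lambda u: True))
vo.providers.append(Provider("SARA", 468, lambda u: u.endswith(".nl")))
print(vo.usable_cpus("alice@nikhef.nl"))                     # -> 608
```

The point of the sketch is that authorization is decided per provider, not by a central owner, which is exactly what distinguishes a VO from a single organisation.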

9. The layered Grid architecture, top to bottom (reconstructed from the slide's stack diagram):
• Applications
• Application Toolkits: Condor-G, DUROC, MPICH-G2, VLAM-G
• Grid Services (the common and open protocols): Information, Replica, GridFTP, GRAM, Grid Security Infrastructure (GSI)
• Grid Fabric: farms, supercomputers, desktops, TCP/IP, apparatus, databases

10. Standard Protocols
• New Grid protocols are based on popular Web Services: the Web Services Resource Framework (WSRF)
• The Grid adds the concept of ‘stateful resources’, like grid jobs, data elements, databases, … (a conceptual sketch follows below)
• Adequate and flexible standards are ensured today via the Global Grid Forum
• Future developments will be taken up by industry
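What ‘stateful resources’ adds to plain web services can be sketched conceptually. The classes below are toy stand-ins, not the real WSRF interfaces: the service operations themselves are stateless, but they address named resources, such as a grid job, that carry state between calls.

```python
# Toy stand-in for the 'stateful resource' idea (not real WSRF code).
class GridJobResource:
    """A stateful resource: an identity plus resource properties."""
    def __init__(self, job_id):
        self.job_id = job_id                     # endpoint-reference-like key
        self.properties = {"status": "PENDING"}

class JobService:
    """Stateless operations that act on resources selected by identifier."""
    def __init__(self):
        self._resources = {}

    def create(self):
        job_id = f"job-{len(self._resources):04d}"
        self._resources[job_id] = GridJobResource(job_id)
        return job_id                            # handed back to the client

    def get_property(self, job_id, key):
        return self._resources[job_id].properties[key]

svc = JobService()
ref = svc.create()
print(ref, svc.get_property(ref, "status"))      # job-0000 PENDING
```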

11. Access in a Coordinated Way
Transparently crossing domain boundaries while satisfying the constraints of:
• site autonomy
• authenticity, integrity, confidentiality
With:
• single sign-on to all services
• ways to address services collectively
• APIs at the application level
• every desktop, laptop and disk part of the Grid

12. Realization: Projects at NIKHEF
• Virtual Lab for e-Science (BSIK), 2004–2008
• Enabling Grids for e-Science in Europe (FP6), 2004–2005/2007
• GigaPort NG Network (BSIK), 2004–2008
• NL-Grid Infrastructure (NCF), 2002–…
• EU DataGrid (FP5, finished), 2001–2003

13. Research Threads
• End-to-end operation for data-intensive sciences (DISc):
  • data acquisition: ATLAS level-3 trigger
  • wide-area transport, on-line and near-line storage: LCG Service Challenge
  • data cataloguing and meta-data: D0 SAM
  • common API and application layer for DISc: EGEE applications + VL-E
• Design of scalable and generic Grids:
  • grid software scalability research, security
  • deployment and certification
  • large-scale clusters, storage, networking

14. End to End: the LCG Service Challenge (slide: Alan Silverman, CERN)
• 10 PByte per year exported from CERN (ready in 2006); see the arithmetic below
• Targets for end 2004:
  • SRM-to-SRM (disk) on 10 Gbps links between CERN, NIKHEF/SARA, TRIUMF, FZK, FNAL: 500 Mb/sec sustained for days
  • a reliable data transfer service
• Mass storage system ↔ mass storage system:
  • SRM v1 at all sites
  • disk-disk, disk-tape, tape-tape
• Permanent service in operation:
  • sustained load (mixed user and generated workload)
  • more than 10 sites
  • key target is reliability
  • load-level targets to be set
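Simple arithmetic puts these targets in perspective, using only the numbers on the slide plus an assumed three-day window as an example of "sustained for days":

```python
# Arithmetic behind the service-challenge targets.
SECONDS_PER_YEAR = 365 * 24 * 3600

# 10 PByte/year exported from CERN -> required average rate:
avg_bytes_s = 10e15 / SECONDS_PER_YEAR
print(f"average export rate: {avg_bytes_s / 1e6:.0f} MB/s "
      f"= {avg_bytes_s * 8 / 1e9:.1f} Gbit/s, around the clock")
# -> ~317 MB/s, ~2.5 Gbit/s sustained all year

# Data moved by 500 Mbit/s sustained over an assumed three days:
moved_bytes = 500e6 / 8 * 3 * 24 * 3600
print(f"500 Mbit/s for 3 days: {moved_bytes / 1e12:.1f} TB")
# -> ~16.2 TB per three-day run
```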

15. Networking and Security
• 2 × 10 Gbit/s Amsterdam–Chicago
• 1 × 10 Gbit/s Amsterdam–CERN
• ATLAS 3rd-level trigger (distributed data acquisition)
• protocol tuning and optimization
• monitoring and micro-metering
• LCG Service Challenge: sustained high throughput
• collaboration with Cees de Laat (UvA AIR) and SURFnet
• an ideal laboratory for our security thread (many administrative domains)

16. Building the Grid
• The Grid is not a magic source of power! We need to invest in storage, CPUs and networks.
• LHC needs per major centre (assuming 10 per experiment): ~3 PByte/yr, ~40 Gbit/s WAN, ~15 000 P4-class 2 GHz CPUs (an order-of-magnitude check follows below)
• … and more for a national multi-disciplinary facility
• Collaborative build-up of expertise: NIKHEF, SARA, NCF, UvA, VU, KNMI, ASTRON, AMOLF, ASCI, …
• Resources: NIKHEF resources + NCF's NL-Grid initiative + …
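An order-of-magnitude check of the CPU figure is possible from slide 4's numbers. The 100 Hz recording rate and ~90 s per event come from that slide; the 10^7 s of effective data-taking per year and the three analysis passes are assumptions made here for the estimate.

```python
# Order-of-magnitude check on the ~15 000 CPU figure.
LIVE_SECONDS = 1e7      # assumed effective data-taking time per year
RECORD_RATE = 100       # Hz, from slide 4
SEC_PER_EVENT = 90      # reconstruction + analysis time, from slide 4
PASSES = 3              # "each event will be analysed several times"

cpu_seconds = RECORD_RATE * LIVE_SECONDS * SEC_PER_EVENT * PASSES
cpus_busy = cpu_seconds / (365 * 24 * 3600)
print(f"~{cpus_busy:,.0f} CPUs busy year-round")
# -> ~8,562: the same order as the ~15 000 quoted, which must also cover
# simulation and user analysis on top of the reconstruction passes.
```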

17. Resources Today (the larger ones)
• 1.2 PByte near-line StorageTek
• 36-node IA32 cluster ‘matrix’
• 468 CPU IA64 + 1024 CPU MIPS
• multi-Gbit links to a 100 TByte cache
• 7 TByte cache
• 140 IA32 nodes
• 1 Gbit link to SURFnet
• multiple links with SARA
Only resources with either GridFTP or Grid job management are counted.

18. A Facility for e-Science
• Many (science) applications with large data volumes:
  • life sciences: micro-arrays (Utrecht, SILS Amsterdam)
  • medical imaging: functional MRI (AMC), MEG (VU)
  • ‘omics’ and molecular characterization: sequencing (Erasmus), mass spectrometry (AMOLF), electron microscopy (Delft, Utrecht)
• Today such groups are not yet equipped to deal with their >1 TByte data sets; our DISc experience can help
• Common needs: multi-PByte storage, ubiquitous networks for data exchange, and sufficient compute power, accessible from anywhere

19. Common Needs and Solutions?
• The VL-E Proof of Concept environment for e-Science:
  • grid services address the common needs (storage, computing, indexing)
  • applications can rely on a stable infrastructure
  • valuable experience as input to industry (mainly industrial research)
  • can increasingly leverage emerging industry tools
• The Grid will be a household term like the Web: by pushing on the PByte leading edge, TByte-sized storage will become an e-Science commodity.

20. NIKHEF PDP Team
In no particular order:
• End-to-end applications: Templon, Bos, Grijpink, Klous
• Security: Groep, Steenbakkers, Koeroo, Venekamp
• Facilities: Salomoni, Heubers, Damen, Kuipers, v.d. Akker, Harapan
• Scaling and certification: Groep, Starink
Embedded in both the physics and the computing groups.
