
title.open ( ); revolution {execute};


Presentation Transcript


  1. Teamwork. title.open ( ); revolution {execute}; The LHC Computing Challenge. Methodology? Hierarchical Information in a Global Grid Supernet. Aspiration? HIGGS. DataGRID-UK Aspiration? ALL Data Intensive Computation. Tony Doyle - University of Glasgow

  2. Outline: Starting Point; The LHC Computing Challenge; Data Hierarchy; DataGRID Analysis Architectures; GRID Data Management; Industrial Partnership; Regional Centres; Today’s World; Tomorrow’s World; Summary. Tony Doyle - University of Glasgow

  3. Starting Point Tony Doyle - University of Glasgow

  4. Starting Point “Current technology would not be able to scale data to such an extent, which is where the teams at Glasgow and Edinburgh Universities come in. The funding awarded will enable the scientists to prototype a Scottish Computing Centre which could develop the computing technology and infrastructure needed to cope with the high levels of data produced in Geneva, allowing the data to be processed, transported, stored and mined. Once scaled down, the data will be distributed for analysis by thousands of scientists around the world. The project will involve participation from Glasgow University's Physics & Astronomy and Computing Science departments, Edinburgh University's Physics & Astronomy department and the Edinburgh Parallel Computing Centre, and is funded by the Scottish Higher Education Funding Council's (SHEFC) Joint Research Equipment Initiative. It is hoped that the computing technology developed during the project will have wider applications in the future, with possible uses in astronomy, computing science and genomics observation, as well as providing generic technology and software for the next generation Internet.” Tony Doyle - University of Glasgow

  5. The LHC Computing Challenge. Detector for LHCb experiment. Detector for ALICE experiment. Tony Doyle - University of Glasgow

  6. A Physics Event
  • Gated electronics response from a proton-proton collision
  • Raw data: hit addresses, digitally converted charges and times
  • Marked by a unique code: proton bunch crossing number, RF bucket, event number
  • Collected, processed, analyzed, archived...
  • A variety of data objects become associated with the event
  • The event “migrates” through the analysis chain: it may be reprocessed, selected for various analyses, and replicated to various locations
  Tony Doyle - University of Glasgow
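
As an illustration of the event record sketched on this slide, here is a minimal data-structure sketch in Python. The class and field names (EventID, RawHit, RawEvent) are invented for the example and are not the ATLAS event model; they only mirror the items listed above (bunch crossing number, RF bucket, event number, hit addresses, digitised charges and times).

```python
# Illustrative sketch only: the field names are hypothetical, not the ATLAS event model.
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class EventID:
    bunch_crossing: int   # proton bunch crossing number
    rf_bucket: int        # RF bucket
    event_number: int     # unique event number

@dataclass
class RawHit:
    address: int          # hit address in the readout electronics
    adc_charge: int       # digitally converted charge
    tdc_time: int         # digitally converted time

@dataclass
class RawEvent:
    event_id: EventID
    hits: List[RawHit] = field(default_factory=list)

# As the event "migrates" through the analysis chain it may be reprocessed,
# selected for various analyses and replicated to other sites; each derived
# object keeps a reference back to the same EventID.
```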

  7. LHC Computing Model
  • Hierarchical, distributed tiers; the GRID ties distributed resources together
  • Tier-0: CERN
  • Tier-1: RAL
  • Tier-2: ScotGRID
  • Universities
  • Tiers connected by dedicated or QoS network links
  Tony Doyle - University of Glasgow

  8. Data Structure
  • Monte Carlo chain: Physics Models → Monte Carlo Truth Data → Detector Simulation → MC Raw Data → Reconstruction → MC Event Summary Data → MC Event Tags
  • Real-data chain: Trigger System, Data Acquisition, Run Conditions, Level 3 trigger, Calibration Data → Raw Data, Trigger Tags → Reconstruction → Event Summary Data (ESD), Event Tags
  • Coordination required at collaboration and group levels
  Tony Doyle - University of Glasgow

  9. Physics Analysis
  • Tier 0,1 (collaboration wide): Raw Data, ESD (data or Monte Carlo), Event Tags, Calibration Data; event selection produces Analysis Object Data (AOD)
  • Tier 2 (analysis groups): analysis and skims of AOD produce Physics Objects
  • Tier 3,4 (physicists): physics analysis
  • Data flow increases along the chain
  Tony Doyle - University of Glasgow

  10. ATLAS Parameters
  • Running conditions at startup:
  • Raw event size ~2 MB (recently revised upwards...)
  • 2.7×10^9 event sample → 5.4 PB/year, before data processing
  • “Reconstructed” events, Monte Carlo data → ~9 PB/year (2 PB disk)
  • CPU: ~2M SpecInt95
  • CERN alone can handle only 1/3 of these resources
  Tony Doyle - University of Glasgow
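
A quick back-of-envelope check of this slide's raw-data figure, using only the values quoted above (a minimal sketch, not project code):

```python
# Values taken directly from the slide.
raw_event_size_mb = 2.0          # ~2 MB per raw event
events_per_year = 2.7e9          # annual event sample

raw_pb_per_year = raw_event_size_mb * events_per_year / 1e9   # MB -> PB (10^9 MB per PB)
print(f"Raw data volume: {raw_pb_per_year:.1f} PB/year")      # ~5.4 PB/year, before processing
```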

  11. Data Hierarchy: “RAW, ESD, AOD, TAG”
  • RAW: recorded by DAQ; triggered events; detector digitisation; ~2 MB/event
  • ESD (Event Summary Data): reconstructed information; pseudo-physical information (clusters, track candidates (electrons, muons), etc.); ~100 kB/event
  • AOD (Analysis Object Data): physical information (transverse momentum, association of particles, jets, (best) id of particles, physical info for relevant “objects”); selected information; ~10 kB/event
  • TAG: analysis information; relevant information for fast event selection; ~1 kB/event
  Tony Doyle - University of Glasgow
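
Using the per-event sizes quoted above, the reduction factor at each step of the hierarchy can be checked directly (a small illustrative calculation, not part of the original material):

```python
# Per-event sizes from the hierarchy above, in kB.
sizes_kb = {"RAW": 2000, "ESD": 100, "AOD": 10, "TAG": 1}

levels = list(sizes_kb)
for upper, lower in zip(levels, levels[1:]):
    factor = sizes_kb[upper] / sizes_kb[lower]
    print(f"{upper} -> {lower}: x{factor:.0f} reduction")   # RAW->ESD x20, ESD->AOD x10, AOD->TAG x10
```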

  12. Testbed Database
  • Object Model: ATLAS simulated raw events
  • Event containers (PEvent, PEventObjVector) in the event database reference raw-data containers holding detector and digit objects (PSiDetector/PSiDigit, PTRT_Detector/PTRT_Digit, PMDT_Detector/PMDT_Digit, PCaloRegion/PCaloDigit) and truth objects (PTruthVertex, PTruthTrack), distributed across raw-data databases (Raw Data DB1, DB2) and a system database
  Tony Doyle - University of Glasgow

  13. LHC Computing Challenge (1 TIPS = 25,000 SpecInt95; a 1999 PC = ~15 SpecInt95)
  • Online System: ~PBytes/sec from the detector; one bunch crossing per 25 ns; 100 triggers per second; each event is ~1 MB
  • Offline Farm (~20 TIPS): ~100 MB/sec
  • Tier 0: CERN Computer Centre (>20 TIPS), HPSS mass storage; ~Gbit/s links or air freight to Tier 1
  • Tier 1: regional centres with HPSS (RAL, US, Italian, French)
  • Tier 2: Tier-2 centres of ~1 TIPS each (e.g. ScotGRID++ ~1 TIPS); ~Gbit/s links
  • Tier 3: institute servers (~0.25 TIPS) with a physics data cache; physicists work on analysis “channels”; each institute has ~10 physicists working on one or more channels; data for these channels should be cached by the institute server; 100-1000 Mbit/s links
  • Tier 4: workstations
  Tony Doyle - University of Glasgow
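
The trigger-rate and bandwidth figures on this slide can be cross-checked in a few lines, again using only the numbers quoted above:

```python
# Rough consistency check of the rate figures on the slide.
bunch_crossing_period_ns = 25          # one bunch crossing per 25 ns
trigger_rate_hz = 100                  # events accepted per second after the trigger
event_size_mb = 1.0                    # ~1 MB per accepted event

crossings_per_second = 1e9 / bunch_crossing_period_ns          # 40 million crossings/s
offline_rate_mb_s = trigger_rate_hz * event_size_mb            # ~100 MB/s into the offline farm

print(f"{crossings_per_second:.0e} bunch crossings/s reduced to {trigger_rate_hz} stored events/s")
print(f"Data rate into the offline farm: {offline_rate_mb_s:.0f} MB/s")
```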

  14. Database Access Benchmark
  • Many applications require database functionality, e.g. the MySQL database daemon
  • Basic 'crash-me' and associated tests: access times for basic insert, modify, delete, update database operations
  • e.g. on a 256 MB, 800 MHz Red Hat 6.2 Linux box: 350k data insert operations in 149 seconds; 10k query operations in 97 seconds
  • Currently favoured HEP database application, e.g. BaBar, ZEUS software
  Tony Doyle - University of Glasgow
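
A minimal sketch of the kind of insert/query timing loop described above. The slide's benchmark ran against the MySQL daemon; SQLite from the Python standard library is used here only so the sketch runs without a database server, and the table layout is invented:

```python
# Timing-loop sketch: not the original 'crash-me' suite, just the same idea.
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bench (id INTEGER PRIMARY KEY, payload TEXT)")

n_inserts, n_queries = 350_000, 10_000

t0 = time.perf_counter()
conn.executemany("INSERT INTO bench (payload) VALUES (?)",
                 (("row %d" % i,) for i in range(n_inserts)))
conn.commit()
t_insert = time.perf_counter() - t0

t0 = time.perf_counter()
for i in range(n_queries):
    conn.execute("SELECT payload FROM bench WHERE id = ?", (i + 1,)).fetchone()
t_query = time.perf_counter() - t0

print(f"{n_inserts} inserts: {t_insert:.1f} s, {n_queries} queries: {t_query:.1f} s")
```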

  15. CPU Intensive Applications
  • Numerically intensive simulations with minimal input and output data
  • ATLAS Monte Carlo (gg → H → bb): 228 sec / 3.5 MB event on an 800 MHz Linux box
  • Compiler tests (speed in MFlops): Fortran (g77) 27; C (gcc) 43; Java (jdk) 41
  • Standalone physics applications: 1. simulation of neutron/photon/electron interactions for 3D detector design; 2. NLO QCD physics simulation
  Tony Doyle - University of Glasgow
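
The compiler tests above amount to timing a floating-point kernel and dividing the operation count by the elapsed time. A rough sketch of that method is shown below; the original kernels were compiled Fortran/C/Java, so the absolute numbers from this pure-Python loop are not comparable:

```python
# MFlops-style micro-benchmark sketch (method illustration only).
import time

def flops_benchmark(n=2_000_000):
    t0 = time.perf_counter()
    x = 0.0
    for _ in range(n):
        x = x * 1.0000001 + 0.5   # two floating-point operations per iteration
    elapsed = time.perf_counter() - t0
    return (2 * n / elapsed) / 1e6, x   # MFlops, plus x so the loop is not optimised away

mflops, _ = flops_benchmark()
print(f"~{mflops:.0f} MFlops (interpreter overhead dominates in pure Python)")
```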

  16. Network Monitoring
  • Prototype: instantaneous CPU usage and individual node information
  • Tools: Java Analysis Studio over TCP/IP
  • Scalable architecture
  Tony Doyle - University of Glasgow
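
A hedged sketch of the node-information side of such a monitoring prototype: gather an instantaneous CPU-load figure and serialize it for shipping over TCP/IP to a central display. The original used Java Analysis Studio; the message format below is invented, and os.getloadavg is Unix-only:

```python
# Node-report sketch: the keys in the message are illustrative, not a defined protocol.
import json, os, socket, time

def node_report():
    load_1min, _, _ = os.getloadavg()        # instantaneous CPU load (Unix only)
    return {
        "host": socket.gethostname(),
        "timestamp": time.time(),
        "load_1min": load_1min,
    }

print(json.dumps(node_report()))             # a collector would read this over a TCP socket
```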

  17. Analysis Architecture
  • The Gaudi framework: developed by LHCb, adopted by ATLAS (Athena)
  • The Application Manager drives a sequence of Algorithms
  • Algorithms access data through the Event Data Service, Detector Data Service and Histogram Service, each backed by a transient store (event, detector, histogram)
  • Persistency Services and Converters move data between the transient stores and data files
  • Supporting services: Message Service, JobOptions Service, Particle Properties Service, other services
  Tony Doyle - University of Glasgow
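
To make the pattern concrete, here is a much-simplified Python sketch of the Gaudi-style structure described above: algorithms with initialize/execute/finalize hooks, driven by an application manager and reading only from a transient event store. The class and method names are simplifications for illustration, not the real Gaudi/Athena C++ interfaces:

```python
# Simplified framework-pattern sketch; not the Gaudi API.
class Algorithm:
    def initialize(self): ...
    def execute(self, event_store): ...
    def finalize(self): ...

class TransientEventStore(dict):
    """Transient store: persistency services/converters would fill this from data files."""

class ApplicationManager:
    def __init__(self, algorithms):
        self.algorithms = algorithms

    def run(self, events):
        for alg in self.algorithms:
            alg.initialize()
        for event in events:
            store = TransientEventStore(event)   # converters populate the transient store
            for alg in self.algorithms:
                alg.execute(store)               # algorithms see only the transient view
        for alg in self.algorithms:
            alg.finalize()

class PrintHits(Algorithm):
    def execute(self, event_store):
        print("hits in event:", len(event_store.get("hits", [])))

ApplicationManager([PrintHits()]).run([{"hits": [1, 2, 3]}, {"hits": []}])
```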

  18. GRID Services
  • Extensible interfaces and protocols being specified and developed
  • Grid services: resource discovery, scheduling, security, monitoring, data access, policy
  • Athena/Gaudi services: application manager, “Job Options” service, event persistency service, detector persistency, histogram service, user interfaces, visualization, database, event model, object federations
  • Tools: 1. UML; 2. Java
  • Protocols (DataGRID toolkit): 1. XML; 2. MySQL; 3. LDAP
  Tony Doyle - University of Glasgow
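
As a small example of the XML side of these protocols, the standard library is enough to parse a resource description of the kind a grid information service might publish. The element and attribute names below are purely illustrative, not a DataGRID schema:

```python
# Parsing a made-up XML resource description with the standard library.
import xml.etree.ElementTree as ET

resource_xml = """
<resources>
  <node name="grid01.gla.ac.uk" cpu_specint95="800" disk_gb="120"/>
  <node name="grid02.gla.ac.uk" cpu_specint95="800" disk_gb="120"/>
</resources>
"""

for node in ET.fromstring(resource_xml).findall("node"):
    print(node.get("name"), node.get("cpu_specint95"), "SpecInt95,",
          node.get("disk_gb"), "GB disk")
```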

  19. GRID Data Management: Virtual Data Scenario
  • Example analysis scenario: a physicist issues a query from Athena for a Monte Carlo dataset
  • Issues: how expressive is this query? What is the nature of the query? Declarative; creating new queries and language; algorithms are already available in local shared libraries
  • An Athena service consults an ATLAS Virtual Data Catalog and considers the possibilities:
  • TAG file exists on the local machine (e.g. Glasgow): analyze it
  • ESD file exists in a remote store (e.g. Edinburgh): access the relevant event files, then analyze them
  • RAW file no longer exists (e.g. RAL): regenerate, re-reconstruct, re-analyze
  Tony Doyle - University of Glasgow
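
The fall-through logic of this scenario (local TAG, then remote ESD, then regeneration from RAW) can be sketched in a few lines. The catalogue layout, site names and return strings below are hypothetical stand-ins:

```python
# Virtual-data lookup sketch: hypothetical catalogue structure, illustrative only.
def resolve_and_analyze(catalog, dataset):
    if dataset in catalog.get("local_tag", {}):          # TAG file on the local machine (e.g. Glasgow)
        return f"analyzed {dataset} from local TAG"

    if dataset in catalog.get("remote_esd", {}):         # ESD file in a remote store (e.g. Edinburgh)
        site = catalog["remote_esd"][dataset]
        return f"fetched event files from {site}, analyzed {dataset} from ESD"

    # RAW file no longer exists (e.g. at RAL): regenerate, re-reconstruct, re-analyze
    return f"regenerated, re-reconstructed and re-analyzed {dataset}"

catalog = {"remote_esd": {"mc_higgs_sample": "Edinburgh"}}
print(resolve_and_analyze(catalog, "mc_higgs_sample"))
```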

  20. Globus Tony Doyle - University of Glasgow

  21. Globus Data GRID Tool Kit Tony Doyle - University of Glasgow

  22. GRID Data Management
  • Goal: develop middleware infrastructure to manage petabyte-scale data
  • Identify key areas within the software structure; service levels are reasonably well defined
  • High-level services: Query Optimisation, Replica Manager, Access Pattern Management
  • Medium-level services: Data Mover, Data Accessor, Data Locator, Meta Data Manager
  • Core services: Storage Manager over Castor, HPSS and local filesystems, within a secure region
  Tony Doyle - University of Glasgow

  23. Identifying Key Areas (identifiable UK contributions, e.g. RAL)
  • 5 areas for development:
  • Data Accessor: hides specific storage system requirements. Mass Storage Management group.
  • Replication: improves access by wide-area caching. The Globus toolkit offers sockets and a communication library, Nexus.
  • Meta Data Management: data catalogues, monitoring information (e.g. access patterns), grid configuration information, policies. MySQL over the Lightweight Directory Access Protocol (LDAP) being investigated.
  • Security: ensuring consistent levels of security for data and meta data.
  • Query optimisation: “cost” minimisation based on response time and throughput. Monitoring Services group.
  Tony Doyle - University of Glasgow
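
The “cost”-minimisation idea behind the query-optimisation area can be illustrated with a simple replica-selection sketch: estimate response time as latency plus size over throughput and pick the cheapest replica. The replica records and numbers below are invented for the example:

```python
# Cost-based replica selection sketch (illustrative numbers only).
def replica_cost(replica, file_size_mb):
    # estimated seconds: network latency plus transfer time
    return replica["latency_s"] + file_size_mb / replica["throughput_mb_s"]

def choose_replica(replicas, file_size_mb):
    return min(replicas, key=lambda r: replica_cost(r, file_size_mb))

replicas = [
    {"site": "CERN",    "latency_s": 0.50, "throughput_mb_s": 5.0},
    {"site": "RAL",     "latency_s": 0.20, "throughput_mb_s": 2.0},
    {"site": "Glasgow", "latency_s": 0.05, "throughput_mb_s": 1.0},
]
print(choose_replica(replicas, file_size_mb=100)["site"])   # cheapest estimated response time
```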

  24. AstroGrid work packages (emphasis on high-level GUIs etc.):
  WP1 Project Management
  WP2 Requirements Analysis: existing functionality and future requirements; community consultation
  WP3 System Architectures: benchmark and implement
  WP4 Grid-Enable Current Packages: implement and test performance
  WP5 Database Systems: requirements analysis and implementation; scalable federation tools
  WP6 Data Mining Algorithms: requirements analysis, development and implementation
  WP7 Browser Applications: requirements analysis and software development
  WP8 Visualisation: concepts and requirements analysis, software development
  WP9 Information Discovery: concepts and requirements analysis, software development
  WP10 Federation of Key Current Datasets: e.g. SuperCOSMOS, INT-WFS, 2MASS, FIRST, 2dF
  WP11 Federation of Next Generation Optical-IR Datasets: esp. Sloan, WFCAM
  WP12 Federation of High Energy Astrophysics Datasets: esp. Chandra, XMM
  WP13 Federation of Space Plasma and Solar Datasets: esp. SOHO, Cluster, IMAGE
  WP14 Collaborative Development of VISTA, VST and TERAPIX Pipelines
  WP15 Collaboration Programme with International Partners
  WP16 Collaboration Programme with Other Disciplines
  DataGrid work packages, UK contributions (emphasis on low-level services, e.g. replication and fragmentation):
  WP1 Grid Workload Management, A. Martin, QMW (0.5)
  WP2 Grid Data Management, A. Doyle, Glasgow (1.5)
  WP3 Grid Monitoring Services, R. Middleton, RAL (1.8)
  WP4 Fabric Management, A. Sansum, RAL (0.5)
  WP5 Mass Storage Management, J. Gordon, RAL (1.5)
  WP6 Integration Testbed, D. Newbold, Bristol (3.0)
  WP7 Network Services, P. Clarke, PPNCG/UCL (2.0)
  WP8 HEP Applications, N/A (?) (4.0)
  WP9 EO Science Applications (c/o R. Middleton, RAL) (0.0)
  WP10 Biology Applications (c/o P. Jeffreys, RAL) (0.1)
  WP11 Dissemination, P. Jeffreys, RAL (0.1)
  WP12 Project Management, R. Middleton, RAL (0.5)
  Tony Doyle - University of Glasgow

  25. SRIF Expansion + Cloning = expansion of open source ideas (“GRID Culture”). Testbed = learning by example. Tony Doyle - University of Glasgow

  26. Partnership Important
  • Mission to accelerate the exploitation of simulation by industry, commerce and academia
  • 45 staff, £2.5M turnover, externally funded
  • Solve business problems, not sell technology
  Tony Doyle - University of Glasgow

  27. Industrial Partnership
  • Adoption of open industry standards + OO methods
  • Research Council + industry
  • Inspiration: data-intensive computation
  • (Diagram: WAN and LAN service monitoring via ping)
  Tony Doyle - University of Glasgow

  28. Regional Centres
  • Local perspective: consolidate research computing; optimisation of the number of nodes? 4-5? Relative size dependent on funding dynamics; SRIF infrastructure
  • Grid data management, security, monitoring, networking
  • Global perspective: a very basic Grid skeleton; a regional expertise model?
  Tony Doyle - University of Glasgow

  29. Today’s World: partners include the Helsinki Institute of Physics, the Science Research Council, the Istituto Trentino di Cultura and SARA. Tony Doyle - University of Glasgow

  30. Tomorrow’s World: expanded project structure with a coordinator (CO), contractors (CR2-CR6) and assistant contractors (AC7-AC21), including the Istituto Trentino di Cultura, the Helsinki Institute of Physics, the Science Research Council and SARA. Tony Doyle - University of Glasgow

  31. Summary
  • General engagement (£ = OK)
  • Mutual interest (ScotGRID example)
  • Emphasis on DataGrid core development (e.g. Grid Data Management)
  • “CERN” lead + unique UK identity
  • Extension of the open source idea: “Grid Culture” = academia + industry
  • Multidisciplinary approach = university + regional basis
  • Use of existing structures (e.g. EPCC, RAL)
  • Hardware infrastructure via SRIF + industrial sponsorship
  • Now: LHC detectors (LHCb and ALICE experiments), Grid data management, ScotGRID, security, monitoring, networking
  Tony Doyle - University of Glasgow
