
Grid Team


Presentation Transcript


  1. Grid Team. See David. [Stack diagram: Applications, Middleware, System, Software, Hardware]

  2. LHC Computing at a Glance
  • The investment in LHC computing will be massive: the LHC Review estimated 240 MCHF, plus 80 MCHF/y afterwards
  • These facilities will be distributed, for political as well as sociological and practical reasons
  • Europe: 267 institutes, 4603 users; elsewhere: 208 institutes, 1632 users

  3. Rare Phenomena, Huge Background
  All interactions vs. the Higgs: 9 orders of magnitude!

  4. CPU Requirements
  • Complex events: a large number of signals, and "good" signals are covered with background
  • Many events: 10^9 events/experiment/year, 1-25 MB/event of raw data, several passes required
  • Needed world-wide: 7×10^6 SPECint95 (3×10^8 MIPS)
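The raw-data volume implied by these numbers can be checked with a quick back-of-the-envelope calculation (a sketch only; the per-event sizes are the slide's 1-25 MB range):

```python
# Back-of-the-envelope check of the CPU-requirements slide:
# 10^9 events per experiment per year at 1-25 MB/event of raw data.

EVENTS_PER_YEAR = 10**9
MB_PER_EVENT_MIN, MB_PER_EVENT_MAX = 1, 25

def raw_data_petabytes(events, mb_per_event):
    """Raw data volume in PB (1 PB = 10^9 MB, decimal units)."""
    return events * mb_per_event / 10**9

low = raw_data_petabytes(EVENTS_PER_YEAR, MB_PER_EVENT_MIN)
high = raw_data_petabytes(EVENTS_PER_YEAR, MB_PER_EVENT_MAX)
print(f"Raw data per experiment per year: {low:.0f}-{high:.0f} PB")  # 1-25 PB
```

So each experiment produces of order a petabyte or more of raw data per year, before any of the "several passes" of reprocessing.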

  5. LHC Computing Challenge (ScotGRID++ ~1 TIPS)
  • 1 TIPS = 25,000 SpecInt95; a PC (1999) = ~15 SpecInt95
  • Online system: ~PBytes/sec; one bunch crossing per 25 ns; 100 triggers per second; each event is ~1 MByte
  • Offline farm (~20 TIPS): ~100 MBytes/sec
  • Tier 0: CERN Computer Centre, >20 TIPS, HPSS; ~Gbit/s links or air freight
  • Tier 1: regional centres with HPSS (RAL, US, Italian, French)
  • Tier 2: Tier-2 centres, ~1 TIPS each; ~Gbit/s links
  • Tier 3: institute servers (~0.25 TIPS) with a physics data cache; physicists work on analysis "channels"; Glasgow has ~10 physicists working on one or more channels, and data for these channels is cached by the Glasgow server; 100-1000 Mbit/s
  • Tier 4: workstations
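The data rates on this slide are mutually consistent, which a short calculation makes explicit (a sketch using only the slide's own figures):

```python
# Consistency check for the slide's data rates: 100 triggers per second,
# each event ~1 MB, gives the ~100 MB/s stream into the offline farm.

TRIGGER_RATE_HZ = 100      # events accepted per second
EVENT_SIZE_MB = 1.0        # each event is ~1 MB

stream_mb_per_s = TRIGGER_RATE_HZ * EVENT_SIZE_MB
print(f"Offline stream: ~{stream_mb_per_s:.0f} MB/s")

# Bunch crossings arrive every 25 ns, i.e. 40 million per second,
# so the trigger keeps roughly 1 in 400,000 crossings.
crossings_per_s = 1 / 25e-9
rejection = crossings_per_s / TRIGGER_RATE_HZ
print(f"Crossings/s: {crossings_per_s:.0e}, kept: 1 in {rejection:.0f}")
```

The ~PBytes/sec online figure versus ~100 MBytes/sec offline is this same rejection factor at work.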

  6. Starting Point

  7. CPU-Intensive Applications
  Numerically intensive simulations with minimal input and output data:
  • ATLAS Monte Carlo (gg → H → bb): 182 s per 3.5 MB event on a 1000 MHz Linux box
  Compiler tests:
  • Fortran (g77): 27 MFlops
  • C (gcc): 43 MFlops
  • Java (jdk): 41 MFlops
  Standalone physics applications:
  1. Simulation of neutron/photon/electron interactions for 3D detector design
  2. NLO QCD physics simulation
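The Monte Carlo timing gives a feel for why a distributed farm is needed; this is an illustrative estimate, not a figure from the slides (it assumes full-year utilisation of a single 1000 MHz box):

```python
# Illustrative throughput from the ATLAS Monte Carlo figure above:
# 182 s per event on a 1000 MHz Linux box, against the earlier
# requirement of ~10^9 events per experiment per year.

SECONDS_PER_EVENT = 182
SECONDS_PER_YEAR = 365 * 24 * 3600

events_per_box_year = SECONDS_PER_YEAR / SECONDS_PER_EVENT
boxes_for_1e9_events = 10**9 / events_per_box_year

print(f"~{events_per_box_year:.0f} events per box per year")
print(f"~{boxes_for_1e9_events:.0f} boxes to simulate 10^9 events/year")
```

One box manages well under a million events a year, so simulating an experiment's annual event sample at this rate needs thousands of machines, hence the tiered farms of the previous slides.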

  8. Timeline
  Quarters of 2002-2005, with milestones including:
  • Prototype of Hybrid Event Store (Persistency Framework)
  • Hybrid Event Store available for general users' applications
  • Distributed production using grid services
  • Full Persistency Framework
  • Distributed end-user interactive analysis
  • First Global Grid Service (LCG-1) available
  • LCG-1 reliability and performance targets
  • LHC Global Grid TDR
  • "50% prototype" (LCG-3) available
  • ScotGRID ~300 CPUs + ~50 TBytes

  9. ScotGRID
  ScotGRID processing nodes at Glasgow:
  • 59 IBM xSeries 330, dual 1 GHz Pentium III, 2 GB memory
  • 2 IBM xSeries 340, dual 1 GHz Pentium III, 2 GB memory, dual ethernet
  • 3 IBM xSeries 340, dual 1 GHz Pentium III, 2 GB memory, 100 + 1000 Mbit/s ethernet
  • 1 TB disk
  • LTO/Ultrium tape library
  • Cisco ethernet switches
  ScotGRID storage at Edinburgh:
  • IBM xSeries 370, PIII Xeon, 32 × 512 MB RAM
  • 70 × 73.4 GB IBM FC hot-swap HDD
  Griddev test rig at Glasgow:
  • 4 × 233 MHz Pentium II
  CDF equipment at Glasgow:
  • 8 × 700 MHz Xeon IBM xSeries 370, 4 GB memory, 1 TB disk

  10. EDG TestBed 1 Status
  Web interface showing the status of ~400 servers at TestBed 1 sites; the Grid is being extended to all experiments.

  11. Glasgow within the Grid

  12. GridPP
  A £17m, 3-year project funded by PPARC (http://www.gridpp.ac.uk):
  • CERN - LCG (start-up phase), £5.67m: funding for staff and hardware...
  • EDG - UK contributions (DataGrid), £3.78m: Architecture, TestBed-1, Network Monitoring, Certificates & Security, Storage Element, R-GMA, LCFG, MDS deployment, GridSite, SlashGrid, Spitfire, Optor, GridPP Monitor Page (highlighted items = Glasgow element)
  • Applications (start-up phase), £1.99m: BaBar, CDF+D0 (SAM), ATLAS/LHCb, CMS, (ALICE), UKQCD
  • Operations, £1.88m
  • Tier-1/A, £3.66m

  13. Overview of SAM [diagram]

  14. Spitfire - Security Mechanism
  An HTTP + SSL request with a client certificate is handled in the servlet container (SSLServletSocketFactory, TrustManager with trusted-CA and revoked-certificate repositories, security servlet, connection pool, authorization module, role repository, translator servlet, role-connection mappings in an RDBMS):
  1. Is the certificate signed by a trusted CA? If no, reject.
  2. Has the certificate been revoked? If yes, reject.
  3. Does the user specify a role? If not, find the default role.
  4. Role ok? If so, request a connection ID and map the role to a connection.
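The decision sequence can be sketched as a small function. This is a hypothetical simplification for illustration, not the real Spitfire API; the CA set, revocation list, and role-to-connection table are invented stand-ins for Spitfire's repositories.

```python
# Hypothetical sketch of Spitfire's connection-authorization sequence.
# The data structures below stand in for the trusted-CA store, the
# revoked-certificate repository, and the role/connection mappings.

TRUSTED_CAS = {"CERN CA", "UK e-Science CA"}   # invented example CAs
REVOKED = {"cert-4711"}                        # invented revoked cert
ROLE_TO_CONNECTION = {"reader": "conn-ro", "admin": "conn-rw"}
DEFAULT_ROLE = "reader"

def authorize(cert_id, issuer, requested_role=None):
    """Return a connection ID for the request, or None if rejected."""
    if issuer not in TRUSTED_CAS:          # 1. signed by a trusted CA?
        return None
    if cert_id in REVOKED:                 # 2. has it been revoked?
        return None
    role = requested_role or DEFAULT_ROLE  # 3. fall back to default role
    return ROLE_TO_CONNECTION.get(role)    # 4. map role -> connection ID

print(authorize("cert-1", "CERN CA"))              # default role
print(authorize("cert-4711", "CERN CA", "admin"))  # revoked -> None
```

The point of the design is that authentication (CA trust, revocation) is settled before any role lookup, and the database connection a user gets is determined entirely by the role, never by the user directly.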

  15. Optor - replica optimiser simulation
  • Simulates a prototype Grid
  • Inputs: site policies and experiment data files
  • Replication algorithm: files are always replicated to the local storage; if necessary, the oldest files are deleted
  • Even a basic replication algorithm significantly reduces network traffic and program running times
  • New economics-based algorithms are under investigation
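The basic policy described above (always replicate locally, evict the oldest files when space runs out) can be sketched in a few lines. This is a minimal illustration, not Optor's actual code; file names, sizes, and the capacity are invented.

```python
# Minimal sketch of the replication policy on this slide: always copy a
# requested file to local storage, deleting the oldest local files when
# capacity is exceeded. Names, sizes, and capacity are invented.

from collections import OrderedDict

class LocalStore:
    def __init__(self, capacity_mb):
        self.capacity = capacity_mb
        self.files = OrderedDict()  # insertion order = age, oldest first

    def replicate(self, name, size_mb):
        """Copy a remote file locally, evicting oldest files if needed."""
        if name in self.files:
            return                           # already held locally
        while self.files and sum(self.files.values()) + size_mb > self.capacity:
            self.files.popitem(last=False)   # delete the oldest file
        if size_mb <= self.capacity:
            self.files[name] = size_mb

store = LocalStore(capacity_mb=100)
for f, s in [("a", 40), ("b", 40), ("c", 40)]:
    store.replicate(f, s)
print(list(store.files))  # ['b', 'c'] -- 'a' was evicted to make room
```

Even this naive policy keeps recently used data close to the jobs that read it, which is where the reported reductions in network traffic and running time come from; the economics-based algorithms replace the "delete oldest" rule with a valuation of each replica.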

  16. Prototypes
  Real world... simulated world...
  Tools: Java Analysis Studio over TCP/IP
  • Instantaneous CPU usage
  • Scalable architecture
  • Individual node info

  17. Glasgow Investment in Computing Infrastructure
  • Long tradition; significant departmental investment
  • £100,000 refurbishment (just completed)
  • Long-term commitment (LHC era ~ 15 years)
  • Strong System Management Team in an underpinning role
  • New Grid Data Management Group, fundamental to Grid development
  • ATLAS/CDF/LHCb software
  • Alliances with Glasgow Computing Science, Edinburgh, and IBM

  18. Summary (to be updated..)
  Grids are (already) becoming a reality.
  • Mutual interest: ScotGRID is an example
  • Glasgow emphasis on DataGrid core development and Grid data management (CERN + UK lead)
  • Multidisciplinary approach on a university + regional basis: LHC, CDF, genes, proteins
  • Applications: ATLAS, CDF, LHCb
  • Large distributed databases are a common problem = challenge
  [Images: detector for the LHCb experiment; detector for the ALICE experiment]
