
LOFAR project


Presentation Transcript


  1. LOFAR project
     Astroparticle Physics workshop, 26 April 2004

  2. LOFAR concept
     • Combine advances in enabling IT:
       • inexpensive environmental sensors: 10,000s of sensors
       • wide-area optical broadband networks: custom + GigaPort/Géant
       • high-performance computing: IBM BlueGene/L
     • to make a ‘shared aperture multi-telescope’, but also:
     • to sense and interpret the environment in innovative ways (system spec driver)

  3. LOFAR Sensors

     Sensor type     Applications
     HF antenna      astrophysics; astro-particle physics
     VHF antenna     cosmology, early Universe; solar effects on Earth, space weather
     Geophones       ground subsidence; gas/oil extraction
     Weather         micro-climate prediction; precision agriculture; wind energy
     Water           precision agriculture; habitat management; public safety
     Infra-sound     atmospheric turbulence; meteors, explosions, sonic booms

  4. LOFAR Phase 1
     (diagram: sensor field → fibre data transport → central processor)
     • Radio telescope
     • Seismic imager
     • Precision weather for agriculture, wind energy
     Integrate the LOFAR network into the regional fibre network, sharing costs with schools, health centres etc.

  5. Radio Telescope Specifications
     • Frequency range: 20 – 80 MHz, 120 – 240 MHz
     • Angular resolution: few – 10 arcsec
     • Sensitivity: 100x previous instruments at these frequencies
     • Shared aperture multi-telescope:
       • up to 8 independent telescopes
       • plus geophone, weather etc. arrays
       • operated from remote Science Operations Centers (similar to LHC ‘tier-1’ centers)
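     As a rough consistency check on the quoted resolution, the diffraction rule θ ≈ λ/D can be evaluated across the two bands. The short Python sketch below is illustrative only: the 150 km and 400 km baselines are taken from slides 11 and 15, and the achieved resolution depends on the actual array configuration and imaging weights.

         # Rough diffraction-limited resolution for the quoted LOFAR bands.
         # theta ~ lambda / D is an order-of-magnitude estimate only.
         C = 299_792_458.0          # speed of light, m/s
         RAD_TO_ARCSEC = 206_265.0  # arcseconds per radian

         def resolution_arcsec(freq_hz: float, baseline_m: float) -> float:
             """Approximate angular resolution theta ~ lambda / D, in arcseconds."""
             wavelength_m = C / freq_hz
             return wavelength_m / baseline_m * RAD_TO_ARCSEC

         for freq_mhz in (20, 80, 120, 240):
             for baseline_km in (150, 400):     # Bsik core vs. desired extension
                 theta = resolution_arcsec(freq_mhz * 1e6, baseline_km * 1e3)
                 print(f"{freq_mhz:3d} MHz, {baseline_km:3d} km: {theta:5.1f} arcsec")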

  6. One day in the life of LOFAR, the radio telescope
     (figure: one day of observations; axis label: ‘Telescope nr.’)

  7. Challenges
     (data-flow diagram: input rate > 300 Gbps; within correlator: 20 Tbps; transpose ~300 Gbps; store: 25 Gbps; products ~1 Gbps; storage > 500 TB; processing stages of 2, 3 and 15 T-ops and 5 T-flops)
     • Data rate
       • ~15 Tbit/s total data generated (increasing later)
       • ~330 Gbit/s input data rate to central processor
       • ~1 Gbit/s to distributed Science Operations Centres
     • Computational resources
       • ~34 TFLOP/s in custom co-processor (IBM BG/L)
       • ~500 TByte on-line temporary storage
     • Calibration
       • adaptive multi-patch all-sky phase correction
       • 10 sec duty cycle
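     The quoted rates can be cross-checked with simple arithmetic: on-line processing reduces the ~330 Gbit/s input to a ~1 Gbit/s products stream, and at the diagram's 25 Gbps store rate the ~500 TB temporary buffer holds less than two days of data. A minimal Python sketch, assuming decimal (SI) prefixes throughout:

         # Back-of-the-envelope check on the data-flow numbers quoted above.
         input_rate_bps   = 330e9       # input to the central processor
         product_rate_bps = 1e9         # products to Science Operations Centres
         store_rate_bps   = 25e9        # rate into on-line temporary storage
         storage_bits     = 500e12 * 8  # ~500 TB on-line buffer

         reduction    = input_rate_bps / product_rate_bps
         buffer_hours = storage_bits / store_rate_bps / 3600

         print(f"on-line data reduction factor : ~{reduction:.0f}x")
         print(f"500 TB buffer fills in        : ~{buffer_hours:.0f} h "
               f"({buffer_hours / 24:.1f} days) at 25 Gb/s")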

  8. IBM BlueGene/L
     • IBM
       • 1st research machine on the road to multi-peta-FLOP/s
       • 3 BG/L machines under construction: LLNL, LOFAR, IBM Research
       • numbers 1-10 of the Top-500 supercomputers in one machine (LLNL)
       • SOC technology, standard components for reliability
       • dual PowerPC 440 chips per node with 700 MHz clock
       • scalability to many times 100,000 nodes
       • low power, air cooled: ~20 W per node

  9. IBM BlueGene/L
     • LOFAR
       • BG/L is our 1st non-custom central processor
       • total CPU power is ‘interesting’ (34 TFLOP/s) and scalable
       • component failure rate: one every 3 months, DRAM dominated
       • BG/L is an embedded co-processor in a LINUX cluster
         • stripped-down LINUX kernel on-chip
         • general-purpose capability allows complex modelling on-line, in real time
         • efficient for complex arithmetic, streaming applications
       • 330 Gb/s input data rate initially; 768 Gb/s max
       • low power: 150 kW for LOFAR (6k nodes)
       • scalable beyond LOFAR to SKA requirements
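     The quoted ~34 TFLOP/s follows from the per-node figures. The sketch below assumes each 700 MHz PowerPC 440 core retires two fused multiply-adds (4 floating-point operations) per cycle and takes the node count of 6144 from the Tier-0 comparison slide; the ~150 kW quoted for LOFAR presumably includes overhead beyond the ~20 W per node.

         # Sanity check of the quoted ~34 TFLOP/s and node power (assumptions above).
         clock_hz        = 700e6
         flops_per_cycle = 4      # 2 fused multiply-adds per cycle per core (assumed)
         cores_per_node  = 2      # dual PowerPC 440 per node
         nodes           = 6144   # taken from the Tier-0 comparison slide
         watts_per_node  = 20     # "~20 W per node"

         peak_tflops = clock_hz * flops_per_cycle * cores_per_node * nodes / 1e12
         power_kw    = nodes * watts_per_node / 1e3

         print(f"peak throughput : ~{peak_tflops:.1f} TFLOP/s")   # ~34.4 TFLOP/s
         print(f"node power      : ~{power_kw:.0f} kW (excluding overhead)")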

  10. Tier-0 computing: LHC and LOFAR in 2006

                               LHC / exp’t × 4 exp’ts (Tier-0)    LOFAR (EOC)
      CPU (SPECint95)          2.8 × 10^6                         3.4 × 10^6
      No. of processors        5600 / 11200 (?)                   6144 / 12288
      Disk storage (TB)        2160                               ~500
      Tape storage (PB)        12                                 ??
      LAN throughput (Gb/s)    368                                > 330

  11. LOFAR with Bsik financing
      Central core plus 45 stations; 150 km maximum baseline

  12. Mid-LOFAR would extend into Lower Saxony, Schleswig-Holstein and North Rhine-Westphalia.
      Max-LOFAR would have stations from Cambridge (UK) to Potsdam (DE), and from Nançay (FR) to Växjö (SE).

  13. (map: international links to Russia, China, the USA and South Africa at 1-10 Gbps; post-2005: JIVE + LOFAR data processing centre, 30 Gbps – 2 Tbps)
      LOFAR, the Sensor Network, is under consideration as an FP7 ‘Technology Platform’.

  14. LOFAR project timeline
      • PDR in June/Oct 2003: M€ 14 expended
      • Dutch funding end 2003: M€ 52 for ‘infrastructure’
        • funding must be matched by ‘partners’
        • 18-member consortium: additional partners possible
        • formal goal is economic positioning w.r.t. ‘adaptive sensor networks’
        • RF, seismic, infra-sound, wind-energy sensors
      • prototyping of a full station is in progress
        • 100 low-frequency antennas in the field, now making all-sky videos
        • end 2004: expect a 2-beam web-based system on-line (to gain experience)
        • issues: calibration, RFI, adaptive re-allocation of resources
      • BlueGene/L delivery in 1Q-2005
      • FDR start in mid-2004, complete mid-2005
      • procurement start mid-2004, end mid-2006
      • initial operational status: end-2006 (solar minimum)
      • full operational status: mid-2008

  15. Remaining tasks for which partners are being sought
      • Array configuration size: new stations! Where?
        • extension of the array size to 400+ km is highly desirable
        • cost is ~€ 500k per station
        • fibre connections through Géant and national academic networks
      • Definition and designation of operations centers
        • Science Operations Centers are remote, on-line
          • basic data taking and archiving of observations
          • financing mostly local, plus a contribution to common services
        • Engineering Operations Center in Dwingeloo
          • monitor the system, perform maintenance
          • integrated operations team (with WSRT, possibly JIVE)
      • Operational modelling and user interface
        • use of (quasi-real-time) GRID technologies foreseen
        • work packages not funded / manned yet

  16. User involvement
      • Test User Group
        • Heino Falcke, leader
        • Lars Bähren, Michiel Brentjens, Stefan Wijnholds, etc.
        • ‘open’, ‘remote’ access to the developing system
        • step-wise functionality improvements until 2006
      • 1st user workshop: Dwingeloo, May 24-25, 2004
      • ASTRON is ready to host a (limited) number of young researchers to test and help develop the system
      • Formal operations from 2007
        • scheduling will be an ‘interesting’ problem

  17. LOFAR Research Consortium
      Universities, research institutes, commercial partners
