Overview of the AstroWise work packages WP4 (parallel processing) and WP5 (data storage), including the NCSA Cluster-in-a-Box and Display Wall-in-a-Box initiatives. Current cluster developments at NOVA, Terapix, Capodimonte, and USM are highlighted, with emphasis on parallel implementation techniques and data storage solutions. Focal points for future strategy and collaboration are also covered.
WP4 and WP5 for AstroWise
• WP4: Provide parallel processing
• WP5: Provide data storage
AstroWise pre kick-off Meeting
Commodity Hardware
• In-a-Box initiative
• NCSA Alliance layered software
  • Cluster-in-a-Box (CiB)
  • Grid-in-a-Box (GiB)
  • Display Wall-in-a-Box (DBox)
  • Access Grid-in-a-Box (AGiB)
Cluster-in-a-Box
• Builds on OSCAR
• Simplifies installing and running Linux clusters
• Compatible with the Alliance’s production clusters
• Software foundation for
  • Grid toolkits
  • Display Walls
Display Wall-in-a-Box
• Tiled display wall
• WireGL, VNC, NCSA Pixel Blaster
• Building instructions
Current Developments
• Ongoing activities
  • NOVA: testbed system
  • Terapix: production cluster
  • Capodimonte: WFI processing system
  • USM: WFI processing system
Current Developments
• NOVA
  • Leiden has a 4+1 node PIII PC cluster (400 MHz, 15 GB disk, 256 MB RAM, 100 Mb/s) for hands-on experience
  • Leiden will acquire a 16+1 node P4 PC cluster (1.5 GHz, 80 GB disk, 512 MB RAM, 1 Gb/s / 100 Mb/s switched) for hands-on experience
  • Can postpone the processing cluster until later
Current Developments
• Terapix
  • Driven by spending
  • Concentration on high-bandwidth data I/O
    • 32-bit @ 33 MHz → 133 MB/s
    • 64-bit @ 66 MHz → 512 MB/s
    • RAID5 delivers 80 MB/s
  • Nodes
    • 4 × dual-SMP AMD, 2 GB RAM, ~TB RAID0, 1 Gb/s + 100 Mb/s
    • 1 × dual-SMP AMD, 2 GB RAM, ~TB RAID5, 4 × 1 Gb/s + 100 Mb/s
  • No software parallelization
    • Process fields in parallel (see the sketch below)
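A minimal sketch of this field-level (rather than code-level) parallelism: independent fields are simply farmed out to worker processes, so the reduction code itself needs no parallelization. The reduce_field routine and the field list are hypothetical placeholders, not AstroWise code.

from multiprocessing import Pool

def reduce_field(field_file):
    # Placeholder for a full single-field reduction
    # (bias subtraction, flat fielding, catalog extraction, ...).
    return f"{field_file}: done"

if __name__ == "__main__":
    fields = ["field_001.fits", "field_002.fits", "field_003.fits"]
    with Pool(processes=4) as pool:                 # e.g. one worker per CPU
        for result in pool.imap_unordered(reduce_field, fields):
            print(result)

On a real cluster the Pool would be replaced by distribution over nodes, but the pattern (one field per worker, no shared state) stays the same.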
Current Developments
• Capodimonte
  • Driven by spending
  • Opt for a conventional system
    • 8 × dual-SMP PIII 1 GHz, 40 GB disk, 512 MB RAM, 100 Mb/s
    • 1 × dual-SMP PIII 1 GHz, ~180 GB RAID, 1 Gb/s
  • Processing examples on the ESO Beowulf system
    • Master bias from 5 raw fields: 68 s
    • Master flat field from 5 dome + 5 sky flats: 390 s
    • Catalog & astrometry on the full cluster: 140 s
    • Catalog & astrometry of a single CCD on one CPU: 88 s
Current Developments
• USM
  • Driven by spending
  • Opt for an off-the-shelf configuration
    • Pay for configuration/installation
    • Pay for maintenance
  • Nodes
    • 8+1 dual-SMP, 4 GB RAM, ~100 GB disk, 1 Gb/s or Myrinet
    • 1.4 TB data storage
    • Front-end user stations
Parallel implementation
• Single OS, multiple CPUs
  • MOSIX
    • Fork and forget
    • Does load balancing and adaptive resource allocation
• Cluster of machines
  • MPI, PVM (message passing)
  • PVFS (parallel virtual file system)
  • PBS (Portable Batch System)
  • Maui (job scheduling)
  • DIY scheduling (Python sockets/pickling); see the sketch below
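The DIY scheduling option could look roughly like the sketch below: a master pickles task descriptions and ships them over a socket to worker nodes, which unpickle, execute, and return a pickled result. The framing helpers, port number, and task format are illustrative assumptions, not part of any existing AstroWise tool.

import pickle
import socket
import struct

def send_obj(sock, obj):
    # Length-prefixed frame: 4-byte size header, then the pickled payload.
    data = pickle.dumps(obj)
    sock.sendall(struct.pack("!I", len(data)) + data)

def recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def recv_obj(sock):
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return pickle.loads(recv_exact(sock, length))

def worker(port=5000):
    # Runs on a compute node: accept one task, execute it, return the result.
    srv = socket.socket()
    srv.bind(("", port))
    srv.listen(1)
    conn, _ = srv.accept()
    task = recv_obj(conn)              # e.g. {"cmd": "flatfield", "file": "raw.fits"}
    send_obj(conn, {"status": "ok", "task": task})
    conn.close()
    srv.close()

def submit(host, task, port=5000):
    # Runs on the master: ship one pickled task to a worker and wait for the answer.
    with socket.create_connection((host, port)) as sock:
        send_obj(sock, task)
        return recv_obj(sock)

A real scheduler would add a task queue, load balancing, and failure handling on top of this, which is exactly the work that PBS/Maui would otherwise provide.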
Parallelization
• Simple scripting
  • Rendezvous problem
  • Load balancing
  • Data distribution/administration
• Code level
  • MPI programming (see the sketch below)
  • How deep?
    • Loops
    • Matrix splitting
    • Sparse array coding
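A minimal sketch of code-level MPI parallelization at the loop/matrix-splitting depth, assuming mpi4py is available: each rank takes an interleaved slice of the rows and the partial sums are combined on rank 0. The matrix and the per-row work are stand-ins for the real computation.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Stand-in data: every rank builds the same small matrix; in practice the
# rows would come from the image or catalog being processed.
n_rows = 1000
data = np.arange(n_rows * 4, dtype=float).reshape(n_rows, 4)

# Static matrix splitting: rank r processes rows r, r+size, r+2*size, ...
local_sum = sum(row.sum() for row in data[rank::size])

# Combine the partial results on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("total =", total)

Run with, e.g., mpirun -np 4 python loop_split.py; deeper parallelization (sparse array coding) would push the same splitting further down into the numerical kernels.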
Granularity
• Coarse
  • Large tasks
  • Much computational work
  • Infrequent communication
• Fine
  • Small tasks
  • Frequent communication
  • Many processes
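A rough illustration of the coarse/fine trade-off in Python terms: the same per-item work submitted in a few large chunks (little communication) versus many small chunks (frequent communication, but better load balancing when items take uneven time). The process_pixel function is a hypothetical stand-in.

from multiprocessing import Pool

def process_pixel(value):
    return value * 2        # stand-in for the real per-item work

if __name__ == "__main__":
    pixels = list(range(1_000_000))
    with Pool(processes=4) as pool:
        # Coarse-grained: 4 large chunks, very little communication overhead.
        coarse = pool.map(process_pixel, pixels, chunksize=250_000)
        # Fine-grained: thousands of small chunks, frequent communication.
        fine = pool.map(process_pixel, pixels, chunksize=100)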
Focal Points
• Different architectures
  • Compare performance
    • Time to reduce a field/night
    • Quality of calibration
  • Benchmark set of software/data (see the timing sketch below)
• Share experience (exchange URLs)
  • Hardware
  • Processing
  • Software (parallelization)
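One possible shape for the shared benchmark, sketched below: a small wall-clock timing harness around a single reduction step, run on the agreed software/data set at each site. Both reduce_field and the benchmark file name are placeholders.

import time

def reduce_field(field_file):
    time.sleep(0.1)                 # stand-in for the real reduction step
    return field_file

def benchmark(field_file, repeats=3):
    # Report the best wall-clock time over a few repeats.
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        reduce_field(field_file)
        times.append(time.perf_counter() - start)
    return min(times)

if __name__ == "__main__":
    print(f"best of 3: {benchmark('benchmark_field.fits'):.2f} s")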
Focal Points
• Time cost (unified structure for parallel processing)
  • Software work to make it parallel (burst data)
  • Hardware possible (# nodes < 8)
• Evaluate future network capacity
  • Multiplicity
  • FireWire
Focal Points
• Data storage & Beowulf
  • Are they different?
• Interaction between processing and mining
• Who pulls the wagon?
• T0 + 1Q: design review
• T0 + 2Q: procurement