This document outlines the JDAT production hardware: compute nodes with 2, 4, or 8 multicore processors per node and 16+ GB of memory per processor, running 64-bit Linux; an Oracle database cluster with a 5 TB shared database volume; and high-availability I/O nodes running a clustered NFS server with multiple connections to disk and tape drives. Initial disk storage capacity is 400 TB, growing by 100 TB per year. The infrastructure supports data processing, storage, and retrieval for the HMI and AIA science data pipelines.
JDAT Production Hardware
• Compute nodes (< 100)
  • 2, 4, or 8 multicore processors per node
  • 16+ GB per processor
  • 64-bit Linux
• Database nodes (< 5)
  • Oracle cluster
  • 5 TB shared database volume
• I/O nodes (< 10)
  • High-availability NFS server cluster
  • Multiple Fibre Channel connections to non-shared disks and tape drives
  • Multiple gigabit Ethernet connections to compute nodes
• RAID disk storage
  • 400 TB initially
  • 100 TB annual increment
  • SATA drives (500 GB today)
• Tape archive
  • Two PB-sized tape libraries initially
  • ½ PB per library annual increment
  • SAIT (500 GB, 30 MB/s today) or LTO (400 GB, 80 MB/s today)
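The disk and tape capacity figures above can be projected forward; this is a minimal sketch using the initial sizes and annual increments from the slide (the 10-year horizon is an illustrative assumption, not part of the original plan):

```python
# Sketch: project RAID and tape archive capacity from the slide's figures.
# Initial sizes and annual increments are from the slide; the 10-year
# horizon below is an illustrative assumption.

RAID_INITIAL_TB = 400          # 400 TB initially
RAID_INCREMENT_TB = 100        # 100 TB annual increment
TAPE_INITIAL_PB = 2.0          # two PB-sized libraries initially
TAPE_INCREMENT_PB = 2 * 0.5    # 1/2 PB per library per year, 2 libraries

def capacity_after(years):
    """Return (disk_TB, tape_PB) after the given number of years."""
    disk = RAID_INITIAL_TB + RAID_INCREMENT_TB * years
    tape = TAPE_INITIAL_PB + TAPE_INCREMENT_PB * years
    return disk, tape

for year in (0, 5, 10):
    disk, tape = capacity_after(year)
    print(f"year {year:2d}: disk {disk} TB, tape {tape} PB")
```

Under these rates the disk archive grows linearly to 1.4 PB over ten years while the tape archive reaches 12 PB, consistent with tape remaining the bulk archive medium.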
Reality Check
• AIA/HMI combined data volume: 2 PB/yr ≈ 60 MB/s
  • read + write: × 2
  • quick look + final: × 2
  • one reprocessing: × 2
  • 25% duty cycle: × 4
  • → ~2 GB/s (disk), ~½ GB/s (tape)
• NFS over gigabit Ethernet: 50–100 MB/s
  • 4–8 channels per server, 5 servers (today)
• SAIT-1 native transfer rate: 25–30 MB/s
  • 10 SAIT-1 drives per library, 2 libraries (today)
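The multipliers above compound; this sketch redoes the arithmetic with the 2 PB/yr base rate and the ×2/×2/×2/×4 factors from the slide. That the ~½ GB/s tape figure corresponds to dropping the duty-cycle factor (tape streaming continuously while disk I/O is squeezed into a quarter of the time) is an interpretation inferred from the numbers, not stated on the slide:

```python
# Sketch: reproduce the "Reality Check" bandwidth arithmetic.
# Base data volume and multipliers are taken from the slide.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
PB = 1e15  # bytes, decimal petabyte

base_rate = 2 * PB / SECONDS_PER_YEAR      # ~63 MB/s average ingest

multipliers = {
    "read + write":       2,
    "quick look + final": 2,
    "one reprocessing":   2,
    "25% duty cycle":     4,  # all disk I/O squeezed into 1/4 of the time
}

disk_rate = base_rate
for factor in multipliers.values():
    disk_rate *= factor                    # ~2 GB/s peak disk rate

# Assumed interpretation: tape streams continuously, so the
# duty-cycle factor does not apply to the tape figure.
tape_rate = disk_rate / multipliers["25% duty cycle"]   # ~0.5 GB/s

print(f"average ingest: {base_rate / 1e6:.0f} MB/s")
print(f"disk I/O:       {disk_rate / 1e9:.1f} GB/s")
print(f"tape I/O:       {tape_rate / 1e9:.2f} GB/s")
```

These peak rates are what make the per-channel figures on the slide matter: ~2 GB/s of disk I/O needs 20–40 gigabit-Ethernet NFS channels at 50–100 MB/s each, and ~0.5 GB/s to tape needs most of the 20 SAIT-1 drives running near their 25–30 MB/s native rate.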
HMI & AIA JSOC Architecture (block diagram): housekeeping telemetry flows from the White Sands ground station through the GSFC MOC into a housekeeping database, while HMI & AIA science data flows from the DDS into the Stanford HMI JSOC pipeline processing system, with a redundant data capture system, 30-day archive, quicklook viewing, primary/local/offline/offsite archives, catalog, and high-level data import. The LMSAL AIA analysis system connects to the pipeline, and data export & web services deliver products to the science team, forecast centers, EPO, and the public.