
EGO Computing Center site report




Presentation Transcript


  1. EGO Computing Center site report Stefano Cortese, EGO - Via E. Amaldi, 56021 S. Stefano a Macerata - Cascina (PI) | INFN Computing Workshop – 26-05-2004

  2. Virgo-EGO computing areas (block diagram; recoverable content follows)
  • Interferometer real-time domain: 60 LynxOS CPUs, 10 OS9 CPUs
  • DAQ / monitoring and control: Alpha/OSF workstations (15 control, 10 processing, 2 servers); x86/Linux nodes (12 processing, 4 servers, 4 control)
  • On-line processing / in-time domain: on-line buffers, 5 TB fed at 6 MB/s; on-line analysis, 5-(300) Gflops; Linux farm, 16 nodes (54 Gflop) plus 2 servers
  • Off-line computing: 13 storage nodes, 70 TB disk storage, 6 TB tape backup; users computing and data access at > 6 MB/s
  • Office services: 50 Windows PCs, 25 Linux PCs; software integration testing, archiving and installation; 150 users
  • Internet services: Bologna and Lyon repositories over a 34-(155) Mbps link

  3. Virgo-EGO LANs
  • Virgo interferometer network, spanning the 3 km arms (> 50 switches)
  • Data analysis network (7 switches)
  • General Windows PCs network, offices (30 switches)
  • DMZ; UPS and generators
  • WAN at 34 Mbps; CheckPoint firewall running over Nokia

  4. Storage
  • Virgo data arrive at 6 MB/s; backup via Legato, migration to mass storage, cataloguing with MD5sum
  • On-line buffer: HP/Compaq MA8000 Fibre Channel-to-SCSI, 4 TB, one week of buffering with a redundant stream (a quick check of this figure follows below)
  • Tape library: HP LTO Ultrium-1, 6 TB near-line
  • Storage farm: 13 nodes with 70 TB of net RAID5 space; 25 Infortrend FC-to-IDE arrays, 4 Fibrenetix Zero-d SCSI-to-IDE, plus some 3ware controllers; mainly 250 GB / 7200 rpm WD disks
  • Accusys SCSI-to-SCSI, 1 TB
  • Everything runs Linux RH9 with LVM 1.0.x
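A quick back-of-the-envelope check of the "one week buffering" figure, using only the rate and capacity quoted on the slide (our illustration, not EGO tooling):

    # Check of the "1 week buffering" figure (illustrative only;
    # the 6 MB/s rate and 4 TB size are taken from the slide).
    rate_mb_s = 6.0                          # Virgo data rate into the buffer
    buffer_tb = 4.0                          # MA8000 buffer capacity
    seconds = buffer_tb * 1e6 / rate_mb_s    # 1 TB ~= 1e6 MB (decimal units)
    print(f"{seconds / 86400:.1f} days of buffering")   # -> ~7.7 days

At 6 MB/s the 4 TB array fills in roughly 7-8 days, consistent with the quoted one-week buffer.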

  5. Storage practices with IDE-based arrays
  • Performance is good: 50-60 MB/s over a 1 TB RAID5 set (single array, 5400/7200 rpm disks)
  • The quality of the first releases of these products is very poor, mainly due to firmware bugs or hardware tolerances that ultimately lead to hidden data corruption (meaning undetected by the storage controller or the operating system)
  • We developed a procedure for storage acceptance:
  • Minimum performance requirements are set according to a market survey and demo testing
  • Tenders are issued in two lots, the first being for the acceptance test; positive validation of the first lot is required for acceptance of the second
  • Acceptance test:
  • Performance is measured with "bonnie"
  • Data integrity is checked with a continuous benchmark that reads, writes and deletes data, with 128-bit MD5 verification after each operation (a sketch follows below)
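A minimal sketch of such a write/read-back/delete integrity benchmark (our own illustration, not the actual EGO tool; the file size, mount path and 30 TB stop condition are placeholders):

    import hashlib, os

    CHUNK = 1 << 20                     # 1 MiB read granularity
    FILE_SIZE = 256 * 2**20             # 256 MiB per test file (placeholder)

    def md5_file(path):
        """Stream a file through MD5 and return its hex digest."""
        h = hashlib.md5()
        with open(path, "rb") as f:
            while block := f.read(CHUNK):
                h.update(block)
        return h.hexdigest()

    def integrity_cycle(path):
        """Write / read back / delete one file, verifying MD5 at each step."""
        data = os.urandom(FILE_SIZE)
        expected = hashlib.md5(data).hexdigest()
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())        # push the data out to the array
        # NOTE: a real test must defeat the page cache (O_DIRECT, or a
        # working set much larger than RAM), or the read never hits disk.
        if md5_file(path) != expected:
            raise RuntimeError(f"hidden data corruption on {path}")
        os.remove(path)

    processed = 0
    while processed < 30e12:            # stop after ~30 TB (see next slide)
        integrity_cycle("/mnt/array/integrity_test.bin")
        processed += FILE_SIZE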

  6. Storage practices: data integrity
  • Data integrity test:
  • The test ends after processing about 30 TB (roughly 10 days), giving confidence that the BER is below 3×10⁻¹⁴
  • In our experience errors may occur even after a week of processing, and we rejected many configurations
  • The test must be repeated at each new firmware release installation, even if only new features are introduced
  • All this is still not enough:
  • Some firmware functions may only be exercised after the system has been running for a long time. That is the case of block remapping following bad-block occurrences on the disks (this could only be tested using genuinely bad disks). Therefore:
  • The storage must be periodically monitored for data integrity (a sketch follows below)
  • The firmware must provide on-line low-level media verification, which must be executed periodically to avoid the double-bad-block or bad-block-plus-disk-failure cases
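A minimal sketch of periodic integrity monitoring against the MD5 catalogue mentioned on slide 4 (our illustration; the CSV catalogue format and path are hypothetical placeholders, not EGO's actual scheme):

    import csv, hashlib

    def md5_file(path, chunk=1 << 20):
        """Stream a file through MD5 and return its hex digest."""
        h = hashlib.md5()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    def verify_catalogue(catalogue="/var/lib/storage/md5_catalogue.csv"):
        """Re-read every catalogued file and compare with its stored MD5."""
        bad = []
        with open(catalogue, newline="") as f:
            for path, stored in csv.reader(f):
                if md5_file(path) != stored:
                    bad.append(path)    # silent corruption since cataloguing
        return bad

    # Run e.g. weekly from cron; any mismatch means the array corrupted
    # data at rest, which RAID5 parity alone will not detect.
    if corrupted := verify_catalogue():
        print("CORRUPTED:", *corrupted, sep="\n  ")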

  7. Storage conclusions
  IDE-based storage systems at 5000 €/TB are good for mass storage with fast access and high density compared to near-line disk-cache/tape systems, but availability is not guaranteed at all times. They do not offer the same level of reliability for critical tasks as more expensive disk-based storage; duplication or tape backup is still needed. Direct-attached arrays are preferable to NAS storage, so that tests can be run independently of the network. We also prefer arrays connected via standard buses (e.g. SCSI or FC) rather than "on-server" controllers, to avoid intermixing OS/driver/array problems. LVM and an automounter are required tools for mounting and serving about 100 file-systems (currently using amd, planning to move to autofs on Linux; a sketch follows below).
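To illustrate the scale of serving ~100 file-systems via an automounter, here is a sketch that generates an autofs-style indirect map for LVM logical volumes (all names — /data, vg_store, lv_NNN, auto.data — are hypothetical placeholders, not the EGO configuration):

    # Generate an autofs indirect map for ~100 LVM logical volumes.
    N_VOLUMES = 100

    lines = []
    for i in range(N_VOLUMES):
        key = f"data{i:03d}"                           # mounts as /data/dataNNN
        device = f":/dev/vg_store{i // 8}/lv_{i:03d}"  # local block device
        lines.append(f"{key} -fstype=ext3 {device}")

    with open("auto.data", "w") as f:     # referenced from auto.master as:
        f.write("\n".join(lines) + "\n")  #   /data /etc/auto.data

With autofs, each of the ~100 volumes is then mounted on demand and expires when idle, instead of holding all mounts open permanently.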

  8. On-line Computing
  Virgo detection channels are extracted from the raw data and processed to obtain the h-reconstructed signal, in which the gravitational-wave signal must be searched for. The h-reconstructed signal (16-600 KB/s) is fed to the computing farms for the on-line search.
  Small-scale test system (2002):
  • 16 bi-processor Compaq W6000, 1.7 GHz, with PC800 RDRAM
  • 2 front-ends
  • 2 standard gigabit Ethernet LANs (inter-node and storage)
  • 8 bi-processor Intel Xeon 2.66 GHz
  • 2 bi-processor Intel Xeon 2.0 GHz

  9. On-line computing: the physical problem of coalescing binaries
  "In-time" detection was estimated by Virgo to require a 300 Gflop system. A flat search using matched filtering via FFT, with templates of various lengths, depends strongly on the amount of RAM available for storing the templates, so naive sizing by CPU power alone is not enough (a sketch of the matched-filter kernel follows below). A Virgo/EGO benchmarking workgroup has been at work since the beginning of the year to arrive at more precise specifications (benchmark provided by the Perugia group, tests performed by EGO).
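A minimal sketch of the standard FFT-based matched-filter kernel (our illustration of the general technique, not the Virgo/Perugia benchmark; a real search would also whiten by the detector noise spectrum and normalise the output):

    import numpy as np

    def matched_filter(data, template):
        """Correlate data against one template in the frequency domain.

        Returns the correlation time series (one value per time lag);
        a candidate event shows up as a peak above threshold.
        """
        n = len(data)
        d_f = np.fft.rfft(data)
        t_f = np.fft.rfft(template, n)        # zero-pad template to n samples
        return np.fft.irfft(d_f * np.conj(t_f), n)

    # The search repeats this for every template in the bank, and the bank
    # must stay resident in RAM -- which is why total memory, not just CPU
    # Gflops, drives the sizing (see next slide).
    rng = np.random.default_rng(0)
    data = rng.standard_normal(1 << 20)       # fake data stretch
    template = rng.standard_normal(1 << 16)   # fake template
    peak = np.abs(matched_filter(data, template)).max()
    print(f"peak |correlation| = {peak:.1f}")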

  10. Overall problem
  Opteron has the best speed-up for SIMD-style problems where data are partitioned among processors: up to 60 MB/s of template floats processed per CPU for the Virgo benchmark. The maximum RAM supported by the platform has a direct impact on the number of CPUs. The overall Virgo problem, for a space of 200,000 templates (1.6 TB of RAM) to be processed in 256 s, would require about 200 Opterons with 8 GB/CPU or 130 Itaniums with 12 GB/CPU (see the worked sizing below). Opteron has a higher performance per rack unit. The current tender is for 64 CPUs.
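A back-of-the-envelope version of the sizing on this slide (illustrative; every figure is taken from the slide itself):

    # RAM-driven sizing: the CPU count is set by how many nodes it takes
    # to hold the 1.6 TB template bank, not by Gflops alone.
    TEMPLATE_RAM_TB = 1.6                       # 200,000 templates
    for cpu, ram_gb in [("Opteron", 8), ("Itanium", 12)]:
        n_cpus = TEMPLATE_RAM_TB * 1000 / ram_gb
        print(f"{cpu}: ~{n_cpus:.0f} CPUs to hold the template bank")
    # -> Opteron: ~200 CPUs, Itanium: ~133 CPUs, matching the quoted
    #    200 / 130 figures.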
