CCIN2P3 Site Report
Wojciech A. Wojcik, IN2P3 Computing Center
Services
• CPU
• Networking
• Data storage and access
• Databases
• E-mail
• Web
• Electronic Document Management (EDMS) and CAD
• LDAP (OpenLDAP)
• MCU
• Win2000 domain service
Supported platforms
• Linux RedHat 7.2, SL3
• Solaris 2.8, 2.9
• AIX 5.1
Disk space
• Need to make the disk storage independent of the operating system.
• Disk servers based on:
• A3500 from Sun with 3.5 TB
• ESS-F20 from IBM with 21.4 TB
• ESS from IBM with 5.9 TB
• 9960 from Hitachi with 18 TB
• FAST900 from IBM with 32 TB (not yet in production)
Mass storage
• Supported media (all in the STK robots): DLT4000/7000, 9840 (Eagles), 9940 (200 GB)
• HPSS – local developments:
• Interface with RFIO: API in C and Fortran (via cfio), and C++ (iostream, for g++ and KCC)
• bbftp – secure parallel FTP using the RFIO interface
• Interface with SRB
Mass storage – HPSS
• $HPSS_SERVER:/hpss/in2p3.fr/…
• Usage: 645 TB (123 TB in May 2002, 60 TB in Oct 2001)
• BaBar – 245 TB
• AUGER – 40 TB
• EROS II – 32 TB
• D0 – 110 TB
• Virgo – 13 TB
• Other experiments: ATLAS, SNovae, DELPHI, ALICE, PHENIX, CMS
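As a quick cross-check of the usage slide, the five experiments listed by name account for 440 TB of the 645 TB total, leaving roughly 205 TB for the other experiments. A minimal sketch (the figures come from the slide; the script itself is only illustrative):

```python
# Per-experiment HPSS usage in TB, as listed on the slide.
usage_tb = {"BaBar": 245, "AUGER": 40, "EROS II": 32, "D0": 110, "Virgo": 13}

named_total = sum(usage_tb.values())   # TB used by the named experiments
other = 645 - named_total              # remainder for ATLAS, SNovae, DELPHI, ...
print(named_total, other)              # 440 205
```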
Networking – LAN
• Fast Ethernet (100 Mb/s full duplex) --> interactive and batch services
• Gigabit Ethernet (1 Gb/s full duplex) --> disk servers and Objectivity/DB servers
Networking – WAN
• Academic public network “Renater 3”
• Backbone: 2.5 Gb/s
• Access to USA: 2 × 2.5 Gb/s
• CCIN2P3 access: 1 Gb/s
• Tests give 400 Mb/s to SLAC and 800 Mb/s to CERN
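The measured WAN rates translate directly into bulk-transfer times, which is what matters for experiment data export. A hedged sketch, assuming the sustained rates of 400 Mb/s (SLAC) and 800 Mb/s (CERN) quoted above and decimal units:

```python
def transfer_hours(size_tb: float, rate_mbps: float) -> float:
    """Hours needed to move size_tb terabytes at rate_mbps megabits per second."""
    bits = size_tb * 1e12 * 8          # TB -> bits (decimal TB)
    return bits / (rate_mbps * 1e6) / 3600

print(round(transfer_hours(1, 400), 1))  # ~5.6 h for 1 TB to SLAC
print(round(transfer_hours(1, 800), 1))  # ~2.8 h for 1 TB to CERN
```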
BAHIA – interactive front-end
Based on multi-processor machines:
• Linux (RH72, RH73, SL3) -> 16 dual Pentium III 1 GHz
• Solaris 2.8 -> 2 Ultra-4/E450
• AIX 5.1 -> 2 F40
Batch system – configuration
Batch based on BQS (developed at CCIN2P3):
• Linux (RH72) -> 652 CPUs (PIII)
• Linux (RH73/LCG) -> 100 CPUs (PIII)
• Linux (SL3) -> 68 CPUs (PIV)
• Solaris 2.8 -> 38 CPUs (Ultra60)
• AIX 4.3.2 -> 18 CPUs (43P-B50)
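Summing the per-platform counts gives the overall scale of the BQS farm: 876 CPUs, of which 820 run Linux. A small sketch using the figures from the slide (the dictionary is only an illustration, not a BQS configuration format):

```python
# Batch-farm CPU counts per platform, from the slide.
farm = {"Linux RH72": 652, "Linux RH73/LCG": 100, "Linux SL3": 68,
        "Solaris 2.8": 38, "AIX 4.3.2": 18}

total = sum(farm.values())
linux = sum(n for name, n in farm.items() if name.startswith("Linux"))
print(total, linux)  # 876 820
```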
Support for big experiments – BaBar
• Objectivity/DB servers (v7.1 on Solaris 2.8 and 2.9): 2 on 440, 8 on Netra-T, 2 on 450, 5 on 480 (shared with xrootd)
• HPSS with interfaces to Objectivity (ams/oofs), RFIO and xrootd – 245 TB (57 TB for root files)
• Disk cache for Objectivity and xrootd – 45 TB (20 TB to be added soon)
• SRB for import/export
• xrootd is replacing Objectivity/DB
Support for big experiments – D0
• SAM server (on Linux)
• bbftp for import/export with FNAL
• HPSS used as SAM caching space
Present actions
• Computing and data storage services for about 45 experiments (HEP, nuclear physics, astrophysics, biology)
• Support Center for EGEE (10 FTE)
• ROC – Regional Operation Center
• CIC – Core Infrastructure Center
• Integration of the BQS batch system into LCG
Present actions
• LCG for the LHC experiments
• SRB for BaBar and SNovae (astro and bio experiments soon)
• xrootd for BaBar and D0
Present actions
• Regional Center services for: EROS II, BaBar (Tier A), D0, AUGER, LHC