
Grid Developers’ use of FermiCloud




Presentation Transcript


  1. Grid Developers’ use of FermiCloud (to be integrated with master slides)

  2. Grid Developers' Use of Clouds
  • Storage Investigation
  • OSG Storage Test Bed
  • MCAS Production System
  • Development VM
  • OSG User Support
  • FermiCloud Development
  • MCAS integration system

  3. Storage Investigation: Lustre Test Bed
  • FCL Lustre server VM: 3 OST & 1 MDT; Dom0 with 8 CPU and 24 GB RAM; 2 TB over 6 disks
  • Clients: FG ITB clients (7 nodes, 21 VMs) and 7 Lustre client VMs on FermiCloud, all mounting the Lustre file system over Ethernet
  • Test configurations: ITB clients vs. Lustre virtual server; FCL clients vs. Lustre virtual server; FCL + ITB clients vs. Lustre virtual server
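
For context, each client VM in this test bed mounts the Lustre file system exported by the server VM. A minimal sketch of that mount step on a client is shown below; the management server address, file system name, and mount point are illustrative assumptions, not values taken from the slide.

```python
import subprocess

# Illustrative values only: the actual MGS address, file system name, and mount
# point of the FermiCloud Lustre test bed are not given in the slides.
MGS_NID = "fcl-lustre-mds.fnal.gov@tcp"   # assumed management/metadata server NID
FS_NAME = "fcltest"                       # assumed Lustre file system name
MOUNT_POINT = "/mnt/lustre"

def mount_lustre_client():
    """Mount the Lustre file system on a client VM (needs the lustre-client packages and root)."""
    subprocess.run(["mkdir", "-p", MOUNT_POINT], check=True)
    subprocess.run(
        ["mount", "-t", "lustre", f"{MGS_NID}:/{FS_NAME}", MOUNT_POINT],
        check=True,
    )

if __name__ == "__main__":
    mount_lustre_client()
```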

  4. ITB Clients vs. FCL Virtual Server Lustre
  • Compared read and write I/O rates while changing the disk and network drivers on the Lustre server VM: bare metal, virtio drivers for disk and network, virtio for disk with the default network driver, and default drivers for both
  • With virtio drivers for the network: 350 MB/s read, 70 MB/s write (vs. 250 MB/s write on bare metal)
  • (Read and write I/O rate plots omitted)
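
The rates quoted above come from the team's own benchmarks. As a rough illustration of how sequential-throughput numbers of this kind are obtained, a minimal sketch follows; the file path, block size, and total size are assumptions, and a real measurement would also need to defeat the client page cache (e.g. by dropping caches or reading a file written by another node).

```python
import os
import time

TEST_FILE = "/mnt/lustre/throughput_test.dat"   # assumed path on the mounted file system
BLOCK_SIZE = 1 << 20                            # 1 MiB per write/read
N_BLOCKS = 1024                                 # 1 GiB total

def write_throughput_mb_s():
    """Sequentially write the test file and return the write rate in MB/s."""
    block = os.urandom(BLOCK_SIZE)
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(N_BLOCKS):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    return BLOCK_SIZE * N_BLOCKS / (time.time() - start) / 1e6

def read_throughput_mb_s():
    """Sequentially read the test file back and return the read rate in MB/s."""
    start = time.time()
    total = 0
    with open(TEST_FILE, "rb") as f:
        chunk = f.read(BLOCK_SIZE)
        while chunk:
            total += len(chunk)
            chunk = f.read(BLOCK_SIZE)
    return total / (time.time() - start) / 1e6

if __name__ == "__main__":
    print(f"write: {write_throughput_mb_s():.1f} MB/s")
    print(f"read:  {read_throughput_mb_s():.1f} MB/s")
```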

  5. 21 Nova Clients vs. Bare Metal & Virtual Server
  • Read, ITB clients vs. bare metal: BW = 12.55 ± 0.06 MB/s (1 client vs. bare metal: 15.6 ± 0.2 MB/s)
  • Read, ITB clients vs. virtual server: BW = 12.27 ± 0.08 MB/s (1 ITB client: 15.3 ± 0.1 MB/s)
  • Read, FCL clients vs. virtual server: BW = 13.02 ± 0.05 MB/s (1 FCL client: 14.4 ± 0.1 MB/s)
  • Virtual clients on-board (on the same machine as the virtual server) are as fast as bare metal for read
  • The virtual server is almost as fast as bare metal for read
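
The slide does not say how the quoted uncertainties were computed; assuming they are the standard error of the mean over per-client bandwidths, the aggregation would look roughly like this (the sample values below are made up for illustration, not the actual measurements):

```python
import statistics

def summarize_bandwidth(per_client_mb_s):
    """Return (mean, standard error of the mean) for per-client bandwidths in MB/s."""
    mean = statistics.mean(per_client_mb_s)
    sem = statistics.stdev(per_client_mb_s) / len(per_client_mb_s) ** 0.5
    return mean, sem

# Made-up per-client readings, NOT the real data behind the slide:
samples = [12.4, 12.6, 12.5, 12.7, 12.5, 12.6, 12.4]
mean, sem = summarize_bandwidth(samples)
print(f"BW = {mean:.2f} ± {sem:.2f} MB/s")
```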

  6. OSG Storage Test Bed: Official Test Bed Resources
  • 5 nodes purchased ~2 years ago
  • 4 VMs on each node (2 VMs SL5, 2 VMs SL4)
  • Test systems:
    • BeStMan-gateway/xrootd: BeStMan-gateway, GridFTP-xrootd, xrootdfs; xrootd redirector; 5 data server nodes
    • BeStMan-gateway/HDFS: BeStMan-gateway/GridFTP-hdfs, HDFS name nodes; 8 data server nodes
    • Client nodes (4 VMs): client installation tests, certification tests, Apache/Tomcat to monitor/display test results, etc.

  7. OSG Storage Test Bed: Additional Test Bed Resources
  • 6 VMs on nodes outside of the official test bed
  • Test systems:
    • BeStMan-gateway with disk
    • BeStMan-fullmode
    • Xrootd (ATLAS Tier-3, WLCG demonstrator project)
    • Various test installations
  • In addition, 6 "old" physical nodes are used as a dCache test bed
  • These will be migrated to FermiCloud

  8. MCAS Production System
  • FermiCloud hosts the production server (mcas.fnal.gov)
  • VM config: 2 CPUs, 4 GB RAM, 2 GB swap
  • Disk config:
    • 10 GB root partition for the OS and system files
    • 250 GB disk image as the data partition for MCAS software and data
    • An independent disk image makes it easier to upgrade the VM
  • On VM boot-up: the data partition is staged in and auto-mounted in the VM
  • On VM shutdown: the data partition is saved
  • Work in progress: restart the VM without having to save and stage in the data partition to/from central image storage
  • MCAS services hosted on the server: Mule ESB, JBoss, XML Berkeley DB
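
The boot-up/shutdown handling of the data partition could be scripted along the lines sketched below. This is purely illustrative: the slides do not describe the actual staging mechanism, image locations, or commands used for mcas.fnal.gov, so every path and host name here is an assumption.

```python
import subprocess

# All paths and hosts below are assumptions for illustration; the real image
# locations and staging commands for mcas.fnal.gov are not given in the slides.
CENTRAL_IMAGE = "centralstore:/images/mcas-data.img"   # assumed central copy of the data image
LOCAL_IMAGE = "/var/lib/mcas/mcas-data.img"            # assumed local copy on the VM
DATA_MOUNT = "/data"                                   # assumed data partition mount point

def stage_in_and_mount():
    """On VM boot-up: fetch the data image from central storage and loop-mount it."""
    subprocess.run(["scp", CENTRAL_IMAGE, LOCAL_IMAGE], check=True)
    subprocess.run(["mkdir", "-p", DATA_MOUNT], check=True)
    subprocess.run(["mount", "-o", "loop", LOCAL_IMAGE, DATA_MOUNT], check=True)

def save_on_shutdown():
    """On VM shutdown: unmount the data partition and copy the image back to central storage."""
    subprocess.run(["umount", DATA_MOUNT], check=True)
    subprocess.run(["scp", LOCAL_IMAGE, CENTRAL_IMAGE], check=True)
```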

  9. Metric Analysis and Correlation Service. CD Seminar
