
Presentation Transcript


  1. Use of HLT farm and Clouds in ALICE, pre-GDB Meeting, 15/01/2013, Predrag.Buncic@cern.ch

  2. USE OF HLT FARMS and CLOUDS in ALICE (Background)
  • Strong request by the C-RRB: computing capacity equivalent to the US-ALICE share
  • Approach based on CernVM: no need to be smarter than the others
  • Requires centralized software distribution using CVMFS
    • Universal namespace, easy configuration
    • Deploy and configure once, run everywhere (see the sketch below)
    • No need to ever remove software, a long-term data preservation friendly approach
  • Extra benefit for ALICE users: CernVM on laptop/desktop
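
The "universal namespace" and "deploy once, run everywhere" points are the operative CVMFS properties: every node, whether an HLT machine, a cloud VM or a CernVM laptop, sees the same read-only software tree under /cvmfs. A minimal Python sketch of that idea; the repository mount point and package path are illustrative assumptions, not the actual ALICE layout.

import os

# CVMFS exposes experiment software as a read-only POSIX tree with a
# universal namespace: the same path works on an HLT node, a cloud VM
# and a laptop running CernVM.  The mount point and package path used
# here are illustrative assumptions, not the actual ALICE layout.
CVMFS_REPO = "/cvmfs/alice.cern.ch"


def software_available(package_dir: str) -> bool:
    """Return True if a package directory is visible through CVMFS.

    The first access triggers the mount and HTTP download of the needed
    files, so no local installation step is required on the node.
    """
    return os.path.isdir(os.path.join(CVMFS_REPO, package_dir))


if __name__ == "__main__":
    # Hypothetical package directory, used only to illustrate the lookup.
    print(software_available("Packages/AliRoot"))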

  3. BIG PICTURE
  [Architecture diagram: a Cloud Gateway and Cloud Agents connect AliEn and CernVM/FS to the private CERN cloud, public clouds and the HLT farm, serving use cases such as CAF on demand and QA clusters]

  4. CernVM Cloud (Implementation)
  • Provides a simple interface to deploy any number of VM instances on arbitrary cloud infrastructures (a cloud aggregator; see the sketch below)
  • Composed of three components:
    • CernVM Online portal
    • CernVM Cloud Agents
    • CernVM Cloud Gateway
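
As a rough illustration of the aggregator idea (not the actual CernVM Cloud code), the Gateway can be thought of as a thin layer that forwards instance requests to whichever Agent fronts the chosen cloud. The class and method names below are hypothetical.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class CloudAgent:
    """Hypothetical stand-in for a CernVM Cloud Agent inside one cloud."""
    cloud_name: str
    started: List[str] = field(default_factory=list)

    def start_instances(self, context_id: str, count: int) -> List[str]:
        # A real agent would call the underlying cloud API (CloudStack,
        # libvirt, EC2, ...); here we only record the request.
        new = [f"{self.cloud_name}-vm-{len(self.started) + i}" for i in range(count)]
        self.started.extend(new)
        return new


class CloudGateway:
    """Hypothetical aggregator: one entry point, many clouds behind it."""

    def __init__(self) -> None:
        self._agents: Dict[str, CloudAgent] = {}

    def register_agent(self, agent: CloudAgent) -> None:
        self._agents[agent.cloud_name] = agent

    def deploy(self, cloud_name: str, context_id: str, count: int) -> List[str]:
        return self._agents[cloud_name].start_instances(context_id, count)


if __name__ == "__main__":
    gw = CloudGateway()
    gw.register_agent(CloudAgent("cern-private"))
    gw.register_agent(CloudAgent("hlt"))
    print(gw.deploy("hlt", "alien-worker-context", count=3))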

  5. CernVM Online: cluster definition

  6. CernVM Cloud Gateway (CernVM Online + CernVM Gateway)
  • Create a context definition for each service
  • Create a cluster definition:
    • Required offerings (compute, disk, etc.)
    • Minimum number of instances
    • Dependencies on other services
  • Instantiate a cluster based on its definition
  • Scale individual services via the REST API (see the sketch below)
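
A cluster definition of the kind listed above could be written down roughly as follows. The field names, service names and REST endpoint are assumptions for illustration, not the Gateway's actual schema or API.

import json
import urllib.request

# Hypothetical cluster definition mirroring the bullets above:
# per-service offerings, a minimum instance count and dependencies.
CLUSTER_DEFINITION = {
    "name": "alien-site",
    "services": {
        "proxy":   {"offering": "small",  "min_instances": 1, "depends_on": []},
        "workers": {"offering": "medium", "min_instances": 4, "depends_on": ["proxy"]},
    },
}


def scale_service(gateway_url: str, cluster: str, service: str, instances: int) -> bytes:
    """Ask the gateway to scale one service of a running cluster.

    The endpoint path and JSON body are illustrative; the real CernVM
    Cloud Gateway exposes its own REST API.
    """
    url = f"{gateway_url}/clusters/{cluster}/services/{service}/scale"
    payload = json.dumps({"instances": instances}).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()


if __name__ == "__main__":
    print(json.dumps(CLUSTER_DEFINITION, indent=2))
    # scale_service("http://gateway.example.org", "alien-site", "workers", 8)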

  7. Overview
  • Cloud Agents expose the resources available in each cloud.
  • Users request resources (instances) via the Gateway interface.
  • The site administrator only takes care of the Agent installation.
  [Diagram: a browser / tools and the system user reach the Cloud Gateway, which drives Cloud Agents running on the public cloud, the private cloud and individual machines, maintained by the site administrator]

  8. CernVM Cloud Agent
  • Implemented adapters for:
    • CloudStack (native API)
    • libvirt (Xen, KVM, VirtualBox, ...)
    • EC2 tools (command-line tools wrapper)
    • OCCI API in development
  • CloudStack adapter tested with our infrastructure
  • libvirt adapter tested on the ALICE HLT (see the libvirt sketch below)
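
For the libvirt back-end, the standard Python bindings already cover what an adapter needs: open a connection, start a transient domain from XML, and list or stop running domains. A minimal sketch, assuming libvirt-python is installed and a KVM host is reachable at qemu:///system; the domain XML itself is left to the caller.

import libvirt  # libvirt-python bindings


def start_transient_vm(conn: "libvirt.virConnect", domain_xml: str) -> "libvirt.virDomain":
    """Create and start a transient domain from its XML description."""
    return conn.createXML(domain_xml, 0)


def list_running(conn: "libvirt.virConnect") -> list:
    """Names of all domains currently known to the hypervisor."""
    return [dom.name() for dom in conn.listAllDomains()]


def stop_vm(dom: "libvirt.virDomain") -> None:
    """Hard-stop a domain (equivalent to pulling the plug)."""
    dom.destroy()


if __name__ == "__main__":
    # The URI is an assumption; the adapter could equally target Xen or VirtualBox.
    conn = libvirt.open("qemu:///system")
    print(list_running(conn))
    conn.close()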

  9. Testing deployment on the ALICE HLT
  • ALICE HLT deployment (network fragments sketched below):
    • Batch nodes: default setup, behind NAT
    • Head nodes: bridged network configuration, 4 preallocated MACs / IPs
  • Why VMs? The OS installed on the machines is incompatible with the required software environment.
  [Network diagram: virtual batch nodes on 192.168.1.x/24 behind NAT; virtual head nodes bridged on 10.162.41.10/21; XMPP server at 137.138.198.215; gateway and host machines fepdev10 / fepdev11 on 10.162.40.27-28/21]
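
The two network setups above map directly onto libvirt interface definitions: batch nodes can use libvirt's default NATed network, while head nodes attach to a host bridge with one of the preallocated MAC addresses. A sketch of the relevant XML fragments; the bridge name and MAC address are placeholders, not the actual HLT values.

# Interface fragment for batch nodes: libvirt's "default" network provides
# DHCP plus NAT to the outside world (the 192.168.1.x-style private range).
NAT_INTERFACE_XML = """
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
</interface>
"""

# Interface fragment for head nodes: bridged onto the host network so the VM
# is reachable on the site LAN.  'br0' and the MAC address are illustrative
# placeholders for the real bridge and one of the 4 preallocated MAC/IP pairs.
BRIDGED_INTERFACE_XML = """
<interface type='bridge'>
  <source bridge='br0'/>
  <mac address='52:54:00:00:00:01'/>
  <model type='virtio'/>
</interface>
"""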

  10. Conclusions
  • We managed to successfully start a cluster and control its instances.
  • We have set up a CVMFS repository.
  • AliEn still needs to be adapted to use CVMFS (simulation, reconstruction, calibration…).
  • A similar solution can be applied to:
    • the badly needed on-demand QA cluster(s) for release testing
    • CAF on demand
    • T3 in the box
  • We are willing to work with other experiments and the CernVM team towards a common solution.
