
VC Deployment Script for OpenNebula/KVM

This guide provides detailed steps for deploying a KVM virtual cluster with OpenNebula version 3.6. The demo environment consists of one front node and four compute nodes, each equipped with a 500GB SSD. The prerequisites are the ability to register and delete VM images and the availability of both public and private network resources in OpenNebula. The script covers embedding the CONTEXT configuration, registering images, booting the nodes, and testing the virtual cluster; the key commands for managing instances are also included.



Presentation Transcript


  1. VC Deployment Script for OpenNebula/KVM Yoshio Tanaka, Akihiko Ota (AIST)

  2. Demo Environments & Prerequisites
     • Demo Environment
       • OpenNebula version 3.6 (built from source bundles)
       • 1 front node + 4 compute nodes
       • Each node has a 500GB SSD
     • Prerequisites
       • The user is able to register/delete VM images to/from OpenNebula
       • The following network resources are available in OpenNebula:
         • pragmapub (public network addresses)
         • pragmapriv (private network addresses)
       • vc-in.xml and image files are available in the same directory (e.g. /gfarm/vm-images/SDSC)
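The network prerequisites above can be checked before starting. The sketch below, assuming the OpenNebula CLI is in PATH on the front node, greps `onevnet list` output for the two network names this guide uses; `has_network` only matches a whole word on stdin, so the exact column layout of the listing does not matter.

```shell
#!/bin/sh
# Sketch: verify that the networks this guide assumes (pragmapub,
# pragmapriv) are registered in OpenNebula. has_network greps its
# stdin for a whole-word name.
has_network() { grep -qw "$1"; }

# On a real front node, uncomment:
# onevnet list | has_network pragmapub  || echo 'pragmapub missing'
# onevnet list | has_network pragmapriv || echo 'pragmapriv missing'
```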

  3. Step 1/4
     • Embed the CONTEXT script:
       $ sudo ./pragma-makeover.pl /gfarm/vm-images/SDSC/calit2-119-222.xml
     • The CONTEXT script sets the following parameters and outputs vc-out.xml:
       • IP addresses / netmasks
       • default gateway
       • /etc/resolv.conf
       • /etc/hosts
       • hostname
       • sshd

  4. Step 2/4
     • Register the image to OpenNebula:
       $ ./pragma-register.pl /gfarm/vm-images/SDSC/calit2-119-222.xml
     • This may take 3 minutes (on SSD) or 12-15 minutes (on HDD).
     • Check the status with the oneimage command:
       $ oneimage top
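Instead of watching `oneimage top` by hand, the wait can be scripted. A minimal sketch: `is_ready` greps for READY in `oneimage show` output. The exact state string may differ between OpenNebula versions, so treat it as an assumption, and the image ID 20 below is just the example ID used later in this guide.

```shell
#!/bin/sh
# Sketch: wait for a registered image to leave the copying state.
# Assumption: `oneimage show` prints a state line containing READY
# once the image is usable.
is_ready() { grep -q 'READY'; }

# On a real front node, uncomment:
# while ! oneimage show 20 | is_ready; do sleep 10; done
```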

  5. Step 3/4
     • Boot a VC:
       $ ./pragma-boot.pl -c 3 /gfarm/vm-images/SDSC/calit2-119-222.xml
     • This boots the front node first, then the compute nodes.
     • Booting the front node may take 3 minutes (on SSD) or 12-15 minutes (on HDD).
     • Check the status with the onevm command.

  6. Step 4/4
     • Log in and test the VC:
       $ ssh root@163.220.57.203
       $ ssh hosted-vm-0-0-1
     • We haven't provided scripts to shut down the VC and delete the VM images; use OpenNebula commands instead, e.g.:
       $ onevm shutdown 157   # shutdown compute node
       $ onevm shutdown 154   # shutdown front node
       $ oneimage delete 20   # delete VM image for compute node
       $ oneimage delete 21   # delete VM image for front node
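Since no official teardown script is provided, the commands above can be wrapped in a small sketch. Order matters: shut compute nodes down before the front node, and delete images only after the VMs are gone. The IDs are the example IDs from this step, and the DRY_RUN switch (an addition here, not part of the guide) only prints the commands by default so the sketch can be inspected safely.

```shell
#!/bin/sh
# Teardown sketch: compute nodes first, front node second, images last.
# With DRY_RUN=1 (the default here) commands are printed, not executed.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

run onevm shutdown 157   # compute node first
run onevm shutdown 154   # front node
run oneimage delete 20   # compute-node image
run oneimage delete 21   # front-node image
```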

  7. Comments
     • Do we need vcdb.txt?
       • AIST provides vc-in.xml as a command-line argument. Isn't that enough?
     • Do we need MAC/IP addresses in vc-in.xml?
       • Is there a real need to use the MAC/IP addresses in vc-in.xml?
       • They are site-specific, and AIST does not actually use them.
