Welcome to the ExoGENI Rack Administrator Primer! This guide provides step-by-step instructions for importing the OVA into VirtualBox, logging in as gec20user, and starting the ORCA actors. With a focus on the GPO-funded racks and the partnership between RENCI, Duke, and IBM, we delve into the architecture and software stack, including OpenStack, xCAT, and OpenFlow. Learn how to manage virtual infrastructure, monitor systems, and provision resources effectively in a collaborative networked cloud environment.
ExoGENI: rack administrator primer • Ilya Baldin, RENCI, UNC-CH • Victor Orlikowski, Duke University
Hello and Welcome! • First things first • IMPORT THE OVA INTO VIRTUALBOX • LOGIN as gec20user/gec20tutorial • START THE ORCA ACTORS • sudo /etc/init.d/orca_am+broker-12080 clean-restart • sudo /etc/init.d/orca_sm-14080 clean-restart • sudo /etc/init.d/orca_controller-11080 start • WAIT AND LET IT CHURN – THIS IS ALL OF EXOGENI IN ONE VIRTUAL MACHINE! • IT WILL TAKE SEVERAL MINUTES!
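While the actors churn, a small sketch can tell you when the VM is ready. This is an illustrative helper, not part of ExoGENI: it assumes the actors listen on localhost at the TCP ports embedded in the init script names (12080, 14080, 11080), and simply polls until each one accepts a connection.

```python
# Hypothetical readiness check: poll the actor ports (taken from the
# init script names) until each accepts a TCP connection.
import socket
import time

# Assumption: these ports, seen in the init.d script names, are the
# actors' listen ports on the local VM.
ACTOR_PORTS = {"am+broker": 12080, "sm": 14080, "controller": 11080}

def wait_for_actors(ports=None, host="localhost", timeout=600, interval=10):
    """Block until every port accepts a TCP connection, or raise TimeoutError."""
    pending = dict(ports if ports is not None else ACTOR_PORTS)
    deadline = time.time() + timeout
    while pending:
        for name, port in list(pending.items()):
            try:
                with socket.create_connection((host, port), timeout=2):
                    del pending[name]  # this actor answers; stop polling it
            except OSError:
                pass  # not up yet; retry on the next pass
        if pending:
            if time.time() >= deadline:
                raise TimeoutError("still down: " + ", ".join(sorted(pending)))
            time.sleep(interval)
```

Run `wait_for_actors()` after the clean-restarts; it returns once all three ports answer.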
Testbed • 13 GPO-funded racks built by IBM plus several “opt-in” racks • Partnership between RENCI, Duke and IBM • IBM x3650 M4 servers (X-series 2U) • 1 x 146 GB 10K SAS hard drive + 1 x 500 GB secondary drive • 48 GB RAM at 1333 MHz • Dual-socket 8-core CPU • Dual 1 Gbps adapter (management network) • 10G dual-port Chelsio adapter (dataplane) • BNT 8264 10G/40G OpenFlow switch • DS3512 6 TB, or server w/ drives totaling 6.5 TB, of sliverable storage • iSCSI interface for head node image storage as well as experimenter slivering • Cisco (WVN, NCSU, GWU) and Dell (UvA) configurations also exist • Each rack is a small networked cloud • OpenStack-based with NEuca extensions • xCAT for baremetal node provisioning • http://wiki.exogeni.net
ExoGENI at a glance 5 upcoming racks at TAMU, UMass Amherst, WSU, UAF and PSC not shown
Rack software • CentOS 6.X base install • Resource Provisioning • xCAT for bare metal provisioning • OpenStack + NEuca for VMs • FlowVisor • Floodlight used internally by ORCA • GENI Software • ORCA for VM, bare metal and OpenFlow • FOAM for OpenFlow experiments • Worker and head nodes can be reinstalled remotely via IPMI + KickStart • Monitoring via Nagios (Check_MK) • ExoGENI ops staff can monitor all racks • Site owners can monitor their own rack • Syslogs collected centrally
Elements of rack software (OpenStack) • OpenStack • Currently Folsom based on early release of RHOS • Patched to support ORCA • Additional nova subcommands • Quantum plugin to manage Layer2 networking between VMs • Manages creation of VMs with multiple L2 interfaces attached to high-bandwidth L2 dataplane switch • One “management” interface created by nova attaches to management switch for low-bandwidth experimenter access • Quantum plugin • Creates and manages 802.1q interfaces on worker nodes to attach VMs to VLANs • Creates and manages OVS instances to bridge interfaces to VLANs • DOES NOT MANAGE MANAGEMENT IP ADDRESS SPACE! • DOES NOT MANAGE THE ATTACHED SWITCH!
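The Quantum plugin's job above can be pictured as the sequence of commands a worker node conceptually runs. This is an illustrative sketch, not the plugin's actual code; the interface names (`eth1`, the tap device) and the bridge naming scheme are made-up examples.

```python
# Conceptual sketch of what the Quantum plugin does on a worker node:
# create an 802.1q subinterface on the dataplane NIC and an OVS bridge
# joining it to the VM's tap device. Names here are hypothetical.

def vlan_attach_commands(parent_if, vlan_id, tap_if):
    """Return the shell commands to bridge a VM tap device onto a VLAN."""
    vlan_if = "%s.%d" % (parent_if, vlan_id)
    bridge = "br-vlan%d" % vlan_id
    return [
        # 802.1q subinterface on the dataplane NIC
        "ip link add link %s name %s type vlan id %d" % (parent_if, vlan_if, vlan_id),
        "ip link set %s up" % vlan_if,
        # OVS bridge joining the VLAN subinterface and the VM's tap device
        "ovs-vsctl add-br %s" % bridge,
        "ovs-vsctl add-port %s %s" % (bridge, vlan_if),
        "ovs-vsctl add-port %s %s" % (bridge, tap_if),
    ]
```

Note what is absent, matching the slide's warnings: nothing here touches management IP addressing or configures the attached switch.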
Elements of software (xCAT) • Manages booting of bare-metal nodes for users and installation of OpenStack workers for sysadmins • Stock software • Upgrading the rack means • Upgrading the head node (painful) • Using xCAT to update worker nodes (easy!)
Elements of software (OpenFlow) • Flowvisor • Used by ORCA to “slice” the OpenFlow part of the switch for experiments via API • Typically along <port><vlan tag> dimensions • For emulating VLAN behavior ORCA starts Floodlight instances attached to slices • Floodlight • Stock v. 0.9 packaged as a jar • Started with parameters that minimize JVM footprint • Uses ‘forwarding’ module to emulate learning switch behavior on a VLAN • FOAM • Translator from GENI AM API + RSpec to Flowvisor • Does more, but not in ExoGENI
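Slicing along the port/VLAN-tag dimensions can be modeled as a lookup table: each slice owns a set of (switch port, VLAN tag) pairs, and traffic is handed to the controller of whichever slice claims the pair. This is a conceptual sketch of the idea, not FlowVisor's API; slice names and values are invented.

```python
# Toy model of FlowVisor-style slicing along <port, vlan tag> dimensions.
# Each slice's flowspace is a set of (in_port, vlan_tag) pairs.

def find_slice(flowspace, in_port, vlan_tag):
    """Return the slice owning this (port, vlan) pair, or None if unsliced."""
    for slice_name, pairs in flowspace.items():
        if (in_port, vlan_tag) in pairs:
            return slice_name
    return None

# Hypothetical flowspace: slice-a owns VLAN 1001 on ports 1 and 2,
# slice-b owns VLAN 1002 on port 1.
flowspace = {
    "slice-a": {(1, 1001), (2, 1001)},
    "slice-b": {(1, 1002)},
}
```

In ExoGENI the controller attached to such a slice would be one of the per-VLAN Floodlight instances started by ORCA.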
Elements of software (ORCA) • Control framework • Orchestrates resources at user requests • Provides operator visibility and control • Presents multiple APIs • GENI API • Used by GENI experimenter tools (Flack, omni) • ORCA API • Used by Flukes experimenter tool • Management API • Used by Pequod administrator tool
Building network topologies • Slice owner may deploy an IP network into a slice (OSPF). • [Figure: a slice as a computed embedding — an OpenFlow-enabled L2 topology connecting cloud hosts with network control, a virtual colo (campus net to circuit fabric), and a virtual network exchange]
Brokering Services • Provision a dynamic slice of networked virtual infrastructure from multiple providers, built to order for a guest application. • Stitch the slice into an end-to-end execution environment. • Topology requests are specified in NDL. • [Figure: user-facing ORCA agents brokering between the user and multiple sites]
ORCA Actors and Containers • An actor encapsulates a piece of logic • Aggregate Manager (AM) – owner of the resources • Broker – partitions and redistributes resources • Service Manager (SM) – interacts with the user • A Controller is a separable piece of logic that encapsulates topology embedding and presents remote APIs to users • A container hosts some number of actors and connects them to • The outside world, via remote API endpoints • A database, for storing their state • Any number of actors can share a container • A controller *always* runs by itself
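The container/actor rules above are simple enough to state as a toy model. The class and attribute names below are illustrative, not ORCA's actual classes; the point is the two invariants: actors share containers freely, but a controller is always alone.

```python
# Toy model of containers and actors. A container exposes a remote API
# endpoint and a database for actor state; a controller never shares.

class Actor:
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind  # "am", "broker", "sm", or "controller"

class Container:
    def __init__(self, api_endpoint, db_url):
        self.api_endpoint = api_endpoint  # remote API exposure
        self.db_url = db_url              # where actor state is stored
        self.actors = []

    def add(self, actor):
        # Invariant: a controller is *always* by itself.
        if actor.kind == "controller" and self.actors:
            raise ValueError("a controller runs alone in its container")
        if any(a.kind == "controller" for a in self.actors):
            raise ValueError("this container already holds a controller")
        self.actors.append(actor)
```

This mirrors the VM's layout: the am+broker container holds two actors, while the controller gets a container of its own.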
ORCA tickets, leases and reservations • Tickets, leases and reservations are used somewhat interchangeably • Tickets and leases are kinds of reservation • A ticket indicates the right to instantiate a resource • A lease indicates ownership of instantiated resources • AM gives tickets to brokers to indicate delegation of resources • Brokers subdivide the given tickets into other smaller tickets and give them to SMs upon request • SMs redeem tickets with AMs and receive leases which indicate which resources have been instantiated for them
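The delegation chain above (AM delegates to broker, broker subdivides for SMs, AM turns redeemed tickets into leases) can be sketched as follows. This is an illustrative model of the flow, not ORCA's actual API; the classes and the unit-counting are assumptions made for the example.

```python
# Sketch of the ticket/lease flow: tickets are rights to instantiate,
# leases are ownership of instantiated resources.

class Ticket:
    def __init__(self, units):
        self.units = units  # right to instantiate this many resource units

class Broker:
    def __init__(self):
        self.held = 0  # units delegated to us by AMs

    def receive_delegation(self, ticket):
        self.held += ticket.units

    def issue(self, units):
        """Carve a smaller ticket out of the delegated pool for an SM."""
        if units > self.held:
            raise ValueError("broker has too few delegated units")
        self.held -= units
        return Ticket(units)

class AM:
    def __init__(self, capacity):
        self.capacity = capacity

    def delegate(self, broker, units):
        broker.receive_delegation(Ticket(units))

    def redeem(self, ticket):
        """Turn a ticket into a lease naming instantiated resources."""
        return {"lease": True, "units": ticket.units}
```

An SM would call `broker.issue(...)` to get a ticket and then `am.redeem(ticket)` to receive the lease.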
ORCA reservations and slices • Slices consist of reservations • Slices are identified by GUID • They do have user-given names as an attribute • Reservations are identified by GUIDs • They have additional properties that describe • Constraints • Details of resources • Each reservation belongs to a slice • Slice and reservation GUIDs are the same across all actors • A ticket is issued by the broker to a slice • A ticket seen on the SM in a slice becomes a lease with the same GUID • A lease is issued by the AM for a ticket to a slice
ORCA Configuration files (1/3) • ORCA actor configuration • ORCA looks for configuration files relative to $ORCA_HOME environment variable • /etc/orca/am+broker-12080 • /etc/orca/sm-14080 • ORCA controller configuration • Similar, except everything is in reference to $CONTROLLER_HOME • /etc/orca/controller-11080
ORCA Configuration files (2/3) • Actor configuration • config/orca.properties – describes the container • config/config.xml – describes the actors in the container • config/runtime/ - contains keys of actors • config/ndl/ - contains NDL topology descriptions of actor topologies (AMs only)
ORCA Configuration files (3/3) • Controller configuration • config/controller.properties – similar to orca.properties; describes the container for the controller • geni-trusted.jks – Java truststore with trust roots for the XMLRPC interface to users • xmlrpc.jks – Java keystore with the keypair this controller uses for SSL • xmlrpc.user.whitelist, xmlrpc.user.blacklist – lists of user URN regex patterns that should be allowed/denied • The blacklist is parsed first
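The blacklist-first check can be sketched in a few lines. This is a hedged illustration of the stated rule, not ORCA's actual code: the exact regex semantics, and the policy that an empty whitelist denies everyone, are assumptions made for the example.

```python
# Illustrative sketch of the whitelist/blacklist check: the blacklist
# is consulted first and wins; otherwise a whitelist hit is required.
# (Assumption: patterns are searched anywhere in the URN, and an empty
# whitelist means deny-all.)
import re

def user_allowed(urn, whitelist_patterns, blacklist_patterns):
    """Deny if any blacklist pattern matches; else require a whitelist hit."""
    for pat in blacklist_patterns:
        if re.search(pat, urn):
            return False  # blacklist parsed first and wins
    return any(re.search(pat, urn) for pat in whitelist_patterns)
```

For example, a whitelist entry covering a whole institution can still be overridden by blacklisting one user's URN.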
Inventory and resource delegation • AMs and brokers have ‘special’ inventory slices • AM inventory slice describes the resources it owns • Broker inventory slice describes resources it was given • AMs also have slices named after the broker they delegated resources to • Describe resources delegated to that broker
Hands-on • Inspect existing slices on different actors using Pequod • Create an inter-rack slice in Flukes • Inspect slice in Flukes • Inspect slice in • SM • Broker • AMs • Close reservations/slice on various actors
Thank you! • http://www.exogeni.net • http://wiki.exogeni.net