Supporting experiment workflows on ExoGENI with OMF and GIMI


Presentation Transcript


  1. Supporting experiment workflows on ExoGENI with OMF and GIMI
     Ilia Baldine, Jeff Chase, Mike Zink, Max Ott

  2. Testbed
     • 14 GPO-funded racks
     • Partnership between RENCI, Duke and IBM
     • IBM x3650 M3/M4 servers
       • 1x146GB 10K SAS hard drive + 1x500GB secondary drive
       • 48 GB RAM
       • Dual-socket 8-core CPU w/ Sandy Bridge
       • 10G dual-port Chelsio adapter
     • BNT 8264 10G/40G OpenFlow switch
     • DS3512 6TB sliverable storage
       • iSCSI interface for head node image storage as well as experimenter slivering
     • Each rack is a small networked cloud
       • OpenStack-based
       • EC2 nomenclature for node sizes (m1.small, m1.large, etc.); see the sketch after this slide
     • Interconnected by a combination of dynamic and static L2 circuits through regionals and national backbones
     • http://wiki.exogeni.net
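Since each rack is an OpenStack cloud that exposes node sizes in EC2 nomenclature, here is a minimal, illustrative sketch of choosing such a flavor for a requested core/RAM budget. The flavor catalogue and its numbers are assumptions for illustration, not ExoGENI's actual OpenStack flavor definitions.

```python
# Illustrative only: the real flavor-to-resource mapping is defined by each
# rack's OpenStack configuration; these numbers are assumptions.
from dataclasses import dataclass


@dataclass
class NodeSize:
    name: str       # EC2-style flavor name used when requesting a node
    vcpus: int      # cores backing the KVM instance
    ram_mb: int     # memory allocated to the instance


# Hypothetical catalogue in the EC2 nomenclature the racks expose.
FLAVORS = {
    "m1.small":  NodeSize("m1.small", 1, 2048),
    "m1.medium": NodeSize("m1.medium", 1, 4096),
    "m1.large":  NodeSize("m1.large", 2, 8192),
}


def pick_flavor(min_vcpus: int, min_ram_mb: int) -> NodeSize:
    """Return the smallest catalogued flavor satisfying the request."""
    candidates = [f for f in FLAVORS.values()
                  if f.vcpus >= min_vcpus and f.ram_mb >= min_ram_mb]
    if not candidates:
        raise ValueError("no flavor satisfies the request")
    return min(candidates, key=lambda f: (f.vcpus, f.ram_mb))


if __name__ == "__main__":
    print(pick_flavor(2, 4096).name)   # -> m1.large under these assumptions
```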

  3. ExoGENI Status
     • 3 racks deployed
       • RENCI, GPO and NICTA
     • 2 existing racks
       • Duke and UNC
     • 2 more racks coming
       • FIU and UH
     • Connected via BEN (http://ben.renci.org), LEARN, NLR FrameNet and Internet2

  4. ExoGENI slice isolation
     • Strong isolation is the goal
     • Compute instances are KVM-based and get a dedicated number of cores
     • VLANs are the basis of connectivity
     • VLANs can be best-effort or bandwidth-provisioned (within and between racks); see the sketch after this slide
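To make the isolation model concrete, the sketch below shows how a slice request could express dedicated cores per KVM instance and best-effort versus bandwidth-provisioned VLAN links. The classes and field names are hypothetical stand-ins, not the ORCA/ExoGENI request API (real requests are built with tools such as Flukes).

```python
# Hypothetical data model for a slice request; names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class ComputeNode:
    name: str
    cores: int                 # KVM instance receives this many dedicated cores


@dataclass
class VlanLink:
    endpoints: Tuple[str, str]                 # node names joined by the VLAN
    bandwidth_mbps: Optional[int] = None       # None = best effort, value = provisioned


@dataclass
class SliceRequest:
    nodes: List[ComputeNode] = field(default_factory=list)
    links: List[VlanLink] = field(default_factory=list)


req = SliceRequest(
    nodes=[ComputeNode("NodeA", cores=2), ComputeNode("NodeB", cores=2)],
    links=[
        VlanLink(("NodeA", "NodeB"), bandwidth_mbps=500),  # bandwidth-provisioned VLAN
        VlanLink(("NodeA", "NodeB")),                      # best-effort VLAN
    ],
)
```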

  5. Experiment Workflow

  6. Measurement Environment
     [Slide diagram: a persistent server (emmy9) hosting the OML server, AM and XMPP server; experiment nodes A, B and C, each running a resource controller (RC) and measurement library (ML); iRODS at BBN; the IREEL portal at RENCI; and a tutorial VM providing the user workspace with the experiment controller (EC), visualization and an iRODS client.]
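The diagram implies that the measurement library on each node streams samples to the OML server on emmy9, from where results can later be archived. The sketch below only mimics that inject-style flow; the class, hostname and port are placeholders, not the real OML client API (nodes use actual OML client libraries such as liboml2).

```python
# Placeholder sketch of the measurement flow in the diagram: a measurement
# library (ML) on a node injects samples toward the OML server on emmy9.
# The class and the server address below are assumptions, not the OML API.
import time

OML_SERVER = ("emmy9.example.net", 3003)   # host/port are illustrative


class SketchOMLClient:
    """Stand-in showing the inject-style workflow, not the OML wire protocol."""

    def __init__(self, server):
        self.server = server
        self.buffer = []          # a real ML would stream, not buffer locally

    def inject(self, measurement_point, values):
        # Record one sample for the named measurement point.
        self.buffer.append((time.time(), measurement_point, values))


client = SketchOMLClient(OML_SERVER)
client.inject("rtt", {"src": "NodeA", "dst": "NodeB", "rtt_ms": 1.7})
```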

  7. Tools
     • iRODS (see the sketch after this slide)
       • Integrated Rule-Oriented Data System that aims at managing distributed massive data
     • IREEL
       • Internet Remote Emulation Experiment Laboratory
       • Measurement portal
     • OMF/OML
       • ORBIT Measurement Framework
       • ORBIT Measurement Library
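As an example of the iRODS piece, the sketch below uses the python-irodsclient package to push a result file into an iRODS collection. The host, zone, credentials and paths are placeholders, not the values of the deployment at BBN.

```python
# A minimal sketch, assuming the python-irodsclient package; connection
# parameters and paths are placeholders for illustration.
from irods.session import iRODSSession

with iRODSSession(host="irods.example.org", port=1247,
                  user="experimenter", password="secret",
                  zone="geniZone") as session:
    # Upload a local measurement result file into the experiment's collection.
    session.data_objects.put("results/experiment.sq3",
                             "/geniZone/home/experimenter/experiment.sq3")
```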

  8. Thank you!
