
emulab Current and Future: An Emulation Testbed for Networks and Distributed Systems


Presentation Transcript


  1. emulab.net Current and Future: An Emulation Testbed for Networks and Distributed Systems. Jay Lepreau, University of Utah, December 12, 2001

  2. The Main Players • Undergrads • Chris Alfeld, Chad Barb • Grads • Dave Andersen, Shashi Guruprasad, Abhijeet Joglekar, Indrajeet Kumar, Mac Newbold • Staff • Mike Hibler, Rob Ricci, Leigh Stoller, Kirk Webb • Alumni • Various

  3. What? • A configurable Internet emulator in a room • Today: 328 nodes, 1646 cables, 4x BFS (switch) • virtualizable topology, links, software • Bare hardware with lots of tools • An instrument for experimental CS research • Universally available to any remote experimenter • Simple to use

  4. What’s a Node? • Physical hardware: PCs, StrongARMs • Virtual node: • Router (network emulation) • Host, middlebox (distributed system) • Future physical hardware: IXP1200 +

  5. Why? • “We evaluated our system on five nodes.” - job talk from a university with a 300-node cluster • “We evaluated our Web proxy design with 10 clients on 100Mbit ethernet.” • “Simulation results indicate ...” • “Memory and CPU demands on the individual nodes were not measured, but we believe will be modest.” • “The authors ignore interrupt handling overhead in their evaluation, which likely dominates all other costs.” • “Resource control remains an open problem.”

  6. Why? (cont’d) • “You have to know the right people to get access to the cluster.” • “The cluster is hard to use.” • “<Experimental network X> runs FreeBSD 2.2.x.” • “October’s schedule for <experimental network Y> is…” • “<Experimental network Z> is tunneled through the Internet.”

  7. Complementary to Other Experimental Environments • Simulation • Fast prototyping, easy to use, but less realistic • Small static testbeds • Real hardware and software, but hard to configure and maintain, and lack scale • Live networks • Realistic, but hard to control, measure, or reproduce results • emulab complements and also helps validate these environments

  8. “Programmable Patch Panel” [architecture diagram: 168 PCs and 160 Shark nodes wired to a control switch/router, with serial-line and power-control access; Web/DB/SNMP management servers reachable by users over the Internet]

  9. Experiment Creation Process

  10. Zoom In: One Node

  11. Fundamental Leverage: • Extremely Configurable • Easy to Use

  12. Key Design Aspects • Allow the experimenter complete control • Configurable link bandwidth, latency, and loss rates, via transparently interposed “traffic shaping” nodes that provide WAN emulation • … but provide fast tools for common cases • OS’s, state mgmt tools, IP, batch, ... • Disk loading – 6GB FreeBSD+Linux disk image • Unicast tool: 88 seconds to load • Multicast tool: 40 nodes simultaneously in < 5 minutes • Virtualization • of all experimenter-visible resources • node names, network interface names, network addrs • Allows swapin/swapout, easily scriptable

  13. Key Design Aspects (cont’d) • Flexible, extensible, powerful allocation algorithm • Matches desired “virtual” topology to currently available physical resources • Persistent state maintenance: • none on nodes, all in database • work from known state at boot time • Familiar, powerful, extensible configuration language: ns • Separate, isolated control network
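
To make the configuration language concrete, here is a minimal sketch of an Emulab ns experiment file. The tb- commands follow Emulab's testbed extensions to ns; the OS image name FBSD-STD and the node names are illustrative, not guaranteed to match a real installation.

    # Minimal Emulab experiment file: two nodes joined by a shaped link.
    set ns [new Simulator]
    source tb_compat.tcl              ;# Emulab's testbed extensions to ns

    set client [$ns node]
    set server [$ns node]

    # A 1.5Mb, 30ms link; Emulab transparently interposes a traffic-shaping
    # node to realize these characteristics on real hardware.
    set link0 [$ns duplex-link $client $server 1.5Mb 30ms DropTail]
    tb-set-link-loss $link0 0.02      ;# 2% packet loss

    tb-set-node-os $client FBSD-STD   ;# image name is illustrative
    tb-set-node-os $server FBSD-STD

    $ns run

Submitting such a script (see slide 18) would allocate two PCs, configure a VLAN between them, and set up shaping for the 1.5Mb / 30ms / 2%-loss link.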

  14. Obligatory Pictures

  15. Then

  16. Now

  17. A Few Research Issues and Challenges • Network management of unknown and untrusted entities • Security (root!) • Scheduling of experiments • Calibration, validation, and scaling • Artifact detection and control • NP-hard virtual --> physical mapping problem • Providing a reasonable user interface • ….

  18. How To Use It ... • Submit ns script via web form • Relax while emulab … • Generates config from script & stores in DB • Maps specified virtual topology to physical nodes • Allocates resources • Provides user accounts for node access • Assigns IP addresses and host names • Configures VLANs • Loads disks, reboots nodes, configures OSs • Yet more odds and ends ... • Runs experiment • Reports results • Takes ~3 min to set up 25 nodes

  19. An “Experiment” • emulab’s central operational entity • Directly generated by an ns script, • … then represented entirely by database state • Steps: Web, compile ns script, map, allocate, provide access, assign IP addrs, host names, configure VLANs, load disks, reboot, configure OS’s, run, report

  20. Mapping Example

  21. Automatic mapping of desired topologies and characteristics to physical resources • NP-hard problem: graph to graph mapping • Algorithm goals: • Minimize likelihood of experimental artifacts (bottlenecks) • “Optimal” packing of many simultaneous experiments • Extensible for heterogeneous hardware, software, new features • Randomized heuristic algorithm: simulated annealing • Typically completes in < 1 second • May move to genetic algorithm
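
As a rough illustration of the randomized heuristic, the toy Tcl script below (emphatically not Emulab's real mapper; the node names, switch layout, and cost function are all invented) anneals a virtual-to-physical assignment, penalizing virtual links whose endpoints land on different switches:

    # Toy simulated-annealing mapper (illustrative only).
    # Physical nodes and the switch each hangs off (hypothetical layout):
    array set switch_of {pc1 A pc2 A pc3 B pc4 B}
    set phys   {pc1 pc2 pc3 pc4}
    set vnodes {v0 v1 v2 v3}
    set vlinks {{v0 v1} {v1 v2} {v2 v3}}   ;# desired virtual links

    # Cost = number of virtual links that cross between switches
    # (a stand-in for "likelihood of experimental artifacts").
    proc cost {mapName} {
        upvar $mapName map
        global vlinks switch_of
        set c 0
        foreach l $vlinks {
            lassign $l a b
            if {$switch_of($map($a)) ne $switch_of($map($b))} { incr c }
        }
        return $c
    }

    # Start from an arbitrary one-to-one assignment.
    foreach v $vnodes p $phys { set map($v) $p }
    set cur [cost map]

    # Anneal: swap two assignments; accept worse moves with a
    # probability that falls as the temperature cools.
    for {set t 10.0} {$t > 0.01} {set t [expr {$t * 0.95}]} {
        set a [lindex $vnodes [expr {int(rand() * [llength $vnodes])}]]
        set b [lindex $vnodes [expr {int(rand() * [llength $vnodes])}]]
        set tmp $map($a); set map($a) $map($b); set map($b) $tmp
        set c [cost map]
        if {$c <= $cur || rand() < exp(($cur - $c) / $t)} {
            set cur $c
        } else {
            # Reject: undo the swap.
            set tmp $map($a); set map($a) $map($b); set map($b) $tmp
        }
    }
    puts "final cost $cur: [array get map]"

The production algorithm must additionally pack many simultaneous experiments and score heterogeneous hardware features, which is where the NP-hardness shows up.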

  22. Mapping Results • < 1 second for first solution, 40 nodes • “Good” solution within 5 seconds • Apparently insensitive to number of node “features”

  23. Disk Loading • 13 GB generic IDE 7200 rpm drives • Was 20 minutes for 6 GB image • Now 88 seconds • Unicast – domain-specific compression • Multicast – “Frisbee”
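
A back-of-the-envelope check (our arithmetic, not from the slides): 6 GB in 88 seconds is roughly 70 MB/s, about 550 Mbits/s of apparent throughput, more than a 100 Mbit network could carry. Most of the speedup must therefore come from the domain-specific compression (for example, never transferring unallocated disk blocks) rather than from raw bandwidth.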

  24. Testbed Users • 26 Active Projects • 20 External • 7 “active” active network projects • SANDS (TASC)** • Activecast (Kentucky)** • AMP NodeOS (NAI Labs)** • Active proxies (UMass) • XML-based content routing (MIT) • Janos, Agile protocols (Utah)** • 3 “not-so-active” DARPA AN projects • 4 other active security projects

  25. Users… • Two OSDI’00 and three SOSP’01 papers • 20% SOSP general acceptance rate • 60% SOSP acceptance rate for emulab users! • More emulabs under construction: • Kentucky, Duke, CMU, Cornell • Others intended: MIT, WUSTL, Princeton, HPLabs, Intel/UCB, Mt. Holyoke, …

  26. Ongoing and Future Work • Federation of many diverse “testbeds” • Challenge: heterogeneous sites • Challenge: resource allocation • Wireless nodes, mobile nodes • IXP1200 nodes, tools, code fragments • Routers, high-capacity shapers • Simulation/emulation transparency • Event system • Scheduling system • Topology generation tools and GUI • Data capture, logging, visualization tools • Microsoft OSs, high speed links, more nodes!

  27. A Global-scale Testbed • Federation key • Bottom-up “organic” growth • Local autonomy and priority • Existing hardware resources • Provides diverse hardware • PCs • Wireless, mobile • Real routers, switches (Wisconsin, …) • Network processors (IXP’s) • Research switches (WUSTL)

  28. NSF ITR Proposal (Nov 01) • Global-scale testbed • Utah primary • Subcontractors: • Brown co-PI (resource allocation) • MIT (RON overlay, wireless) • Duke (ModelNet, early adopter) • Mt. Holyoke (diverse users, education) • $5M, 5 years, almost no hardware

  29. Types of Sites • High-end facilities • Generic clusters • Generic labs • “Virtual machines” (leverage ANETS R&D) • Internet2 links between some sites

  30. Result… • Loosely coupled distributed system • Controlled isolation • “Internet Petri Dish”

  31. New Stuff: Extending to Wireless and Mobile • Problems with existing approaches: • Same problems as wired domain • But worse (simulation scaling, ...) • And more (no models for new technologies, ...)

  32. Our Approach: Exploit a Dense Mesh of Devices • Density enables broad range of emulation • Wireless • Deploy devices throughout campus • Measure NxN path characteristics (e.g. power, interference, bit error rate) • Employ diversity: 900 MHz, Bluetooth, IEEE 802.11 • Mobile • Leverage passive “couriers” • Assign PDAs to students walking to class • Equip public transit system with higher-end devices • Provides a realistic, predictable mobile testbed

  33. Possible User Interfaces • Specify desired device and path properties • emulab selects closest approximation • Specify desired spatial layout • emulab selects closest mapping • Manually select from deployed devices

  34. Wireless Virtual to Physical Mapping

  35. Available for universities, labs, and companies, for research and teaching, at: www.emulab.net
