Harness is a collaborative research initiative of Emory University, Oak Ridge National Laboratory, and the University of Tennessee, supported by the U.S. Department of Energy. This experimental metacomputing system is designed to be dynamically reconfigurable, making it easy to incorporate new technologies and to address software obsolescence. Harness supports diverse programming paradigms and allows software components to be loaded and reconfigured at runtime. The prototype features a Java-based, event-driven backplane that provides reliable services and interfaces for components, enabling the creation and management of Distributed Virtual Machines.
Harness: Framework of the Day
Vaidy Sunderam
CCGSC, Sep 26, 2000
Emulating Parallel Environments in Harness
M. Migliardi, S. Schubiger, T. Tyrakowski & Vaidy Sunderam, Emory University
Harness is a joint research project of Emory University, Oak Ridge National Laboratory, and the University of Tennessee. Research supported by U.S. DoE MICS.
What is Harness? • Experimental metacomputing system • dynamically reconfigurable • both the resources enrolled and the services offered • object-oriented • based on components with opaque interfaces • So what? • Easily incorporates new technologies • A solution to software obsolescence • Adapts the environment to the application, not vice versa (e.g., programming environments)
Research Foci • Distributed pluggability as a run-time configurable, dynamic implementation of a distributed, component-based programming environment • Supporting different (and mixed) programming paradigms on demand, via distributed components • Issues in the design and development of a software backplane enabling runtime reconfiguration, status tracking, and management
Harness Prototype • Current prototype of the software backplane • Portable (Java-based) • Extensible (component-based) • Dynamic (event-driven) • The backplane has been successfully tested as the basis for building, managing, using, and dismantling Distributed Virtual Machines (DVMs)
Backplane API and services • Standards • Java and RMI for the backplane and core services • standard, object-oriented, portable (esp. for baseline services), and able to integrate with native code • The backplane enables the loading and unloading of additional services • a Harness service is a component offering a well-defined interface to other components • The backplane allows querying the DVM status
Plugins: Overview

public interface H_plugin {
    public String getInterfaceType();
    public String getInterfaceDescriptor();
}

• Interface type: RMI, CORBA, SOCKETS, or UNKNOWN • Interface descriptor: additional information (for example, a port number)
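To make the interface concrete, here is a minimal sketch of a component implementing H_plugin; the class name and port number are illustrative assumptions, not part of Harness:

// Hypothetical plugin exposing a socket-based interface.
public class SocketEchoPlugin implements H_plugin {
    private final int port = 4711; // hypothetical listening port

    // Declares which interaction method this plugin exposes.
    public String getInterfaceType() {
        return "SOCKETS";
    }

    // Supplies additional connection information, here the port number.
    public String getInterfaceDescriptor() {
        return Integer.toString(port);
    }
}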
Plugin Handles //host_name/DVM_name.plugin.instance • host_name is the name of the physical machine on which the plugin is loaded • DVM_name is the name of the Harness distributed virtual machine that the plugin is part of • plugin is the fully qualified name of the plugin class, for example • edu.emory.mathcs.harness.H_INotifierImpl • instance is the ordinal number of the plugin: many plugins may be loaded on the same machine, and even multiple instances of the same plugin are allowed, so each loaded plugin is assigned a natural number to differentiate instances (these numbers are unique only within one Harness kernel)
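As an illustration, the sketch below parses a handle of this form; the class is hypothetical and assumes the DVM name itself contains no '.' characters:

// Hypothetical helper that splits a plugin handle into its parts.
public class PluginHandle {
    public final String host, dvm, pluginClass;
    public final int instance;

    public PluginHandle(String handle) {
        // e.g. "//nodeA/myDVM.edu.emory.mathcs.harness.H_INotifierImpl.0"
        int slash = handle.indexOf('/', 2);   // end of host_name
        host = handle.substring(2, slash);
        String rest = handle.substring(slash + 1);
        int firstDot = rest.indexOf('.');     // end of DVM_name
        int lastDot = rest.lastIndexOf('.');  // start of instance number
        dvm = rest.substring(0, firstDot);
        pluginClass = rest.substring(firstDot + 1, lastDot);
        instance = Integer.parseInt(rest.substring(lastDot + 1));
    }
}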
Qualities of Service • All-or-none vs. best effort • all-or-none: fails if any constituent operation fails • best effort: fails only if all fail • Guaranteed completion vs. guaranteed execution • guaranteed execution: returns to the user when the operation is committed • guaranteed completion: returns to the user when the operation is completed • All four combinations are valid QoS levels
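The four combinations can be enumerated explicitly; the enum below is a hypothetical illustration, not the Harness API:

// Hypothetical enumeration of the four valid QoS combinations.
enum QualityOfService {
    ALL_OR_NONE_EXECUTION,   // fails if any part fails; returns once committed
    ALL_OR_NONE_COMPLETION,  // fails if any part fails; returns once completed
    BEST_EFFORT_EXECUTION,   // fails only if all parts fail; returns once committed
    BEST_EFFORT_COMPLETION   // fails only if all parts fail; returns once completed
}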
Component Availability • The whole DVM acts as a source for class files and libraries • A set of trusted repositories can be searched for classes and libraries • The set of repositories can be extended or shrunk at run-time • Each computational resource can set a different policy for class and library retrieval
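The slides do not specify the retrieval mechanism; as a rough sketch of the general idea in plain Java, a trusted repository could be searched with a URLClassLoader (the repository URL below is a placeholder):

import java.net.URL;
import java.net.URLClassLoader;

// Hypothetical demonstration: fetch a component class from a repository URL.
public class RepositoryLoaderDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder repository; a real DVM would consult its configured set.
        URL[] repositories = { new URL("http://repo.example.edu/classes/") };
        try (URLClassLoader loader = new URLClassLoader(repositories)) {
            Class<?> c = loader.loadClass("edu.emory.mathcs.harness.H_INotifierImpl");
            System.out.println("Loaded " + c.getName());
        }
    }
}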
Computational Resource Consistency • The baseline is checked when a resource enrolls in a DVM • The baseline of a DVM comprises • the services needed to generate, process, and exchange events • the services needed to dynamically load new services • The services included in the baseline cannot be changed at runtime
Component Consistency • Naming • pluggable mapper components can be used to build user-defined naming policies • common fallback name space • Versioning • every component loaded in a DVM is checked for a version signature • Synchronization • the loading of every component is synchronized
Framework Architecture • Kernel • delivers baseline services on the local computational resource • provides an address space for components • keeps a local copy of the DVM status • DVM Server • guarantees total ordering of events • saves the state of crashed kernels • discovers the presence of kernels willing to join • if multicast discovery is used
Plug-in interaction methods • Harness defines several methods for interacting with plug-ins • RMI over JRMP • stream-level message passing • CORBA (work in progress) • support for plug-in-specific proxies (work in progress) • The set is open-ended, as it allows plug-in-specific, user-developed protocols
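For the RMI-over-JRMP case, a remote interaction interface would follow the standard java.rmi pattern; the interface below is a hedged sketch with an illustrative name and method, not the Harness API:

import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical RMI interaction interface for a plug-in.
public interface PluginControl extends Remote {
    // Every remote method must declare RemoteException.
    String ping(String message) throws RemoteException;
}

// A client would obtain a stub (e.g., via java.rmi.Naming.lookup(...))
// and invoke ping() as if it were a local call.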
Additional base services • The base distribution of the backplane also provides some basic services • Event distribution • subscription/notification model, both synchronous and callback-oriented, with guaranteed ordering • Message passing • point-to-point, reliable, ordered • reliable (not atomic) multicast and broadcast • Repository management • Unix-like process spawning • An example computational resource mapper
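A hypothetical sketch of the subscription/notification model just described; all names are illustrative, not the actual Harness API:

// Callback-oriented, asynchronous delivery.
interface HEventListener {
    void onEvent(Object event);
}

// Illustrative event-distribution service supporting both styles.
interface HEventService {
    void subscribe(String topic, HEventListener l); // register a callback
    Object waitForEvent(String topic);              // synchronous receipt
    void post(String topic, Object event);          // delivered in guaranteed order
}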
Distributed Components Programming • Component-based programming is the native programming model for Harness in the Emory implementation • Reusable, run-time configurable simulation framework • Proof of concept • successfully tested on the problem of crystal growth • demonstrated at SC98 and SC99
Harness Compatibility Suites • Supporting different programming paradigms, such as message passing, by means of sets of ad-hoc plug-ins (compatibility suites) • PVM compatibility suite as a proof of concept • designed to allow execution of unchanged, unaware PVM apps on top of Harness • testing in progress • beta release available…soon
PVM C-Suite Design Goals • require no changes to PVM applications to run in the new environment • minimize the number of changes to be inserted in the application-side PVM library • only one change necessary, due to a Java limitation • achieve a modular design for the services provided by the PVM daemon • open a transition path for PVM users • verify/test the Harness model and its capabilities
Harness PVMD • Generic plugins • event notifier, event poster • process spawner • point-to-point communications • PVMD-specific plugin • Harness-side liaison • application-side (libpvm3.a side) liaison • Housekeeping classes • Multithreaded for asynchrony
PVMD-specific plugin • Application interface classes • message (application-side) classes mapped to Harness (remote/local) method invocations • Daemon interface (RMI-based) • PVMD-to-PVMD control exchanges • PVMD-to-Master-PVMD interactions • Event handling system • Libpvm recompilation… • …needed to force INET sockets locally
Harness-PVM Benefits • As is: • the capability to soft-install native libraries and executables via Harness repositories • reliable, ordered delivery of PVM notifications, plus Harness status tracking/management • performance: almost on par with native PVM • Future: • customizable PVMD modules (e.g., networks) • access to distributed objects from PVM; mix PVM message passing with other paradigms
JavaSpaces in Harness • JavaSpace • unifies communication, coordination, and sharing • a JS holds "entries" (instances of classes implementing the Entry interface) • templates are used to match entries associatively • Operations on a JS • write: place an entry in the JS • read, readIfExists, take, takeIfExists • notify, snapshot
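These operations come from the standard JavaSpaces API; below is a hedged sketch of writing and taking an entry (the entry class and values are illustrative, and obtaining the JavaSpace reference via Jini lookup is omitted):

import net.jini.core.entry.Entry;
import net.jini.core.lease.Lease;
import net.jini.space.JavaSpace;

// Entries are classes with public fields and a public no-arg constructor.
public class CounterEntry implements Entry {
    public String name;   // entry fields must be public objects
    public Integer hits;

    public CounterEntry() {}  // required by the Entry contract
    public CounterEntry(String n, Integer h) { name = n; hits = h; }

    static void demo(JavaSpace space) throws Exception {
        space.write(new CounterEntry("requests", 1), null, Lease.FOREVER); // write
        CounterEntry tmpl = new CounterEntry("requests", null); // null fields match anything
        CounterEntry e = (CounterEntry) space.take(tmpl, null, Long.MAX_VALUE); // blocking take
        System.out.println(e.name + " = " + e.hits);
    }
}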
Harness JS Implementation • Architecture
Harness JS (contd.) • Interface • Jspace class: front end for sending and receiving requests to/from the Store class • Store • process() method: read/take requests are serviced immediately or queued (invoking process() blocks) • put() method: inserts a new entry into the JS • hash-based retrieval of entries
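A hypothetical simplification of such a hash-based store, with blocking process() and waking put(); real JavaSpaces matching is associative over templates, reduced here to a string key:

import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

// Illustrative store, not the actual Harness implementation.
class SimpleStore {
    private final Map<String, LinkedList<Object>> entries = new HashMap<>();

    // put(): insert a new entry and wake any blocked process() calls.
    synchronized void put(String key, Object entry) {
        entries.computeIfAbsent(key, k -> new LinkedList<>()).add(entry);
        notifyAll();
    }

    // process(): service a take request, blocking (queued) until an entry matches.
    synchronized Object process(String key) throws InterruptedException {
        LinkedList<Object> bucket;
        while ((bucket = entries.get(key)) == null || bucket.isEmpty()) {
            wait(); // request stays queued until a matching entry arrives
        }
        return bucket.removeFirst();
    }
}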
Harness JS (contd.) • Store Interceptor • Pluggable consistency/coherence subsystem • Replicated implementation used in prototype
Harness JS (contd.) • Token-ring implementation of groups • Performance: • simulated JS operations on a matrix object • compared to Sun's outrigger implementation, on Sun U10 workstations over 100 Mb Ethernet • one node writing; 10 reading/taking • write: 2x improvement over Sun JS • read: 4x to 7x improvement • take: 0.15x degradation
Harness Status • Dynamically Pluggable Java Backplane • Multiple ways to interact with plug-ins • Emulation of multiple parallel programming environments • Software: • Alpha: http://www.mathcs.emory.edu/harness/ • Release 1.0 during Fall 2000 • Alternative approach to Metacomputing