Lecture Grids and Markup Languages
Presentation Transcript

  1. Lecture Grids and Markup Languages Gregor von Laszewski Argonne National Laboratory and University of Chicago gregor@mcs.anl.gov http://www.mcs.anl.gov/~gregor

  2. Outline • Gestalt of the Grid • State of the Grid • Example for a production Grid • Markup Languages • Example Query

  3. Gestalt of the Grid • We start the discussion with a famous picture used in early psychology experiments. • If we examine the drawing in detail, it will be rather difficult to decide what the different components represent in each of the interpretations. Although hat, feather, and ear are identifiable in the figure, one’s interpretation (Is it an old woman or a young girl?) is based instead on “perceptual evidence.” • This figure should remind us to be open to individual perceptions about Grids and to be aware of the multifaceted aspects that constitute the Gestalt of the Grid.

  4. Motivation: Perform Collaborative Multiscale Science • [Diagram labels: sensors, scientists, compute and storage facilities, consumer; measure, collaborate, calculate, deliver; observations, model, prediction, feedback] • von Laszewski, et al., Gestalt of the Grid, http://www.mcs.anl.gov/~gregor/

  5. The motivating experiment at ANL • [Diagram labels: Virtual Lecture Room, Advanced Photon Source, Scientist, Grid, Electronic Library and Databases]

  6. Grid: an evolving term (1) • Kleinrock 1969: • We will probably see the spread of computer utilities, which, like present electric and telephone utilities, will service individual homes and offices across the country. • 90s: Prior to using the term Grid • Catlett: pre 1996 metacomputer • Foster: 1996 networked supercomputing environment • von Laszewski: 1996 integration of knowledge resources (= data & humans) into the networked • 1999: The Grid Book • A computational Grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities • Limits definition to hardware and software infrastructure

  7. Grid: an evolving term (2) • 2000 von Laszewski: Grid approach • We define the Grid approach as a general concept and idea to promote a vision for sophisticated international scientific and business-oriented collaborations. • A Grid is the infrastructure that makes the Grid approach a reality. • A production Grid is a shared computing infrastructure of hardware, software, and knowledge resources that allows the coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations to enable sophisticated international scientific and business-oriented collaborations • An ad hoc Grid provides a production Grid that addresses management issues related to sporadic, ad hoc, and time-limited interactions and collaborations including the instantiation and management of the production Grid itself.

  8. Grid • Building a collaborative environment to share resources • Provide the users with an impression of a persistent infrastructure • Virtualize the concept of a resource • Virtualize the concept of groups sharing the resources

  9. History of Globus and CoG at ANL

  10. Management Challenge • Users' requirements result in a variety of complex challenges • They will keep us busy for quite a while • We should not expect the solution to be here tomorrow, or that it was here yesterday.

  11. Grid Management Aspects • Information • Security • …

  12. Subset of Grid-related Security Concepts • Single sign-on • Authorization • Authentication • Secure communication through encryption and non-repudiation • Access control through authentication and authorization • Community authorization • Secure execution • Delegation

  13. Grid computing must address the integration challenge

  14. Grid deployments and software releases

  15. Grid Computing is more than middleware • Grid computing must be seamlessly integrated in commodity technologies to be effective

  16. Evolution invariant architectures • Longevity is bound to evolution invariant architectures

  17. Visual Interfaces / Grid faces • Education needs easy access to lower the barrier to entry

  18. Rapid Prototyping: Job Submission

  With the Java CoG Kit:

  // Check whether you can submit a job to a particular gatekeeper.
  Gram.ping(proxy, "hot.mcs.anl.gov");

  // Create a job.
  GramJob job = new GramJob(proxy, rsl.toRSL());

  // Add a status change listener.
  class GramJobListenerImpl implements GramJobListener {
      public void statusChanged(GramJob job) {
          String status = job.getStatusAsString();
          System.out.println(status);
      }
  }
  job.addListener(new GramJobListenerImpl());

  // Submit the job to a GRAM resource manager.
  job.request("hot.mcs.anl.gov"); // default IANA port 2119

  // Cancel the job, if need be.
  job.cancel();

  Or, as a one-liner:

  Identity id = cog-run("-h hot.mcs.anl.gov -e nbody");

  The same job submission with the C client API:

  callback_func(void *user_arg, char *job_contact, int state, int errorcode)
  {
      globus_i_globusrun_gram_monitor_t *monitor;
      monitor = (globus_i_globusrun_gram_monitor_t *) user_arg;
      globus_mutex_lock(&monitor->mutex);
      monitor->job_state = state;
      switch(state) {
      case GLOBUS_GRAM_PROTOCOL_JOB_STATE_FAILED:
          if (monitor->verbose) {
              globus_libc_printf("GLOBUS_GRAM_PROTOCOL_JOB_STATE_FAILED\n");
          }
          monitor->done = GLOBUS_TRUE;
          break;
      case GLOBUS_GRAM_PROTOCOL_JOB_STATE_DONE:
          if (monitor->verbose) {
              globus_libc_printf("GLOBUS_GRAM_PROTOCOL_JOB_STATE_DONE\n");
          }
          monitor->done = GLOBUS_TRUE;
          break;
      }
      globus_cond_signal(&monitor->cond);
      globus_mutex_unlock(&monitor->mutex);
  }

  globus_l_globusrun_gramrun(char *request_string, unsigned long options, char *rm_contact)
  {
      char *callback_contact = GLOBUS_NULL;
      char *job_contact = GLOBUS_NULL;
      globus_i_globusrun_gram_monitor_t monitor;
      int err;

      monitor.done = GLOBUS_FALSE;
      monitor.verbose = verbose;
      globus_mutex_init(&monitor.mutex, GLOBUS_NULL);
      globus_cond_init(&monitor.cond, GLOBUS_NULL);
      err = globus_module_activate(GLOBUS_GRAM_CLIENT_MODULE);
      if (err != GLOBUS_SUCCESS) { … }
      err = globus_gram_client_callback_allow(
          globus_l_globusrun_gram_callback_func,
          (void *) &monitor, &callback_contact);
      if (err != GLOBUS_SUCCESS) { … }
      err = globus_gram_client_job_request(rm_contact, request_string,
          GLOBUS_GRAM_PROTOCOL_JOB_STATE_ALL,
          callback_contact, &job_contact);
      if (err != GLOBUS_SUCCESS) { … }
      globus_mutex_lock(&monitor.mutex);
      while (!monitor.done) {
          globus_cond_wait(&monitor.cond, &monitor.mutex);
      }
      globus_mutex_unlock(&monitor.mutex);
      globus_gram_client_callback_disallow(callback_contact);
      globus_free(callback_contact);
      globus_mutex_destroy(&monitor.mutex);
      globus_cond_destroy(&monitor.cond);
  }

  19. Scientific workflows

  <project>
    <include file="cogkit.xml"/>
    <execute executable="/bin/climate"
             host="hot.mcs.anl.gov"
             provider="GT4"/>
    <echo message="Job completed"/>
  </project>

  • Lesson we seem to learn: Kepler and Taverna are complex
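A workflow engine for documents like the one above boils down to walking the XML tree and dispatching on element names. The element and attribute names below come from the slide's example; the class name `WorkflowReader` and its method are hypothetical, a minimal sketch using only the JDK's built-in DOM parser, not the actual CoG Kit implementation.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;

public class WorkflowReader {
    // Parse a <project> workflow and describe each <execute> step.
    public static String describe(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            StringBuilder out = new StringBuilder();
            NodeList steps = doc.getElementsByTagName("execute");
            for (int i = 0; i < steps.getLength(); i++) {
                Element e = (Element) steps.item(i);
                out.append(e.getAttribute("executable"))
                   .append(" on ").append(e.getAttribute("host"))
                   .append(" via ").append(e.getAttribute("provider"));
            }
            return out.toString();
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }

    public static void main(String[] args) {
        String xml = "<project>"
            + "<execute executable=\"/bin/climate\" host=\"hot.mcs.anl.gov\""
            + " provider=\"GT4\"/>"
            + "<echo message=\"Job completed\"/>"
            + "</project>";
        System.out.println(describe(xml)); // prints: /bin/climate on hot.mcs.anl.gov via GT4
    }
}
```

A real engine would additionally resolve the `<include>` element and hand each step to the named provider; this sketch only shows the parsing side.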

  20. Education • Tutorial and slide material is available for Globus • It covers only a portion of what beginners need • We found that for beginners the entry curve is steep • The CoG Kit entry curve is relatively low • Authentication, job submission, file transfer (ssh-like …) • Used successfully in REU and SULI projects (undergrads) • Viz/GUIs get students interested • Educational dichotomy: we do want to use the Grid, but do not want, or do not have the time, to learn about it

  21. References • Globus • http://www.globus.org • CoG Kits • http://www.cogkit.org • Portals • http://www.ogce.org • Papers • http://www.mcs.anl.gov/~gregor • The Grid-Idea and Its Evolution, Gregor von Laszewski, accepted for publication in the Journal of Information Technology, http://www.mcs.anl.gov/~gregor/papers/vonLaszewski-grid-idea.pdf • Biography • Gregor von Laszewski is a scientist at Argonne National Laboratory and a fellow of the Computation Institute at the University of Chicago. He received a master's degree in 1990 from the University of Bonn, Germany, and a Ph.D. in computer science in 1996 from Syracuse University. He has been involved in Grid computing since the term was coined. His current research interests are in the areas of Grid computing, Grid workflows, and Grid user interfaces. He is the principal investigator of the Java Commodity Grid Kit, which provides a basis for many Grid-related projects.

  22. Why do we need the Grid today? Changing Nature of Work • Collaborative and Dynamic: project-focused, globally distributed teams, spanning organizations within and beyond company boundaries • Distributed and Heterogeneous: each team member/group brings own data, compute, & other resources into the project • Data & Computation Intensive: access to computing and data resources must be coordinated across the collaboration • Concurrent Innovation Cycles: resources must be available to projects with strong QoS, & also reflect enterprise-wide business priorities • IT must adapt to this new reality

  23. Approach: Bridging the Application-Resource Gap • [Architecture diagram labels: User Application, Tool, Workflow; uniform interfaces, security mechanisms, Web service transport, monitoring; Registry, Credent., DAIS, GRAM, User Svc, GridFTP, Host Env; Database, Specialized resource, Computers, Storage]

  24. GT4 & Web Services • [Stack diagram labels: User Applications; Custom Services; Custom WSRF Services; GT4 WSRF Web Services; Registry & Admin; GT4 Container (e.g., Apache Axis); WS-A, WSRF, WS-Notification; WSDL, SOAP, WS-Security]

  25. GT4 Services Include … • Data • GridFTP: file access & movement • Reliable File Transfer • Replica Location Service • Data Access & Integration: database access • Computation • GRAM: reliable job submission • Workspace: virtual machine deployment • Security • Credential repository, authorization, … • … & many others …

  26. Globus Used to Create Powerful Systems: E.g., Cancer Bioinformatics Grid • [Stack diagram labels: Mobius, Globus, BPEL, GRAM, myProxy, OGSA-DAI, Globus Toolkit, GSI, CAS, caCORE; Functions: Management, Schema Management, Metadata Management, ID Resolution, Workflow, Security, Resource Management, Service Registry, Service Description, Grid Communication, Protocol, Transport] • Spans 60 NIH cancer centers across the U.S. • Slide credit: Peter Covitz, National Institutes of Health

  27. A Stateful Odyssey “Tell of the storm-tossed man, O Muse, who wandered long …” (Homer) • A simple goal • Web Services conventions for manipulating state • A hopeful departure • OGSI: Open Grid Services Infrastructure • Some detours en route • WS-RF: WS Resource Framework • WS-Transfer and friends • Home at last? • WS-ResourceTransfer, WS-Eventing, etc. “And the end of all our exploring/Will be to arrive where we started/And know the place for the first time” (Eliot)

  28. Stateful Odyssey: Practical Implications • GT4 supports WSRF today • Mechanisms have proved incredibly useful in many different contexts • A large user community • We will incorporate support for: • … final WSRF/WS-Notification specs • … WS-RT & friends (when specs mature) • If/when justified based on user demand • We will ensure backward compatibility • Via a single service with multiple interfaces

  29. Other Standards • Data • GridFTP • Data Access & Integration (DAIS) • Replica location (in progress) • Security • WS-Security, SAML — included in GT4 • XACML — included in GT4 • SAML-2 — awaiting contribution of code • Job submission • JSDL – alpha implementation available • BES – when BES specification completed

  30. A Production Grid

  31. The TeraGrid: “The world’s largest collection of supercomputers” • Slides courtesy of Jeffrey Gardner & Charlie Catlett

  32. TeraGrid: A High Level View • User & Facilities Support: help desk/portal and ASTA • Grid Software and Environment Deployment: CTSS • Authorization, Accounting and Authentication: TG allocation and accounting • Grid Monitoring and Information Systems: MDS4 & Inca

  33. TeraGrid Allocation & Accounting

  34. TeraGrid Allocation • Researchers request an “allocation of resource” through a formal process • The process works similarly to that for submitting an NSF grant proposal • There are eligibility requirements • US faculty member or researcher at a non-profit organization • The principal investigator submits a CV • More… • Description of research, requirements, etc. • The proposal is peer reviewed by allocation committees: • DAC: Development Allocation Committee • MRAC: Medium Resource Allocation Committee • LRAC: Large Resource Allocation Committee

  35. Authentication, Authorization & Accounting • TG authentication & authorization is automatic • User accounts are created when an allocation is granted • Resources can be accessed through: • ssh: via password, ssh keys • Grid access: via GSI mechanisms (grid-mapfile, proxies…) • Accounts are created across TG sites for users in the allocation • The accounting system is oriented towards TG Allocation Service Units (ASU) • The accounting system is well defined and monitored closely • Each TG site is responsible for its own accounting
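The grid-mapfile mentioned above is, at its core, a list of lines mapping a quoted certificate subject (DN) to a local account name. The entries and the class name below are hypothetical; this is a minimal sketch of the lookup a gatekeeper performs, not the Globus implementation.

```java
import java.util.Arrays;
import java.util.List;

public class GridMapFile {
    // Map a certificate subject (DN) to a local account, given grid-mapfile
    // lines of the form:  "/C=US/O=ANL/CN=Jane Doe" jdoe
    public static String mapUser(List<String> lines, String dn) {
        for (String line : lines) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) continue; // skip comments
            int close = line.lastIndexOf('"');
            if (!line.startsWith("\"") || close <= 0) continue;   // malformed line
            String subject = line.substring(1, close);
            String account = line.substring(close + 1).trim();
            if (subject.equals(dn)) return account;
        }
        return null; // DN not listed: access denied
    }

    public static void main(String[] args) {
        List<String> mapfile = Arrays.asList(
            "# hypothetical grid-mapfile entries",
            "\"/C=US/O=ANL/CN=Jane Doe\" jdoe");
        System.out.println(mapUser(mapfile, "/C=US/O=ANL/CN=Jane Doe")); // prints jdoe
    }
}
```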

  36. TeraGrid Monitoring and Validation

  37. TeraGrid and MDS4 • Information providers collect information from various sources • Local batch system: Torque, PBS • Cluster monitoring: Ganglia, Clumon… • Output is XML in a standard schema (attribute-value pairs) • Information is collected into a local Index service • Global TG-wide Index collector with WebMDS • [Diagram labels: Site1, Site2; GT4 Container; PBS, Torque; WS-GRAM; Clumon, Ganglia; MDS4 Index; TG Wide Index; WebMDS; Application; Browser]
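The "XML in a standard schema (attribute-value pairs)" that a provider emits can be pictured as a flat list of name/value attributes per resource. The element names, attribute names, and class below are invented for illustration and do not follow any actual MDS4 schema; the sketch only shows the shape of the data an index service aggregates.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class InfoProvider {
    // Emit a resource's monitoring data as simple attribute/value XML,
    // the kind of record an index service could aggregate.
    public static String toXml(String resource, Map<String, String> attrs) {
        StringBuilder sb = new StringBuilder();
        sb.append("<resource name=\"").append(resource).append("\">");
        for (Map.Entry<String, String> e : attrs.entrySet()) {
            sb.append("<attribute name=\"").append(e.getKey())
              .append("\" value=\"").append(e.getValue()).append("\"/>");
        }
        sb.append("</resource>");
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("queuedJobs", "12");  // e.g., from the local batch system
        attrs.put("freeCPUs", "48");    // e.g., from cluster monitoring
        System.out.println(toXml("tg-login1", attrs));
    }
}
```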

  38. Inca: TeraGrid Monitoring • Inca is a framework for the automated testing, benchmarking, and monitoring of Grid resources • Periodic scheduling of information gathering • Collects and archives site status information • Site validation & verification • Checks site services & deployment • Checks software stack & environment • Inca can also collect site performance measurements
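The validation loop described above is essentially: run a set of named checks, record pass/fail for each, and never let one failing check abort the run. The check names and the `SiteValidator` class are invented for illustration; this is a sketch of the pattern, not Inca's API.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

public class SiteValidator {
    // Run named site checks and collect pass/fail results.
    public static Map<String, Boolean> run(Map<String, Supplier<Boolean>> checks) {
        Map<String, Boolean> results = new LinkedHashMap<>();
        for (Map.Entry<String, Supplier<Boolean>> c : checks.entrySet()) {
            boolean ok;
            try {
                ok = c.getValue().get();
            } catch (RuntimeException ex) {
                ok = false; // a crashing check counts as a failure, not a fatal error
            }
            results.put(c.getKey(), ok);
        }
        return results;
    }

    public static void main(String[] args) {
        Map<String, Supplier<Boolean>> checks = new LinkedHashMap<>();
        checks.put("globus-installed", () -> true);
        checks.put("gridftp-reachable", () -> { throw new RuntimeException("timeout"); });
        System.out.println(run(checks)); // failing check recorded as false
    }
}
```

A real framework would run this on a periodic schedule and archive the results; here only a single pass is shown.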

  39. TeraGrid Grid Middleware & Software Environment

  40. The TeraGrid Environment • SoftEnv: all software on TG can be accessed via keys defined in $HOME/.soft • The SoftEnv system is user configurable • The environment can also be accessed at run time for WS GRAM jobs • You will be interacting with SoftEnv during the exercises later today
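A $HOME/.soft file is just a short list of keys; the specific key names below are made up for illustration, since the actual key list is site-specific:

```
# $HOME/.soft -- hypothetical keys; consult your site's published key list
@default
+globus-4.0
+java-1.5
```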

  41. TeraGrid Software: CTSS • CTSS: Coordinated TeraGrid Software and Services • A suite of software packages that includes the Globus Toolkit, Condor-G, MyProxy, OpenSSH… • Installed at every TG site

  42. TeraGrid User & Facility Support • The TeraGrid help desk: help@teragrid.org • Central location for user support • Routing of trouble tickets • TeraGrid portal: • User’s view of TG • Resources • Allocations… • Access to docs!

  43. TeraGrid’s ASTA Program: Advanced Support for TeraGrid Applications • Helps application scientists with TG resources • Associates one or more TG staff members with application scientists • Sustained effort: a minimum of 25% FTE • Goal: maximize effectiveness of application software & TeraGrid resources

  44. Topics Not Covered • Managed storage • Grid scheduling • More

  45. Managing Storage • Problems: • No good way to control the movement of files into and out of a site • Data is staged by fork processes! • Anyone with access to the site can submit such a request and swamp the server • There is also no space allocation control • A grid user can dump files of any size on a resource • If users do not clean up, sysadmins have to intervene • These can easily overwhelm a resource

  46. Managing Storage • A solution: SRM (Storage Resource Manager) • Grid-enabled interface to put data on a site • Provides scheduling of data transfer requests • Provides reservation of storage space • Technologies in the OSG pipeline: • dCache/SRM (disk cache with SRM) • Provided by DESY & FNAL • SE(s) available to OSG as a service from the USCMS VO • DRM (Disk Resource Manager) • Provided by LBL • Can be added on top of a normal UNIX file system

  $> globus-url-copy srm://ufdcache.phys.ufl.edu/cms/foo.rfz \
       gsiftp://cit.caltech.edu/data/bar.rfz
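The "reservation of storage space" an SRM adds over a plain file system amounts to bookkeeping: grant a reservation only while it fits within the site's capacity. The `SpaceManager` class and its numbers are invented for illustration; this sketches the accounting idea, not any real SRM interface.

```java
import java.util.HashMap;
import java.util.Map;

public class SpaceManager {
    // Track per-user space reservations against a site's total capacity.
    private final long capacityBytes;
    private long reservedBytes = 0;
    private final Map<String, Long> reservations = new HashMap<>();

    public SpaceManager(long capacityBytes) {
        this.capacityBytes = capacityBytes;
    }

    // Grant the reservation only if it fits in the remaining space.
    public synchronized boolean reserve(String user, long bytes) {
        if (bytes < 0 || reservedBytes + bytes > capacityBytes) return false;
        reservedBytes += bytes;
        reservations.merge(user, bytes, Long::sum);
        return true;
    }

    public synchronized long available() {
        return capacityBytes - reservedBytes;
    }

    public static void main(String[] args) {
        SpaceManager srm = new SpaceManager(100L * 1024 * 1024); // 100 MB site quota
        System.out.println(srm.reserve("alice", 60L * 1024 * 1024)); // true
        System.out.println(srm.reserve("bob", 60L * 1024 * 1024));   // false: would overflow
        System.out.println(srm.available());
    }
}
```

Without this control, the "dump files of any size" problem from the previous slide is exactly what happens; with it, an over-quota request is rejected up front.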

  47. Grid Scheduling • The problem: with job submission this still happens: users must pick a site by hand and ask “Why do I have to do this by hand?” • [Diagram labels: User Interface, VDT Client, Grid Site A, Grid Site B, Grid Site X]

  48. Grid Scheduling • Possible solutions: • Sphinx (GriPhyN, UF) • Workflow-based dynamic planning (late binding) • Policy-based scheduling • For more details, ask Laukik • Pegasus (GriPhyN, ISI/UC) • DAGMan-based planner and Grid scheduling (early binding) • More details in the workflow section • Resource Broker (LCG) • Matchmaker-based Grid scheduling • Employed by applications running on LCG Grid resources
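The matchmaking approach named above reduces to a simple idea: a job advertises requirements, each site advertises capabilities, and the broker picks a site whose capabilities satisfy all the requirements. The site names, capability strings, and `Matchmaker` class are all hypothetical; this is a toy sketch of the idea, not the LCG Resource Broker.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class Matchmaker {
    // Return the first site whose advertised capabilities cover every
    // requirement of the job, or null if no site qualifies.
    public static String match(Map<String, Set<String>> sites,
                               Set<String> requirements) {
        for (Map.Entry<String, Set<String>> site : sites.entrySet()) {
            if (site.getValue().containsAll(requirements)) {
                return site.getKey();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> sites = new LinkedHashMap<>();
        sites.put("siteA", new HashSet<>(Arrays.asList("gram", "x86")));
        sites.put("siteB", new HashSet<>(Arrays.asList("gram", "x86", "gridftp")));
        Set<String> job = new HashSet<>(Arrays.asList("gram", "gridftp"));
        System.out.println(match(sites, job)); // prints siteB
    }
}
```

A production broker would rank the candidate sites by load, policy, or locality instead of taking the first match; that ranking step is what distinguishes the policy-based schedulers listed above.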