
Resource Fabrics: The Next Level of Grids and Clouds


Presentation Transcript


  1. Resource Fabrics: The Next Level of Grids and Clouds
     Lei Shi

  2. Introduction
  • Clouds and multi-core processors make concurrent compute units available to the average user
    • Cloud systems: server-like machines accessed over the internet
    • Multi-core machines: locally available
  • Scale
    • Cloud systems: scale by replication
    • Multi-core machines: scale vertically across the local resources (see the sketch below)
  • Context
    • HPC: execution of one process per core
    • Desktop applications: concurrent execution and time-sharing
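
A minimal C++ sketch of the "scale vertically across the local resources" point: the program asks how many concurrent compute units the local machine offers and spreads work across them. The worker function, problem size and slicing are illustrative assumptions, not part of the original slides.

    #include <cstdio>
    #include <thread>
    #include <vector>

    // Illustrative worker: each thread handles one slice of the data.
    static void process_slice(int begin, int end) {
        for (int i = begin; i < end; ++i) { /* compute on element i */ }
    }

    int main() {
        // Concurrent compute units available on the local machine.
        unsigned cores = std::thread::hardware_concurrency();
        if (cores == 0) cores = 1;
        const int n = 1 << 20;                        // hypothetical problem size
        std::vector<std::thread> pool;
        for (unsigned c = 0; c < cores; ++c)          // scale vertically across the local cores
            pool.emplace_back(process_slice, int(c * n / cores), int((c + 1) * n / cores));
        for (auto &t : pool) t.join();
        std::printf("work spread over %u local compute units\n", cores);
        return 0;
    }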

  3. Introduction
  • Distributed system
    • A service or application execution consists of multiple resources connected via communication or message links
    • Clouds and multi-core systems differ in their capabilities
  • Exploit remote resources as if they were local
    • Operating system and programming model
    • Resource fabric

  4. New Architecture
  • Multi-core processors, clusters, clouds and grids all integrate compute units over a communication link
    • Multi-core: low latency
    • Clouds and grids: intranet/internet links with high latency
  • Latency of invocations over these links (see the sketch below)
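
A small sketch of why the latency of invocations matters once compute units are integrated over different links. The round-trip figures are assumptions chosen only to illustrate the contrast between an on-chip interconnect and an internet link.

    #include <chrono>
    #include <cstdio>

    // Hypothetical per-invocation round-trip latencies; the figures are illustrative only.
    constexpr std::chrono::nanoseconds CORE_LINK{100};                             // on-chip interconnect
    constexpr std::chrono::nanoseconds CLOUD_LINK = std::chrono::milliseconds(20); // internet link

    // Total time spent just on crossing the link for n invocations.
    static double total_ms(int n, std::chrono::nanoseconds per_call) {
        return std::chrono::duration<double, std::milli>(n * per_call).count();
    }

    int main() {
        const int calls = 1000;
        std::printf("%d invocations: multi-core link %.3f ms, cloud link %.1f ms\n",
                    calls, total_ms(calls, CORE_LINK), total_ms(calls, CLOUD_LINK));
        return 0;
    }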

  5. New Architecture
  • Interactive applications are sensitive to latency
    • Word processor: a response time above 0.1 second feels non-reactive
    • Browsers: more tolerant
  • Different resource types
    • Distinguished by their connectivity
    • Connects any number of von Neumann-like units
  • New architecture from the programming perspective
    • Move I/O closer to the processing unit
    • Dedicated I/O between PU and MU (see the code sketch after the next slide)

  6. Modified von Neumann Architecture
     (diagram: typical architecture vs. modified architecture)
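
A toy C++ model of the modified architecture sketched above, in which every processing unit owns a memory unit and reaches it over a dedicated I/O channel instead of a shared bus. All type names and the memory size are illustrative assumptions.

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct MemoryUnit { std::vector<std::uint8_t> cells; };     // MU: local storage

    struct IOChannel {                                          // dedicated link between one PU and its MU
        MemoryUnit *local;
        std::uint8_t read(std::size_t a) const { return local->cells[a]; }
        void write(std::size_t a, std::uint8_t v) { local->cells[a] = v; }
    };

    struct ProcessingUnit {                                     // PU: its I/O sits right next to it
        IOChannel io;
        void run() {
            io.write(0, 42);
            std::printf("PU read back %d\n", int(io.read(0)));
        }
    };

    int main() {
        MemoryUnit mu{std::vector<std::uint8_t>(64, 0)};        // illustrative 64-cell memory
        ProcessingUnit pu{IOChannel{&mu}};
        pu.run();
        return 0;
    }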

  7. New Data Management Model
  • Data generation, exchange and storage
    • Time-consuming
    • Need to be managed in a more intelligent fashion
  • Use frequency
    • Automatic space requests and dynamic tracking
  • Data replication (see the sketch below)
    • Increases locality and availability
  • Mapping data sets onto physical devices without affecting applications
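
A sketch of the use-frequency idea under assumed names: a manager records which node holds a copy of each data set, counts accesses per node, and replicates a data set to a node once that node reads it often enough. The threshold and the node and data-set names are illustrative, not taken from the slides.

    #include <cstdio>
    #include <map>
    #include <set>
    #include <string>
    #include <utility>

    class DataManager {
        std::map<std::string, std::set<std::string>> replicas;    // data set -> nodes holding a copy
        std::map<std::pair<std::string, std::string>, int> hits;  // (data set, node) -> access count
        static const int REPLICATE_AFTER = 3;
    public:
        void add(const std::string &ds, const std::string &home) { replicas[ds].insert(home); }
        void access(const std::string &ds, const std::string &node) {
            if (++hits[{ds, node}] >= REPLICATE_AFTER && !replicas[ds].count(node)) {
                replicas[ds].insert(node);                         // increase locality and availability
                std::printf("replicating %s to %s\n", ds.c_str(), node.c_str());
            }
        }
    };

    int main() {
        DataManager dm;
        dm.add("results.dat", "node0");
        for (int i = 0; i < 4; ++i) dm.access("results.dat", "node1"); // remote node reads repeatedly
        return 0;
    }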

  8. The Structure of Applications
  • The structure of an application is an indicator of its distributability
  • Runtime behavior provides more information about the potential code distribution
    • Invocations are functionality driven, not communication driven
  • Run-time analysis
    • Produces a dependency graph (see the sketch below)
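
A sketch of how run-time analysis could produce the dependency graph: record which data each code block reads and writes, then add an edge whenever a later block reads something an earlier block wrote. The trace below mirrors the C1/C2 example on the next slide; the structure and names are assumptions for illustration.

    #include <cstdio>
    #include <set>
    #include <string>
    #include <utility>
    #include <vector>

    struct BlockTrace { std::string name; std::set<std::string> reads, writes; };

    static std::vector<std::pair<std::string, std::string>>
    dependency_edges(const std::vector<BlockTrace> &trace) {
        std::vector<std::pair<std::string, std::string>> edges;
        for (std::size_t j = 0; j < trace.size(); ++j)
            for (std::size_t i = 0; i < j; ++i)
                for (const auto &v : trace[j].reads)
                    if (trace[i].writes.count(v)) {
                        edges.push_back({trace[i].name, trace[j].name});
                        break;                        // one edge per pair of blocks is enough
                    }
        return edges;
    }

    int main() {
        std::vector<BlockTrace> trace = {
            {"C1", {}, {"a"}},       // C1 initialises array a
            {"C2", {"a"}, {"a"}},    // C2 reads and updates a, so it depends on C1
        };
        for (auto &e : dependency_edges(trace))
            std::printf("%s -> %s\n", e.first.c_str(), e.second.c_str());
        return 0;
    }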

  9. The Structure of Applications
     C1: for (int i = 0; i < 4; i++) a[i] = 0;           // initialises a[0..3]
     C2: for (int i = 1; i < 3; i++) a[i] = a[i-1] * 2;  // reads a[0] written by C1, giving the dependency C1 -> C2

  10. The Structure of Applications
  • To increase execution performance, consider
    • Strength of the relationship between code blocks
    • Size of the code block
    • Size of the data
  • Extract segments
    • A good cutting point crosses fewer data accesses (see the sketch below)
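
A sketch of scoring candidate cutting points: for each possible split of a sequence of code blocks, count the data items accessed on both sides of the cut and prefer the split with the fewest, since it causes the fewest cross-boundary accesses. The blocks, accesses and names are illustrative assumptions.

    #include <cstddef>
    #include <cstdio>
    #include <string>
    #include <vector>

    struct Block { std::string name; std::vector<std::string> accesses; };

    // Number of data items touched both before and after the cut.
    static int shared_accesses(const std::vector<Block> &blocks, std::size_t cut) {
        int shared = 0;
        for (std::size_t i = 0; i < cut; ++i)
            for (const auto &v : blocks[i].accesses)
                for (std::size_t j = cut; j < blocks.size(); ++j)
                    for (const auto &w : blocks[j].accesses)
                        if (v == w) ++shared;
        return shared;
    }

    int main() {
        std::vector<Block> blocks = {
            {"B0", {"a", "b"}}, {"B1", {"b"}}, {"B2", {"c"}}, {"B3", {"c", "d"}},
        };
        std::size_t best = 1; int best_score = shared_accesses(blocks, 1);
        for (std::size_t cut = 2; cut < blocks.size(); ++cut) {
            int s = shared_accesses(blocks, cut);
            if (s < best_score) { best_score = s; best = cut; }
        }
        std::printf("best cut before %s (%d shared accesses)\n",
                    blocks[best].name.c_str(), best_score);
        return 0;
    }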

  11. Lifecycle of Applications
  • Information is acquired at runtime
  • Distribution information may change
  • Analysis of application behavior
  • Identification of appropriate resources
  • Distribution and adaptation of code and data
  • Execution and runtime analysis
  • Information storing (the whole cycle is sketched below)
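
A sketch of the lifecycle as a feedback loop, with hypothetical phase functions: what the runtime analysis of one execution learns is stored and feeds the next round of distribution, which is why the distribution information may change over time.

    #include <cstdio>

    struct Profile { int hot_segments; };                 // illustrative stand-in for stored behaviour

    static Profile analyse_behaviour(const Profile &previous) { return previous; }
    static void identify_resources(const Profile &p)  { std::printf("resources for %d segments\n", p.hot_segments); }
    static void distribute_and_adapt(const Profile &) { }
    static Profile execute_and_analyse(const Profile &p) { return {p.hot_segments + 1}; } // new runtime insight
    static void store(const Profile &) { }

    int main() {
        Profile stored{1};
        for (int run = 0; run < 3; ++run) {                // each execution refines the next one
            Profile p = analyse_behaviour(stored);
            identify_resources(p);
            distribute_and_adapt(p);
            stored = execute_and_analyse(p);
            store(stored);
        }
        return 0;
    }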

  12. Middleware for Resource Fabrics
  • A virtual environment is needed
    • Captures memory accesses and similar events
    • Virtual memory management
  • Distributed execution
    • Segments form a workflow with respect to
      • Availability of resources in principle
      • Minimizing execution time (see the placement sketch below)
  • Data maintenance
    • Preemptive distribution
    • Context switch
    • On demand
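
A sketch of the "minimizing execution time" placement decision under assumed cost figures: for each segment the middleware considers only resources that are available in principle and picks the one whose estimated compute-plus-transfer time is lowest.

    #include <cstdio>
    #include <string>
    #include <vector>

    struct Resource { std::string name; double speed; double mb_per_s; bool available; };

    static const Resource *place(double work, double data_mb, const std::vector<Resource> &rs) {
        const Resource *best = nullptr; double best_cost = 0;
        for (const auto &r : rs) {
            if (!r.available) continue;                    // availability of resources in principle
            double cost = work / r.speed + data_mb / r.mb_per_s;
            if (!best || cost < best_cost) { best = &r; best_cost = cost; }
        }
        return best;
    }

    int main() {
        std::vector<Resource> rs = {
            {"local core", 1.0, 5000.0, true},             // fast link, modest compute speed
            {"cloud node", 4.0, 50.0,  true},              // fast compute, slow link
        };
        const Resource *r = place(/*work=*/200.0, /*data_mb=*/1000.0, rs);
        if (r) std::printf("segment placed on %s\n", r->name.c_str());
        return 0;
    }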

  13. Middleware for Resource Fabrics

  14. S(o)OS project
  • Addresses these scenarios in its middleware design
  • http://www.soos-project.eu/
  • Distributed microkernel instances fit into the local memory of a compute unit
  • Local instances deal only with communication and virtual memory management

  15. References
  • Beyond Clouds – Towards Real Utility Computing, M. Assel et al.
  • Service-Oriented Operating Systems: Future Workspaces, L. Schubert et al.
  • Cloud Computing Expert Working Group Report: The Future of Cloud Computing
  • Resource Fabrics: The Next Level of Grids and Clouds, S. Wesner et al.

  16. Thank you Q & A
