Transformation Issues and Implementation

Presentation Transcript


  1. Transformation Issues and Implementation

Outline: Motivation · Main Transformation Issues · Reduction Translation

Motivation
• The growth of data-intensive computing has been tied to the popularity of new programming paradigms:
• Map-Reduce and similar systems have emerged;
• Various high-level High Performance Computing (HPC) languages have been developed.
• Questions:
• I. Are HPC languages suitable for expressing data-intensive computations?
• II. a) If so, what are the issues in using them?
• II. b) If not, what characteristics of data-intensive computations force the need for separate languages?

Reduction Translation
• Invoke the split function in FREERIDE;
• Call the reduction function to update the reduction object (redobj);
• Call the combine (and finalize) functions.
• The main point is the two-stage mapping algorithm:
• Collect the necessary information;
• De-linearize the data set.

Linearization Algorithm
• Two-stage algorithm:
• I. Compute the linearized data size;
• II. Linearize the data set.
[Figure: data structure before and after linearization — records data[0..l-1], each holding a nested array b1[0..n-1] plus a scalar b2 (and a1[0..m-1] plus a2), are flattened by the Linearizing Alg into a single contiguous Linear_data[ ] buffer; the Mapping Alg performs the reverse.]

Chapel & Reduction Support
• Chapel supports two kinds of reduction model:
• Local-view abstraction: straightforward to implement;
• Global-view abstraction, with three phases:
• accumulate: local reduction;
• combine: global reduction;
• generate: post-processing.

Conclusions
• Present a case study of the possible use of a new HPC language for data-intensive computations;
• Show how to translate the reduction features of Chapel down to the FREERIDE middleware;
• Combine the productivity of a high-level language with the performance of a specialized runtime system.
Experimental Results

Configuration
• CPU: Intel Xeon E5345 (2 × quad-core, 2.33 GHz)
• Memory: 6 GB
• OS: 64-bit Linux

FREERIDE Middleware
• Local reduction: the API function void (*reduction_t)(reduction_args_t*) is called, and the local reduction object is updated;
• Global reduction: the API function void (*combination_t)(void*) is called, and the copies of the reduction object are combined.

[Charts: K-means and PCA results for the parameter sets row=1000, column=10,000 (Size=1.2G; k=10; i=10); row=1000, column=100,000 (Size=1.2G; k=100; i=1); and Size=12M; k=100; i=10.]

Acknowledgments: This work is supported by NSF awards to OSU and DARPA awards to Cray Inc.
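The local/global reduction split in the FREERIDE API can be illustrated with a minimal sketch. Only the callback shapes come from the slide; the layout of the argument struct, the sum reduction, and the explicit-parameter combine helper (used here instead of the slide's void* signature, for clarity) are assumptions made for the example.

```cpp
#include <cassert>

// Hypothetical argument struct standing in for the slide's reduction
// argument type; the real FREERIDE layout is not shown in the deck.
struct reduction_args_t {
    double* redobj;  // local reduction object (here: a running sum)
    double value;    // one data element to fold in
};

// Local reduction callback: invoked once per element on each thread,
// updating that thread's copy of the reduction object.
void local_reduce(reduction_args_t* args) {
    *args->redobj += args->value;
}

// Global reduction: merges the per-thread copies of the reduction object
// into a single result once local processing has finished.
void combine(const double* copies, int n, double* result) {
    *result = 0.0;
    for (int i = 0; i < n; ++i)
        *result += copies[i];
}
```

The same two-phase shape underlies Chapel's global-view reductions: local_reduce plays the role of accumulate, and combine merges the per-task states.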
