
The Gamma Operator for Big Data Summarization on an Array DBMS


Presentation Transcript


  1. The Gamma Operator for Big Data Summarization on an Array DBMS Carlos Ordonez

  2. Acknowledgments Michael Stonebraker from MIT My PhD students: Yiqun Zhang, Wellington Cabrera SciDB team: Paul Brown, Bryan Lewis, Alex Polyakov

  3. Why SciDB?
  • Large matrices beyond RAM size
  • Storage by row or column is not good enough
  • Matrices are natural in statistics, engineering, and science
  • Multidimensional arrays map to matrices, but they are not the same thing
  • Parallel shared-nothing architecture is best for big data analytics
  • Closer to DBMS technology, but with some similarity to Hadoop
  • Feasible to create array operators that take matrices as input and return a matrix as output
  • Can combine processing with the R package and LAPACK

  4. Old: separate sufficient statistics
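  In the old approach these statistics are three separate aggregations; the formulas below are a reconstruction consistent with the Γ definition on the following slides, where x_i is the i-th d-dimensional point of the data set X:

  $$ n = |X|, \qquad L = \sum_{i=1}^{n} x_i, \qquad Q = \sum_{i=1}^{n} x_i x_i^T $$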

  5. New: Generalizing and unifying Sufficient Statistics: Z=[1,X,Y]
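  With each point augmented as z_i = [1, x_i, y_i]^T and stored as a column of Z, all three statistics and the cross-terms with Y become blocks of one matrix product; the (d+2)×(d+2) layout shown here is reconstructed from those definitions:

  $$ \Gamma = Z Z^T = \sum_{i=1}^{n} z_i z_i^T =
     \begin{bmatrix}
       n & L^T & \sum_i y_i \\
       L & Q & \sum_i x_i y_i \\
       \sum_i y_i & \sum_i y_i x_i^T & \sum_i y_i^2
     \end{bmatrix} $$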

  6. Equivalent equations with projections from Γ
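  Two standard examples, added here to make the projections concrete: the mean vector and covariance matrix follow directly from the n, L, Q blocks of Γ:

  $$ \mu = \frac{L}{n}, \qquad \Sigma = \frac{Q}{n} - \mu \mu^T $$

  Likewise, the normal equations of linear regression need only sub-blocks of Γ, since both Z Z^T and the X-Y cross-products are inside it.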

  7. Important properties of Γ: computation is fully PARALLEL (see the equations below)
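  Because Γ is a sum of rank-one matrices, it distributes over any partition of X across N workers: worker I computes a local Γ^[I] on its partition X^[I], and the results are simply added:

  $$ \Gamma = \sum_{I=1}^{N} \Gamma^{[I]}, \qquad \Gamma^{[I]} = \sum_{i \in X^{[I]}} z_i z_i^T $$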

  8. Storage in array chunks

  9. In SciDB we store the points in X as a 2D array. [Diagram: each worker SCANs its local chunks.]

  10. Array storage and processing in SciDB: Assuming d<<n, it is natural to hash-partition X by i=1..n. Gamma computation is fully parallel, maintaining local Gamma versions in RAM. X can be read with a fully parallel scan. There is no need to write Gamma to disk during the scan, unless fault tolerance is required.

  11. A point must fit in one chunk; otherwise, a join is needed (slow). [Diagram: chunk layouts on the Coordinator and Worker 1, contrasting a layout that splits a point across chunks (NO!) with one that keeps each point whole (OK).]

  12. Parallel computation: each worker computes a local Gamma and sends it to the coordinator, as sketched below. [Diagram: Worker 1 and Worker 2 each send their partial Gamma to the Coordinator.]
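  A minimal sketch of the coordinator-side merge, exploiting the additivity of Γ from slide 7; the buffer layout and names are illustrative assumptions, not the operator's actual source:

      #include <vector>

      // Partial Gammas are additive, so the coordinator merge is just an
      // element-wise sum of the (d+2)x(d+2) buffers received from workers.
      using Gamma = std::vector<double>;  // row-major (d+2)*(d+2) buffer

      Gamma mergeGammas(const std::vector<Gamma>& partials, int d) {
          const int m = d + 2;             // z_i = [1, x_i, y_i] adds 2 entries
          Gamma total(m * m, 0.0);
          for (const Gamma& g : partials)  // one partial Gamma per worker
              for (int k = 0; k < m * m; ++k)
                  total[k] += g[k];
          return total;
      }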

  13. Gamma Operator algorithm: full Γ
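  A sketch of the per-worker scan under the assumptions stated on the previous slides (each z_i fits in one chunk, Γ fits in RAM); this is illustrative C++ in the spirit of the operator, not its actual source, and it exploits the symmetry of Γ by accumulating only the upper triangle:

      #include <cstddef>
      #include <vector>

      // One pass over the local partition: G += z_i * z_i^T per point.
      // X holds n points with d attributes each; Y holds the n target values.
      std::vector<double> localGamma(const std::vector<std::vector<double>>& X,
                                     const std::vector<double>& Y) {
          const int d = X.empty() ? 0 : static_cast<int>(X[0].size());
          const int m = d + 2;                    // z_i = [1, x_i, y_i]
          std::vector<double> G(m * m, 0.0);      // row-major Gamma
          std::vector<double> z(m);
          for (std::size_t i = 0; i < X.size(); ++i) {
              z[0] = 1.0;                         // intercept entry
              for (int a = 0; a < d; ++a) z[a + 1] = X[i][a];
              z[m - 1] = Y[i];                    // target value
              for (int a = 0; a < m; ++a)         // rank-1 update z z^T
                  for (int b = a; b < m; ++b)     // upper triangle only
                      G[a * m + b] += z[a] * z[b];
          }
          for (int a = 1; a < m; ++a)             // mirror to lower triangle
              for (int b = 0; b < a; ++b)
                  G[a * m + b] = G[b * m + a];
          return G;
      }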

  14. Pros:
  • Algorithm evaluation with physical array operators
  • Since xi fits in one chunk, we do not need to compute joins
  • Since xi*xiT can be computed in RAM, we avoid an aggregation that would require sorting points by i
  • No need to store X twice (X and XT): half the I/O
  • No need to transpose X, a costly reorganization even in RAM
  • Operator works in C++ compiled code: fast

  15. System issues and limitations
  • Gamma is not efficiently computable in AQL or AFL; hence an operator is required
  • Arrays of tuples in SciDB are more general, but cumbersome for matrix manipulation: we use arrays of a single attribute (double)
  • Points must be stored completely inside a chunk: wide rectangular chunks, which may not be I/O optimal
  • Slow: arrays must be pre-processed to SciDB load format, loaded to a 1D array, and re-dimensioned => the load must be optimized
  • Multiple SciDB instances per node improve I/O speed: interleaving CPU
  • Larger chunks are better: 8 MB, especially for dense matrices; avoid shuffling; avoid joins
  • Dense (alpha) and sparse (beta) versions

  16. Benchmark: scale up
  • Small: cluster with 2 Intel Quad-core servers, 4 GB RAM, 3 TB disk
  • Large: Amazon cloud 2

  17. Combination: SciDB + R

  18. Conclusions
  • One-pass parallel summarization operator for a large matrix
  • Optimization of outer matrix multiplication as a sum (aggregation) of vector outer products
  • Operator compatible with any parallel shared-nothing system
  • Gamma matrix must fit in RAM, but n is unlimited
  • Summarization matrix can be exploited in many intermediate computations (with appropriate projections) in linear models
  • Simplifies many methods to two phases: summarization, then computing model parameters
  • Requires arrays, but can still work with SQL or MapReduce

  19. Future work
  • Theory
    • Use Gamma in other models like logistic regression, clustering, factor analysis, HMMs
    • Connection to frequent itemsets
    • Sampling
    • Higher expected moments, covariates
    • Unlikely: numeric stability with unnormalized sorted data
  • Systems
    • DONE: sparse matrices: layout, compression
    • DONE: beat LAPACK on high d
    • Online model learning (cursor interface needed)
    • Unlimited d (currently d>8000); join required for high d? Parallel processing of high d is more complicated, chunked
    • PENDING: interface with BLAS and MKL
    • Faster than UDFs in a columnar DBMS?
