
Data Motifs: A Benchmark Proposal for Big Data and AI

Wanling Gao, ICT, CAS & UCAS. Bench'18, Seattle, WA, USA.


Presentation Transcript


  1. Data Motifs: A Benchmark Proposal for Big Data and AI. Wanling Gao, ICT, CAS & UCAS. Bench'18, Seattle, WA, USA

  2. Database system: relational algebra, with five primitive and fundamental operators (Select, Project, Product, Union, Difference). Hardware and software co-design: "A New Golden Age for Computer Architecture" calls for domain-specific co-design -- John Hennessy and David Patterson. Big data and AI systems today are demand-driven and designed case by case: costly and time-consuming!

  3. Why Big Data and AI Benchmarking? Big data and AI solutions need to measure systems and architectures quantitatively.

  4. Benchmark Construction • Top-down: • representative program selection • can yield accurate representations of the program space of interest • usually impossible to make any form of hard statements about the representativeness • Bottom-up: • diverse range of characteristics • program characteristics are quantities that can be measured and compared • not all portions of the characteristics space are equally important -- C. Bienia. Benchmarking modern multiprocessors. PhD thesis, Princeton University, 2011.

  5. Current Benchmark Methodology • Popularity: CloudSuite, Parsec, BigDataBench 3.0 • Based on one application domain: TPC-C, TPC-DS

  6. What makes a good benchmark? Relevant, Portable, Scalable, Simple --- Jim Gray. How about benchmarking scalability? • Benchmark constructing cost: the complexity and diversity of big data and AI workloads, the impact of data on workload behaviors, and the number of workloads and datasets • Benchmark evaluation cost: the number of benchmarks and running cost, and the hardware configuration

  7. Benchmark Requirements for Big Data and AI • Large-scale system and architecture evaluation • Portability for the earlier stage • Comprehensiveness and reality for the later stage • Consistency across different communities • System: performance, cluster scalability • Architecture: micro-architectural behaviors • Machine learning: execution time, model prediction precision • Reproducibility and interpretability

  8. Data Motif-based Co-design & Benchmarking • Data motif definition: captures the common requirements of each class of unit of computation; is reasonably divorced from individual implementations; a minimum set to represent maximum patterns. We need to understand: what are the abstractions of frequently-appearing units of computation among big data and AI workloads (big data and AI motifs)? • Benefits: benchmarking scalability, portability, lower cost, better interpretation of performance data

  9. Overview • Looking back at history • What is Data Motif • BigDataBench 4.0: Unified Big Data and AI Benchmark Suite • Proxy Benchmarks for Simulation • Conclusion

  10. Successful abstractions in other domains…

  11. Abstraction --- Relational Algebra • Five primitive and fundamental operators: Select, Project, Product, Union, Difference • Theoretical foundation of databases • Strong expressive power: composes complex queries. From E. F. Codd, A Relational Model of Data for Large Shared Data Banks. Communications of the ACM, vol. 13, no. 6, 1970
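As a minimal illustration of how the five primitives compose into complex queries, here is a toy sketch over relations modeled as Python sets of tuples (illustrative only, not any database's implementation; the join at the end is just a Select over a Product):

```python
# Codd's five primitive operators over relations modeled as sets of tuples.

def select(rel, pred):
    """sigma: keep only the rows satisfying a predicate."""
    return {row for row in rel if pred(row)}

def project(rel, indices):
    """pi: keep only the listed columns."""
    return {tuple(row[i] for i in indices) for row in rel}

def product(r, s):
    """Cartesian product: concatenate every pair of rows."""
    return {a + b for a in r for b in s}

def union(r, s):
    return r | s

def difference(r, s):
    return r - s

# Composition example: an equi-join is select(product(R, S), columns match).
R = {(1, "a"), (2, "b")}
S = {("a", 10), ("c", 30)}
joined = select(product(R, S), lambda row: row[1] == row[2])
```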

  12. Abstraction --- Numerical Methods • Seven "motifs", proposed by Phillip Colella, expected to be important for the next decade • Simulation in the physical sciences is carried out using various combinations of the following core algorithms, each a distinctive combination of computation and data access: Dense linear algebra, Sparse linear algebra, FFT, Particles, Structured Grids, Unstructured Grids, Monte Carlo. From P. Colella, "Defining software requirements for scientific computing," 2004.

  13. Abstraction --- Parallel Computing • The Landscape of Parallel Computing Research: 13 dwarfs, from a Berkeley research group • Define building blocks for creating libraries & frameworks • Each dwarf is a pattern of computation and communication: Dense linear algebra, Sparse linear algebra, Spectral methods, N-body methods, Structured Grids, Unstructured Grids, Monte Carlo, Combinational logic, Graph traversal, Dynamic programming, Backtrack and branch-and-bound, Graphical models, Finite state machines. From K. Asanovic, R. Bodik, B. C. Catanzaro, J. J. Gebis, P. Husbands, K. Keutzer, D. A. Patterson, et al., "The Landscape of Parallel Computing Research: A View from Berkeley," Technical Report UCB/EECS-2006-183, EECS Department, University of California, Berkeley, 2006.

  14. Successful benchmarks based on abstractions…

  15. TPC Functional Workload Model • Application domain: encapsulates use cases • Functions of abstraction: an abstraction of the implementations of use cases in different application domains • Systems View and Physical View: different systems and hardware. -- Yanpei Chen, Francois Raab, Randy Katz: From TPC-C to Big Data Benchmarks: A Functional Workload Model, WBDB, 2012

  16. TPC-C Methodology • Functions of Abstraction • a mid-weight read-write transaction (i.e., New-Order) • a light-weight read-write transaction (i.e., Payment) • a mid-weight read-only transaction (i.e., Order-Status) • a batch of mid-weight read-write transactions (i.e., Delivery) • a heavy-weight read-only transaction (i.e., Stock-Level) • Functional Workload Model • captures in an implementation-independent manner the load that the system needs to service
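To make the functional workload model concrete, a driver can issue the five abstract transactions according to a weighted mix; the sketch below is hypothetical (not TPC code), and the weights only approximate TPC-C's commonly cited minimum-mix requirements:

```python
import random

# The load is specified implementation-independently as a mix of the five
# abstract transactions; weights are an approximation of the TPC-C minimum
# mix, used here purely for illustration.
MIX = {
    "New-Order": 0.45,      # mid-weight read-write
    "Payment": 0.43,        # light-weight read-write
    "Order-Status": 0.04,   # mid-weight read-only
    "Delivery": 0.04,       # batch of mid-weight read-writes
    "Stock-Level": 0.04,    # heavy-weight read-only
}

def next_transaction(rng=random):
    """Pick the next transaction type according to the weighted mix."""
    names, weights = zip(*MIX.items())
    return rng.choices(names, weights=weights, k=1)[0]

# A system under test plugs its own implementation of each transaction
# in behind these names; the functional model stays unchanged.
trace = [next_transaction() for _ in range(1000)]
```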

  17. https://pdfs.semanticscholar.org/bf41/25645dd8a4c0b7f0b1c5502c3713d4ffe68c.pdf

  18. HPCC Methodology http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.482.1783&rep=rep1&type=pdf

  19. Overview • Looking back at history • What is Data Motif • BigDataBench 4.0: Unified Big Data and AI Benchmark Suite • Proxy Benchmarks for Simulation • Conclusion

  20. AI Workload --- AlexNet

  21. Feature Extraction --- SIFT

  22. What is Data Motif? • Data motif --- a unified computation abstraction • Abstractions of time-consuming units of computation. Wanling Gao, Jianfeng Zhan, Lei Wang, et al. Data Motif: A Lens towards Fully Understanding Big Data and AI Workloads. PACT 2018.

  23. Algorithms • 40+ algorithms with a broad spectrum • Data mining / machine learning • Natural language processing • Computer vision • Bioinformatics

  24. Eight Data Motifs: Matrix, Sampling, Transform, Graph, Logic, Set, Statistics, Sort
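As a rough illustration of what the motifs abstract (these tiny functions are sketches, not the BigDataBench implementations), each motif is a unit of computation, and a real workload is a weighted combination of them:

```python
import math

def motif_sort(xs):            # Sort: ordering data
    return sorted(xs)

def motif_statistics(xs):      # Statistics: aggregation, e.g. a mean
    return sum(xs) / len(xs)

def motif_set(a, b):           # Set: membership and set operations
    return set(a) & set(b)

def motif_matrix(A, B):        # Matrix: dense linear algebra (naive GEMM)
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

def motif_transform(xs):       # Transform: a domain change, e.g. one DFT-like bin
    n = len(xs)
    return sum(x * math.cos(2 * math.pi * i / n) for i, x in enumerate(xs))

# E.g. one k-means step combines Matrix (distances), Statistics (centroid
# means), and Set/Sort (cluster assignment) with workload-specific weights.
```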

  25. Difference with Kernels • Data motifs: behaviors are affected by the sizes, patterns, types, and sources of different data inputs • Reflect not only computation patterns and memory access patterns, but also disk and network I/O patterns

  26. Domain-specific Hardware and Software Co-design • Tailoring the system and architecture to the characteristics of data motifs • New architecture/accelerator design • Data motif-based libraries • Bottleneck identification and optimization

  27. Overview • Looking back at history • What is Data Motif • BigDataBench 4.0: Unified Big Data and AI Benchmark Suite • Proxy Benchmarks for Simulation • Conclusion

  28. Scalable Benchmark Methodology • Data motif-based scalable methodology • Micro benchmark --- a single data motif • Component benchmark --- a combination of data motifs with different weights • Application benchmark --- an end-to-end application model
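The first two tiers can be sketched as follows; the function names and the weight-as-invocation-count convention are illustrative assumptions, not the BigDataBench API:

```python
import time

def run_micro(motif, data):
    """Micro benchmark: time a single data motif on one input."""
    t0 = time.perf_counter()
    motif(data)
    return time.perf_counter() - t0

def run_component(weighted_motifs, data):
    """Component benchmark: motifs combined with different weights.

    Here a weight is simply a relative invocation count; an application
    benchmark would go further and chain components end to end.
    """
    total = 0.0
    for motif, weight in weighted_motifs:
        for _ in range(weight):
            total += run_micro(motif, data)
    return total

# A toy component mixing a Sort motif (weight 3) and a Statistics motif (weight 1).
data = list(range(10_000))
elapsed = run_component([(sorted, 3), (sum, 1)], data)
```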

  29. Why Data Motif-based Benchmark • Combinations of motifs represent a wide variety of big data and AI workloads • No need to create a new benchmark or proxy for every possible workload • Big data and AI motifs: Matrix, Sampling, Transform, Graph, Logic, Set, Statistics, Sort • Diverse big data and AI workloads = combinations with different weights • Coverage of fundamental units of computation • Provides a methodology for choosing typical workloads • Reduces workload redundancy

  30. Representative Application Domains • Internet service (search engine, social network, e-commerce): takes up 80% of internet services according to page views and daily visitors, per the Alexa top 20 websites (http://www.alexa.com/topsites/global;0) • Multimedia: 600+ new VIDEOS on YouTube every minute; 13,000+ hours of MUSIC streaming on Pandora every minute; 6,600+ new PHOTOS on Flickr every minute; hundreds of VIDEO feeds from surveillance cameras; 370,000+ minutes of VOICE calls on Skype every minute; 80% of data growth is images, videos, documents, … (http://www.oldcolony.us/wp-content/uploads/2014/11/whatisbigdata-DKB-v2.pdf) • Bioinformatics: http://www.ddbj.nig.ac.jp/breakdown_stats/dbgrowth-e.html#dbgrowth-graph

  31. BigDataBench 4.0: Unified Big Data and AI Benchmark Suite --- http://prof.ict.ac.cn/BigDataBench • Large-scale system-level benchmarks: Application, Component, and Micro benchmarks • Proxy benchmarks for simulation: 100X runtime speedup, 90%+ average accuracy, data adaptability, configuration adaptability, cross-architecture • Real-world datasets and data generation tools covering text, table, graph, image, audio, and matrix data (structured, semi-structured, and un-structured) • 47 workloads covering 7 types: AI, online service, offline analytics, graph analytics, NoSQL, streaming, data warehouse • 16 software stacks: Hadoop, MPI, NoSQL, Impala, DataMPI, Shark, RDMA, …

  32. Micro Benchmarks

  33. Component Benchmarks

  34. Overview • Looking back at history • What is Data Motif • BigDataBench 4.0: Unified Big Data and AI Benchmark Suite • Proxy Benchmarks for Simulation • Conclusion

  35. Simulation Problem • Requirements of Huawei HiSilicon: simulation time & accuracy • Technology: data motif-based proxy benchmarks, with average system and micro-architectural accuracy >= 90% • 100X runtime speedup (1000s vs 10s) • Supporting different architectures (X86 vs ARM). Wanling Gao, Jianfeng Zhan, Lei Wang, et al. Data Motif-based Proxy Benchmarks for Big Data and AI Workloads. IISWC 2018.

  36. Proxy Benchmarks • Data motif-based proxy benchmark generating methodology • A DAG-like combination of data motifs • An auto-tuning tool using a machine learning model • Mimics system and micro-architectural behaviors
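A DAG-like combination can be sketched as follows; the node names and runner are illustrative assumptions, not the actual proxy-benchmark generator (which additionally auto-tunes the motif weights with a machine learning model until the proxy matches the full workload's metrics):

```python
# Nodes are data motifs, edges carry intermediate data between them.

def run_dag(nodes, edges, inputs):
    """Execute motif nodes in dependency order; edges map node -> dependencies."""
    done = {}
    def run(name):
        if name in done:
            return done[name]
        deps = [run(d) for d in edges.get(name, [])]
        done[name] = nodes[name](deps if deps else inputs)
        return done[name]
    return {n: run(n) for n in nodes}

# Toy proxy: a Sort motif feeds a Statistics motif, which feeds a Logic check.
nodes = {
    "sort": lambda xs: sorted(xs),
    "stats": lambda deps: sum(deps[0]) / len(deps[0]),
    "logic": lambda deps: deps[0] > 0,
}
edges = {"stats": ["sort"], "logic": ["stats"]}
result = run_dag(nodes, edges, [3, -1, 2])
```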

  37. Overview • Looking back at history • What is Data Motif • BigDataBench 4.0: Unified Big Data and AI Benchmark Suite • Proxy Benchmarks for Simulation • Conclusion

  38. Executive Summary • BigDataBench 4.0 --- Unified Big Data and AI Benchmark Suite: workloads covering AI, online service, offline analytics, graph analytics, data warehouse, NoSQL, and streaming; proxy benchmarks with 100X runtime speedup and 90%+ average accuracy • A data motif-based scalable benchmarking methodology: diverse implementations and data motif combinations • Data Motifs: a lens towards fully understanding big data and AI workloads --- the most time-consuming units of computation (Matrix, Sampling, Transform, Graph, Logic, Set, Statistics, Sort) plus data characteristics (type, source, scale, pattern)

  39. Big Data and AI Benchmark • Collaboration between industry and academia • Industry: workloads & datasets • Academia: benchmark specification • Various application domains! • Internet service • Scientific data • Medical domain • ……

  40. Publications • Data Motifs: A Lens Towards Fully Understanding Big Data and AI Workloads. PACT'18. • BigDataBench: A Motif-based Big Data and Artificial Intelligence Benchmark Suite. Technical Report. • Understanding Big Data Analytics Workloads on Modern Processors. TPDS'16. • Auto-tuning Spark Big Data Workloads on POWER8: Prediction-Based Dynamic SMT. PACT'16. • BigDataBench: A Big Data Benchmark Suite from Internet Services. HPCA'14. • CVR: Efficient Vectorization of SpMV on X86 Processors. CGO'18. • BOPS, Not FLOPS! A New Metric, Measuring Tool, and Roofline Performance Model for Datacenter Computing. Technical Report. • Data Motif-based Proxy Benchmarks for Big Data and AI Workloads. IISWC'18.
