Presentation Transcript


  1. Carnegie Mellon Univ. Dept. of Computer Science 15-415/615 - DB Applications C. Faloutsos – A. Pavlo Lecture #25: Column Stores

  2. Administrivia • HW7 Phase 1: Wed Nov 11th • HW7 Phase 2: Mon Nov 30th • HW8: Mon Dec 7th • Final Exam: • Friday Dec 18th @ 1:00pm • GHC 4401

  3. Today’s Class • Storage Models • System Architectures • Vectorization • Compression • Distributed Execution

  4. Wikipedia Example CREATE TABLE useracct ( userID INT PRIMARY KEY, userName VARCHAR UNIQUE, ⋮ ); CREATE TABLE pages ( pageID INT PRIMARY KEY, title VARCHAR UNIQUE, latest INT REFERENCES revisions (revID) ); CREATE TABLE revisions ( revID INT PRIMARY KEY, pageID INT REFERENCES pages (pageID), userID INT REFERENCES useracct (userID), content TEXT, updated DATETIME );

  5. OLTP • On-line Transaction Processing: • Short-lived txns. • Small footprint. • Repetitive operations. SELECT * FROM useracct WHERE userName = ? AND userPass = ? SELECT P.*, R.* FROM pages AS P INNER JOIN revisions AS R ON P.latest = R.revID WHERE P.pageID = ? UPDATE useracct SET lastLogin = NOW(), hostname = ? WHERE userID = ? INSERT INTO revisions VALUES (?,?,…,?)

  6. OLAP • On-line Analytical Processing: • Long running queries. • Complex joins. • Exploratory queries. SELECT COUNT(U.lastLogin), EXTRACT(month FROM U.lastLogin) AS month FROM useracct AS U WHERE U.hostname LIKE ‘%.gov’ GROUP BY EXTRACT(month FROM U.lastLogin)

  7. Data Storage Models • There are different ways to store tuples. • We have been assuming the n-ary storage model this entire semester.

  8. n-ary Storage Model • The DBMS stores all attributes for a single tuple contiguously in a block. [Diagram: an NSM disk page holding complete tuples (userID, userName, userPass, hostname, lastLogin) one after another.]
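
A minimal sketch of the idea (not any real DBMS's page format; the attribute widths and struct format string below are assumptions): every attribute of a tuple is packed next to the others, so one tuple is one contiguous slice of the page.

import struct

# Hypothetical fixed-width layout: userID, userName, userPass, hostname, lastLogin.
TUPLE_FMT = "=i16s16s24sq"
TUPLE_SIZE = struct.calcsize(TUPLE_FMT)

def pack_tuple(user_id, user_name, user_pass, hostname, last_login):
    # All attributes of one tuple are packed contiguously, NSM-style.
    return struct.pack(TUPLE_FMT, user_id, user_name.encode(),
                       user_pass.encode(), hostname.encode(), last_login)

page = bytearray()
page += pack_tuple(1, "alice", "pw1", "cs.cmu.edu", 1449000000)
page += pack_tuple(2, "bob",   "pw2", "nasa.gov",   1449100000)

# Reading one whole tuple is a single contiguous read from the page.
print(struct.unpack(TUPLE_FMT, bytes(page[TUPLE_SIZE:2 * TUPLE_SIZE])))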

  9. n-ary Storage Model SELECT * FROM useracct WHERE userName = ? AND userPass = ? INSERT INTO useracct VALUES (?,?,…,?) [Diagram: a B+Tree index leads both the point query and the insert to an NSM disk page, where each tuple's attributes (userID, userName, userPass, hostname, lastLogin) sit together.]

  10. n-ary Storage Model SELECT COUNT(U.lastLogin), EXTRACT(month FROM U.lastLogin) AS month FROM useracct AS U WHERE U.hostname LIKE ‘%.gov’ GROUP BY EXTRACT(month FROM U.lastLogin) [Diagram: the same NSM disk page; the scan must read every full tuple even though the query only touches hostname and lastLogin.]

  11. n-ary Storage Model • Advantages • Fast inserts, updates, and deletes. • Good for queries that need the entire tuple. • Disadvantages • Not good for scanning large portions of the table and/or a subset of the attributes.

  12. Decomposition Storage Model • The DBMS stores a single attribute for all tuples contiguously in a block. [Diagram: a DSM disk page filled with only hostname values back to back; separate pages hold the userID, userName, userPass, and lastLogin columns.]
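
A minimal sketch of the same table in a DSM layout (the column names follow the running useracct example; the data is made up): each attribute lives in its own array, and position i in every array belongs to tuple i.

from array import array

# One array per attribute; offset i in each array is tuple i.
user_id    = array("i", [1, 2, 3])
hostname   = ["cs.cmu.edu", "nasa.gov", "whitehouse.gov"]
last_login = array("q", [1449000000, 1449100000, 1449200000])

# An OLAP query over hostname/lastLogin only needs to read those two columns.
gov_logins = [last_login[i] for i, h in enumerate(hostname) if h.endswith(".gov")]
print(gov_logins)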

  13. Decomposition Storage Model SELECT COUNT(U.lastLogin), EXTRACT(month FROM U.lastLogin) AS month FROM useracct AS U WHERE U.hostname LIKE ‘%.gov’ GROUP BY EXTRACT(month FROM U.lastLogin) [Diagram: the scan reads only the hostname and lastLogin column pages, skipping the other attributes entirely.]

  14. Decomposition Storage Model • Advantages • Reduces the amount of wasted I/O because the DBMS only reads the data that it needs. • Better compression (more on this later). • Disadvantages • Slow for point queries, inserts, updates, and deletes because of tuple splitting/stitching.

  15. History • 1970s: Cantor DBMS • 1980s: DSM Proposal • 1990s: Sybase IQ (in-memory only) • 2000s: Vertica, Vectorwise, MonetDB • 2010s: Cloudera Impala, Amazon Redshift, “The Big Three”

  16. System Architectures • Fractured Mirrors • Partition Attributes Across (PAX) • Pure Columnar Storage

  17. Fractured Mirrors • Store a second copy of the database in a DSM layout that is automatically updated. • Examples: Oracle, IBM DB2 BLU [Diagram: the primary NSM copy mirrored into a DSM copy.]

  18. PAX • Data is still stored in NSM blocks, but each block is organized as mini columns. [Diagram: a PAX disk page with separate mini-columns for userID, userName, userPass, hostname, and lastLogin inside the same block.]

  19. Column Stores • Entire system is designed for columnar data. • Query Processing, Storage, Operator Algorithms, Indexing, etc. • Examples: Vertica, VectorWise, Paraccel, Cloudera Impala, Amazon Redshift

  20. Virtual IDs vs. Offsets • Need a way to stitch tuples back together. • Two approaches: • Fixed length offsets • Virtual ids embedded in columns [Diagram: with fixed-length offsets, position i in every column belongs to tuple i; with virtual ids, each column explicitly stores an id (0–7) alongside each value.]
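
A rough sketch of the fixed-length-offsets approach (toy data, assumed column layout): because every column stores its values in tuple order, tuple i can be stitched back together by taking offset i from each column, with no tuple id stored anywhere.

# Columns store values in tuple order; no explicit ids are kept.
columns = {
    "userID":   [10, 11, 12],
    "hostname": ["cs.cmu.edu", "nasa.gov", "mit.edu"],
}

def stitch(i):
    # Rebuild tuple i by reading offset i from every column.
    return {name: col[i] for name, col in columns.items()}

print(stitch(1))   # {'userID': 11, 'hostname': 'nasa.gov'}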

  21. Modifying a Column Store • INSERT: • Split tuple into attributes, append to columns. • DELETE: • Mark the tuple as deleted in a separate bit-vector. Check visibility at runtime. • UPDATE: • Implement as DELETE+INSERT
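
A minimal sketch of those three operations on a toy two-column store (the structure and names are assumptions, not any particular system):

columns = {"userID": [], "hostname": []}
deleted = []   # one visibility flag per tuple position (the "deleted" bit-vector)

def insert(user_id, hostname):
    # INSERT: split the tuple into attributes and append to each column.
    columns["userID"].append(user_id)
    columns["hostname"].append(hostname)
    deleted.append(False)

def delete(pos):
    # DELETE: just mark the position; visibility is checked at read time.
    deleted[pos] = True

def update(pos, user_id, hostname):
    # UPDATE: implemented as DELETE + INSERT.
    delete(pos)
    insert(user_id, hostname)

insert(1, "cs.cmu.edu")
insert(2, "nasa.gov")
update(0, 1, "ece.cmu.edu")
visible = [i for i, d in enumerate(deleted) if not d]
print(visible)   # positions 1 and 2 are visible; position 0 is masked out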

  22. Bifurcated Architecture • All txns are executed on OLTP database. • Periodically migrate changes to OLAP database. [Diagram: multiple OLTP databases feed an Extract-Transform-Load pipeline into an OLAP data warehouse.]

  23. Today’s Class • Storage Models • System Architectures • Vectorization • Compression • Distributed Execution

  24. Query Processing Strategies • The DBMS needs to process queries differently when using columnar data. • We have already discussed the Iterator Model for processing tuples in the DBMS query operators.

  25. Materialization Model • Each operator consumes its entire input and generates the full output all at once. SELECT cname, amt FROM customer, account WHERE customer.acctno = account.acctno AND account.amt > 1000 [Diagram: query plan π cname,amt over (CUSTOMER ⨝ acctno=acctno σ amt>1000(ACCOUNT)); each operator materializes its full result before its parent runs.]

  26. Iterator Model • Each operator calls next() on its child operator to process tuples one at a time. SELECT cname, amt FROM customer, account WHERE customer.acctno = account.acctno AND account.amt > 1000 [Diagram: the same plan, with next() calls flowing from π down through ⨝ and σ to the base tables, one tuple per call.]
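
A small sketch of tuple-at-a-time iteration (simplified operators and made-up data; a real model also has open()/close()): every call to next() pulls exactly one tuple up from the child.

class Scan:
    def __init__(self, rows):
        self.rows = iter(rows)
    def next(self):
        return next(self.rows, None)          # one tuple per call, None when done

class Select:
    def __init__(self, child, pred):
        self.child, self.pred = child, pred
    def next(self):
        t = self.child.next()
        while t is not None and not self.pred(t):
            t = self.child.next()
        return t

account = [("A1", 500), ("A2", 2000), ("A3", 7000)]
op = Select(Scan(account), lambda t: t[1] > 1000)   # amt > 1000
t = op.next()
while t is not None:
    print(t)
    t = op.next()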

  27. Observations • The Materialization Model is bad because the intermediate results may be larger than the amount of memory in the system. • The Iterator Model is bad with a DSM because it requires the DBMS to stitch tuples back together each time.

  28. Vectorized Model • Like the Iterator Model but each next() invocation returns a vector of tuples instead of a single tuple. • This vector does not have to contain the entire tuple, just the attributes that are needed for query processing.

  29. Vectorized Model • Each operator calls next() on its child operator to process vectors. SELECT cname, amt FROM customer, account WHERE customer.acctno = account.acctno AND account.amt > 1000 [Diagram: the same plan, but each next() call passes up a vector of just the needed attributes (acctno, amt, cname) rather than single tuples.]
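
A sketch of the vectorized variant under the same assumptions: next() hands back a batch (here, a slice of one column) instead of a single tuple, so the predicate is applied to a whole vector per call.

class VectorScan:
    def __init__(self, column, batch=1024):
        self.column, self.batch, self.pos = column, batch, 0
    def next(self):
        if self.pos >= len(self.column):
            return None
        chunk = self.column[self.pos:self.pos + self.batch]   # a vector of values
        self.pos += self.batch
        return chunk

amt = [500, 2000, 7000, 300, 1500]
op = VectorScan(amt, batch=2)
vec = op.next()
while vec is not None:
    print([v for v in vec if v > 1000])   # predicate evaluated over the whole batch
    vec = op.next()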

  30. Vectorized Model • Reduced interpretation overhead. • Better cache locality. • Compiler optimization opportunities.

  31. Today’s Class • Storage Models • System Architectures • Vectorization • Compression • Distributed Execution

  32. Compression Overview • Compress the database to reduce the amount of I/O needed to process queries. • DSM databases compress much better than NSM databases. • Storing similar data together is ideal for compression algorithms.

  33. Naïve Compression • Use a general purpose algorithm to compress pages when they are stored on disk. • Example: 10KB page in memory, 4KB compressed page on disk. • Do we have to decompress the page when it is brought into memory? Why or why not?
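
One way to see the trade-off, as a sketch with a general-purpose codec (Python's zlib here; the page contents are invented): the compressed page is an opaque byte string, so its contents have to be decompressed before the DBMS can interpret any tuple in it.

import zlib

page = ("hostname=cs.cmu.edu|lastLogin=1449000000|" * 200).encode()  # fake page bytes
on_disk = zlib.compress(page)
print(len(page), "->", len(on_disk))      # much less I/O to read the page from disk

# To evaluate a predicate against the page, it must first be decompressed.
in_memory = zlib.decompress(on_disk)
assert in_memory == page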

  34. Fixed-width Compression • Sacrifice some compression in exchange for having uniform-length values per tuple. [Diagram: original data vs. variable-length compression (tuples are no longer aligned at offsets) vs. fixed-length compression (offset i still locates tuple i in every column).]

  35. Run-length Encoding • Compress runs of the same value into a compact triplet: • (value, startPosition, runLength) [Diagram: a sex column M M M F F M M M encodes to (M,0,3) (F,3,2) (M,5,3); sorting all tuples on this column first gives (M,0,6) (F,6,2), reducing the # of triplets.]
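
A minimal sketch of the encoder, using the slide's sex column (M M M F F M M M):

def rle_encode(values):
    # Emit one (value, startPosition, runLength) triplet per run of equal values.
    triplets, start = [], 0
    for i in range(1, len(values) + 1):
        if i == len(values) or values[i] != values[start]:
            triplets.append((values[start], start, i - start))
            start = i
    return triplets

sex = ["M", "M", "M", "F", "F", "M", "M", "M"]
print(rle_encode(sex))                        # [('M', 0, 3), ('F', 3, 2), ('M', 5, 3)]
print(rle_encode(sorted(sex, reverse=True)))  # [('M', 0, 6), ('F', 6, 2)] after sorting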

  36. Bit-Vector Encoding • Store a separate bit-vector for each unique value for a particular attribute where an offset in the vector corresponds to a tuple. • A ‘1’ means that the tuple at that offset has the bit-vector’s value. [Diagram: for the sex column M M M F F M M M, M → 11100111 and F → 00011000.]
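
A sketch over the same toy column (lists of 0/1 stand in for real packed bit-vectors):

def bitvector_encode(values):
    # One bit-vector per distinct value; bit i is 1 iff tuple i has that value.
    vectors = {}
    for i, v in enumerate(values):
        vectors.setdefault(v, [0] * len(values))[i] = 1
    return vectors

sex = ["M", "M", "M", "F", "F", "M", "M", "M"]
print(bitvector_encode(sex))
# {'M': [1, 1, 1, 0, 0, 1, 1, 1], 'F': [0, 0, 0, 1, 1, 0, 0, 0]}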

  37. Dictionary Compression • Construct a separate table of the unique values for an attribute sorted by frequency. • For each tuple, store the position of its value in the dictionary instead of the real value. [Diagram: a city column New York, San Francisco, New York, New York, Pittsburgh, San Francisco, New York, New York; dictionary 0→(New York,5), 1→(San Francisco,2), 2→(Pittsburgh,1); encoded column 0 1 0 0 2 1 0 0.]
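
A sketch using the slide's city column (Counter.most_common supplies the sort-by-frequency order):

from collections import Counter

def dict_encode(values):
    # Dictionary holds the distinct values sorted by frequency; the column
    # stores each value's position in that dictionary instead of the value.
    dictionary = [v for v, _ in Counter(values).most_common()]
    code = {v: i for i, v in enumerate(dictionary)}
    return dictionary, [code[v] for v in values]

city = ["New York", "San Francisco", "New York", "New York",
        "Pittsburgh", "San Francisco", "New York", "New York"]
dictionary, encoded = dict_encode(city)
print(dictionary)   # ['New York', 'San Francisco', 'Pittsburgh']
print(encoded)      # [0, 1, 0, 0, 2, 1, 0, 0]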

  38. Processing Compressed Data • Some operator algorithms can operate directly on compressed data. • Saves I/O without having to decompress! • Difficult to implement when the DBMS uses multiple compression schemes. • It’s generally good to wait as long as possible to materialize/decompress data when processing queries…

  39. Today’s Class • Storage Models • System Architectures • Vectorization • Compression • Distributed Execution

  40. Distributed OLAP • Execute analytical queries that examine large portions of the database. • Used for back-end data warehouses: • Example: Data mining • Key Challenges: • Data movement. • Query planning.

  41. Distributed OLAP [Diagram: an application server sends a single OLAP query to a cluster of five partitions (P1–P5), each with its own replica.]

  42. Distributed Joins Are Hard SELECT * FROM table1, table2 WHERE table1.val = table2.val • Assume tables are horizontally partitioned: • Table1 Partition Key → table1.key • Table2 Partition Key → table2.key • Q: How to execute? • Naïve solution is to send all partitions to a single node and compute join.

  43. Broadcast Join • Main Idea: Send the smaller table to all nodes where the join is then computed in parallel. • Only works if the table is small.
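
A toy sketch of the idea (the partitions and tables are invented; a real system ships the small table over the network): every node keeps its own partition of the big table and joins it locally against a full copy of the small one.

small_table = {10: "gold", 20: "silver"}      # small table, broadcast to every node

partitions = [                                 # big table, horizontally partitioned
    [(1, 10), (2, 20)],                        # node 1's rows: (id, tier_id)
    [(3, 10), (4, 30)],                        # node 2's rows
]

def local_join(partition, small):
    # Each node computes its share of the join in parallel with the others.
    return [(rid, small[tid]) for rid, tid in partition if tid in small]

result = [row for part in partitions for row in local_join(part, small_table)]
print(result)   # [(1, 'gold'), (2, 'silver'), (3, 'gold')]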

  44. Semi Join • Main Idea: First distribute the join attributes between nodes and then recreate the full tuples in the final output. • Send just enough data from each table to compute which rows to include in output. • Lots of choices make this problem hard: • What to materialize? • Which table to send?
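
A toy sketch of the semi-join flow between two nodes (plain lists stand in for remote partitions; the column name follows the slide's query): ship only table1's join keys, filter table2 remotely, then finish the join with just the matching rows.

node1_table1 = [(1, "a"), (2, "b"), (3, "c")]   # (val, payload) stored on node 1
node2_table2 = [(2, "x"), (3, "y"), (4, "z")]   # (val, payload) stored on node 2

# Step 1: send only the join attribute of table1 to node 2 (a small message).
join_keys = {val for val, _ in node1_table1}

# Step 2: node 2 keeps only the rows that can possibly match and ships those back.
matching = [row for row in node2_table2 if row[0] in join_keys]

# Step 3: node 1 recreates the full output tuples from the reduced input.
result = [(v1, p1, p2) for v1, p1 in node1_table1
                       for v2, p2 in matching if v1 == v2]
print(result)   # [(2, 'b', 'x'), (3, 'c', 'y')]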

  45. Rest of the Semester • Wed Dec 2nd – Data Warehousing + Mining • Mon Dec 7th – Guest Speaker from MemSQL • Wed Dec 9th – Final Review + Systems http://cmudb.io/f15-systems
