
Flexible Transactional Storage


Presentation Transcript


  1. Flexible Transactional Storage Russell Sears sears@cs.berkeley.edu HPTS 2005

  2. Outline • Introduction • Problems with existing systems • A modular approach • Composable on-disk data structures • Application control of low-level primitives • Microbenchmarks • The next steps • Library optimization during application compilation • Verification of application-specific extensions • Conclusion

  3. Introduction • New applications introduce new demands for storage infrastructure • Database implementations eventually adapt • Continuous queries, database file systems, XML, OLAP • But not always • Web search, GMail, P2P • Either way, custom storage solutions fill in the cracks • Expensive; little reuse of existing infrastructure • Subtle bugs lead to data corruption

  4. Selective Reuse of Storage System Components • Expose the RSS (relational storage system) to allow greater reuse • Berkeley DB / Sleepycat • Layered databases • Proven real-world improvements in performance and code complexity • Why not provide lower level interfaces?

  5. [Figure: database architecture stack — the Query Optimizer and Query Evaluator sit above a Storage System containing Statistics, Relations, Tuples, Physical Access Methods, Recovery / Durability, Locking, Replication, …, backed by the Page File and Log File; the storage system is labeled "Our Focus".] Allow applications to directly customize and reuse underlying storage primitives

  6. Design Goals • Let applications build upon or replace modules • Allocation strategies • Page layout • On disk data structures • Concurrency control • Log (format, durability and reordering) • Recovery • Improved usability and performance • Application specific data structure organization • Program specific optimizations

  7. LLADD’s Storage Interface (Lightweight Library for Atomicity and Data Durability) • Focus on simplifying the APIs within the RSS • “redo()” and “undo()” (there is no “do()”) • Subcomponents implement flexible APIs [Figure: write-ahead logging implementation — wrapper functions such as Tset() and Tupdate() write log entries through the Log Manager; the Data Structure Plugin’s op(data) invokes REDO via the Operation Implementation; Recovery / Abort issues UNDO/REDO requests; read-only access methods (Tread()) read memory; page updates flow to the Page File. Arrows point in the direction of application data flow.]
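To make the plugin interface concrete, here is a minimal sketch of the operation table implied by the registration code in the backup slides (slide 35); the callback signature matches operateIncrement() there, but the struct layout, type names, and MAX_OPERATIONS are assumptions, not LLADD's actual definitions.

      /* Sketch only. Each operation supplies one callback that is run
         both at update time and replayed during REDO. */
      typedef int (*op_func_t)(int xid, Page *p, lsn_t lsn,
                               recordid rid, const void *arg);

      typedef struct {
          op_func_t implementation; /* applied now and during REDO */
          size_t    argumentSize;   /* bytes of argument copied into the log */
          int       redoOperation;  /* operation id replayed by recovery */
          int       undoOperation;  /* logical inverse, run on abort */
      } Operation;

      Operation ops[MAX_OPERATIONS]; /* indexed by id, e.g. OP_INCREMENT */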

  8. Reusable data structures • Familiar object oriented design patterns allow data structure reuse • Nested Top Actions can be used to provide atomicity • Easy to specialize data structures, as sketched below [Figure: a Linear Hash Table built from an ArrayList (the bucket list) plus linked lists for the buckets; pages contain fixed-length records. Annotations: poor locality / high overhead? internal fragmentation?]
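To illustrate the composition in the figure, a linear hash table might be declared over the reusable pieces roughly as follows; these struct and field names are hypothetical, not LLADD's API.

      /* Hypothetical sketch: a linear hash table composed from an
         ArrayList (the bucket list) and per-bucket linked lists. */
      typedef struct {
          recordid bucket_list;  /* ArrayList mapping bucket number to list head */
          int      bucket_count; /* grows incrementally as the table expands */
      } linear_hash_t;

      typedef struct {
          recordid next;  /* next entry in this bucket's linked list */
          int      key;
          recordid value;
      } hash_entry_t;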

  9. Hash Table Bulk Load Time • Layered version’s performance is competitive • Also benchmarked an optimized version • No nested top actions → temporary inconsistency • Saves log bandwidth, roughly doubles throughput • Complex, monolithic code

  10. Object serialization [Figure: system memory holds the file system cache, the DB page cache, and application data (live objects), all backed by disk] • Persistent objects are often triple buffered • Turning off the OS cache removes one copy • We can remove a second copy

  11. The Problem with the Page Cache • Approach #1: Reduce the number of live objects • Need to repeatedly serialize and deserialize objects • CPU intensive • Approach #2: Reduce the size of the page cache • Object updates force a write to the page cache • Two extra disk accesses (1 read, 1 write) to update an object in cache!

  12. Specialized Page Caching • Defer page update until object is evicted from application memory • Issue log writes immediately • Application cache manipulates page cache directly
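A minimal sketch of this policy, assuming hypothetical object-cache hooks and helpers (obj_on_update, obj_on_evict, log_object_update, the obj_* dirty-bit functions, write_object); only pin()/unpin() appear elsewhere in these slides.

      /* Sketch: issue the log write immediately for durability, but
         touch the page cache only when the object cache evicts. */
      void obj_on_update(int xid, recordid rid, object_t *obj) {
          log_object_update(xid, rid, obj); /* log write issued now */
          obj_mark_dirty(obj);              /* page update deferred */
      }

      void obj_on_evict(int xid, recordid rid, object_t *obj) {
          if (obj_is_dirty(obj)) {
              Page *p = pin(rid.page);        /* one page-cache access */
              write_object(xid, p, rid, obj); /* apply the deferred update */
              unpin(p);
          }
      }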

  13. Object serialization performance • The specialized cache roughly doubled throughput while reducing memory requirements

  14. Access Locality and Object Serialization Under heavy memory pressure, the optimization allows the cache to be utilized efficiently

  15. Language Based Tools • Modern programming techniques provide some interesting opportunities • Software verification • Optimization • High level interfaces make it difficult to take advantage of some of these tools • How much do we gain by moving to lower level interfaces?

  16. Memoization • Servicing a cache hit is expensive compared to a pointer traversal • Programs typically access the same page repeatedly • Simple solution: Keep a pointer to the last value returned by the page cache • Problem: Unrelated, interleaved calls • Multi-threaded code • Layered APIs

  17. Example • Consider this application code:

      for(int i = 0; i < len; i++) {
          value = hash_lookup(recordid, key[i]);
      }

    • hash_lookup() probably looks something like this:

      hash_lookup(…) {
          Page * p = pin(recordid.page); // Read hashtable header
          unpin(recordid.page);
          … // pin and unpin bucket, data pages
      }

    • Memoize the header by storing values in the application’s stack frame, as sketched below
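A sketch of the memoized call site the last bullet describes, keeping the cached header pin in a stack-local; hash_lookup_memo() and memo are hypothetical names.

      /* Stack-frame memoization sketch: reuse the pinned header page
         across iterations instead of re-pinning on every lookup. */
      Page *memo = NULL; /* last header page returned by the page cache */
      for (int i = 0; i < len; i++) {
          /* hash_lookup_memo() calls pin() only when memo doesn't match */
          value = hash_lookup_memo(recordid, key[i], &memo);
      }
      if (memo) unpin(memo);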

  18. Dynamic Checks • Insert memoization logic into application code, and store memoized values on the stack • Preserves access locality within each thread • Handles “special cases” (B-Tree roots, iterators, etc.) • Simplifies application/library source code • Implemented using CIL, a C source-to-source transformation library • ~2x speedup on a read-only, CPU-bound hash table workload

  19. Static analysis (work in progress) • Dynamic checks are expensive • Use BLAST to remove redundant checks at compile time • Tentatively remove the check and the call to pin() • Ask BLAST to prove the memoized value is correct at pin()’s call site • Assume the original program is “well behaved” C; problematic constructs are removed before analysis

  20. Verification of Invariants (future work) • Extensions to the library must follow a number of invariants • Using nested top actions correctly • Updating the LSN of altered pages • Not relying upon transient data in redo()/undo() • and so on • Want to check application code’s adherence to invariants • Hopefully, this will allow us to guarantee high level properties are met • Similar in spirit to the use of SLAM to verify Windows drivers

  21. Conclusion • Presented a simple storage architecture that supports a wide variety of applications • The architecture brings up a number of interesting research questions • A preliminary implementation is available • Ready for researchers, not for important data • http://lladd.sourceforge.net/

  22. Acknowledgements • Eric Brewer • Jimmy Kittiyachavalit • Jim Blomo • Jason Bayer • Mike Demmer • Bowei Du • Gilad Arnold • Amir Kamil • Colleen Lewis

  23. Backup Slides

  24. Database Systems Take Control Away from Developers • Great solution for established classes of applications • Leads to serious problems in unanticipated situations • A DBMS implementation can only support a finite set of semantics and must make decisions regarding • Data layout / programming model • Concurrency / consistency • Recovery / durability • Replication / scalability

  25. One Solution • Give application developers more choices • Relational / Cube / XML data models • Optimistic / pessimistic concurrency control • Serializable / Repeatable Read / Read Committed / Read Uncommitted • Disable media recovery, partial logging, no logging • 2PC, merge replication, master / slave, partitioning • and so on… • Leads to complex DBMS implementations • It takes a long time to get this right!

  26. Editing DBMS Source Code is Difficult • Requires knowledge of complex DB internals • Easy to get the extensions wrong • Difficult to test or debug • Breaks existing functionality • Leads to incompatible DB versions. Are these all just artifacts of conventional database design?

  27. Challenges • It must be easy to add new extensions, and hard to (accidentally) break existing ones. • Low level changes should not alter high level functionality in unexpected ways • Bugs in recovery logic should be obvious • In ‘interesting’ cases, should see ‘significant’ performance improvement.

  28. Multiple page formats [Figure: generic page layout — page-type-specific data, Page Type, LSN; fixed-length record layout — Record Count, Fixed Length, slots 1, 2, …, Data Length, LSN] • Record ids are of the form (page, slot, length); a sketch follows below • ‘slot’ is interpreted by the appropriate page format implementation; ‘length’ is for the application’s benefit • Page Type 0 is reserved (allows lazy page initialization)
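The record id described above might look like this in C; the exact field types are assumptions based on the (page, slot, length) triple, not LLADD's headers.

      /* Sketch of the (page, slot, length) record id. */
      typedef struct {
          int page;   /* page number in the page file */
          int slot;   /* interpreted by that page's format implementation */
          int length; /* record length, kept for the application's benefit */
      } recordid;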

  29. Dynamic Check Example • Original code:

      foo(int i, record r) {
          Page *p;
          while(i--) {
              r.slot++;
              p = pin(r.page);
              …
              unpin(p);
              if(...) {
                  r.page++;
                  r.slot = 0;
              }
          }
      }

    • Optimized code:

      foo(int i, record r) {
          Page *p = NULL;
          while(i--) {
              r.slot++;
              if(!p || p->page != r.page) {
                  if(p) unpin(p);
                  p = pin(r.page);
              }
              …
              if(...) {
                  r.page++;
                  r.slot = 0;
              }
          }
          if(p) unpin(p);
      }

  30. Static Analysis Example • Original code + dynamic checks:

      foo(int i, record r) {
          Page *p = pin(r.page);
          …
          while(i--) {
              r.slot++;
              if(!p || p->page != r.page) {
                  unpin(p);
                  p = pin(r.page);
              }
          }
          unpin(p);
      }

    • Optimized code (redundant check removed):

      foo(int i, record r) {
          Page *p = pin(r.page);
          …
          while(i--) {
              r.slot++;
          }
          unpin(p);
      }

  31. Potential applications • Tool for future database research • Improved performance from better compiler / language based optimization • New programming language primitives seek to abstract SQL away. In some cases legacy declarative interfaces may simply be getting in the way

  32. Lock Manager API • Page level locking can be supported by the buffer manager, but requires solid error handling. • Record level / index locking is tricky • Needs to understand built in and third party extensions • Plan to implement Hierarchical 2PL in a way that allows reuse by index implementations • Index implementations can simply lock the entire index if performance is not an issue.

  33. In memory vs. on disk semantics • Holy grail: application data acts like persistent data • But we still want a bunch of database features • One solution: map a custom declarative interface into SQL • Don’t we still need an optimizer, etc., for the in-memory data? • Transactional pages look a lot like RAM, especially if you provide a library of persistent data structures that match the ones the application uses

  34. Sample Operation Implementation (1/3)

      // Operation implementation.
      // p is the buffer pool's current copy of the page.
      int operateIncrement(int xid, Page *p, lsn_t lsn,
                           recordid rid, const void *d) {
          inc_dec_t *arg = (inc_dec_t*)d;
          int i;
          latchRecord(p, rid);
          readRecord(xid, p, rid, &i);       // read current value
          i += arg->amount;
          writeRecord(xid, p, lsn, rid, &i); // write new value, update LSN
          unlatchRecord(p, rid);
          return 0;                          // no error
      }

  35. Sample Operation Implementation (2/3)

      // Argument record serialized into each log entry
      typedef struct { int amount; } inc_dec_t;

      // Register the operation
      ops[OP_INCREMENT].implementation = &operateIncrement;
      ops[OP_INCREMENT].argumentSize   = sizeof(inc_dec_t);
      // Set REDO to be the same as the normal operation
      ops[OP_INCREMENT].redoOperation  = OP_INCREMENT;
      // UNDO is the inverse of REDO
      ops[OP_INCREMENT].undoOperation  = OP_DECREMENT;

  36. Sample Operation Implementation (3/3)

      // User friendly wrapper function
      int Tincrement(int xid, recordid rid, int amount) {
          // rec will be serialized to the log
          inc_dec_t rec;
          rec.amount = amount;
          // write a log entry, then execute it
          Tupdate(xid, rid, &rec, OP_INCREMENT);
          // return the incremented value
          int new_value;
          // wrappers can call other wrappers
          Tread(xid, rid, &new_value);
          return new_value;
      }

  37. What if the database is missing a crucial feature? • An application could use the database anyway • Convoluted data and/or programming model • Performance problems • Or it could implement what it needs from scratch • Reinventing the wheel • Subtle problems with data loss and corruption

  38. Modularity of storage implementation • Focus on simplifying the APIs within the RSS • Operation implementations consist of two callbacks, “redo()” and “undo()” (there is no “do()”) • Subcomponents implement flexible APIs [Figure: the same write-ahead logging diagram as slide 7, extended with app-specific extensions calling the wrapper functions (Tset(), Tupdate(), Tread()); the Log Manager writes the log, Recovery / Abort issues UNDO/REDO requests to the Operation Implementation, and page updates flow to the Page File. Arrows point in the direction of application data flow.]

  39. Language Based Optimization • Applications often use storage libraries in limited, predictable ways • Storage infrastructure must support all legal access patterns • Could add calls to the API to optimize special cases • Difficult to use correctly • Library contains multiple implementations of each function

  40. Longer Introduction • Conventional databases are not appropriate for some applications • It takes time to add support for new classes of applications • Niche applications may not warrant added complexity • Sometimes declarative interfaces are overkill • Low level APIs can be difficult to use • Expose intricately connected subsystems • Bugs in recovery logic • Applications must implement high-level functionality • Modern programming techniques can address these problems

  41. • Relational databases force some decisions upon application developers: • Data model / layout • Concurrency model • Consistency model • Recovery and durability semantics • Replication system • Declarative programming models • and so on…

  42. Review of Write Ahead Logging [Figure: example log — Begin T1; P1 ← …; Abort T1; P2 ← …; P3 ← …; Begin T2; P3 ← …; Abort T1] • Problem 1: physical undo, concurrent transactions, and non-atomic operations interact poorly. Solution: logical undo • Problem 2: if the tree is inconsistent during recovery, logical undo fails. Solution (Nested Top Actions): use physical undo until consistency is restored, then atomically switch to logical undo; a sketch follows below
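A sketch of the nested-top-action pattern this describes, applied to a structural update; every name here (TbeginNestedTopAction, TendNestedTopAction, OP_TREE_REMOVE, split_and_insert) is hypothetical.

      /* While the nested top action is open, recovery falls back on
         physical undo; once it closes, abort switches atomically to
         the logical inverse operation recorded here. */
      void tree_insert(int xid, recordid root, int key) {
          void *nta = TbeginNestedTopAction(xid, OP_TREE_REMOVE, &key);
          split_and_insert(xid, root, key); /* tree may be transiently inconsistent */
          TendNestedTopAction(xid, nta);    /* consistent again: logical undo from here on */
      }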
