Fine Grained Transaction Log for Data Recovery in Database Systems


Presentation Transcript


  1. Fine Grained Transaction Log for Data Recovery in Database Systems. Ge Fu, Department of Computer Sci. & Tech., Huazhong University of Sci. & Tech. fuge@smail.hust.edu.cn

  2. Towards Database Security Ⅰ • What does a conventional database security mechanism concern? Confidentiality, integrity, availability, survivability. • What have we done? Authentication & authorization; access control (DAC, MAC, FGAC, etc.); inference control; multilevel secure databases; data encryption.

  3. Towards Database Security Ⅱ • Disadvantages of these methods? a) They primarily address how to protect the security of a database; b) prevention-based methods cannot stop all attacks, e.g. SQL injection and cross-site scripting attacks; c) data recovery after attacks therefore becomes an important issue.

  4. Existing methods for data recovery • Complete rollback • Hard recovery • Flashback in Oracle 10g • Selective data recovery: keep a transaction log and find the read-write dependency relationships between transactions; then undo the malicious (bad) transactions and the affected transactions (those that read from malicious transactions), while keeping the results of benign transactions.

  5. Selective data recovery Ⅰ

  6. Selective data recovery Ⅱ • Selective data recovery consists of two steps: • Locate each affected transaction (damage assessment): according to the read-write dependencies between transactions (a dependency graph), find the transactions affected by the malicious transactions; • Recover the database from the damage caused to the data items updated by every malicious or affected transaction (damage recovery): undo the malicious and affected transactions. A minimal sketch of both steps follows.
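The sketch below illustrates the two steps in Java over an in-memory copy of the fine-grained log. It is not the authors' implementation: the class, record, and field names are illustrative, the log is assumed to be ordered by operation time, and a transaction is treated as affected as soon as it reads an item written by a malicious or already-affected transaction.

```java
import java.util.*;

// Minimal sketch of selective data recovery over an in-memory log.
// All names are illustrative, not taken from the paper.
public class SelectiveRecovery {

    enum OpType { READ, WRITE }

    // One log record: a read or write of a single data item by a transaction.
    record LogRecord(long txId, OpType op, String item,
                     String beforeImg, String afterImg, long time) {}

    // Damage assessment: iterate to a fixpoint over the reads-from relation.
    // A transaction becomes tainted if it reads an item written by a malicious
    // or tainted transaction; all writes of a tainted transaction are dirty.
    static Set<Long> assessDamage(List<LogRecord> log, Set<Long> malicious) {
        Set<Long> tainted = new HashSet<>(malicious);
        boolean changed = true;
        while (changed) {                       // repeat because a transaction's
            changed = false;                    // writes may precede the read
            Set<String> dirty = new HashSet<>(); // that taints it
            for (LogRecord r : log) {           // log is in time order
                if (tainted.contains(r.txId()) && r.op() == OpType.WRITE) {
                    dirty.add(r.item());        // item now carries bad data
                } else if (r.op() == OpType.READ && dirty.contains(r.item())
                           && tainted.add(r.txId())) {
                    changed = true;             // new affected transaction found
                }
            }
        }
        return tainted;
    }

    // Damage recovery: collect the before-images written by tainted transactions,
    // scanning backwards so the value kept for each item is the one that existed
    // before its earliest tainted write. (Benign writes interleaved on the same
    // items would need more careful handling than this sketch shows.)
    static Map<String, String> recover(List<LogRecord> log, Set<Long> tainted) {
        Map<String, String> restored = new HashMap<>();
        for (int i = log.size() - 1; i >= 0; i--) {
            LogRecord r = log.get(i);
            if (tainted.contains(r.txId()) && r.op() == OpType.WRITE) {
                restored.put(r.item(), r.beforeImg());
            }
        }
        return restored;                        // item -> value to write back
    }
}
```

The recovery process would then write each restored value back to the database, leaving the effects of benign transactions untouched.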

  7. Transaction log in selective recovery • What kind of log is required for selective data recovery? • The log should record all read and write operations on data items. • The log can be a table that records all operations of transactions; each row represents one operation (read or write) on a data item, of the form: TRANSACTIONID, OPTYPE, ITEM, BEFOREIMG, AFTERIMG, TIME.
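As a concrete illustration of this row type, the sketch below defines such a log table and an append helper using plain JDBC. The table name FGT_LOG, the column types, and the helper's signature are assumptions made for illustration; the paper does not prescribe them.

```java
import java.sql.*;

// Minimal sketch of the fine-grained log table and an append helper.
// Table name, column types, and method names are assumptions.
public class FineGrainedLog {

    static final String DDL = """
        CREATE TABLE FGT_LOG (
            TRANSACTIONID BIGINT       NOT NULL,  -- transaction that issued the operation
            OPTYPE        CHAR(1)      NOT NULL,  -- 'R' = read, 'W' = write
            ITEM          VARCHAR(256) NOT NULL,  -- identifier of the data item (e.g. table + key)
            BEFOREIMG     VARCHAR(4000),          -- value before a write (NULL for reads)
            AFTERIMG      VARCHAR(4000),          -- value after a write  (NULL for reads)
            TIME          TIMESTAMP    NOT NULL   -- when the operation was executed
        )""";

    // Append one read or write operation to the log.
    static void append(Connection conn, long txId, char opType, String item,
                       String beforeImg, String afterImg) throws SQLException {
        String sql = "INSERT INTO FGT_LOG VALUES (?, ?, ?, ?, ?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, txId);
            ps.setString(2, String.valueOf(opType));
            ps.setString(3, item);
            ps.setString(4, beforeImg);
            ps.setString(5, afterImg);
            ps.setTimestamp(6, new Timestamp(System.currentTimeMillis()));
            ps.executeUpdate();
        }
    }
}
```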

  8. Is the existing log in a DBMS useful? • The conventional undo/redo log and triggers in a DBMS cover only write operations and cannot capture the read operations of transactions; • Existing auditing mechanisms are designed to audit database statements, privileges, or schema objects. The audit works at the table level and cannot identify the data items manipulated by an operation.

  9. Fine Grained Transaction Log • The log system for selective data recovery must address the following problems: P1: the log should be created while transactions are executing; P2: the read operations in sub-queries of a SQL statement should be captured. • If the log records all read and write operations of transactions, including those in sub-queries of SQL statements, we call this new kind of log a Fine Grained Transaction Log.
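The sketch below illustrates P2 under strong simplifying assumptions: before a write statement is executed, its first non-correlated sub-query is extracted by a naive parenthesis scan and executed separately, and each row it returns is recorded as a read via the append helper sketched above. A real read-log generator would use a proper SQL parser and record row keys as data items; all names here are illustrative, not the authors' design.

```java
import java.sql.*;
import java.util.*;

// Toy sketch of capturing the reads made by a sub-query inside a write
// statement (problem P2). Only simple, non-correlated sub-queries are handled.
public class ReadCapture {

    // Extract the first "(SELECT ...)" sub-query by matching parentheses.
    static Optional<String> firstSubQuery(String sql) {
        int start = sql.toLowerCase().indexOf("(select");
        if (start < 0) return Optional.empty();
        int depth = 0;
        for (int i = start; i < sql.length(); i++) {
            if (sql.charAt(i) == '(') depth++;
            else if (sql.charAt(i) == ')' && --depth == 0) {
                return Optional.of(sql.substring(start + 1, i));
            }
        }
        return Optional.empty();
    }

    // Execute a write statement, logging the reads made by its sub-query first.
    static int executeAndLogReads(Connection conn, long txId, String writeSql)
            throws SQLException {
        List<String> readItems = new ArrayList<>();
        Optional<String> sub = firstSubQuery(writeSql);
        if (sub.isPresent()) {
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(sub.get())) {
                while (rs.next()) {
                    // Approximate the data item by the first column's value; a
                    // real generator would record the key of the row that was read.
                    readItems.add(rs.getString(1));
                }
            }
        }
        for (String item : readItems) {
            // FineGrainedLog.append is the helper sketched on the previous slide.
            FineGrainedLog.append(conn, txId, 'R', item, null, null);
        }
        try (Statement st = conn.createStatement()) {
            return st.executeUpdate(writeSql);   // then run the write itself
        }
    }
}
```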

  10. Fine Grained Transaction Log Generator

  11. Read Log Generator

  12. Experiment results and analysis Ⅰ • TPC-W benchmark: TPC-W is a transactional web e-commerce benchmark introduced by the Transaction Processing Performance Council. It specifies an e-commerce workload that simulates the activities of a retail website, producing load on the backend databases. • Why did we choose TPC-W? 1) TPC-W is a commonly used benchmark; 2) it provides three web interaction patterns and uses WIPSb, WIPS, and WIPSo to measure performance under the different patterns.

  13. Experiment results and analysis Ⅱ Test environment • DBMS: SQL Server 2000 on a PC with Windows NT, Pentium 2.8 GHz CPU, 2 GB main memory. • FGTL Generator and TPC-W platform: a PC with Windows NT, Pentium 2.8 GHz CPU, 1 GB main memory. • 10/100 Mbps switched LAN. • The FGTL Generator is implemented in Java using Eclipse.

  14. Experiment results and analysis Ⅲ • Results: (throughput charts; analysis follows on the next slide)

  15. Experiment results and analysis Ⅳ • Objective: study the throughput of the FGTL Generator under different numbers of EBs. • Conclusions: • As the number of EBs increases, the overhead of the FGTL Generator grows. • The throughput overhead of the FGTL Generator is lowest under WIPSo and highest under WIPSb.

  16. Questions?
