VIAF: Verification-based Integrity Assurance Framework for MapReduce

Presentation Transcript


  1. VIAF: Verification-based Integrity Assurance Framework for MapReduce Yongzhi Wang, Jinpeng Wei

  2. MapReduce in Brief • Satisfies the demand for large-scale data processing • A parallel programming model introduced by Google in 2004 that has become popular in recent years • Data is stored in a DFS (Distributed File System) • Computation entities consist of one Master and many Workers (Mappers and Reducers) • Computation is divided into two phases: Map and Reduce

  3. Word Count instance (diagram: mappers emit intermediate counts such as "Hello, 4" and "Hello, 2")
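The word-count example can be sketched in plain Python. This is a minimal, framework-free illustration of the Map and Reduce phases from slide 2; the function names are ours, not Hadoop's:

```python
from collections import defaultdict

def map_phase(text):
    # Mapper: emit an intermediate (word, 1) pair for every word in its split.
    return [(word, 1) for word in text.split()]

def reduce_phase(pairs):
    # Reducer: sum the intermediate counts for each word.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# Two input splits, one per mapper; the first emits "Hello" twice,
# the second four times, and the reducer combines the partial counts.
splits = ["Hello world Hello", "Hello MapReduce Hello world Hello Hello"]
intermediate = [pair for split in splits for pair in map_phase(split)]
total = reduce_phase(intermediate)
print(total["Hello"])  # 6
```

In a real deployment the splits live in the DFS and the master schedules each `map_phase` call on a separate worker; only the phase structure is shown here.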

  4. Integrity Vulnerability How can malicious mappers be detected and eliminated to guarantee high computation integrity? Straightforward approach: duplication, which defeats a non-collusive worker
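Why duplication catches a non-collusive cheater can be sketched as follows (an illustrative simplification, not VIAF's actual protocol; all names here are hypothetical):

```python
def run_duplicated(task, worker_a, worker_b):
    # Run the same map task on two workers and compare the results.
    # A lone (non-collusive) cheater disagrees with its honest
    # duplicate, so the forgery is exposed by the mismatch.
    result_a, result_b = worker_a(task), worker_b(task)
    if result_a != result_b:
        raise RuntimeError("inconsistent results: a malicious worker is present")
    return result_a

honest = lambda words: {w: words.count(w) for w in set(words)}  # correct mapper
cheater = lambda words: {w: 0 for w in set(words)}              # forges its counts

task = ["Hello", "world", "Hello"]
print(run_duplicated(task, honest, honest))  # consistent: accepted

try:
    run_duplicated(task, honest, cheater)    # honest copy exposes the forgery
except RuntimeError as e:
    print("caught:", e)
```

The limitation raised on slide 5 is also visible here: if both workers run `cheater`, their forged results agree and the mismatch check passes. That collusion case is exactly what VIAF's verification step addresses.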

  5. Integrity Vulnerability But how to deal with collusive malicious workers?

  6. VIAF Solution • Duplication • Introduce a trusted entity to perform random verification

  7. Outline • Motivation • System Design • Analysis and Evaluation • Related work • Conclusion

  8. Architecture

  9. Assumption • Trusted entities • Master • DFS • Reducers • Verifiers • Untrusted entities • Mappers • Information will not be tampered with in network communication • A mapper's result reported to the master must be consistent with its local storage (commitment-based protocol in SecureMR)

  10. Solution • Idea: Duplication + Verification • Deterministically duplicate each task to TWO mappers to detect non-collusive mappers • Non-deterministically verify consistent results to detect collusive mappers • A mapper's credit accumulates each time it passes verification • A mapper becomes trusted only when its credit reaches the Quiz Threshold • 100% accuracy in detecting non-collusive malicious mappers • The more verification applied to a mapper, the higher the accuracy in determining whether it is collusive and malicious
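The credit-accumulation step can be sketched as follows. This is an illustrative sketch of the scheme just described, not VIAF's implementation; `VERIFY_PROB` (v) and `QUIZ_THRESHOLD` (k) are our names for the slide's parameters:

```python
import random

VERIFY_PROB = 0.2   # v: chance a consistent result is re-executed by the verifier
QUIZ_THRESHOLD = 5  # k: verifications a mapper must pass before it is trusted

def process_consistent_result(credit, result_is_correct):
    # Called once the two duplicated mappers have returned the SAME result.
    # With probability v the trusted verifier re-executes the task: a
    # collusive forgery fails the re-execution, while an honest result
    # earns the mapper one unit of credit toward the quiz threshold.
    if random.random() < VERIFY_PROB:
        if not result_is_correct:
            return credit, "collusive cheat detected"
        credit += 1
    status = "trusted" if credit >= QUIZ_THRESHOLD else "on probation"
    return credit, status

random.seed(7)
credit, status, tasks = 0, "on probation", 0
while status != "trusted":
    credit, status = process_consistent_result(credit, True)
    tasks += 1
print(f"trusted after {tasks} consistent tasks")
```

Because only a fraction v of consistent results are verified, an honest mapper needs on average k/v tasks to become trusted, which is why the quiz threshold trades accuracy against latency.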

  11. Data flow control (diagram: numbered data-flow steps 1–6, including sub-steps 5.a and 5.b)

  12. Outline • Motivation • System Design • Analysis and Evaluation • Related work • Conclusion

  13. Theoretical Analysis • Measurement metrics (for each task) • Accuracy – the probability that the reducer receives a good result from the mapper • Mapping Overhead – the average number of executions launched by the mapper • Verification Overhead – the average number of executions launched by the verifier

  14. Accuracy vs. Quiz Threshold Collusive mappers only, varying p: • m – malicious worker ratio • c – collusive worker ratio • p – probability that two assigned collusive workers are in the same collusion group and can commit a cheat • q – probability that two collusive workers commit a cheat • r – probability that a non-collusive worker commits a cheat • v – verification probability • k – Quiz Threshold
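As a back-of-the-envelope illustration of why accuracy improves with more verification (our simplification, not the paper's closed-form analysis): if a collusive pair cheats on every consistent result it produces (q = 1), each cheat goes unverified with probability 1 - v, so the chance that n cheats all escape the verifier is (1 - v)^n, which shrinks quickly:

```python
def escape_probability(v, n):
    # Probability that n independent cheats all avoid verification,
    # assuming the pair cheats every time (q = 1). Illustrative only;
    # the paper's model also accounts for m, c, p, r and threshold k.
    return (1 - v) ** n

for n in (1, 5, 10, 20):
    print(n, round(escape_probability(0.2, n), 4))
```

With the evaluation's v = 0.2, a pair that cheats persistently is almost certain to be caught well before it could accumulate much credit.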

  15. Overhead vs. Quiz Threshold Collusive workers only: the Mapping Overhead for each task stays between 2.0 and 2.4. Collusive workers only: the Verification Overhead for each task stays between 0.2 and 0.207 when v is 0.2

  16. Experimental Evaluation • Implementation based on Hadoop 0.21.0 • 11 virtual machines (512 MB of RAM, 40 GB disk each, Debian 5.0.6) deployed on a 2.93 GHz 8-core Intel Xeon CPU with 16 GB of RAM • Word count application with 400 map tasks and 1 reduce task • Of the 11 virtual hosts • 1 is both the Master and a benign worker • 1 is the verifier • 4 are collusive workers (malicious ratio of 40%) • 5 are benign workers

  17. Evaluation result Where c = 1.0, p = 1.0, q = 1.0, and m = 0.40

  18. Outline • Motivation • System Design • Analysis and Evaluation • Related work • Conclusion

  19. Related work • Replication-, sampling-, and checkpoint-based solutions have been proposed in several distributed computing contexts • Duplication-based for the Cloud: SecureMR (Wei et al., ACSAC '09), Fault Tolerance Middleware for Cloud Computing (Wenbing et al., IEEE CLOUD '10) • Quiz-based for P2P: Result verification and trust-based scheduling in peer-to-peer grids (S. Zhao et al., P2P '05) • Sampling-based for the Grid: Uncheatable Grid Computing (Wenliang et al., ICDCS '04) • Accountability in Cloud computing research • Hardware-based attestation: Seeding Clouds with Trust Anchors (Joshua et al., CCSW '10) • Logging and auditing: Accountability as a Service for the Cloud (Jinhui et al., ICSC '10)

  20. Outline • Motivation • System Design • Analysis and Evaluation • Related work • Conclusion

  21. Conclusion • Contributions • Proposed a framework (VIAF) that defeats both collusive and non-collusive malicious mappers in MapReduce computation • Implemented the system and showed, through theoretical analysis and experimental evaluation, that VIAF guarantees high accuracy while incurring acceptable overhead • Future work • Further exploration without the assumption of a trusted reducer • Reuse cached results to better utilize verifier resources

  22. Thanks! Q&A
