
Rollback by Reverse Computation


Presentation Transcript


  1. Rollback by Reverse Computation. Kevin Hamlen, CS717: Fault-Tolerant Computing

  2. Research on Reversible Computation
     • Quantum Computing
        • Quantum Computation (Nielsen & Chuang)
     • Nanocomputers
        • Ralph Merkle @ Zyvex (http://www.merkle.com)
     • Low-power processor design
        • http://www.ai.mit.edu/~cvieri/reversible.html
     • Computational Complexity Theory
        • Time and Space Bounds for Reversible Simulation (Buhrman, Tromp, and Vitányi)
     • Parallel Programs & Fault Tolerance
        • Carothers (RPI), Perumalla (Georgia Tech), and Fujimoto (Georgia Tech)

  3. Georgia Tech Time Warp (GTW)
     “General purpose parallel discrete event simulation executive”
     • General purpose simulator
        • telecommunication network simulations
        • commercial air traffic simulations
     • Parallel
        • Shared memory
        • Message passing
     • Event-based
        • Each computation and each event associated with a stimulus event
     • Executes programs written in C/C++ with API calls

  4. Optimistic Synchronization
     • Incorrect computations permitted
     • Roll back when an error is detected
     • Direct cancellation

  5. State-Saving
     • Copy state-saving – copy of entire state saved before every event
     • Periodic state-saving – copy of entire state saved before every p-th event
     • Incremental state-saving – copy of only modified state saved before every event
     • Reversible computing – copy of only destroyed state data saved before every event
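
     A minimal C sketch (not from the slides; the struct and names are
     illustrative) of why the reversible approach saves less: a constructive
     update such as y += x destroys no information and needs nothing saved,
     while a destructive assignment must push only the value it overwrites.

         typedef struct { int x, y; } State;

         /* Forward event: the += is invertible as-is; only the value destroyed
            by the assignment to y is saved. */
         void event_fwd(State *s, int *saved_y) {
             s->y += s->x;        /* old y is recoverable as y - x */
             *saved_y = s->y;     /* y is about to be overwritten: save it */
             s->y = 0;            /* destructive: old value lost otherwise */
         }

         /* Rollback: restore the destroyed value, then invert the += . */
         void event_rev(State *s, int saved_y) {
             s->y = saved_y;
             s->y -= s->x;
         }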

  6. Advantages of Reversible Computing
     • Lower space overhead for saved state
     • Lower time overhead in the non-rollback case
     • Cost of state-saving amortized over duration of computation
     • Advantages most pronounced in fine-grained settings (i.e. many events, each associated with small computations)

  7. Reversing C Programs

     Original:
         int x=0, y=1;
         f() { if (x>y) y += x; else x += y; }

     Reversible:
         int x=0, y=1;
         bit b;
         f'() { b = (x>y); if (b) y += x; else x += y; }

     Reverse:
         int x=0, y=1;
         bit b;
         rf'() { if (b) y -= x; else x -= y; }

     Note: If S is the state, then we need rf'(f'(S)) == S. However, we don't
     care about the result of f'(rf'(S)).
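
     A quick check of the rf'(f'(S)) == S property for this example; this is a
     standalone sketch that uses int in place of the slide's bit type so it
     compiles as ordinary C.

         #include <assert.h>

         static int x = 0, y = 1;
         static int b;                  /* stands in for "bit b" */

         static void f_fwd(void) { b = (x > y); if (b) y += x; else x += y; }  /* f'  */
         static void f_rev(void) { if (b) y -= x; else x -= y; }               /* rf' */

         int main(void) {
             int x0 = x, y0 = y;
             f_fwd();                       /* run the event forward */
             f_rev();                       /* roll it back */
             assert(x == x0 && y == y0);    /* rf'(f'(S)) == S */
             return 0;
         }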

  8. RCC: Reverse C Compiler
     • Transforms arbitrary C programs
     • Each function made reversible
     • A reverse of each function is generated
     • GTW Executive modified to call function reverses during rollback

  9. Saving Destroyed State Data: The Tape Driver Abstraction
     • SAVE_BYTES(var) – push var onto the tape and increment the tape pointer by sizeof(var)*8
     • RESTORE_BYTES(var) – pop var from the tape and decrement the tape pointer by sizeof(var)*8
     • SAVE_BITS(var, n) – push the n low-order bits of var onto the tape; increment the tape pointer by n
     • RESTORE_BITS(var, n) – pop n bits from the tape and store them in the n low-order bits of var; decrement the tape pointer by n
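
     The slide specifies only the interface; the sketch below is one plausible
     realization, assuming a fixed-size, bit-addressed tape (array size and
     function names are made up here). SAVE_BYTES/RESTORE_BYTES are then just
     the n = sizeof(var)*8 case.

         #include <stdint.h>
         #include <stddef.h>

         static uint8_t tape[1 << 20];   /* tape storage (size is arbitrary) */
         static size_t  tp;              /* tape pointer, measured in bits */

         /* SAVE_BITS(var, n): push the n low-order bits of var, advance tp by n */
         static void save_bits(uint32_t var, int n) {
             for (int i = 0; i < n; i++, tp++) {
                 if (var & (1u << i)) tape[tp >> 3] |=  (uint8_t)(1u << (tp & 7));
                 else                 tape[tp >> 3] &= (uint8_t)~(1u << (tp & 7));
             }
         }

         /* RESTORE_BITS(var, n): pop n bits into the n low-order bits of var */
         static uint32_t restore_bits(int n) {
             uint32_t var = 0;
             for (int i = n - 1; i >= 0; i--) {   /* pop in reverse push order */
                 tp--;
                 if (tape[tp >> 3] & (1u << (tp & 7))) var |= (1u << i);
             }
             return var;
         }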

  10. RCC's Transformation

      Original:
          int x=0, y=1;
          f() { if (x>y) y += x; else x += y; }

      Reversible:
          int x=0, y=1;
          f'() { char c = !!(x>y); if (c) y += x; else x += y; SAVE_BITS(c,1); }

      Reverse:
          rf'() { char c; RESTORE_BITS(c,1); if (c) y -= x; else x -= y; }

  11. RCC Compilation Procedure: Original C Program → Normalize → Transform → Optimize → Reversible C Program with reverses

  12. Normalization Post-conditions
      • Only one assignment per expression. No other side-effects in such an expression except possibly a single function call.
      • for() loops replaced by equivalent while()'s.
      • Conditional expressions are side-effect free.
      • Only one return statement per function. It has the form “return;” or “return var;”.
      • break's and continue's replaced by goto's.
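
      As an illustration of what these post-conditions mean for a small
      function (the normalized form below is hand-written to match the list,
      not RCC's actual output):

          /* Before normalization */
          int sum(int n) {
              int s = 0;
              for (int i = 0; i < n; i++)
                  s += i;
              return s;
          }

          /* After normalization: the for() becomes a while(), each expression
             performs a single assignment, the condition is side-effect free,
             and there is exactly one "return var;" at the end. */
          int sum_norm(int n) {
              int s = 0;
              int i = 0;
              while (i < n) {
                  s = s + i;
                  i = i + 1;
              }
              return s;
          }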

  13. Two Unusual Normalizations
      • All floating point datatypes promoted to strictly higher precision types
         • Software emulation of a new highest-precision datatype may be necessary.
      • Abnormal exit (via goto) from a block requires saving all variables which are about to go out of scope.
         • Analogous to how C++ handles implicit destructor calls
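
      A hedged illustration of the second point, using the SAVE_BYTES macro
      from slide 9 (the function body and the stub definition are made up for
      this example; RCC's actual output may differ):

          /* Stub so the sketch compiles; the real macro is the tape driver. */
          #define SAVE_BYTES(var) ((void)0)

          void g(int v) {
              {
                  int tmp = v * 2;
                  if (tmp < 0) {
                      SAVE_BYTES(tmp);   /* tmp dies at the jump: save it now */
                      goto done;         /* abnormal exit from the block */
                  }
                  /* ... normal uses of tmp ... */
                  SAVE_BYTES(tmp);       /* normal exit from the block */
              }
          done:
              return;
          }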

  14. Transformation Phase: Block Scopes

      Original:
          { int x, y; s1 s2 s3 }

      Reversible:
          { int x, y; s1 s2 s3 SAVE(x); SAVE(y); }

      Reverse:
          { int x, y; RESTORE(y); RESTORE(x); rs3 rs2 rs1 }

  15. Interesting Transformations
      • Function pointer calls – maintain a hash table of the reverses of functions
      • Labeled join points – introduce a variable to record the source of the jump
      • Loops – introduce a counter variable to record the number of iterations
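
      A sketch of the loop transformation only (names are illustrative, not
      RCC's generated code): a counter records how many iterations ran forward,
      and the reverse re-executes the inverted body that many times.

          static unsigned iters;   /* iteration count; a real implementation
                                      would save/restore it via the tape driver */

          void loop_fwd(int *a, int n) {
              int i = 0;
              iters = 0;
              while (i < n) {
                  a[i] = a[i] + i;      /* forward body */
                  i = i + 1;
                  iters = iters + 1;    /* count iterations for the reverse */
              }
          }

          void loop_rev(int *a) {
              while (iters > 0) {
                  iters = iters - 1;
                  a[iters] = a[iters] - iters;   /* inverted body, reverse order */
              }
          }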

  16. Optimization Phase
      • Irreversible operations ignored (e.g. output, network message sends)
      • Kernel-reversible operations are special cases (e.g. message cancellation)
      • Dataflow analysis on reverses (esp. initializer expressions)
      • Invariant detection (esp. conditionals)
      • Tape compression (esp. loops)
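
      As one concrete case, invariant detection on conditionals can avoid tape
      traffic entirely: if analysis shows the branch condition is unchanged by
      both arms, the reverse can re-evaluate it instead of restoring a saved
      bit. The sketch below is illustrative, not RCC output.

          /* flag is written by neither arm, so no SAVE_BITS is needed */
          void h_fwd(int *x, int *y, int flag) {
              if (flag) *y += *x;
              else      *x += *y;
          }

          /* the reverse recomputes the same (invariant) condition */
          void h_rev(int *x, int *y, int flag) {
              if (flag) *y -= *x;
              else      *x -= *y;
          }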

  17. References
      • C. Carothers, K. S. Perumalla, and R. M. Fujimoto. “Efficient Optimistic Parallel Simulations using Reverse Computation.” In Proceedings of the 13th Workshop on Parallel and Distributed Simulation, May 1999.
      • K. S. Perumalla and R. M. Fujimoto. “Source code transformations for efficient reversibility.” Technical Report GIT-CC-99-21, College of Computing, Georgia Institute of Technology.
