An Efficient, Incremental, Automatic Garbage Collector

Presentation Transcript


  1. An Efficient, Incremental, Automatic Garbage Collector • P. Deutsch and D. Bobrow • Presented by Ivan Jibaja • CS 395T

  2. Historical Context • Intel 8080 dominates the PC market • 8-bit processor • 2 MHz • Hard drive and tape for secondary storage

  3. What is Reference Counting (RC)? • Basic Idea: • Objects that do not have external references are unreachable • Unreachable objects are not live • Every object stores the count of external references to itself
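To make the basic idea concrete, here is a minimal sketch of classic (non-deferred) reference counting in C, the scheme the rest of the slides improve on. The Object struct and the obj_new/obj_retain/obj_release names are illustrative, not from the paper.

      #include <stdlib.h>

      typedef struct Object {
          int refcount;               /* count of external references to this object */
          /* ... payload fields would follow ... */
      } Object;

      static Object *obj_new(void) {
          Object *o = malloc(sizeof *o);
          o->refcount = 1;            /* the creating reference */
          return o;
      }

      static void obj_retain(Object *o)  { o->refcount++; }

      static void obj_release(Object *o) {
          if (--o->refcount == 0)     /* no external references remain */
              free(o);                /* unreachable, so not live: reclaim immediately */
      }

      int main(void) {
          Object *a = obj_new();      /* RC = 1 */
          obj_retain(a);              /* a second pointer is created: RC = 2 */
          obj_release(a);             /* one pointer destroyed: RC = 1 */
          obj_release(a);             /* last pointer destroyed: object freed */
          return 0;
      }

Note that every pointer creation and destruction touches the count, which is exactly the per-mutation overhead the next slides set out to defer.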

  4. Reference Counting (RC) • Advantages • Immediate reclamation of dead objects • Easy to implement • Disadvantages • Circular structures • Computation overhead (per pointer mutation) • Space overhead

  5. Efficiency and Scalability • Reference Counting • Overhead is proportional to the amount of work performed by the mutator • Tracing algorithms • Overhead is proportional to the amount of allocated space

  6. Memory Characterization – LISP • Clark and Green analyzed Lisp programs • Most data have a reference count of one • Otherwise, data is mostly temporary (especially local data) • Very few cells are referenced by more than one cell (roughly 2% to 10%)

  7. Deferred RC – Motivation I • RC updates are transactional and stored in a sequential file • Three events can modify the RC: • Allocation of a new object • Creation of a pointer to an object • Destruction of a pointer to an object • Benefit: • No immediate time overhead for adjusting the RC
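A minimal sketch of how the three RC-modifying events might be encoded as records appended to a sequential file. The TxKind and Transaction types and log_transaction are assumptions for illustration, not the paper's actual record format.

      #include <stdio.h>

      typedef enum {
          TX_ALLOCATE,         /* allocation of a new object */
          TX_CREATE_POINTER,   /* creation of a pointer to an object */
          TX_DESTROY_POINTER   /* destruction of a pointer to an object */
      } TxKind;

      typedef struct {
          TxKind kind;
          void  *target;       /* the object whose RC the transaction affects */
      } Transaction;

      /* The mutator only appends a record; the RC itself is adjusted later when
       * the sequential file is processed, so there is no immediate time overhead. */
      void log_transaction(FILE *txfile, TxKind kind, void *target) {
          Transaction tx = { kind, target };
          fwrite(&tx, sizeof tx, 1, txfile);
      }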

  8. Deferred RC – Motivation II • Statement: Most data have a reference count of one • Adjustment: • Only store the RC for objects with 2 or more references • Store RC separately from data: Multireference Table (MRT) • Benefit: • Reduce space overhead • No paging (MRT is always in memory)

  9. Deferred RC – Motivation III • Statement: Local data is mostly temporary • Adjustment: • Do not keep track of local references (from the stack) • Store objects that are unreferenced from other cells (but may be pointed to from the stack) in a Zero Count Table (ZCT) • Benefit: • Reduce space overhead • Optimized for local variables

  10. Deferred RC - Visualization • Zero Count Table (ZCT): objects unreferenced from other objects (RC = 0), possibly pointed to from the stack • Multireference Table (MRT): objects with RC >= 2 • Objects with RC = 1 appear in neither table
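A minimal sketch of the two tables, using small fixed-size linear arrays in place of the paper's hash tables. The helper names match the pseudocode on the implementation slides below, but their bodies here are assumptions for illustration.

      #include <stdbool.h>
      #include <stddef.h>

      #define TABLE_MAX 1024

      /* Zero Count Table: objects with RC = 0 from other objects
       * (they may still be pointed to from the stack). */
      void  *zct[TABLE_MAX];
      size_t zct_len;

      /* Multireference Table: objects with RC >= 2, keyed by address. */
      struct { void *obj; int rc; } mrt[TABLE_MAX];
      size_t mrt_len;

      bool inZCT(void *p) {
          for (size_t i = 0; i < zct_len; i++)
              if (zct[i] == p) return true;
          return false;
      }

      void addToZCT(void *p) { zct[zct_len++] = p; }

      void removeFromZCT(void *p) {
          for (size_t i = 0; i < zct_len; i++)
              if (zct[i] == p) { zct[i] = zct[--zct_len]; return; }
      }

      static size_t mrtFind(void *p) {            /* index of p, or mrt_len if absent */
          for (size_t i = 0; i < mrt_len; i++)
              if (mrt[i].obj == p) return i;
          return mrt_len;
      }

      bool inMRT(void *p)            { return mrtFind(p) < mrt_len; }
      int  getRCfromMRT(void *p)     { return mrt[mrtFind(p)].rc; }  /* p must be present */
      void addToMRT(void *p, int rc) { mrt[mrt_len].obj = p; mrt[mrt_len].rc = rc; mrt_len++; }
      void incrementRC(void *p)      { mrt[mrtFind(p)].rc++; }
      void decrementRC(void *p)      { mrt[mrtFind(p)].rc--; }

      void removeFromMRT(void *p) {
          size_t i = mrtFind(p);
          if (i < mrt_len) mrt[i] = mrt[--mrt_len];
      }

      /* By convention, an object in neither table has RC = 1. */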

  11. Deferred RC – Implementation I • On “Allocation” • Add to the Zero Count Table
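A one-function sketch of the allocation action on top of the tables sketched above; rawAllocate is a hypothetical underlying allocator, not something named in the paper.

      #include <stddef.h>

      void *rawAllocate(size_t size);   /* hypothetical underlying allocator */
      void  addToZCT(void *p);          /* from the table sketch above */

      /* A freshly allocated object has no references from other objects yet,
       * so it starts out in the Zero Count Table. */
      void *allocateObject(size_t size) {
          void *p = rawAllocate(size);
          addToZCT(p);                  /* RC (from other objects) = 0 */
          return p;
      }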

  12. Deferred RC – Implementation II • On “Creation of pointer p”:

      if (inZCT(p)) {
          removeFromZCT(p);        // After: RC = 1
      } else if (inMRT(p)) {
          incrementRC(p);          // After: RC > 2
      } else {
          addToMRT(p, 2);          // After: RC = 2
      }

  13. Deferred RC – Implementation III • On “Destruction of pointer p”:

      if (inMRT(p)) {
          if (getRCfromMRT(p) == 2) {
              removeFromMRT(p);    // After: RC = 1
          } else {
              decrementRC(p);      // After: RC >= 2
          }
      } else {
          addToZCT(p);             // After: RC = 0
      }

  14. Deferred RC - Reclamation • All objects present in the ZCT that are not referenced by variables on the stack are reclaimable • Optimization: • Create a Variable Reference Table (VRT) • A hash table storing all the objects pointed to by variables on the stack • Result: • If p is in the ZCT and not in the VRT, then it is reclaimable
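A sketch of the reclamation pass over the ZCT from the earlier table sketch; noteStackReference and reclaim are hypothetical hooks standing in for the stack scan and the actual freeing of an object, and the VRT is approximated with a linear array rather than a hash table.

      #include <stdbool.h>
      #include <stddef.h>

      extern void  *zct[];              /* Zero Count Table from the sketch above */
      extern size_t zct_len;
      void reclaim(void *p);            /* hypothetical: return p's storage to the allocator */

      /* Variable Reference Table: objects currently pointed to by stack variables. */
      void  *vrt[1024];
      size_t vrt_len;

      static bool inVRT(void *p) {
          for (size_t i = 0; i < vrt_len; i++)
              if (vrt[i] == p) return true;
          return false;
      }

      /* Called for each object pointer found while scanning the stack
       * (the stack scan itself is hypothetical and omitted here). */
      void noteStackReference(void *p) { vrt[vrt_len++] = p; }

      /* Reclaim every ZCT entry with no stack reference; entries that are still
       * pointed to from the stack stay in the ZCT for the next pass. */
      void reclaimPhase(void) {
          size_t kept = 0;
          for (size_t i = 0; i < zct_len; i++) {
              if (inVRT(zct[i]))
                  zct[kept++] = zct[i];
              else
                  reclaim(zct[i]);
          }
          zct_len = kept;
      }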

  15. Deferred RC - Optimization • Buffer transactions in core in a hash table (with a bit marking allocation) whose value is the RC • Benefit: some transactions never need to be processed • “allocate-create” pairs • “create-destroy” pairs • Abandoned data is left with only “allocate”
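A sketch of the cancellation test this buffering enables, assuming a hypothetical per-object entry with an "allocated" bit and a net create/destroy count; it only illustrates why allocate-create and create-destroy pairs never need to reach the transaction file.

      #include <stdbool.h>

      typedef struct {
          void *obj;
          bool  allocated;   /* an "allocate" event is buffered for this object */
          int   delta;       /* net buffered "create" (+1) / "destroy" (-1) events */
      } BufferedEntry;

      /* An allocate followed by exactly one create nets out to RC = 1, the default
       * state (neither table), and creates cancelled by destroys net out to no
       * change at all; neither case needs processing. A bare "allocate" with no
       * surviving create (abandoned data) still does. */
      bool needsProcessing(const BufferedEntry *e) {
          if (e->allocated && e->delta == 1)  return false;   /* allocate-create pair */
          if (!e->allocated && e->delta == 0) return false;   /* create-destroy pairs */
          return true;
      }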

  16. What about cycles? • All RC schemes require an independent collector for cycles • This collector can also perform compaction and reorganization (for locality) • Paper describes a copying algorithm and the details for Deferred RC

  17. Discussion • CMP: Concurrent vs. heterogeneous cores? • Static analysis • Find possible transaction pairs that cancel each other • Generations: • RC vs. tracing, young vs. old spaces • Does it really reduce space overhead? • Ungar claims deferred RC uses 25 KB more than traditional RC (for his one large Lisp sample)

  18. Discussion • “For a ‘create pointer’ transaction: if the datum referenced is in the ZCT, just delete it, since the effective RC is 1.” Can we really just remove this entry? What about references from the stack?

  19. Linearizing Cycle Collection • Expand the MRT to include forwarding pointers and actually count stack references • Store the expanded MRT sorted (rather than hashed) so that access is sequential
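A sketch of what one expanded, sorted MRT record might look like; the field layout is an assumption, chosen only to show where a forwarding pointer and a stack-inclusive count would live.

      #include <stdint.h>

      /* One record of the expanded Multireference Table. Keeping the records
       * sorted by address lets a collection pass read them sequentially instead
       * of probing a hash table. */
      typedef struct {
          uintptr_t addr;      /* object address; records are kept sorted on this */
          uintptr_t forward;   /* forwarding pointer installed when the object is moved */
          int       rc;        /* reference count, here including stack references */
      } ExpandedMRTEntry;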
