
On-the-Fly Garbage Collection Using Sliding Views

Erez Petrank, Technion – Israel Institute of Technology. Joint work with Yossi Levanoni, Hezi Azatchi, and Harel Paz.


Presentation Transcript


  1. On-the-Fly Garbage Collection Using Sliding Views Erez Petrank Technion – Israel Institute of Technology Joint work with Yossi Levanoni, Hezi Azatchi, and Harel Paz

  2. Garbage Collection • The user allocates space dynamically; the garbage collector automatically frees the space when it is "no longer needed". • Usually "no longer needed" = unreachable by a path of pointers from the program's local references (roots). • The programmer does not have to decide when to free an object. (No memory leaks, no dereferencing of freed objects.) • Built into Java and C#.

  3. Garbage Collection: Two Classic Approaches • Tracing [McCarthy 1960]: trace the reachable objects, reclaim objects not traced. • Reference counting [Collins 1960]: keep a reference count for each object, reclaim objects whose count is 0. • Traditional wisdom: tracing good, reference counting problematic.

  4. What (was) Bad about RC? • Does not reclaim cycles. • A heavy overhead on pointer modifications. • Traditional belief: "Cannot be used efficiently with parallel processing". [Diagram: objects A and B illustrating an unreachable cycle]

  5. What's Good about RC? • Reference counting work is proportional to the work on creations and modifications. • Can tracing deal with tomorrow's huge heaps? • Reference counting has good locality. • The Challenge: • RC overhead on pointer modification seems too expensive. • RC seems impossible to "parallelize".

  6. Garbage Collection Today • Today's advanced environments: multiprocessors + large memories. • Dealing with multiprocessors: single-threaded stop-the-world GC.

  7. Garbage Collection Today • Today's advanced environments: multiprocessors + large memories. • Dealing with multiprocessors: parallel collection, concurrent collection.

  8. Terminology (stop-the-world, parallel, concurrent, …) • Stop-the-World • Parallel (STW) • Concurrent • On-the-Fly [Diagram: program threads vs. collector activity over time for each style]

  9. Benefits & Costs (informal) • Pause times: from roughly 200ms for stop-the-world collection, down to about 20ms for concurrent collection and about 2ms for on-the-fly collection. • Throughput loss: 10-20%. [Diagram: program threads vs. collector activity, as on the previous slide]

  10. This Talk • Introduction: RC and tracing, coping with SMPs. • RC introduction and the parallelization problem. • Main focus: a novel concurrent reference counting algorithm (suitable for Java). • The concurrent collector is made on-the-fly using "sliding views". • Extensions: cycle collection, mark and sweep, generations, age-oriented. • Implementation and measurements on Jikes. • Extremely short pauses, good throughput.

  11. Basic Reference Counting • Each object o has an RC field; new objects get o.rc := 1. • When a pointer p that points to o1 is modified to point to o2, execute: o2.rc++, o1.rc--. • If o1.rc drops to 0: • Delete o1. • Decrement o.rc for every child o of o1. • Recursively delete objects whose rc is decremented to 0.
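For concreteness, here is a minimal C sketch of this basic (non-deferred) scheme. The Object layout, the rc field name, and the child iteration are illustrative assumptions, not code from the talk.

  #include <stdlib.h>

  typedef struct Object {
      int rc;                    /* reference count; new objects start at 1 */
      int num_fields;            /* number of pointer fields                */
      struct Object **fields;    /* the object's pointer fields             */
  } Object;

  void rc_release(Object *o);

  /* Run on every pointer store "*slot = new_val": o2.rc++, o1.rc--. */
  void rc_update(Object **slot, Object *new_val) {
      Object *old = *slot;
      if (new_val != NULL)
          new_val->rc++;
      *slot = new_val;
      if (old != NULL && --old->rc == 0)
          rc_release(old);
  }

  /* Recursively delete an object whose count dropped to 0. */
  void rc_release(Object *o) {
      for (int i = 0; i < o->num_fields; i++) {
          Object *c = o->fields[i];
          if (c != NULL && --c->rc == 0)
              rc_release(c);
      }
      free(o);
  }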

  12. An Important Term • A write barrier is a piece of code executed with each pointer update. • "p ← o2" implies: read p (see o1); p ← o2; o2.rc++; o1.rc--.

  13. Deferred Reference Counting • Problem: the overhead of updating rc for program variables (locals) is too high. • Solution [Deutsch & Bobrow 76]: • Don't update rc for local variables (roots). • "Once in a while": collect all objects with o.rc = 0 that are not referenced from local variables. • Deferred RC reduces overhead by 80% and is used in most modern RC systems. • Still, the "heap" write barrier is too costly.
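A rough sketch of the deferred scheme, reusing the Object and rc_release definitions from the sketch above. The zero-count table (zct) and the referenced_from_roots callback are assumed names for illustration, not the talk's or Jikes' API.

  #include <stdbool.h>

  #define ZCT_MAX 4096
  static Object *zct[ZCT_MAX];    /* zero-count table: reclamation candidates */
  static int zct_size = 0;

  /* Heap stores still pay the RC write barrier... */
  void heap_update(Object **slot, Object *new_val) {
      Object *old = *slot;
      if (new_val != NULL)
          new_val->rc++;
      *slot = new_val;
      if (old != NULL && --old->rc == 0 && zct_size < ZCT_MAX)
          zct[zct_size++] = old;  /* defer: possibly dead, possibly still held by a root */
  }

  /* ...but stores into local variables (roots) update no counts at all. */

  /* "Once in a while": free candidates not referenced from any root.
   * Duplicate zct entries are ignored here for brevity. */
  void process_zct(bool (*referenced_from_roots)(Object *)) {
      for (int i = 0; i < zct_size; i++) {
          Object *o = zct[i];
          if (o->rc == 0 && !referenced_from_roots(o))
              rc_release(o);
      }
      zct_size = 0;
  }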

  14. Multithreaded RC? Traditional wisdom: the write barrier must be synchronized!

  15. Multithreaded RC? • Problem 1: ref-count updates must be atomic. • Fortunately, this can be easily solved: each thread logs the required updates in a local buffer, and the collector applies all the updates during GC (as a single thread).
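One way to picture the per-thread buffer, as a hedged sketch: the LogEntry/ThreadLog layout below is an assumption, and the real collector also handles buffer overflow and the hand-off of full buffers. Only the single collector thread reads the logs, so no atomic rc updates are needed.

  /* One ThreadLog per mutator thread; the thread only appends to its own log. */
  typedef struct {
      Object **slot;      /* the modified pointer slot         */
      Object  *old_val;   /* the slot's value before the store */
  } LogEntry;

  typedef struct {
      LogEntry entries[4096];
      int      count;
  } ThreadLog;

  void log_update(ThreadLog *log, Object **slot, Object *old_val) {
      LogEntry *e = &log->entries[log->count++];
      e->slot = slot;
      e->old_val = old_val;
  }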

  16. Multithreaded RC? • Problem 1: ref-count updates must be atomic. • Problem 2: parallel updates confuse the counters: • Thread 1: read A.next (see B); A.next ← C; B.rc--; C.rc++. • Thread 2: read A.next (see B); A.next ← D; B.rc--; D.rc++. • Only one of the two writes to A.next survives, yet B.rc is decremented twice and both C.rc and D.rc are incremented, so the counts no longer reflect the heap.

  17. Known Multithreaded RC • [DeTreville 1990, Bacon et al. 2001]: • Compare-and-swap for each pointer modification. • Each thread records its updates in a buffer.

  18. To Summarize the Problems… • Write barrier overhead is high, even with deferred RC. • Using RC with multithreading seems to carry a high synchronization cost: a lock or compare-and-swap with each pointer update.

  19. Reducing RC Overhead • We start by looking at the "parent's point of view". • We count rc for the child, but rc changes when a parent's pointer is modified. [Diagram: a parent object with a pointer to a child object]

  20. An Observation • Consider a pointer p that takes the following values between GCs: O0, O1, O2, …, On. • All RC algorithms perform 2n operations: O0.rc--; O1.rc++; O1.rc--; O2.rc++; O2.rc--; … ; On.rc++. • But only two operations are needed: O0.rc-- and On.rc++. [Diagram: p pointing in turn to O0, O1, O2, …, On]

  21. Use of Observation • Timeline between two garbage collections: • p ← O1 (record p's previous value O0) • p ← O2 (do nothing) • … • p ← On (do nothing) • At garbage collection, for each modified slot p: • Read p to get On; read the records to get O0. • O0.rc--, On.rc++. • Only the first modification of each pointer is logged.
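A sketch of the corresponding collection-time processing, using the ThreadLog layout assumed earlier: only the logged value from before the first modification is decremented, and only the value currently in the heap is incremented.

  /* Apply O0.rc-- and On.rc++ for every slot that was modified (and hence
   * logged exactly once) since the previous collection. */
  void adjust_counts(ThreadLog *log) {
      for (int i = 0; i < log->count; i++) {
          Object *o_first = log->entries[i].old_val;  /* O0: value at the previous GC  */
          Object *o_now   = *log->entries[i].slot;    /* On: current value in the heap */
          if (o_first != NULL)
              o_first->rc--;
          if (o_now != NULL)
              o_now->rc++;
      }
      log->count = 0;   /* the log is consumed once per collection */
  }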

  22. Some Technical Remarks • When a pointer is first modified, it is marked “dirty” and its previous value is logged. • We actually log each object that gets modified (and not just a single pointer). • Reason 1: we don’t want a dirty bit per pointer. • Reason 2: object’s pointers tend to be modified together. • Only non-null pointer fields are logged. • New objects are “born dirty”.

  23. Effects of Optimization • RC work is significantly reduced: • The number of logging and counter updates is reduced by a factor of 100-1000 for typical Java benchmarks!

  24. Elimination of RC Updates

  25. Effects of Optimization • RC work is significantly reduced: • The number of logging and counter updates is reduced by a factor of 100-1000 for typical Java benchmarks! • Write barrier overhead is dramatically reduced: the vast majority of write barriers run a single "if". • Last but not least: the task has changed! We need to record the first update.

  26. Reducing Synchronization Overhead • Our second contribution: a carefully designed write barrier (and an observation) requires no synchronization operation.

  27. The write barrier

  Update(Object **slot, Object *new) {
      Object *old = *slot;
      if (!IsDirty(slot)) {
          log(slot, old);      /* record the slot and its value before the update   */
          SetDirty(slot);      /* only the first modification of the slot is logged */
      }
      *slot = new;             /* perform the actual pointer store                  */
  }

  • Observation: if two threads invoke the write barrier in parallel, and both log an old value, then both record the same old value.

  28. Running the Write Barrier Concurrently

  Thread 1:
  Update(Object **slot, Object *new) {
      Object *old = *slot;
      if (!IsDirty(slot)) {
          /* If we got here, Thread 2 has not  */
          /* yet set the dirty bit and thus    */
          /* has not yet modified the slot.    */
          log(slot, old);
          SetDirty(slot);
      }
      *slot = new;
  }

  Thread 2:
  Update(Object **slot, Object *new) {
      Object *old = *slot;
      if (!IsDirty(slot)) {
          /* If we got here, Thread 1 has not  */
          /* yet set the dirty bit and thus    */
          /* has not yet modified the slot.    */
          log(slot, old);
          SetDirty(slot);
      }
      *slot = new;
  }

  29. Concurrent Algorithm • Use the write barrier with the program threads. • To collect: • Stop all threads. • Scan roots (local variables). • Get the buffers with the modified slots. • Clear all dirty bits. • Resume threads. • For each modified slot: • decrement rc for the old value (recorded in the buffer), • increment rc for the current value (read from the heap). • Reclaim non-local objects with rc = 0.
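The whole cycle, as a structural sketch only: every helper below is a hypothetical placeholder for the corresponding step above, not a real Jikes API, and adjust_counts is the routine sketched earlier.

  extern void stop_all_threads(void);
  extern void resume_all_threads(void);
  extern void scan_roots(void);                 /* record objects referenced by locals    */
  extern void take_buffer(ThreadLog *log);      /* hand the thread's log to the collector */
  extern void clear_all_dirty_bits(void);
  extern void reclaim_unreferenced_zero_rc_objects(void);

  void collect(ThreadLog *logs[], int num_threads) {
      stop_all_threads();
      scan_roots();
      for (int t = 0; t < num_threads; t++)
          take_buffer(logs[t]);
      clear_all_dirty_bits();
      resume_all_threads();                     /* program runs while counts are adjusted */

      for (int t = 0; t < num_threads; t++)
          adjust_counts(logs[t]);               /* O0.rc--, On.rc++ per logged slot       */

      reclaim_unreferenced_zero_rc_objects();   /* rc = 0 and not referenced from roots   */
  }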

  30. Timeline • Stop threads. • Scan roots; get buffers; erase dirty bits. • Resume threads. • Decrement rc for the values read from the buffers; increment rc for the "current" values; collect dead objects.

  31. Timeline • Stop threads. • Scan roots; get buffers; erase dirty bits. • Resume threads. • Decrement rc for the values read from the buffers; increment rc for the "current" values; collect dead objects. • Note: unmodified current values are in the heap; modified ones are in the new buffers.

  32. Concurrent Algorithm • Use the write barrier with the program threads. • To collect: • Stop all threads (Goal 2: stop one thread at a time instead). • Scan roots (local variables). • Get the buffers with the modified slots. • Clear all dirty bits (Goal 1: clear the dirty bits during the program run instead). • Resume threads. • For each modified slot: • decrement rc for the old value (recorded in the buffer), • increment rc for the current value (read from the heap). • Reclaim non-local objects with rc = 0.

  33. The Sliding Views "Framework" • Develop a concurrent algorithm in which there is a short time when all the threads are stopped simultaneously to perform some task. • Then avoid stopping the threads together; instead, stop one thread at a time. • Tricky part: "fix" the problems created by this modification. • The idea is borrowed from the distributed computing community [Lamport].
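As a very rough illustration of the "one thread at a time" idea only: the helper names below are hypothetical, and this sketch deliberately omits the fixes the slide calls the tricky part (one of them, snooping, appears two slides below).

  extern void stop_thread(int t);
  extern void resume_thread(int t);
  extern void scan_thread_roots(int t);         /* collect this thread's local references */

  /* Replace the single global stop with a per-thread handshake: each thread
   * is paused alone, its roots and log are taken, and it resumes before the
   * next thread is paused, so the collector sees a "sliding view" of the heap. */
  void sliding_view_handshake(ThreadLog *logs[], int num_threads) {
      for (int t = 0; t < num_threads; t++) {
          stop_thread(t);
          scan_thread_roots(t);
          take_buffer(logs[t]);                 /* placeholder from the previous sketch */
          resume_thread(t);
      }
  }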

  34. Graphically • [Diagram: a snapshot reads the entire heap at a single time t, while a sliding view reads different heap addresses at different times between t1 and t2]

  35. Fixing Correctness • The way to do this in our algorithm is to use snooping: • While collecting the roots, record objects that get a new pointer. • Do not reclaim these objects. • No details…
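A hedged sketch of how snooping might be folded into the write barrier from slide 27. The snoop_active flag and mark_snooped are illustrative names (in the actual collector the flag is per-thread and snooped objects go into local buffers); log_slot stands in for the log helper on slide 27.

  #include <stdbool.h>

  extern bool IsDirty(Object **slot);               /* as used in the write barrier above    */
  extern void SetDirty(Object **slot);
  extern void log_slot(Object **slot, Object *old); /* the "log" helper from slide 27        */
  extern volatile bool snoop_active;                /* raised while the collector gathers roots */
  extern void mark_snooped(Object *o);              /* hypothetical: exempt o from this cycle's reclamation */

  void update_with_snooping(Object **slot, Object *new_val) {
      Object *old = *slot;
      if (!IsDirty(slot)) {
          log_slot(slot, old);
          SetDirty(slot);
      }
      if (snoop_active && new_val != NULL)
          mark_snooped(new_val);                    /* object received a new pointer during root collection */
      *slot = new_val;
  }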

  36. Cycle Collection • Our initial solution: use a tracing algorithm infrequently. • More about this tracing collector and about cycle collectors later…

  37. Performance Measurements • Implementation for Java on the Jikes Research JVM. • Compared collectors: • Jikes parallel stop-the-world (STW). • Jikes concurrent RC (Jikes concurrent). • Benchmarks: • SPECjbb2000: a server benchmark that simulates business-like transactions. • SPECjvm98: a client suite of mostly single-threaded benchmarks.

  38. Pause Times vs. STW

  39. Pause Times vs. Jikes Concurrent

  40. SPECjbb2000 Throughput

  41. SPECjvm98 Throughput

  42. SPECjbb2000 Throughput

  43. A Glimpse into Subsequent Work: SPECjbb2000 Throughput

  44. Subsequent Work • Cycle Collection [CC'05] • A Mark-and-Sweep Collector [OOPSLA'03] • A Generational Collector [CC'03] • An Age-Oriented Collector [CC'05]

  45. Related Work • It's not clear where to start… • RC, concurrent, generational, etc. • Some of the more relevant work was mentioned along the way.

  46. Conclusions • A study of concurrent garbage collection with a focus on RC. • Novel techniques obtaining short pauses and high efficiency. • The best approach: age-oriented collection with concurrent RC for the old generation and concurrent tracing for the young. • Implementation and measurements on Jikes demonstrate non-obtrusiveness and high efficiency.

  47. Project Building Blocks • A novel reference counting algorithm. • State-of-the-art cycle collection. • Generational collection: RC for the old generation and tracing for the young. • A concurrent tracing collector. • An age-oriented collector: fitting generations with concurrent collectors.
