
Threads and Critical Sections

Presentation Transcript


  1. Threads and Critical Sections Vivek Pai / Kai Li Princeton University

  2. Gedankenthreads • What happens during fork? • We need particular mechanisms, but do we have options about what to do?
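
  As a concrete probe of the fork question, a minimal sketch (not from the slides) using POSIX calls: per POSIX, fork() in a multithreaded process copies the address space, but only the thread that called fork() exists in the child.

      /* Sketch: fork() from a program with two threads.  Per POSIX, only the
         calling thread is replicated in the child; the spinning thread is gone. */
      #include <pthread.h>
      #include <stdio.h>
      #include <unistd.h>
      #include <sys/wait.h>

      static void *spin(void *arg)
      {
          for (;;)
              pause();                  /* a second thread that just sits there */
          return NULL;
      }

      int main(void)
      {
          pthread_t t;
          pthread_create(&t, NULL, spin, NULL);

          pid_t pid = fork();
          if (pid == 0) {
              /* Child: only the thread that called fork() runs here. */
              printf("child %d: one thread\n", (int)getpid());
              _exit(0);
          }
          waitpid(pid, NULL, 0);
          printf("parent %d: still has both threads\n", (int)getpid());
          return 0;
      }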

  3. Mechanics • Midterm grading finished! • Tabulating, recording not done • Available this afternoon, really! • Let’s talk a little about the midterm • Start threads & critical sections • Readings will be updated & mentioned in a follow-up (so you’re responsible)

  4. Thread and Address Space • Thread • A sequential execution stream within a process (also called lightweight process) • Address space • All the state needed to run a program • Provide illusion that program is running on its own machine (protection) • There can be more than one thread per address space

  5. Concurrency and Threads • I/O devices • Overlap I/Os with I/Os and computation (modern OS approach) • Human users • Doing multiple things to the machine: Web browser • Distributed systems • Client/server computing: NFS file server • Multiprocessors • Multiple CPUs sharing the same memory: parallel program

  6. Typical Thread API • Creation • Fork, Join • Mutual exclusion • Acquire (lock), Release (unlock) • Condition variables • Wait, Signal, Broadcast • Alert • Alert, AlertWait, TestAlert
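
  A minimal sketch (not from the slides) of how this API maps onto POSIX threads: Fork/Join become pthread_create/pthread_join, Acquire/Release become pthread_mutex_lock/unlock, and Wait/Signal/Broadcast become pthread_cond_wait/signal/broadcast; the Alert primitives have no direct pthreads equivalent.

      /* Sketch: the slide's thread API expressed with POSIX threads. */
      #include <pthread.h>
      #include <stdio.h>

      static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
      static int ready = 0;

      static void *worker(void *arg)
      {
          pthread_mutex_lock(&m);             /* Acquire */
          while (!ready)
              pthread_cond_wait(&c, &m);      /* Wait: releases m and sleeps */
          printf("worker: woke up\n");
          pthread_mutex_unlock(&m);           /* Release */
          return NULL;
      }

      int main(void)
      {
          pthread_t t;
          pthread_create(&t, NULL, worker, NULL);   /* Fork   */

          pthread_mutex_lock(&m);
          ready = 1;
          pthread_cond_signal(&c);                  /* Signal */
          pthread_mutex_unlock(&m);

          pthread_join(t, NULL);                    /* Join   */
          return 0;
      }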

  7. User vs. Kernel-Level Threads • Question • What is the difference between user-level and kernel-level threads? • Discussions • When a user-level thread is blocked on an I/O event, the whole process is blocked • A context switch of kernel-threads is expensive • A smart scheduler (two-level) can avoid both drawbacks

  8. Thread Control Block • Shared information • Process info: parent process, time, etc • Memory: segments, page table, and stats, etc • I/O and file: comm ports, directories and file descriptors, etc • Private state • State (ready, running and blocked) • Registers • Program counter • Execution stack
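
  One hypothetical way to lay this out in C (field names are illustrative, not taken from the slides or any real kernel): the shared, process-wide state is stored once, and each thread's private state lives in its own TCB that points back to it.

      /* Hypothetical TCB layout: per-thread state in the TCB, shared state by pointer. */
      #include <stdint.h>

      enum thread_state { READY, RUNNING, BLOCKED };

      struct process {                 /* shared by all threads in the address space */
          struct process *parent;      /* parent process              */
          uint64_t        cpu_time;    /* accounting                  */
          void           *page_table;  /* memory: segments, paging    */
          int             fds[64];     /* open files, comm ports, ... */
      };

      struct tcb {                     /* private to one thread        */
          enum thread_state state;     /* ready, running, blocked      */
          uint64_t   regs[32];         /* saved registers              */
          uint64_t   pc;               /* program counter              */
          void      *stack;            /* execution stack              */
          struct process *proc;        /* back-pointer to shared state */
          struct tcb     *next;        /* ready-queue link             */
      };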

  9. Threads Backed by Kernel Threads • Each thread has a user stack and a private kernel stack • Pros: concurrent access to system services; works on a multiprocessor • Cons: more memory • Alternatively, each thread has a user stack and a kernel stack shared with the other threads in the same address space • Pros: less memory • Cons: serial access to system services; does not work on a multiprocessor

  10. “Too Much Milk” Problem • Don’t buy too much milk • Any person can be distracted at any point • Person A: look in fridge (out of milk), leave for Wawa, arrive at Wawa, buy milk, arrive home • Person B, starting while A is away: look in fridge (out of milk), leave for Wawa, arrive at Wawa, buy milk, arrive home • Result: both buy milk

  11. A Possible Solution? • Thread can get context switched after checking milk and note, but before buying milk • Both threads run the same code:
      if (noMilk) {
        if (noNote) {
          leave note;
          buy milk;
          remove note;
        }
      }
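
  A runnable sketch (not from the slides) of exactly this failure: both threads run the check-then-buy code above, and a short sleep right after the two checks stands in for the context switch, so both usually buy. Names (noMilk, note, shopper) are illustrative; the unsynchronized globals are intentionally racy.

      /* Sketch: the broken "leave a note" solution, with an artificial delay
         placed where the bad context switch happens.  Usually prints 2. */
      #include <pthread.h>
      #include <stdio.h>
      #include <unistd.h>

      static volatile int noMilk = 1;   /* 1 = fridge is empty     */
      static volatile int note   = 0;   /* 1 = someone left a note */

      static void *shopper(void *arg)
      {
          int *bought = arg;
          if (noMilk) {
              if (!note) {
                  usleep(1000);     /* "distracted" right after the checks */
                  note = 1;         /* leave note  */
                  *bought = 1;      /* buy milk    */
                  noMilk = 0;
                  note = 0;         /* remove note */
              }
          }
          return NULL;
      }

      int main(void)
      {
          int a_bought = 0, b_bought = 0;
          pthread_t a, b;
          pthread_create(&a, NULL, shopper, &a_bought);
          pthread_create(&b, NULL, shopper, &b_bought);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          printf("cartons bought: %d (more than 1 = too much milk)\n",
                 a_bought + b_bought);
          return 0;
      }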

  12. Another Possible Solution? • Thread A switched out right after leaving a note
      Thread A:
        leave noteA
        if (noNoteB) {
          if (noMilk) {
            buy milk
          }
        }
        remove noteA
      Thread B:
        leave noteB
        if (noNoteA) {
          if (noMilk) {
            buy milk
          }
        }
        remove noteB

  13. Yet Another Possible Solution? • Safe to buy • If the other buys, quit
      Thread A:
        leave noteA
        while (noteB)
          do nothing;
        if (noMilk)
          buy milk;
        remove noteA
      Thread B:
        leave noteB
        if (noNoteA) {
          if (noMilk) {
            buy milk
          }
        }
        remove noteB
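
  A sketch (not from the slides) of this asymmetric solution as runnable C. Plain variables are not guaranteed to work with modern hardware and optimizing compilers, so this version uses C11 seq_cst atomics, which give the sequentially consistent ordering the argument relies on; all names are illustrative.

      /* Sketch: slide 13's asymmetric solution with C11 atomics.
         Should always print exactly 1 carton. */
      #include <pthread.h>
      #include <stdatomic.h>
      #include <stdio.h>

      static atomic_int noteA = 0, noteB = 0;
      static atomic_int milk  = 0;          /* 0 = no milk; counts cartons bought */

      static void *threadA(void *arg)
      {
          atomic_store(&noteA, 1);          /* leave noteA                */
          while (atomic_load(&noteB))       /* while (noteB) do nothing   */
              ;
          if (atomic_load(&milk) == 0)      /* if (noMilk) buy milk       */
              atomic_fetch_add(&milk, 1);
          atomic_store(&noteA, 0);          /* remove noteA               */
          return NULL;
      }

      static void *threadB(void *arg)
      {
          atomic_store(&noteB, 1);          /* leave noteB                */
          if (atomic_load(&noteA) == 0)     /* if (noNoteA)               */
              if (atomic_load(&milk) == 0)  /* if (noMilk) buy milk       */
                  atomic_fetch_add(&milk, 1);
          atomic_store(&noteB, 0);          /* remove noteB               */
          return NULL;
      }

      int main(void)
      {
          pthread_t a, b;
          pthread_create(&a, NULL, threadA, NULL);
          pthread_create(&b, NULL, threadB, NULL);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          printf("cartons bought: %d\n", atomic_load(&milk));
          return 0;
      }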

  14. Remarks • The last solution works, but • Life is too complicated • A’s code is different from B’s • Busy waiting is a waste • Peterson’s solution is also complex • What we want is:
      Acquire(lock);
      if (noMilk)
        buy milk;
      Release(lock);
      (the region between Acquire and Release is the critical section)
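
  For comparison, a minimal sketch (not from the slides) of the same logic with a real lock, using a POSIX mutex in place of Acquire/Release; both threads run identical code and exactly one buys.

      /* Sketch: "buy milk" as a critical section protected by a pthread mutex. */
      #include <pthread.h>
      #include <stdio.h>

      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static int milk = 0;                    /* cartons in the fridge */

      static void *shopper(void *arg)
      {
          pthread_mutex_lock(&lock);          /* Acquire(lock)  */
          if (milk == 0)                      /* if (noMilk)    */
              milk++;                         /*   buy milk     */
          pthread_mutex_unlock(&lock);        /* Release(lock)  */
          return NULL;
      }

      int main(void)
      {
          pthread_t a, b;
          pthread_create(&a, NULL, shopper, NULL);
          pthread_create(&b, NULL, shopper, NULL);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          printf("cartons: %d\n", milk);      /* always 1 */
          return 0;
      }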

  15. What Is a Good Solution? • Only one process inside a critical section • No assumptions about CPU speeds • Processes outside of a critical section should not block other processes • No one waits forever • Works for multiprocessors

  16. Primitives • We want to avoid thinking (repeatedly) • So, we want some “contract” that provides certain behavior • Low-level behavior encapsulated in “primitives” • Application uses primitives to construct more complex behavior

  17. The Simplistic Acquire/Release
      Acquire() { disable interrupts; }
      Release() { enable interrupts; }
  • Kernel cannot let users disable interrupts • Critical sections can be arbitrarily long • Used on uniprocessors, but won’t work on multiprocessors

  18. Disabling Interrupts • Done right, serializes activity • People think sequentially – easier to reason • Guarantees code executes without interruption • Delays handling of external events • Used throughout the kernel

  19. Using Disabling Interrupts
      Acquire(lock) {
        disable interrupts;
        while (lock != FREE) {
          enable interrupts;
          disable interrupts;
        }
        lock = BUSY;
        enable interrupts;
      }

      Release(lock) {
        disable interrupts;
        lock = FREE;
        enable interrupts;
      }
  • Why do we need to disable interrupts at all? • Why do we need to enable interrupts inside the loop in Acquire?

  20. Using Disabling Interrupts
      Acquire(lock) {
        disable interrupts;
        if (lock == BUSY) {
          enqueue me for lock;
          block;
        } else
          lock = BUSY;
        enable interrupts;
      }

      Release(lock) {
        disable interrupts;
        if (anyone in queue) {
          dequeue a thread;
          make it ready;
        }
        lock = FREE;
        enable interrupts;
      }
  • When should Acquire re-enable interrupts on the way to sleep? • Before the enqueue? • After the enqueue but before blocking?

  21. Hardware Support for Mutex • Mutex = mutual exclusion • Early software-only approaches limited • Hardware support became common • Various approaches: • Disabling interrupts • Atomic memory load and store • Atomic read-modify-write • L. Lamport, “A Fast Mutual Exclusion Algorithm,” ACM Trans. on Computer Systems, 5(1):1-11, Feb 1987. – use Google to find

  22. The Big Picture (layers, top to bottom) • Concurrent Applications • High-Level Atomic API: Locks, Semaphores, Monitors, Send/Receive • Low-Level Atomic Ops: Load/Store, Interrupt disable, Test&Set • Interrupt (timer or I/O completion), Scheduling, Multiprocessor

  23. Atomic Read-Modify-Write Instructions • Test&Set: read the value and write 1 back to memory • Exchange (xchg, x86 architecture) • Swap register and memory • Compare and Exchange (cmpxchg, 486+) • If Dest == (al/ax/eax), Dest = SRC; else (al/ax/eax) = Dest • LOCK prefix in x86 • Load-linked and store-conditional (MIPS, Alpha) • Read the value in one instruction, do some operations • On the store, check whether the value has been modified; if not, the store succeeds, otherwise jump back to the start
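
  A small sketch (not from the slides) of two of these primitives via GCC/Clang atomic builtins, which compile down to the hardware instructions above (e.g., lock-prefixed instructions on x86); the builtin names are the compilers', everything else is illustrative.

      /* Sketch: test&set and compare&exchange through compiler builtins. */
      #include <stdio.h>
      #include <stdbool.h>

      int main(void)
      {
          /* Test&Set: returns the previous value; the write happens unconditionally. */
          char flag = 0;
          bool was_set = __atomic_test_and_set(&flag, __ATOMIC_SEQ_CST);
          printf("first TAS saw %d, flag is now %d\n", was_set, flag);

          /* Compare&Exchange: if (x == expected) x = desired; else expected = x. */
          int x = 5, expected = 5, desired = 9;
          bool ok = __atomic_compare_exchange_n(&x, &expected, desired, false,
                                                __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
          printf("cmpxchg %s, x = %d\n", ok ? "succeeded" : "failed", x);
          return 0;
      }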

  24. A Simple Solution with Test&Set
      Acquire(lock) {
        while (!TAS(lock))
          ;
      }

      Release(lock) {
        lock = 0;
      }
  • Wastes CPU time • Low-priority threads may never get a chance to run
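
  A runnable sketch (not from the slides) of this spinlock using the GCC/Clang __sync_lock_test_and_set builtin. Note the convention differs from the slides’ TAS: the builtin returns the old value, so the loop spins while it returns 1 (BUSY).

      /* Sketch: a test&set spinlock protecting a shared counter. */
      #include <pthread.h>
      #include <stdio.h>

      static volatile int lock = 0;            /* 0 = FREE, 1 = BUSY */
      static long counter = 0;

      static void acquire(volatile int *l)
      {
          while (__sync_lock_test_and_set(l, 1))   /* returns old value; 1 = busy */
              ;                                    /* spin: wastes CPU, as the slide notes */
      }

      static void release(volatile int *l)
      {
          __sync_lock_release(l);                  /* store 0 with release semantics */
      }

      static void *worker(void *arg)
      {
          for (int i = 0; i < 1000000; i++) {
              acquire(&lock);
              counter++;                           /* protected critical section */
              release(&lock);
          }
          return NULL;
      }

      int main(void)
      {
          pthread_t a, b;
          pthread_create(&a, NULL, worker, NULL);
          pthread_create(&b, NULL, worker, NULL);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          printf("counter = %ld (expect 2000000)\n", counter);
          return 0;
      }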

  25. Test&Set, Minimal Busy Waiting
      Acquire(lock) {
        while (!TAS(lock.guard))
          ;
        if (lock.value) {
          enqueue the thread;
          block and lock.guard = 0;
        } else {
          lock.value = 1;
          lock.guard = 0;
        }
      }

      Release(lock) {
        while (!TAS(lock.guard))
          ;
        if (anyone in queue) {
          dequeue a thread;
          make it ready;
        } else
          lock.value = 0;
        lock.guard = 0;
      }
  • Why does this work?
