
Implementing Threads/Locks on a Uni-Processor

Learn about implementing threads and locks on a uni-processor, including thread interactions, shared data access, use of locks, monitors, and semaphores, and the different states of threads.





Presentation Transcript


  1. CPS110: Implementing threads/locks on a uni-processor Landon Cox

  2. Recall: thread interactions
    • Threads can access shared data
      • Use locks, monitors, and semaphores
      • What we’ve done so far
    • Threads also share hardware
      • CPU (uni-processor)
      • Memory

  3. Private vs. global thread state
    • What state is private to each thread?
      • Code (like an actor’s lines in a play)
      • PC (where the actor is in the script)
      • Stack, SP (the actor’s mindset)
    • What state is shared?
      • Global variables, heap (the props on set)
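A small illustration of the split (mine, not the slide's): in the sketch below the global counter is shared by both threads, while each thread's local tally lives on its own stack and is private to it.

    #include <pthread.h>
    #include <stdio.h>

    int shared_counter = 0;              /* global: visible to every thread */

    void *worker(void *arg) {
        (void)arg;
        int private_tally = 0;           /* local: lives on this thread's own stack */
        for (int i = 0; i < 1000; i++) {
            private_tally++;             /* private, so always ends at 1000 */
            shared_counter++;            /* shared, so needs a lock to be correct */
        }
        printf("private=%d shared=%d\n", private_tally, shared_counter);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }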

  4. Thread control block (TCB)
    • A running thread’s private data is on the CPU
    • A non-running thread’s private state is in its TCB
    • Thread control block (TCB)
      • Container for a non-running thread’s private data
      • (PC, code, SP, stack, registers)

  5. Thread control block
    [Diagram: one address space holding TCB1, TCB2, and TCB3 (each with saved PC, SP, and registers) on the ready queue, plus each thread's code and stack. Thread 1 is running, and the CPU's PC, SP, and registers hold its live state.]

  6. Thread control block
    [Same diagram, one step later: because thread 1 is running, its TCB is gone from the ready queue and its PC, SP, and registers live on the CPU. TCB2 and TCB3 remain queued with their saved state, and all three stacks stay in the address space.]

  7. Thread states
    • Running: currently using the CPU
    • Ready: ready to run, but waiting for the CPU
    • Blocked: stuck in lock (), wait (), or down ()
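A minimal C sketch of a TCB and the three states (the field names are illustrative, not the course's actual Project 1 types); the saved PC, SP, and registers are bundled inside a ucontext_t.

    #include <ucontext.h>

    enum thread_state { RUNNING, READY, BLOCKED };

    /* Container for a non-running thread's private state. */
    struct tcb {
        enum thread_state state;
        ucontext_t        context;   /* saved PC, SP, and general registers */
        char             *stack;     /* base of this thread's private stack */
        struct tcb       *next;      /* link for the ready queue or a lock's wait queue */
    };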

  8. Switching threads
    • What needs to happen to switch threads?
      1. Thread returns control to the OS (for example, via the “yield” call)
      2. OS chooses the next thread to run
      3. OS saves the state of the current thread to its thread control block
      4. OS loads the context of the next thread from its thread control block
      5. Run the next thread
    • Project 1: swapcontext
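A minimal sketch of those steps as a user-level yield built on swapcontext (the call Project 1 points at). The struct tcb comes from the sketch above, the queue helpers and globals are assumptions, and interrupt handling is ignored until later slides.

    #include <stddef.h>
    #include <ucontext.h>
    #include "tcb.h"   /* hypothetical header holding struct tcb / enum thread_state from above */

    extern struct tcb *current;                    /* thread now on the CPU */
    extern struct tcb *ready_queue;
    extern struct tcb *dequeue(struct tcb **q);    /* assumed FIFO queue helpers */
    extern void enqueue(struct tcb **q, struct tcb *t);

    void thread_yield(void) {
        struct tcb *prev = current;
        struct tcb *next = dequeue(&ready_queue);  /* 2. choose the next thread */
        if (next == NULL)
            return;                                /* nothing else to run */
        prev->state = READY;                       /* 1. this thread gives up the CPU */
        enqueue(&ready_queue, prev);
        current = next;
        current->state = RUNNING;
        /* 3 + 4 + 5. save prev's registers/PC/SP into its TCB, load next's,
         * and jump to it; execution resumes here when someone switches back. */
        swapcontext(&prev->context, &next->context);
    }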

  9. 1. Thread returns control to OS
    • How does the thread system get control?
    • Could wait for the thread to give up control (internal events)
      • Thread might block inside lock or wait
      • Thread might call into the kernel for service (a system call)
      • Thread might call yield
    • Are internal events enough?

  10. 1. Thread returns control to OS
    • External events (events not initiated by the thread)
      • Hardware interrupts transfer control directly to OS interrupt handlers
      • From 104: the CPU checks for interrupts while executing, then jumps to OS code with the interrupt mask set
    • Interrupts lead to pre-emption (a forced yield)
    • Common interrupt: the timer interrupt
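At user level (a Project 1 style library), the timer interrupt is usually simulated with a periodic signal. A rough sketch, assuming the thread_yield() above and glossing over what is actually safe to do inside a signal handler:

    #include <signal.h>
    #include <sys/time.h>

    extern void thread_yield(void);        /* assumed from the earlier sketch */

    /* Stand-in for the timer interrupt handler: forces a pre-emption. */
    static void timer_tick(int sig) {
        (void)sig;
        thread_yield();                    /* a forced yield */
    }

    void start_preemption(void) {
        struct sigaction sa = {0};
        sa.sa_handler = timer_tick;
        sigaction(SIGALRM, &sa, NULL);

        struct itimerval it = {0};
        it.it_interval.tv_usec = 10000;    /* tick every 10 ms */
        it.it_value.tv_usec    = 10000;
        setitimer(ITIMER_REAL, &it, NULL);
    }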

  11. 2. Choosing the next thread
    • If there are no ready threads, just spin
      • Modern CPUs: execute a “halt” instruction
      • Project 1: exit if there are no ready threads
    • The loop switches to a thread as soon as one becomes ready
    • Many ways to prioritize ready threads (discussed a little later in the semester)
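A tiny sketch of that choice in the same style as the earlier ones (the exit-if-idle rule follows the Project 1 note above; a real kernel would halt or spin instead):

    #include <stdlib.h>
    #include "tcb.h"   /* hypothetical header from the earlier sketch */

    extern struct tcb *ready_queue;
    extern struct tcb *dequeue(struct tcb **q);

    /* Pick the next thread to run; FIFO here, but any priority policy works. */
    struct tcb *choose_next(void) {
        struct tcb *next = dequeue(&ready_queue);
        if (next == NULL)
            exit(0);                       /* no ready threads left, so stop */
        return next;
    }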

  12. 3. Saving state of current thread
    • What needs to be saved? Registers, PC, SP
    • What makes this tricky?
      • You need registers to save state, but the registers are exactly what you are trying to save
      • Saving the PC has the same chicken-and-egg flavor

  13. 4. OS loads the next thread
    • Where is the thread’s state/context? In its thread control block (in memory)
    • How to load the registers? Use load instructions to grab them from memory
    • How to load the stack? The stack is already in memory; just load the SP

  14. 5. OS runs the next thread
    • How to resume the thread’s execution? Jump to the saved PC
    • On whose stack are these steps running? (Who is running these steps?)
      • The thread that called yield (or was interrupted, or called lock/wait)
    • How does this thread run again? Some other thread must switch to it

  15. Example thread switching

    Thread 1:
        print "start thread 1"
        yield ()
        print "end thread 1"

    Thread 2:
        print "start thread 2"
        yield ()
        print "end thread 2"

    yield ():
        print "start yield (thread %d)"
        switch to next thread (swapcontext)
        print "end yield (thread %d)"

    Thread 1 output              Thread 2 output
    ---------------------------------------------
    start thread 1
    start yield (thread 1)
                                 start thread 2
                                 start yield (thread 2)
    end yield (thread 1)
    end thread 1
                                 end yield (thread 2)
                                 end thread 2

    Note: this assumes no pre-emptions. Programmers must assume pre-emptions can happen, so this output is not guaranteed.

  16. Creating a new thread
    • Also called “forking” a thread
    • Idea: create its initial state and put it on the ready queue
      • Allocate and initialize a new TCB
      • Allocate a new stack
      • Make it look like the thread was just about to call the function
        • PC points to the first instruction of the function
        • SP points to the new stack
        • Stack contains the arguments passed to the function
      • Project 1: use makecontext
      • Add the thread to the ready queue
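A minimal sketch of forking a thread with getcontext/makecontext (the route Project 1 points at). The stack size and helper names are illustrative, and argument passing is left out since makecontext only forwards int arguments portably.

    #include <stdlib.h>
    #include <ucontext.h>
    #include "tcb.h"   /* hypothetical header from the earlier sketch */

    #define STACK_SIZE (64 * 1024)

    extern struct tcb *ready_queue;
    extern void enqueue(struct tcb **q, struct tcb *t);

    struct tcb *thread_create(void (*func)(void)) {
        struct tcb *t = malloc(sizeof *t);
        t->stack = malloc(STACK_SIZE);
        t->state = READY;

        /* Make it look like the thread was just about to call func():
         * PC points at func, SP points into the freshly allocated stack. */
        getcontext(&t->context);
        t->context.uc_stack.ss_sp   = t->stack;
        t->context.uc_stack.ss_size = STACK_SIZE;
        t->context.uc_link          = NULL;    /* real code: run a thread-exit routine here */
        makecontext(&t->context, func, 0);

        enqueue(&ready_queue, t);              /* put the new thread on the ready queue */
        return t;
    }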

  17. Using interrupt disable-enable
    • Disable-enable on a uni-processor
      • Assume it is atomic (can use atomic load/store)
    • How do threads get switched out?
      • Internal events (yield, I/O request)
      • External events (interrupts, e.g. timers)
    • Internal events are easy to prevent
    • Use disable-enable to prevent external events
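In a user-level library, "disable interrupts" usually means blocking the signal that drives pre-emption. A minimal sketch matching the timer signal used earlier (the function names are mine):

    #include <signal.h>

    /* User-level stand-ins for interrupt disable/enable:
     * block or unblock the timer signal that triggers pre-emption. */
    void interrupts_disable(void) {
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGALRM);
        sigprocmask(SIG_BLOCK, &set, NULL);
    }

    void interrupts_enable(void) {
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGALRM);
        sigprocmask(SIG_UNBLOCK, &set, NULL);
    }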

  18. Why use locks?

    disable interrupts
    while (1) {}

    • If we have disable-enable, why do we need locks?
      • A program could bracket critical sections with disable-enable
      • But it might never give control back to the thread library (see the loop above)
      • And we can’t have multiple independent locks (over-constrains concurrency)
    • Project 1: only disable interrupts inside the thread library

  19. Lock implementation #1: disable interrupts, busy-waiting

    lock () {
        disable interrupts
        while (value != FREE) {
            enable interrupts
            disable interrupts
        }
        value = BUSY
        enable interrupts
    }

    unlock () {
        disable interrupts
        value = FREE
        enable interrupts
    }

    Why is it OK for lock code to disable interrupts?
    It’s in the trusted kernel (we have to trust something).

  20. Lock implementation #1: disable interrupts, busy-waiting (same code as slide 19)

    Do we need to disable interrupts in unlock?
    Only if “value = FREE” takes multiple instructions (doing it anyway is safer).

  21. Lock implementation #1: disable interrupts, busy-waiting (same code as slide 19)

    Why enable-then-disable in the lock loop body?
    Otherwise no other thread will ever run, including the thread that would call unlock.
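Slide 19's pseudocode rendered as C, using the interrupts_disable()/interrupts_enable() stand-ins sketched earlier; this is a uni-processor sketch only, not a multiprocessor-safe lock.

    extern void interrupts_disable(void);   /* from the earlier sketch */
    extern void interrupts_enable(void);

    enum { FREE = 0, BUSY = 1 };
    static volatile int value = FREE;

    void lock(void) {
        interrupts_disable();
        while (value != FREE) {
            /* Briefly re-enable so the holder (or a timer tick) can run,
             * then re-disable before re-checking: this is the busy-wait. */
            interrupts_enable();
            interrupts_disable();
        }
        value = BUSY;
        interrupts_enable();
    }

    void unlock(void) {
        interrupts_disable();   /* safer, even if the store is one instruction */
        value = FREE;
        interrupts_enable();
    }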

  22. Using read-modify-write instructions
    • Disabling interrupts is OK on a uni-processor, but breaks on a multi-processor. Why?
    • Could use atomic load-store to make a lock, but it’s inefficient (lots of busy-waiting)
    • Hardware people to the rescue!

  23. Lock implementation #2: test&set, busy-waiting

    lock () {
        while (test&set(value) == 1) {
        }
    }

    unlock () {
        value = 0
    }

    Initially, value = 0.
    What happens if value = 1? What happens if value = 0?
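A modern rendering of the same idea with C11 atomics, where atomic_flag_test_and_set plays the role of the hardware test&set instruction (my sketch, not the slide's code):

    #include <stdatomic.h>

    static atomic_flag value = ATOMIC_FLAG_INIT;   /* clear means the lock is free */

    void lock(void) {
        /* test&set: atomically set the flag and return its old value;
         * busy-wait while the old value was set (someone holds the lock). */
        while (atomic_flag_test_and_set(&value)) {
            /* spin */
        }
    }

    void unlock(void) {
        atomic_flag_clear(&value);                 /* value = 0 */
    }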

  24. Locks and busy-waiting
    • All implementations so far have used busy-waiting, which wastes CPU cycles
    • To reduce busy-waiting, integrate the lock implementation with the thread dispatcher’s data structures

  25. Lock implementation #3: interrupt disable, no busy-waiting

    lock () {
        disable interrupts
        if (value == FREE) {
            value = BUSY                // lock acquired
        } else {
            add thread to queue of threads waiting for lock
            switch to next ready thread // don’t add to ready queue
        }
        enable interrupts
    }

    unlock () {
        disable interrupts
        value = FREE
        if anyone on queue of threads waiting for lock {
            take waiting thread off queue, put on ready queue
            value = BUSY
        }
        enable interrupts
    }

  26. Lock implementation #3 (same code as slide 25)

    This is called a “hand-off” lock.
    Who gets the lock after someone calls unlock?

  27. Lock implementation #3 (same code as slide 25, the “hand-off” lock)

    Who might get the lock if it weren’t handed off directly?
    (e.g. if value weren’t set back to BUSY in unlock)

  28. Lock implementation #3 (same code as slide 25, the “hand-off” lock)

    About the “switch to next ready thread” line: what does this mean? Are we saving the PC?

  29. Lock implementation #3 (same code as slide 25, the “hand-off” lock)

    Why a separate queue for the lock?

  30. Lock implementation #3 (same code as slide 25, the “hand-off” lock)

    Project 1 note: you must guarantee FIFO ordering of lock acquisition.
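The slide 25 lock sketched in C on top of the earlier pieces (struct tcb, the queue helpers, the interrupt stand-ins, and swapcontext). The helper names are assumptions, and a real implementation must also cope with an empty ready queue.

    #include <stddef.h>
    #include <ucontext.h>
    #include "tcb.h"   /* hypothetical header from the earlier sketches */

    enum { FREE = 0, BUSY = 1 };

    extern struct tcb *current;
    extern struct tcb *ready_queue;
    extern struct tcb *dequeue(struct tcb **q);
    extern void enqueue(struct tcb **q, struct tcb *t);
    extern void interrupts_disable(void);
    extern void interrupts_enable(void);

    struct lock {
        int value;                         /* FREE or BUSY */
        struct tcb *wait_queue;            /* threads blocked on this lock, FIFO */
    };

    void lock_acquire(struct lock *l) {
        interrupts_disable();
        if (l->value == FREE) {
            l->value = BUSY;               /* lock acquired */
        } else {
            /* Block: go on the lock's wait queue (not the ready queue) and
             * switch away; queuing and switching both happen with interrupts
             * off, so together they are atomic. */
            struct tcb *prev = current;
            prev->state = BLOCKED;
            enqueue(&l->wait_queue, prev);
            current = dequeue(&ready_queue);
            current->state = RUNNING;
            swapcontext(&prev->context, &current->context);
            /* We only return here after unlock hands us the lock. */
        }
        interrupts_enable();
    }

    void lock_release(struct lock *l) {
        interrupts_disable();
        l->value = FREE;
        if (l->wait_queue != NULL) {
            /* Hand-off: wake one waiter and give it the lock directly. */
            struct tcb *t = dequeue(&l->wait_queue);
            t->state = READY;
            enqueue(&ready_queue, t);
            l->value = BUSY;
        }
        interrupts_enable();
    }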

  31. Lock implementation #3, variant: enable interrupts before queuing

    lock () {
        disable interrupts
        if (value == FREE) {
            value = BUSY                // lock acquired
        } else {
            enable interrupts           // moved up: enable before queuing
            add thread to queue of threads waiting for lock
            switch to next ready thread // don’t add to ready queue
        }
        enable interrupts
    }

    (unlock unchanged from slide 25)
    When and how could this fail?

  32. Lock implementation #3, variant: enable interrupts before switching

    lock () {
        disable interrupts
        if (value == FREE) {
            value = BUSY                // lock acquired
        } else {
            add thread to queue of threads waiting for lock
            enable interrupts           // moved up: enable before the switch
            switch to next ready thread // don’t add to ready queue
        }
        enable interrupts
    }

    (unlock unchanged from slide 25)
    When and how could this fail?

  33. Lock implementation #3 (same code as slide 25)

    Putting the locking thread on the lock’s wait queue and switching must be atomic,
    so switch must be called with interrupts off.

  34. How is switch returned to?
    • Review from last time
    • Think of switch as three phases
      1. Save the current location (SP, PC)
      2. Point the processor at another stack (SP’)
      3. Jump to another instruction (PC’)
    • The only way to get back to a switch is to have another thread call switch

  35. Lock implementation #3 (same code as slide 25)

    What is lock() assuming about the state of interrupts after switch returns?

  36. Interrupts and returning to switch
    • lock() can assume that switch is always called with interrupts disabled
    • On return from switch, the previous thread must have disabled interrupts
    • The next thread to run becomes responsible for re-enabling interrupts
    • Invariants: threads promise to
      • Disable interrupts before switch is called
      • Re-enable interrupts after returning from switch
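The invariant folded back into the earlier yield sketch (same assumed helpers); this is the shape the trace on the next slide expects every library routine that switches to follow.

    #include <stddef.h>
    #include <ucontext.h>
    #include "tcb.h"   /* hypothetical header from the earlier sketches */

    extern struct tcb *current;
    extern struct tcb *ready_queue;
    extern struct tcb *dequeue(struct tcb **q);
    extern void enqueue(struct tcb **q, struct tcb *t);
    extern void interrupts_disable(void);
    extern void interrupts_enable(void);

    void thread_yield(void) {
        interrupts_disable();              /* promise #1: switch is entered with interrupts off */
        struct tcb *prev = current;
        struct tcb *next = dequeue(&ready_queue);
        if (next != NULL) {
            prev->state = READY;
            enqueue(&ready_queue, prev);
            current = next;
            current->state = RUNNING;
            swapcontext(&prev->context, &next->context);
            /* Back from switch: whoever switched to us left interrupts off. */
        }
        interrupts_enable();               /* promise #2: re-enable after returning from switch */
    }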

  37. Example: the interrupt invariant across switches

    Thread A                            Thread B
                                        yield () {
                                            disable interrupts
                                            …
                                            switch
        enable interrupts
    } // exit thread library function
    <user code>
    lock () {
        disable interrupts
        …
        switch
                                        back from switch
                                        …
                                        enable interrupts
                                        } // exit yield
                                        <user code>
                                        unlock ()   // moves A to ready queue
                                        yield () {
                                            disable interrupts
                                            …
                                            switch
    back from switch
    …
    enable interrupts
    } // exit lock
    <user code>
