Presentation Transcript

  1. Threads and Lightweight Processes

  2. Introduction • Motivation • Sharing common resources • Multiprocessor architecture

  3. Multiple Threads and Processors • True parallelism on multiprocessor architectures • Multiplex threads onto processors if T > P (more threads than processors) • Ideally: if an application needs 1 unit of time with a single-threaded version, it will need only 1/n units of time with a multithreaded version on a computer with n processors.

  4. Traditional UNIX system

  5. Multithreaded Processes

  6. Multiprocessor

  7. Concurrency & Parallelism • Concurrency: the maximum parallelism an application could achieve, given an unlimited number of processors. • Parallelism: the actual degree of parallel execution achieved; limited by the number of physical processors. • User/system concurrency

  8. Fundamental Abstraction • A process is a compound entity that can be divided into two components—a set of threads and a collection of resources. • A thread is a dynamic object that represents a control point in the process and that executes a sequence of instructions. • A thread shares the process's resources but has its own private objects: program counter, stack, and register context
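The shared-resources-versus-private-objects split above can be sketched with POSIX threads (an API the slides introduce later; this is a minimal illustration, not the lecture's own code — the function names are ours):

```c
/* Sketch: two threads share the process's global data, while each has
 * its own private stack. Assumes POSIX threads. */
#include <pthread.h>

static int shared_counter = 0;               /* shared process resource */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    int local = 0;                /* private: lives on this thread's stack */
    for (int i = 0; i < 1000; i++) {
        local++;
        pthread_mutex_lock(&lock);
        shared_counter++;         /* visible to every thread in the process */
        pthread_mutex_unlock(&lock);
    }
    (void)arg;
    return NULL;
}

/* Run two threads over the shared counter; returns the final value. */
int run_two_threads(void)
{
    shared_counter = 0;
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared_counter;        /* both threads updated the same variable */
}
```

Each thread's `local` variable is invisible to the other thread, while `shared_counter` is updated by both — the "shared resources + private objects" model in miniature.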

  9. Processes • Have a virtual address space which holds the process image • Protected access to processors, other processes, files, and I/O resources

  10. Threads • Has an execution state (running, ready, etc.) • Saves thread context when not running • Has an execution stack • Has some per-thread static storage for local variables • Has access to the memory and resources of its process • all threads of a process share this

  11. Threads and Processes • One process, one thread • One process, multiple threads • Multiple processes, one thread per process • Multiple processes, multiple threads per process

  12. Single-Threaded and Multithreaded Process Models • [Diagram] Single-threaded model: process control block, user address space, one user stack and one kernel stack. • Multithreaded model: process control block and user address space, plus a thread control block, a user stack, and a kernel stack for each thread.

  13. Benefits of Threads • Takes less time to create a new thread than a process • Less time to terminate a thread than a process • Less time to switch between two threads within the same process • Since threads within the same process share memory and files, they can communicate with each other without invoking the kernel

  14. Threads • Suspending a process suspends all threads of the process, since all threads share the same address space • Termination of a process terminates all threads within the process

  15. Kernel Threads • A kernel thread is responsible for executing a specific function. • It shares the kernel text and global data, and has its own kernel stack. • Independently scheduled. • Useful to handle asynchronous I/O. • Inexpensive

  16. Lightweight Processes • An LWP is a kernel-supported user thread. • It belongs to a user process. • Independently scheduled. • Shares the address space and other resources of the process. • LWPs must synchronize access to shared data. • Blocking an LWP is expensive.

  17. Lightweight processes

  18. User Threads • All thread management is done by the application • The kernel is not aware of the existence of threads • Thread switching does not require kernel mode privileges • Scheduling is application specific

  19. User Threads • Created by a thread library such as C-threads (Mach) or pthreads (POSIX). • A user-level library multiplexes user threads on top of LWPs and provides facilities for inter-thread scheduling, context switching, and synchronization without involving the kernel.

  20. User Threads

  21. User threads multiplexed

  22. Latency of Thread Operations (times in microseconds)

                     Creation time    Synchronization time (using semaphore)
        User thread             52                                        66
        LWP                    350                                       390
        Process               1700                                       200

  23. Split scheduling • The threads library schedules user threads • The kernel schedules LWPs and processes • Is every thread eventually scheduled? • Yes, if each thread is bound to one LWP • Not guaranteed, if threads are multiplexed on several LWPs

  24. Lightweight Process Design • Semantics of fork when creating a child process: • Copy only the LWP that invoked fork into the new process, or • Duplicate all the LWPs of the parent • All LWPs share a set of file descriptors • All LWPs share a common address space and may manipulate it concurrently through system calls such as mmap & brk.
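The "copy only the forking LWP" option is what POSIX `fork()` does today (Solaris originally exposed it as `fork1()`). A small sketch, assuming POSIX threads; the function name is ours:

```c
/* Sketch: fork() from a multithreaded process. Under POSIX semantics the
 * child contains only a replica of the thread that called fork(); the
 * background thread below does not exist in the child. */
#include <pthread.h>
#include <unistd.h>
#include <sys/wait.h>

static void *background(void *arg)
{
    (void)arg;
    pause();                      /* sleeps forever; absent in the child */
    return NULL;
}

int fork_from_multithreaded(void)
{
    pthread_t t;
    pthread_create(&t, NULL, background, NULL);

    pid_t pid = fork();
    if (pid == 0) {
        /* child: only the thread that called fork() runs here */
        _exit(42);
    }

    int status;
    waitpid(pid, &status, 0);

    pthread_cancel(t);            /* pause() is a cancellation point */
    pthread_join(t, NULL);
    return WEXITSTATUS(status);   /* child's exit code */
}
```

The "duplicate all LWPs" alternative (Solaris's original `forkall()`-style `fork`) has no direct POSIX equivalent.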

  25. lseek Problems

  26. Signal Delivery and Handling • Different methods: • Send the signal to each thread • Appoint a master thread in each process to receive all signals • Send the signal to any arbitrarily chosen thread • Use heuristics to determine the thread to which the signal applies • Create a new thread to handle each signal

  27. Stack Growth • Overflowing the stack causes a segmentation violation fault. • The kernel has no knowledge of the user thread stacks. • It is the thread library's responsibility to extend the stack or handle the overflow; the kernel responds by sending a SIGSEGV signal to the appropriate thread.

  28. User-Level Thread Libraries • The Programming Interface • Creating and terminating threads • Suspending and resuming threads • Assigning priorities to individual threads • Thread scheduling and context switching • Synchronizing activities through facilities such as semaphores and mutual exclusion locks • Sending messages from one thread to another • The priority of a thread is simply a process-relative priority used by the threads scheduler to select a thread to run within the process.

  29. Implementing Threads Libraries • By LWPs: • Bind each thread to a different LWP. • Multiplex user threads on a (smaller) set of LWPs. • Allow a mixture of bound and unbound threads in the same process.
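POSIX expresses the bound/unbound distinction through the contention-scope thread attribute: `PTHREAD_SCOPE_SYSTEM` corresponds to a bound thread (its own LWP, scheduled by the kernel), `PTHREAD_SCOPE_PROCESS` to an unbound one (multiplexed by the library). A hedged sketch — note that some systems (e.g. Linux) support only system scope:

```c
/* Sketch: request a bound ("system contention scope") thread. */
#include <pthread.h>

static void *task(void *arg) { (void)arg; return NULL; }

int create_bound_thread(void)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    /* bound thread: competes for the CPU system-wide on its own LWP */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    pthread_t t;
    int rc = pthread_create(&t, &attr, task, NULL);
    if (rc == 0)
        pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return rc;                    /* 0 on success */
}
```

A library mixing both scopes in one process is the "mixture of bound and unbound threads" option above.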

  30. User thread states

  31. Multithreading in Solaris • Process includes the user’s address space, stack, and process control block • User-level threads (threads library) • invisible to the OS • are the interface for application parallelism • Kernel threads • KT can be dispatched on a processor and its structures are maintained by the kernel • Lightweight processes (LWP) • each LWP supports one or more ULTs and maps to exactly one KLT • each LWP is visible to the application

  32. Multithreading in Solaris • Kernel Threads(resources) • Saved copy of the kernel registers • Priority and scheduling information • Pointers to put the thread on a scheduler queue or, if the thread is blocked, on a resource wait queue. • Pointer to the stack. • Pointers to the associated lwp and proc structures • Pointers to maintain a queue of all threads of a process and a queue of all threads in the system • Information about the associated LWP

  33. Lightweight Process Impl. • lwp: (per-LWP part) • Saved values of user-level registers • System call arguments, results, and error code • Signal handling information • Resource usage and profiling data • Virtual time alarms • User time and CPU usage • Pointer to the kernel thread • Pointer to the proc structure

  34. LWP Impl. • Synchronization: • Mutex • Condition variables • Counting semaphores • Read-write locks • All LWPs share a common set of signal handlers. • Each LWP may have its own signal mask.
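The mutex/condition-variable pairing listed above works as follows, sketched with POSIX names (the slides list the primitives, not a specific API; function names are ours):

```c
/* Sketch: a waiter blocks on a condition variable until a producer
 * publishes shared state under the mutex. */
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
static int ready = 0;

static void *producer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&m);
    ready = 1;                    /* publish the shared state */
    pthread_cond_signal(&c);      /* wake one waiter */
    pthread_mutex_unlock(&m);
    return NULL;
}

int wait_for_ready(void)
{
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);

    pthread_mutex_lock(&m);
    while (!ready)                /* loop: wakeups can be spurious */
        pthread_cond_wait(&c, &m);
    pthread_mutex_unlock(&m);

    pthread_join(t, NULL);
    return ready;
}
```

The `while` loop around `pthread_cond_wait` is essential: the wait atomically releases the mutex and re-acquires it on wakeup, but the predicate must be rechecked.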

  35. User Threads • Threads library: create, destroy, and manage threads without kernel involvement • The threads library is built on LWPs. • Most applications are written with user threads. • Two types: bound & unbound • Having many more threads than LWPs is a disadvantage. • Having bound & unbound threads in one application is very useful.

  36. Solaris

  37. • Process 2 is equivalent to a pure ULT approach • Process 4 is equivalent to a pure KLT approach • We can specify a different degree of parallelism (processes 3 and 5)

  38. User Thread Implementation • User thread info • Thread ID • Saved register state • User Stack • Signal mask • Priority • Thread local storage • User thread synchronization • Synchronization variables in shared memory

  39. Solaris: versatility • We can use ULTs when logical parallelism does not need to be supported by hardware parallelism (we save mode switching) • Ex: Multiple windows but only one is active at any one time • If threads may block then we can specify two or more LWPs to avoid blocking the whole application

  40. Solaris: user-level thread execution • Transitions among states are under the exclusive control of the application • a transition can occur only when a call is made to a function of the thread library • Only when a ULT is in the active state is it attached to an LWP (so that it runs when the kernel-level thread runs) • a thread may transfer to the sleeping state by invoking a synchronization primitive, and later transfer to the runnable state when the awaited event occurs • A thread may force another thread into the stop state...

  41. Solaris: user-level thread states (attached to a LWP)

  42. Decomposition of user-level Active state • When a ULT is Active, it is associated to a LWP and thus, to a KLT • Transitions among the LWP states is under the exclusive control of the kernel • A LWP can be in the following states: • running: when the KLT is executing • blocked: because the KLT issued a blocking system call (but the ULT remains bound to that LWP and remains active) • runnable: waiting to be dispatched to CPU

  43. Solaris: Lightweight Process States LWP states are independent of ULT states (except for bound ULTs)

  44. Interrupt Handling • Traditional UNIX: raise the interrupt priority level (ipl) • Solaris: mutex locks & semaphores • Interrupts are handled by kernel threads with the highest priorities • The interrupt-handling thread is preallocated and partially initialized

  45. [Diagram: interrupt thread pinned to the interrupted thread, then unpinned]

  46. System Call Handling • fork(): duplicates all LWPs of the parent. • fork1(): copies only the LWP that invoked it. • Concurrent random I/O: pread(), pwrite() • Programmers can write applications using only threads, and later optimize them by manipulating the underlying LWPs to best provide the real concurrency needed by the application.
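`pread()`/`pwrite()` take an explicit offset, so concurrent threads can do random I/O on one descriptor without racing on the shared file offset the way `lseek()` + `read()` would. A minimal sketch (the temp-file path and function name are ours):

```c
/* Sketch: positional I/O leaves the shared file offset untouched. */
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

int pread_demo(void)
{
    /* hypothetical scratch file for the demo */
    int fd = open("/tmp/pread_demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0600);
    if (fd < 0)
        return -1;

    pwrite(fd, "abcdef", 6, 0);   /* write 6 bytes at offset 0 */

    char buf[3] = {0};
    pread(fd, buf, 2, 2);         /* read "cd" without moving the offset */

    close(fd);
    unlink("/tmp/pread_demo.txt");
    return strncmp(buf, "cd", 2); /* 0 if the slice matched */
}
```

Two threads issuing `pread(fd, ..., off1)` and `pread(fd, ..., off2)` simultaneously are safe; with `lseek()` + `read()` one thread's seek could be overwritten by the other's before its read lands.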

  47. pthread Library • POSIX-compliant • User-level • aioread(): uses a new thread to handle the I/O asynchronously

  48. Asynchronous I/O

  49. Linux Task Data Structure • State • Scheduling information • Identifiers • Interprocess communication • Links • Times and timers • File system • Address space • Processor-specific context