
Threads, SMP, and Microkernels


Presentation Transcript


  1. Operating Systems Threads, SMP, and Microkernels

  2. Processes and Threads • So far, we have presented the concept of a process as embodying: • Resource ownership: a process includes a virtual address space to hold the process image • Scheduling/execution: a process follows an execution path that may be interleaved with other processes • These two characteristics are treated independently by the operating system

  3. Processes and Threads • Unit of dispatching is referred to as a thread or lightweight process • Unit of resource ownership is referred to as a process or task

  4. Multithreading • Multithreading: the OS supports multiple, concurrent paths of execution within a single process • MS-DOS supports a single user process and a single thread • Some UNIX variants support multiple user processes but only one thread per process • The Java run-time environment is a single process with multiple threads • Windows, Solaris and many modern versions of UNIX support multiple user processes with multiple threads per process
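
As a minimal sketch of this model, the fragment below (POSIX threads, with an arbitrary thread count chosen for illustration) creates several concurrent paths of execution inside one process; compile with -pthread.

    /* sketch: one process, several concurrent threads (POSIX threads assumed) */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *worker(void *arg) {
        /* every thread runs in the same process and shares its address space */
        printf("thread %ld runs in process %d\n", (long)arg, (int)getpid());
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (long i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);   /* wait for every thread to finish */
        return 0;
    }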

  5. Threads and Processes

  6. Processes • In a multithreaded environment, a process is defined as a unit of resource allocation and a unit of protection • A process has associated with it: • A virtual address space which holds the process image • Protected access to processors, other processes (IPC), files, and I/O resources

  7. One or More Threads in a Process • Each thread has: • An execution state (Running, Ready, etc.) • A saved thread context when not running • An execution stack • Some per-thread static storage for local variables • Access to the memory and resources of its process (shared by all threads of the process)
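
A small sketch of the same point, assuming POSIX threads: each thread keeps its own execution stack (so "local" below is private to the thread), while the process-wide "shared_counter" is visible to every thread and therefore needs mutual exclusion. The names are illustrative.

    /* per-thread stack data vs. memory shared by all threads of the process */
    #include <pthread.h>
    #include <stdio.h>

    static int shared_counter = 0;                           /* one copy per process */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *run(void *arg) {
        (void)arg;
        int local = 0;                                       /* one copy per thread, on its own stack */
        for (int i = 0; i < 1000; i++) {
            local++;
            pthread_mutex_lock(&lock);
            shared_counter++;                                /* shared: protected by a mutex */
            pthread_mutex_unlock(&lock);
        }
        printf("local=%d (private), shared_counter=%d so far\n", local, shared_counter);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, run, NULL);
        pthread_create(&b, NULL, run, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("final shared_counter=%d\n", shared_counter);
        return 0;
    }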

  8. Threads

  9. Benefits of Threads • Takes less time to create a new thread than a process • Less time to terminate a thread than a process • Less time to switch between two threads within the same process • Since threads within the same process share memory and files, they can communicate with each other without invoking the kernel
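
A rough way to see the first two bullets on a POSIX system is to time process creation against thread creation; the loop count and timing method below are arbitrary choices for the sketch, and the absolute numbers will vary widely between systems.

    /* rough comparison of creation cost: fork/exit/wait vs. pthread_create/join */
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    static void *noop(void *arg) { (void)arg; return NULL; }

    static double elapsed_ms(struct timespec s, struct timespec e) {
        return (e.tv_sec - s.tv_sec) * 1e3 + (e.tv_nsec - s.tv_nsec) / 1e6;
    }

    int main(void) {
        struct timespec s, e;
        enum { N = 1000 };

        clock_gettime(CLOCK_MONOTONIC, &s);
        for (int i = 0; i < N; i++) {
            pid_t pid = fork();
            if (pid == 0) _exit(0);              /* child process exits immediately */
            waitpid(pid, NULL, 0);
        }
        clock_gettime(CLOCK_MONOTONIC, &e);
        printf("%d process create/exit cycles: %.1f ms\n", N, elapsed_ms(s, e));

        clock_gettime(CLOCK_MONOTONIC, &s);
        for (int i = 0; i < N; i++) {
            pthread_t t;
            pthread_create(&t, NULL, noop, NULL);
            pthread_join(t, NULL);
        }
        clock_gettime(CLOCK_MONOTONIC, &e);
        printf("%d thread create/join cycles: %.1f ms\n", N, elapsed_ms(s, e));
        return 0;
    }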

  10. Uses of Threads in a Single-User Multiprocessing System • Foreground and background work • One thread managing display and input • Another executing commands • Asynchronous processing • A thread responsible for periodic backup • Speed of execution • Parallelism • Modular program structure
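
A sketch of the foreground/background and asynchronous-processing uses, assuming POSIX threads; the command handling and the 5-second backup period are invented for illustration.

    /* foreground thread handles input and commands; background thread does periodic work */
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void *periodic_backup(void *arg) {
        (void)arg;
        for (;;) {
            sleep(5);                                        /* asynchronous, periodic work */
            fprintf(stderr, "[backup thread] state saved\n");
        }
        return NULL;
    }

    int main(void) {
        pthread_t bg;
        pthread_create(&bg, NULL, periodic_backup, NULL);    /* background work */

        char line[128];                                      /* foreground: display and input */
        while (fgets(line, sizeof line, stdin) != NULL) {
            if (strncmp(line, "quit", 4) == 0) break;
            printf("executing command: %s", line);
        }
        return 0;   /* process termination also terminates the backup thread (next slide) */
    }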

  11. Threads • Suspending a process involves suspending all threads of the process, since all threads share the same address space • Termination of a process terminates all threads within the process

  12. Thread States • Operations associated with a change in thread state • Spawn • Spawn another thread • Block • To wait for an event • Unblock • To move to the ready queue • Finish • De-allocate register context and stacks
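
One way to relate these operations to a concrete API, assuming POSIX threads: Spawn maps to pthread_create, Block and Unblock can be expressed with a condition variable, and Finish happens when the thread routine returns and is joined.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
    static int event_ready = 0;

    static void *waiter(void *arg) {
        (void)arg;
        pthread_mutex_lock(&m);
        while (!event_ready)
            pthread_cond_wait(&c, &m);            /* Block: wait for an event */
        pthread_mutex_unlock(&m);
        printf("event received\n");
        return NULL;                              /* Finish: context and stack released after join */
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, waiter, NULL);   /* Spawn */

        pthread_mutex_lock(&m);
        event_ready = 1;
        pthread_cond_signal(&c);                  /* Unblock: move the waiter to the ready queue */
        pthread_mutex_unlock(&m);

        pthread_join(t, NULL);
        return 0;
    }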

  13. User-Level Threads • All thread management is done by the application • The kernel is not aware of the existence of threads • Any application can be multithreaded via a threads library which contains code for • Creating and destroying threads • Passing messages and data between threads • Scheduling thread execution • Saving and restoring thread contexts

  14. User-Level Threads

  15. Process B is Running two threads: Thread 2 is Running, Thread 1 is Ready. Thread 2 makes an I/O request, which results in Process B being Blocked

  16. Process B is Blocked, waiting for the I/O requested by Thread 2. Within Process B, the state of Thread 2 is still Running and Thread 1 is Ready

  17. Process B resumes execution and Thread 2 reaches a point where it needs some action performed by Thread 1, so Thread 2 blocks and Thread 1 starts running

  18. Note that thread scheduling is done by a procedure call to the threads library; this execution is part of Process B’s execution
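
A toy illustration of that idea, using the ucontext(3) routines available on Linux and Solaris: the kernel sees a single thread of control, and every "context switch" is just a procedure call (swapcontext) inside the process. All names are invented for the sketch; a real threads library would add a scheduler, a thread table, and cleanup.

    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    #define STACK_SIZE (64 * 1024)

    static ucontext_t main_ctx, t1_ctx, t2_ctx;

    static void yield_to(ucontext_t *from, ucontext_t *to) {
        swapcontext(from, to);          /* save our context, restore the other thread's */
    }

    static void thread_one(void) {
        printf("thread 1: step A\n");
        yield_to(&t1_ctx, &t2_ctx);     /* cooperative switch, entirely in user space */
        printf("thread 1: step B\n");
        yield_to(&t1_ctx, &main_ctx);
    }

    static void thread_two(void) {
        printf("thread 2: step A\n");
        yield_to(&t2_ctx, &t1_ctx);
    }

    static void spawn(ucontext_t *ctx, void (*fn)(void)) {
        getcontext(ctx);
        ctx->uc_stack.ss_sp = malloc(STACK_SIZE);   /* per-thread execution stack */
        ctx->uc_stack.ss_size = STACK_SIZE;
        ctx->uc_link = &main_ctx;                   /* where to continue if fn returns */
        makecontext(ctx, fn, 0);
    }

    int main(void) {
        spawn(&t1_ctx, thread_one);
        spawn(&t2_ctx, thread_two);
        swapcontext(&main_ctx, &t1_ctx);            /* start running thread 1 */
        printf("back in main\n");
        return 0;
    }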

  19. Kernel-Level Threads (lightweight processes) • Windows is an example of this approach • The kernel maintains context information for the process and its threads • Scheduling is done on a thread basis

  20. Kernel-Level Threads

  21. Advantages and Disadvantages • ULT advantages • Thread switching does not require going to kernel mode and back, saving overhead • Application-specific scheduling • ULTs can run on any OS; the threads library is a set of application-level functions shared by all applications • ULT disadvantages • Most system calls are blocking; a blocking call blocks all the threads within the process • Unable to take advantage of multiprocessing

  22. Combined Approaches • Example: Solaris • Thread creation is done in user space • The bulk of scheduling and synchronization of threads is done within the application

  23. Combined Approaches

  24. Relationship Between Threads and Processes

  25. Categories of Computer Systems (Flynn, 1972) • Single Instruction Single Data (SISD) stream • A single processor executes a single instruction stream to operate on data stored in a single memory • Single Instruction Multiple Data (SIMD) stream • Each instruction is executed on a different set of data by different processors

  26. Categories of Computer Systems (Flynn, 1972) • Multiple Instruction Single Data (MISD) stream • A sequence of data is transmitted to a set of processors, each of which executes a different instruction sequence; never implemented • Multiple Instruction Multiple Data (MIMD) stream • A set of processors simultaneously executes different instruction sequences on different data sets

  27. Parallel Processor Architectures

  28. Symmetric Multiprocessing • Kernel can execute on any processor • Typically each processor does self-scheduling from the pool of available processes or threads
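
The self-scheduling idea can be sketched with threads standing in for processors: each worker repeatedly takes the next available item from a shared, lock-protected pool. The pool here is just an index, and all names are assumptions made for the sketch.

    #include <pthread.h>
    #include <stdio.h>

    #define NWORKERS 4
    #define NITEMS   16

    static int next_item = 0;                             /* shared pool of work */
    static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        long id = (long)arg;
        for (;;) {
            pthread_mutex_lock(&pool_lock);
            int item = (next_item < NITEMS) ? next_item++ : -1;   /* self-schedule next unit */
            pthread_mutex_unlock(&pool_lock);
            if (item < 0) break;                          /* pool empty: stop */
            printf("worker %ld runs item %d\n", id, item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[NWORKERS];
        for (long i = 0; i < NWORKERS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < NWORKERS; i++)
            pthread_join(t[i], NULL);
        return 0;
    }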

  29. Symmetric Multiprocessor Organization

  30. Multiprocessor Operating Systems Design Considerations • Simultaneous concurrent processes or threads • Kernel routines must be reentrant • Kernel tables and management structures must be handled carefully to avoid deadlock or invalid operations • Scheduling • Avoid conflicts when different processors schedule independently • With kernel-level multithreading, parallel execution of threads is possible

  31. Multiprocessor Operating Systems Design Considerations • Synchronization • Enforce mutual exclusion and event ordering • Locks are a commonly used mechanism • Memory management • Complexities of enforcing consistency when pages are shared among processors • Reliability and fault tolerance • Graceful degradation in instances of processor failure • Requires restructuring management tables
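
As an illustration of the lock mechanism mentioned above, here is a minimal spin lock built on C11 atomics; a real kernel lock would also deal with interrupt masking, fairness, and back-off.

    #include <stdatomic.h>

    typedef struct {
        atomic_flag locked;
    } spinlock_t;

    static spinlock_t example_lock = { ATOMIC_FLAG_INIT };

    static void spin_lock(spinlock_t *l) {
        /* atomic test-and-set; spin until the previous value was clear */
        while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
            ;   /* busy-wait */
    }

    static void spin_unlock(spinlock_t *l) {
        atomic_flag_clear_explicit(&l->locked, memory_order_release);
    }

    /* usage: spin_lock(&example_lock); ...critical section... spin_unlock(&example_lock); */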

  32. Kernel Architecture

  33. Microkernels • Small operating system core • Contains only essential core operating system functions • Client-server approach

  34. Microkernels • Many services traditionally part of the OS are now external subsystems • Subsystems interact with the kernel and with each other • Includes: • Device drivers • File systems • Virtual memory manager • Windowing system • Security services

  35. Benefits of a Microkernel Organization • Uniform interfaces • Don’t need to distinguish between kernel-level and user-level services • All services are provided by means of message passing through the microkernel • Microkernel validates messages, passes them between components and grants access to hardware • Security: prevents message passing unless exchange is allowed

  36. Benefits of a Microkernel Organization • Extensibility • Easier to add new services • Modifications do not require building a new kernel • Flexibility • New features added • Existing features can be subtracted

  37. Benefits of a Microkernel Organization • Portability • Processor-specific code is restricted to the microkernel • Changes needed to port the system to a new processor are few and tend to be arranged in logical groupings • Reliability • A small microkernel can be rigorously tested

  38. Benefits of a Microkernel Organization • Distributed system support • Messages are sent without knowing what the target machine is • Object-oriented operating systems • Components are objects with clearly defined interfaces that can be interconnected to form software

  39. Microkernel Design • Low-level memory management • Mapping each virtual page to a physical page frame
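
A sketch of that mapping, assuming a simple single-level page table; the sizes, field names, and error convention are illustrative.

    #include <stdint.h>

    #define NPAGES 1024

    typedef struct {
        uint32_t frame;           /* physical page frame number */
        unsigned present  : 1;    /* is this virtual page mapped? */
        unsigned writable : 1;
    } pte_t;

    static pte_t page_table[NPAGES];

    /* translate a virtual page number to a frame number, or -1 if unmapped
       (a real kernel would raise a page fault and decide what to do) */
    static int64_t translate(uint32_t vpn) {
        if (vpn >= NPAGES || !page_table[vpn].present)
            return -1;
        return page_table[vpn].frame;
    }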

  40. Interprocess Communication • Interprocess communication by messages • Message consists of a header and a body • Header, identifying sender and receiver • Body containing • Direct data; or • Pointer to a block of data; or • Control information about the process
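
The layout described above might look like the following C sketch; the field names, the 64-byte direct-data limit, and the 32-bit identifiers are assumptions, not an actual microkernel definition.

    #include <stdint.h>

    typedef struct {
        uint32_t sender;          /* header: identifies the sending process */
        uint32_t receiver;        /* header: identifies the receiving process */
        uint32_t type;            /* header: kind of message */
        uint32_t body_len;
    } msg_header_t;

    typedef struct {
        msg_header_t header;
        union {                   /* body: one of the three forms listed above */
            uint8_t  data[64];    /* direct data carried in the message */
            void    *data_ptr;    /* pointer to a (possibly large) block of data */
            uint32_t control;     /* control information about the process */
        } body;
    } message_t;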

  41. Process “Ports” • IPC based on “ports” associated with processes • Port may be viewed as a queue of messages destined for a process • Each port determines a list of other processes that may use it to communicate with the host process (capabilities) • Port identities and capabilities maintained by the kernel • Host process can grant new access by sending a message to the kernel
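
A hedged sketch of how the kernel might represent a port: a bounded queue of messages destined for the host process, plus a capability list of permitted senders that is checked before a message is accepted. The structure, limits, and names are invented for illustration.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct message message_t;      /* as sketched for the previous slide */

    typedef struct {
        int        owner_pid;              /* host process that receives from this port */
        int        allowed[16];            /* capabilities: processes permitted to send here */
        size_t     nallowed;
        message_t *queue[64];              /* queue of messages destined for the owner */
        size_t     head, tail;
    } port_t;

    /* kernel-side check performed before a message is enqueued on the port */
    static bool port_may_send(const port_t *p, int sender_pid) {
        for (size_t i = 0; i < p->nallowed; i++)
            if (p->allowed[i] == sender_pid)
                return true;
        return false;                      /* sender not in the capability list: reject */
    }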

  42. I/O and Interrupt Management • With microkernel architecture, hardware interrupts may be handled as messages • I/O ports are included in the address space • Microkernel recognizes interrupts but does not handle them • Instead, generates a message for the user process associated with the interrupt

  43. Solaris • A process includes the user’s address space, stack, and process control block • User-level threads • A user-created execution unit within a process • Implemented through a threads library in the process’s address space • Invisible to the OS

  44. Kernel Threads (Solaris) • Kernel threads are fundamental entities that can be scheduled and dispatched to run on one of the system processors • Concurrency and execution are managed at the kernel thread level

  45. Kernel Threads (Solaris) • Some kernel threads are bound to a user-level thread by means of a lightweight process • Other kernel threads are used to implement system functions • They are created, run and destroyed by the kernel

  46. Solaris LWPs • Lightweight processes (LWP) • Visible within a process to the application • Thus, LWP data structures exist within the process address space • Each LWP supports a user-level thread and is bound to one dispatchable kernel thread • Data structures for kernel threads are maintained in the kernel’s address space
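
With POSIX threads on Solaris, this binding is roughly what the contention-scope attribute selects: PTHREAD_SCOPE_SYSTEM requests a thread bound to its own LWP and scheduled by the kernel, while PTHREAD_SCOPE_PROCESS leaves the multiplexing of user threads over LWPs to the library. The sketch below only sets the attribute; exact behaviour is implementation-defined.

    #include <pthread.h>
    #include <stdio.h>

    static void *work(void *arg) { (void)arg; return NULL; }

    int main(void) {
        pthread_attr_t attr;
        pthread_t t;

        pthread_attr_init(&attr);
        if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
            fprintf(stderr, "system contention scope not supported on this platform\n");

        pthread_create(&t, &attr, work, NULL);   /* kernel-scheduled thread, one LWP behind it */
        pthread_join(t, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }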

  47. Processes and Threads in Solaris

  48. LWP Data Structure • Identifier • Priority • Signal mask • Saved values of user-level registers

  49. LWP Data Structure • Kernel stack • Resource usage and profiling data • Pointer to the corresponding kernel thread • Pointer to the process structure
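
Writing the fields from these two slides out as a C structure makes the layout concrete; the types and names below are illustrative, not the actual Solaris definitions.

    #include <stdint.h>

    struct kernel_thread;                     /* maintained in the kernel's address space */
    struct process;

    typedef struct lwp {
        int                   lwp_id;         /* identifier */
        int                   priority;
        uint64_t              signal_mask;
        uint64_t              user_regs[32];  /* saved values of user-level registers */
        void                 *kernel_stack;
        uint64_t              cpu_time_used;  /* resource usage and profiling data */
        struct kernel_thread *kthread;        /* pointer to the corresponding kernel thread */
        struct process       *proc;           /* pointer to the process structure */
    } lwp_t;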

  50. Process Structure • In Solaris, the processor state block of the traditional UNIX process structure is replaced by a list of LWP data structures • Each LWP data structure also contains a pointer to the associated kernel thread
