
Operating Systems CMPSC 473




Presentation Transcript


  1. Operating Systems CMPSC 473: Processes (6), September 24 2008 - Lecture 12. Instructor: Bhuvan Urgaonkar

  2. Announcements
  • Quiz 1 is out and due at midnight next M
  • Suggested reading: Chapter 4 of SGG
  • Honors credits
    • Still possible to enroll
    • Email me if you are interested
    • Extra work in projects

  3. Overview of Process-related Topics
  • How a process is born (Done)
    • Parent/child relationship
    • fork, clone, …
  • How it leads its life (Partially done)
    • Loaded: Later in the course
    • Executed
      • CPU scheduling
      • Context switching
  • Where a process “lives”: Address space (Done)
    • OS maintains some info. for each process: PCB
    • Process = Address Space + PCB
  • How processes request services from the OS (Done)
    • System calls
  • How processes communicate (Start today)
    • Threads (user and kernel-level) (Finish today)
  • How processes synchronize
  • How processes die

  4. Multi-threading Models
  • User-level thread libraries
    • E.g., the one provided with Project 1
    • Implementation: You are expected to gain this understanding as you work on Project 1
    • Pop quiz: Context switch overhead is smaller. Why?
    • What other overheads are reduced? Creation? Removal?
  • Kernel-level threads
  • There must exist some relationship between user threads and kernel threads
    • Why?
    • Which is better?

  5. Multi-threading Models: Many-to-one Model
  • Thread management done by the user library
  • Context switching, creation, removal, etc. are efficient (if designed well)
  • A blocking call blocks the entire process
  • No parallelism on multiprocessors? Why?
  • Green threads library on Solaris
  [Diagram: many user threads mapped onto a single kernel thread]
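The many-to-one bullets can be illustrated with a toy user-level scheduler. The sketch below is an illustration only (not the Project 1 library): it treats Python generators as user-level threads, so a "context switch" is just a `yield` that never enters the kernel, which is why switching is cheap; by the same token, a real blocking system call inside any one task would stall every task.

```python
from collections import deque

def scheduler(tasks):
    """Round-robin scheduler for user-level 'threads' (generators).
    A context switch is just resuming a different generator: no
    kernel involvement, so the switch overhead is tiny."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))   # run the task until it yields
            ready.append(task)         # "context switch": move to back
        except StopIteration:
            pass                       # task finished; drop it
    return trace

def worker(name, steps):
    # A cooperative user-level "thread": yields to give up the CPU.
    for i in range(steps):
        yield f"{name}:{i}"

trace = scheduler([worker("A", 2), worker("B", 2)])
print(trace)  # ['A:0', 'B:0', 'A:1', 'B:1']
```

Because the kernel sees only one thread of control here, a blocking call (say, `time.sleep` inside one worker) would block the whole scheduler, exactly the drawback the slide notes.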

  6. Multi-threading Models: One-to-one Model
  • Each u-l thread mapped to one k-l thread
  • Allows more concurrency
    • If one thread blocks, another ready thread can run
  • Can exploit parallelism on multiprocessors
  • Popular: Linux, several Windows versions (NT, 2000, XP)
  [Diagram: each user thread mapped to its own kernel thread]
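The one-to-one claim, that a blocking thread does not stop the others, can be observed with any library whose threads are kernel threads; CPython's `threading` module is one such case (each `threading.Thread` is backed by a kernel thread). A minimal sketch:

```python
import threading
import time

results = []

def blocking_worker():
    time.sleep(0.2)   # blocking call: only this kernel thread blocks
    results.append("blocker done")

def busy_worker():
    # Runs to completion while the other thread is still blocked.
    results.append("busy thread ran while the other was blocked")

t1 = threading.Thread(target=blocking_worker)
t2 = threading.Thread(target=busy_worker)
t1.start()
t2.start()
t1.join()
t2.join()
print(results)
```

Under a many-to-one library, the `sleep` in the first thread would have stalled the whole process; here the second thread finishes first because the kernel can schedule it independently.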

  7. Multi-threading Models: Many-to-many Model
  • # u-l threads >= # k-l threads
  • Best of both previous approaches?
  [Diagram: user threads multiplexed over a smaller number of kernel threads]

  8. Multi-threading Models: Many-to-many Model (2)
  • Popular variation on the many-to-many model: the two-level model
    • IRIX, HP-UX, Tru64 UNIX, Solaris versions older than Solaris 9
  • Pros? Cons?
  [Diagram: two-level model; user threads multiplexed over kernel threads, with some user threads also bound one-to-one]

  9. Popular Thread Libraries
  • Pthreads
    • The POSIX standard (IEEE 1003.1c) defining an API for thread creation and synchronization
    • A specification, NOT an implementation
    • Recall 311 and/or check out Fig 4.6
    • You will use this in Project 1
  • Win32 threads
  • Java threads
    • The JVM itself is at least a thread for the host OS
    • Different implementations for different OSes

  10. Thread-specific Data
  • Multi-threaded programs often want each thread to have access to some data all for itself
  • Most libraries provide support for this, including Pthreads, Win32, and Java’s library
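As a concrete sketch of thread-specific data, Python's `threading.local` plays the role that `pthread_key_create`/`pthread_setspecific` play in Pthreads: attributes stored on the local object are visible only to the thread that set them.

```python
import threading

local = threading.local()   # each thread sees its own attributes

def worker(value, results, lock):
    local.data = value      # private to this thread, despite the shared name
    # ...any code run later by this thread reads back only its own value
    with lock:
        results.append(local.data)

results = []
lock = threading.Lock()
threads = [threading.Thread(target=worker, args=(v, results, lock))
           for v in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 2, 3]: no thread clobbered another's data
```

If `local` were an ordinary shared object, the four concurrent writes to `data` would race and some values would be lost; thread-specific storage avoids that without any locking around the data itself.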

  11. Approach #3: User or kernel support to automatically share code, data, files! 
  • E.g., a Web browser
  • Share code, data, files (mmaped) via shared memory mechanisms (coming up)
    • Burden on the programmer
  • Better yet, let the kernel or a user library handle sharing of these parts of the address space, and let the programmer deal with synchronization issues
    • User-level and kernel-level threads
  [Diagram: one virtual address space with shared code, data, heap, and files; per-thread registers and stacks for threads that do URL parsing, network sending, network reception, and one that interprets the response and composes media for display on the browser screen]

  12. Approach #3: User or kernel support to automatically share code, data, files! (2)
  • (Same points as the previous slide)
  [Diagram: variant of the previous figure showing a separate heap per cooperating process, in contrast to the single shared heap of the threaded design]
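The point of these two slides, that threads automatically share one address space (code, data, heap, open files), can be demonstrated by having several threads append to a single heap-allocated list; with separate cooperating processes, each would have its own copy of the list and explicit shared-memory mechanisms would be needed.

```python
import threading

shared = []   # lives in the heap of the single shared address space

def appender(lo, hi, lock):
    for i in range(lo, hi):
        with lock:
            shared.append(i)   # every thread mutates the same list

lock = threading.Lock()
threads = [threading.Thread(target=appender, args=(i * 10, i * 10 + 10, lock))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(shared))  # 30: one heap, shared by all threads
```

Note the lock: sharing comes for free with threads, but, as the slide says, synchronization is still the programmer's problem.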

  13. Scheduler Activations
  • Kernel provides LWPs (lightweight processes) that appear like processors to the u-l library
  • Upcall: A way for the kernel to notify an application about certain events
    • Similar to signals
  • E.g., if a thread is about to block, the kernel makes an upcall and allocates the application a new LWP
    • The u-l library runs an upcall handler that may schedule another eligible thread on the new LWP
    • A similar upcall informs the application when the blocking operation is done
  • Read Sec 4.4.6
  • Paper by Tom Anderson (Washington) in the early 90s
  • Pros and cons?
  [Diagram: user threads running on LWPs, each LWP backed by a kernel thread]

  14. Costs/Overheads of a Context Switch
  • Direct/apparent costs
    • Time spent doing the switch, described in previous lectures
    • Fixed (more or less)
  • Indirect/hidden costs
    • Cache pollution
    • Effect of TLB pollution (will study this when we get to Virtual Memory Management)
    • Workload dependent
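The direct cost is easy to estimate with a ping-pong microbenchmark: two threads alternately block waiting for each other, so each round forces at least two context switches. The sketch below is a rough estimate only (it includes Python/event overhead, not just the switch), and it captures only the direct cost; the indirect cache and TLB effects the slide mentions do not show up in it.

```python
import threading
import time

def measure_switches(rounds=1000):
    """Estimate direct context-switch cost via thread ping-pong.
    Each round forces (at least) two switches between the threads."""
    ping, pong = threading.Event(), threading.Event()

    def partner():
        for _ in range(rounds):
            ping.wait()
            ping.clear()
            pong.set()      # hand control back to the main thread

    t = threading.Thread(target=partner)
    t.start()
    start = time.perf_counter()
    for _ in range(rounds):
        ping.set()          # wake the partner...
        pong.wait()         # ...and block until it responds
        pong.clear()
    elapsed = time.perf_counter() - start
    t.join()
    return elapsed / (2 * rounds)   # rough per-switch estimate

cost = measure_switches()
print(f"~{cost * 1e6:.1f} microseconds per switch (rough, includes overhead)")
```

Classic benchmarks such as lmbench use the same ping-pong idea (over pipes, between processes) to isolate the direct cost; measuring the hidden cache/TLB pollution requires a memory-touching workload instead.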
