
Processes and Threads

Processes and Threads. Chapter 4. CS 345. Chapter 4 Learning Outcomes. Understand the distinction between process and thread. Describe the basic design issues for threads. Explain the difference between user-level threads and kernel-level threads.



Presentation Transcript


  1. Processes and Threads Chapter 4

  2. CS 345 Threads

  3. Chapter 4 Learning Outcomes • Understand the distinction between process and thread. • Describe the basic design issues for threads. • Explain the difference between user-level threads and kernel-level threads. • Describe the thread management facility in Windows 7. • Describe the thread management facility in Solaris. • Describe the thread management facility in Linux. Threads

  4. Questions… • What is a Process? • What is a Thread? • What are the different types of Threads? • What are the benefits of Threads? • What are possible Thread States? • What is an RPC? • How are Threads managed? • How are ULTs created in C? Threads

  5. What is moving around? Threads

  6. Processes 1. What is a Process? • A Process is a unit of: • Resource ownership • Code, Data, Address space • I/O channels, devices, files • Execution path • “Time on the clock” • Current state • Interleaved with other processes • What if we treat each independently? • Unit of resource ownership  process or task • Unit of execution  thread or lightweight process Threads

  7. Processes Processes • Resources owned by a process: • code ("text"), • data (VM), • stack, • heap, • file I/O, and • signal tables. • Processes have a significant amount of overhead: • Tables have to be flushed from the processor when context switching. • Processes share information only through pipes and shared memory. Threads

  8. Threads 2. What is a Thread? • A thread of execution • Smallest unit of processing that can be scheduled by an operating system • Threads reduce overhead by sharing the resources of a process. • Switching can happen more frequently and efficiently. • Sharing information is not so "difficult" anymore - everything can be shared. • A Thread is an independent program counter operating within a process. • Sometimes called a lightweight process (LWP) • A smaller execution unit than a process. Threads

  9. Threads Threads and Processes • one process, one thread • one process, multiple threads • multiple processes, one thread per process • multiple processes, multiple threads per process Threads

  10. Threads Multi-threading • Operating system or user may support multiple threads of execution within a single process. • Traditional approach is single process, single threaded. • Current support for multi-process, multi-threading. • Examples: • MS-DOS: single user process, single thread. • UNIX: multiple processes, one thread per process. • Java run-time environment: one process, multiple threads. • Windows 2000 (W2K), Solaris, Linux, Mach, and OS/2: multiple processes, each supports multiple threads. Threads

  11. Threads 3. What Types of Threads? • There are two types of threads: • User-space (ULT) and • Kernel-space (KLT). • A thread consists of: • a thread execution state (Running, Ready, etc.) • a context (program counter, register set.) • an execution stack. • some per-thread static storage for local variables. • access to the memory and resources of its process (shared with all other threads in that process.) • OS resources (open files, signals, etc.) • Thus, all of the threads of a process share the state and resources of the parent process (memory space and code section.) Threads

  12. Threads 4. What are the Benefits of Threads? • A process has at least one thread of execution • May launch other threads which execute concurrently with the process. • Threads of a process share the instructions (code) and process context (data). • Key benefits: • Far less time to create/terminate. • Switching between threads is faster. • No memory management issues, etc. • Can enhance communication efficiency. • Simplify the structure of a program. Threads

  13. Threads Threads

  14. Threads Single Threaded vs. Multi-threaded • Single-Threaded Process Model: Process Control Block, User Address Space, User Stack, Kernel Stack • Multithreaded Process Model: Process Control Block and User Address Space shared by the process; a Thread Control Block, User Stack, and Kernel Stack per thread Threads

  15. Threads Using Threads • Multiple threads in a single process • Separate control blocks for the process and each thread • Can quickly switch between threads • Can communicate without invoking the kernel • Four Examples • Foreground/Background – spreadsheet updates • Asynchronous Processing – Backing up in background • Faster Execution – Read one set of data while processing another set • Organization – For a word processing program, may allow one thread for each file being edited Threads

  16. Threads 5. What are Possible Thread States? • Thread operations • Spawn – Creating a new thread • Block – Waiting for an event • Unblock – Event happened, ready to run again • Finish – This thread is completed • Generally, it is desirable that a thread can block without blocking the remaining threads in the process • Allows the process to start two operations at once, each thread blocks on the appropriate event • Must handle synchronization between threads • System calls or local subroutines • Thread generally responsible for getting/releasing locks, etc. Threads

  17. RPC’s 6. What is an RPC? “A remote procedure call (RPC) is an inter-process communication that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details for this remote interaction.” 1. Client calls the client stub (stack). 2. Client stub packs (marshalls) parameters. 3. Client's OS sends message to server. 4. Server OS passes packets to server stub. 5. Server stub unpacks (unmarshalls) message. 6. Server stub calls the server procedure. 7. The reply traces the same steps in reverse. Threads

  18. Thread Issues 7. How are Threads Managed? • How should threads be scheduled compared to processes? • Equal to processes • Within the parent processes quantum • How are threads implemented? • kernel support (system calls) • user level threads • What about mutual exclusion? • Process resources are shared • Data coherency Threads

  19. ULT’s 8. User-Level Threads • User-level avoids the kernel and manages the tables itself. • Often this is called "cooperative multitasking" where the task defines a set of routines that get "switched to" by manipulating the stack pointer. • Typically each thread "gives-up" the CPU by calling an explicit switch, sending a signal or doing an operation that involves the switcher. • Also, a timer signal can force switches. • User threads typically can switch faster than kernel threads [however, Linux kernel threads' switching is actually pretty close in performance]. Threads

  20. ULT’s User-Level Threads • Disadvantages. • User-space threads have a problem that a single thread can monopolize the timeslice thus starving the other threads within the task. • Also, it has no way of taking advantage of SMPs (Symmetric MultiProcessor systems, e.g. dual-/quad-Pentiums). • Lastly, when a thread becomes I/O blocked, all other threads within the task lose the timeslice as well. • Solutions/workarounds. • Timeslice monopolization can be controlled with an external monitor that uses its own clock tick. • Some SMPs can support user-space multithreading by firing up tasks on specified CPUs then starting the threads from there [this form of SMP threading seems tenuous, at best]. • Some libraries solve the I/O blocking problem with special wrappers over system calls, or the task can be written for nonblocking I/O. Threads

  21. KLT’s Kernel-Level Threads • KLTs often are implemented in the kernel using several tables (each task gets a table of threads). • The kernel schedules each thread within the timeslice of each process. • There is a little more overhead with mode switching from user->kernel-> user and loading of larger contexts, but initial performance measures indicate a negligible increase in time. • Advantages. • Since the clocktick will determine the switching times, a task is less likely to hog the timeslice from the other threads within the task. • I/O blocking is not a problem. • If properly coded, the process automatically can take advantage of SMPs and will run incrementally faster with each added CPU. Threads

  22. Thread Management User-Level and Kernel-Level Threads Threads

  23. Thread Management Thread Management • Some implementations support both ULT and KLT threads. • Can take advantage of each to the running task. • Since Linux's kernel-space threads nearly perform as well as user-space, the only advantage of using user-threads would be the cooperative multitasking. • OS system calls could each be written as a thread or OS could be single threaded. • Advantages: Speed and Concurrency • Disadvantages: Mutual exclusion and complexity Threads

  24. Thread Problems • In many other multithreaded OSs, threads are not processes, merely parts of a parent task. • Therefore, if a thread calls fork() or uses execve() to run some external program, the whole task could be replaced. • The POSIX.1c standard defines that fork() called from a thread duplicates only the calling thread in the new process, and that an execve() from a thread stops all threads of that process. • Having two different implementations and schedulers for processes and threads is a flaw that has perpetuated from implementation to implementation. • Some multitasking OSs have opted not to support threads due to these problems (not to mention the effort needed to make the kernel and libraries 100% reentrant). • For example, Windows NT opts not to support POSIX-compliant threads (Windows NT does support threads, but they are not POSIX compliant). Threads

  25. Thread Problems • Most people have a hard enough time understanding tasks. • “Chopped up tasks" or threads is difficult to envision. • "What can be threaded in my app?". • Deciding what to thread can be very laborious. • Another problem is locking. • All the nightmares about sharing, locking, deadlock, race conditions, etc. come vividly alive in threads. • Processes don't usually have to deal with this, since most shared data is passed through pipes. • Threads can share file handles, pipes, variables, signals, etc. • Test and duplicate error conditions can cause more gray hair than a wayward child. Threads

  26. Thread Support • As of 1.3.56, Linux has supported kernel-level multithreading. • User-level thread libraries around as early as 1.0.9. • On-going effort to refine and make the kernel more reentrant. • With the introduction of 2.1.x, the memory space is being revised so that the kernel can access the user memory more quickly. • Windows NT opts not to support POSIX-compliant threads (Windows NT does support threads but they are not POSIX compliant). Threads

  27. C Threads Thread Review • How does a thread differ from a process? • Resource ownership • Smallest unit of processing that can be scheduled by an operating system • What are the implications of having an independent program counter? • Each thread has its own stack. • Code and global data belong to the process and are shared among threads. • Threads “own” local data. • Thread state is defined by processor registers and the stack. Threads

  28. Project 2 - Tasking

  29. P2 - Tasking Project 2 • Change the scheduler from a 2-state to a 5-state scheduler using semaphores with priority queues. • int scheduler() in os345.c • semWait(), semSignal(), semTryLock() in os345semaphores.c • Tasks are functions and are added to the task scheduler ready queue via the “createTask()” function. • The first task scheduled is your shell from Project 1. • The “SWAP” directive replaces clock interrupts for context switching between tasks (cooperative scheduling). • Context switching directives may be placed anywhere in your user task code. • SWAP, SEM_SIGNAL, SEM_WAIT, SEM_TRYLOCK Project 2 - Tasking

  30. P2 - Tasking Project 2 (continued…) • The highest priority, unblocked, ready task should always be executing. • Tasks of the same priority should be scheduled in a round-robin, FIFO fashion. • Any change of events (SEM_SIGNAL) should cause a context switch. • To simulate interrupts, character inputs and timers need to be “polled” in the scheduling loop. • void pollInterrupts() in OS345p1.c • Parsed command line arguments are passed to tasks (i.e. functions) via argc/argv variables. Project 2 - Tasking

  31. P2 - Tasking Step 1: Priority Queue • Create a priority queue
  typedef int TID;       // task ID
  typedef int Priority;  // task priority
  typedef int* PQueue;   // priority queue

  PQueue rq;             // ready queue
  rq = (int*)malloc(MAX_TASKS * sizeof(int));
  rq[0] = 0;             // init ready queue
  • Queue functions
  int enQ(PQueue q, TID tid, Priority p);
  int deQ(PQueue q, TID tid);
  • Queue layout: q = # | pr1/tid1 | pr2/tid2 | …
  • deQ argument: tid >= 0 → find and delete tid from q; tid = -1 → return highest priority tid
  • Return value: tid (if found and deleted from q) or -1 (if q empty or task not found) Project 2 - Tasking

  32. P2 - Tasking Step 2: Schedule w/Ready Queue • Create a ready priority queue
  PQueue rq;             // ready queue
  rq = (int*)malloc(MAX_TASKS * sizeof(int));
  rq[0] = 0;             // init ready queue
  • Add new task to ready queue in createTask
  enQ(rq, tid, tcb[tid].priority);
  • Change scheduler() to deQueue and then enQueue next task
  if ((nextTask = deQ(rq, -1)) >= 0)
  {
     enQ(rq, nextTask, tcb[nextTask].priority);
  } Project 2 - Tasking

  33. P2 - Tasking 2-State Scheduler • States: New → Ready Queue → Running → Exit • Transitions: createTask() (New), dispatch() (Ready → Running), swapTask() (Running → Ready Queue), killTask() (Running → Exit) • nextTask = enQueue(rq, deQueue(rq, -1)); Project 2 - Tasking

  34. P2 - Tasking Step 3: 5-State Scheduling • Add priority queue to semaphore struct
  typedef struct semaphore    // semaphore
  {
     struct semaphore* semLink; // link to next semaphore
     char* name;                // semaphore name (malloc)
     int state;                 // state (count)
     int type;                  // type (binary/counting)
     int taskNum;               // tid of creator
     PQueue q;                  // blocked queue
  } Semaphore;
  • Malloc semaphore queue in createSemaphore
  semaphore->q = (int*)malloc(MAX_TASKS * sizeof(int));
  semaphore->q[0] = 0;          // init queue
  • semWait: deQueue current task from ready queue and enQueue in semaphore queue • semSignal: deQueue task from blocked queue and enQueue in ready queue. Project 2 - Tasking

  35. P2 - Tasking 5-State Scheduler
  #define SWAP           swapTask();
  #define SEM_WAIT(s)    semWait(s);
  #define SEM_SIGNAL(s)  semSignal(s);
  #define SEM_TRYLOCK(s) semTryLock(s);
  • Diagram: createTask() (New) → Ready Queue → dispatch() → Running → killTask() (Exit); swapTask() returns Running to the Ready Queue; semWait()/semTryLock() move Running to the Blocked Queues; semSignal() moves a blocked task back to the Ready Queue. Project 2 - Tasking

  36. Scheduling Task Scheduling • Diagram: the Scheduler / Dispatcher takes tasks from the Ready Priority Queue for execution; SWAP returns the executing task to the Ready Priority Queue; SEM_WAIT moves it to a Semaphore Priority Queue; SEM_SIGNAL moves a blocked task back to the Ready Priority Queue (one blocked priority queue per semaphore). Project 2 - Tasking

  37. P2 - Tasking Step 4: Counting Semaphore • Implement counting functionality to semaphores • Add a 10 second timer (tics10sec) counting semaphore to the polling routine (pollInterrupts). This can be done by including the <time.h> header and calling the C function time(time_t *timer). semSignal the tics10sec semaphore every 10 seconds. • Create a reentrant high priority task that blocks (SEM_WAIT) on the 10 second timer semaphore (tics10sec). When activated, output a message with the current task number and time and then block again. Project 2 - Tasking

  38. P2 - Tasking Task Control Block (tcb)
  State = { NEW, READY, RUNNING, BLOCKED, EXIT }
  Priority = { LOW, MED, HIGH, VERY_HIGH, HIGHEST }
  The event field holds the pending semaphore when blocked.
  // task control block
  typedef struct                  // task control block
  {
     char* name;                  // task name
     int (*task)(int,char**);     // task address
     int state;                   // task state (P2)
     int priority;                // task priority (P2)
     int argc;                    // task argument count (P1)
     char** argv;                 // task argument pointers (P1)
     int signal;                  // task signals (P1)
     // void (*sigContHandler)(void); // task mySIGCONT handler
     void (*sigIntHandler)(void);     // task mySIGINT handler
     // void (*sigKillHandler)(void); // task mySIGKILL handler
     // void (*sigTermHandler)(void); // task mySIGTERM handler
     // void (*sigTstpHandler)(void); // task mySIGTSTP handler
     TID parent;                  // task parent
     int RPT;                     // task root page table (P4)
     int cdir;                    // task directory (P6)
     Semaphore *event;            // blocked task semaphore (P2)
     void* stack;                 // task stack (P1)
     jmp_buf context;             // task context pointer (P1)
  } TCB; Project 2 - Tasking

  39. P2 - Tasking Step 5: List Tasks • Modify the list tasks command to display all tasks in the system queues in execution/priority order indicating the task name, if the task is ready, paused, executing, or blocked, and the task priority. If the task is blocked, list the reason for the block. Project 2 - Tasking

  40. P2 - Tasking Step 6: Verification • The project2 command schedules timer tasks 1 through 9, 2 signal tasks, and 2 “ImAlive” tasks. The tics10sec tasks report the current time every 10 seconds in round-robin order. The “ImAlive” tasks periodically say hello. The high priority “Signal” tasks should respond immediately when their semaphore is signaled. Project 2 - Tasking

  41. P2 - Tasking Step 7: Bonus Credit • Implement a buffered pseudo-interrupt driven character output and demonstrate that it works by implementing a my_printf function. • Implement time slices that adjust task execution times when scheduled.
  #include <stdarg.h>
  void my_printf(char* fmt, ...)
  {
     va_list arg_ptr;
     char pBuffer[128];
     char* s = pBuffer;
     va_start(arg_ptr, fmt);
     vsprintf(pBuffer, fmt, arg_ptr);
     while (*s) putIObuffer(*s++);
     va_end(arg_ptr);
  } // end my_printf
  createTask("myShell",   // task name
     P1_shellTask,        // task address
     5,                   // task priority
     argc,                // task arg count
     argv);               // task argument pointers Project 2 - Tasking

  42. setjmp/longjmp setjmp / longjmp • #include <setjmp.h> • jmp_buf struct • stack pointer (sp), frame pointer (fp), and program counter (pc). • setjmp(jmp_buf env); • saves the program state (sp, fp, pc) in env so that longjmp() can restore them later. • returns 0. • longjmp(jmp_buf env, int val); • resets the registers to the values saved in env. • longjmp() returns as if you had just called the setjmp() that saved env, except that setjmp() now returns the non-zero value val. Project 2 - Tasking

  43. setjmp/longjmp Multi-tasking in C Project 2 - Tasking

  44. createTask Creating a Task
  int createTask(char* name,          // task name
     int (*task)(int, char**),        // task address
     int priority,                    // task priority
     int argc,                        // task argument count
     char* argv[])                    // task argument pointers
  {
     int tid, j;
     for (tid = 0; tid < MAX_TASKS; tid++)
     {
        if (tcb[tid].name[0] == 0) break;   // find an open tcb entry slot
     }
     if (tid == MAX_TASKS) return -1;       // too many tasks
     strncpy(tcb[tid].name, name, MAX_NAME_SIZE-1);  // task name
     tcb[tid].task = task;                  // task address
     tcb[tid].state = S_NEW;                // NEW task state
     tcb[tid].priority = priority;          // task priority
     tcb[tid].parent = curTask;             // parent
     tcb[tid].argc = argc;                  // argument count
     // ?? malloc new argv parameters (Project 1)
     tcb[tid].argv = argv;                  // argument pointers Project 2 - Tasking

  45. createTask Creating a Task (continued…)
     tcb[tid].event = 0;      // suspend semaphore
     tcb[tid].RPT = 0;        // root page table (project 5)
     tcb[tid].cdir = cDir;    // inherit parent cDir (project 6)
     // allocate own stack and stack pointer
     tcb[tid].stack = malloc(STACK_SIZE * sizeof(int));
     // signals
     tcb[tid].signal = 0;     // Project 1
     if (tid)
     {
        tcb[tid].sigIntHandler = tcb[curTask].sigIntHandler;  // SIGINT handler
     }
     else
     {
        tcb[tid].sigIntHandler = defaultSigIntHandler;        // default
     }
     // ?? insert task into "ready" queue (Project 2)
     return tid;              // return tcb index (curTask)
  } // end createTask Project 2 - Tasking

  46. SWAP SWAP (Context Switch)
  // ***********************************************************************
  // Do a context switch to next task.
  // 1. Save the state of the current task and enter kernel mode.
  // 2. Return from here when task is rescheduled.
  void swapTask()
  {
     swapCount++;                              // increment swap cycle counter
     if (setjmp(tcb[curTask].context)) return; // resume execution of task
     // task context has been saved in tcb
     // if task RUNNING, set to READY
     if (tcb[curTask].state == S_RUNNING) tcb[curTask].state = S_READY;
     longjmp(k_context, 2);                    // kernel context
  } // end swapTask Project 2 - Tasking

  47. Scheduling Task Scheduling
  // ***********************************************************************
  // scheduler
  int scheduler()
  {
     int i, t, nextTask;
     if (numTasks == 0) return -1;       // no task ready
     nextTask = rq[0];                   // take 1st (highest priority)
     for (i = 0; i < (numTasks-1); ++i)  // roll to bottom of priority (RR)
     {
        if (tcb[rq[i]].priority > tcb[rq[i+1]].priority) break;
        t = rq[i];
        rq[i] = rq[i+1];
        rq[i+1] = t;
     }
     return nextTask;                    // return task # to dispatcher
  } // end scheduler Project 2 - Tasking

  48. Project 2 Task Dispatching
  int dispatcher(int curTask)
  {
     int result;
     switch (tcb[curTask].state)            // schedule task
     {
        case S_NEW:
           tcb[curTask].state = S_RUNNING;  // set task to run state
           if (setjmp(k_context)) break;    // context switch to new task
           temp = (int*)tcb[curTask].stack + (STACK_SIZE-8);
           SET_STACK(temp);                 // move to new stack
           result = (*tcb[curTask].task)(tcb[curTask].argc, tcb[curTask].argv);
           tcb[curTask].state = S_EXIT;     // set task to exit state
           longjmp(k_context, 1);           // return to kernel
        case S_READY:
           tcb[curTask].state = S_RUNNING;  // set task to run
        case S_RUNNING:
           if (setjmp(k_context)) break;    // return from task
           if (signals()) break;
           longjmp(tcb[curTask].context, 3);  // restore task context
        case S_EXIT:
           if (curTask == 0) return -1;     // if CLI, then quit scheduler
           sysKillTask(curTask);            // kill current task
        case S_BLOCKED:
           break;                           // blocked / exit state
     }
     return 0;
  } // end dispatcher Project 2 - Tasking

  49. Project 2 Project 2 Grading Criteria • 5 pts – Replace the simplistic 2-state scheduler with a 5-state, preemptive, prioritized, round-robin scheduler using ready and blocked task queues. (Be sure to handle the SIGSTOP signal.) • 3 pts – Implement counting semaphores within the semSignal, semWait, and semTryLock functions. Add blocked queues to your semSignal and semWait semaphore functions. Validate that the SEM_SIGNAL / SEM_WAIT / SEM_TRYLOCK binary and counting semaphore functions work properly with your scheduler. • 2 pts – Modify the createTask( ) function to malloc argv arguments and insert the new task into the ready queue. Implement the killTask( ) function such that individual tasks can be terminated and resources recovered. • 2 pts – Add a 10 second timer (tics10sec) counting semaphore to the polling routine (pollInterrupts). This can be done by including the <time.h> header and calling the C function time(time_t *timer). semSignal the tics10sec semaphore every 10 seconds. • 2 pts – Modify the list tasks command to display all tasks in the system queues in execution/priority order indicating the task name, if the task is ready, paused, executing, or blocked, and the task priority. If the task is blocked, list the reason for the block. • 1 pt – Create a reentrant high priority task that blocks (SEM_WAIT) on the 10 second timer semaphore (tics10sec). When activated, output a message with the current task number and time and then block again. Project 2 - Tasking

  50. Project 2 Project 2 Grading Criteria • 5 pts – Upon entering main, schedule your CLI as task 0. Have the project2 command schedule timer tasks 1 through 9 and observe that they are functioning correctly. The “CLI” task blocks (SEM_WAIT) on the binary semaphore inBufferReady, while the “TenSeconds” tasks block on the counting semaphore tics10sec. The “ImAlive” tasks do not block but rather immediately swap (context switch) after incrementing their local counters. The high priority “Signal” tasks should respond immediately when their semaphore is signaled. Project 2 - Tasking
