
Concurrent Servers Process, fork & threads








  1. Concurrent Servers: Process, fork & threads ECE 297

  2. Process-based server
  • How do you handle cache updates?
  • How do you handle cache invalidation?
  • Keep it simple
  (Diagram: each server process holds its own copy of the cache.)

  3. Process-based server
  • How do you handle concurrent access to files?
  • Careful with writing to the same file from different processes!
  (Diagram: several processes sharing one file.)

  4. Process versus thread I
  Process
  • Unit of resource ownership with respect to the execution of a single program
  • Can encompass more than one thread of execution
  • E.g., Web browser: more than one thread (or even a separate process) per window/tab, GUI, rendering engine, etc.
  • E.g., Web server: more than one thread for handling requests
  Thread
  • Unit of execution
  • Belongs to a process
  • Can be traced (i.e., list the sequence of instructions executed)

  5. Process versus thread II
  • A.k.a. lightweight processes (LWPs), multi-threaded processes

  6. Process versus thread III
  Per-process items:
  • Address space
  • Global variables
  • Open files
  • Child processes
  • Pending alarms
  • Signals and signal handlers
  • Accounting information
  Per-thread items:
  • Program counter
  • Registers
  • Stack

  7. Processes versus threads: when to use which
  • Use processes when the tasks are largely independent and often compete for resources
  • Use threads when the tasks are part of the same “job” and are actively and closely cooperating
  (Diagram: three separate processes on one OS vs. multiple threads inside a single process.)

  8. Threads
  (Diagram: a process containing multiple threads, each with its own stack, e.g., thread 1’s stack, running on top of the OS.)

  9. Thread-based server
  • Server design alternatives: thread-per-request, thread-per-client, thread-per-connection
  • The new thread can access all resources held by the process that created it
  • For example, the cache, open data files, and global variables are all available to the threads
  • Unlike for process-based servers

  10. p is for POSIX: pthreads API overview
  • pthread_create(…): creates a thread
  • pthread_join(…): waits for a specific thread to exit
  • pthread_exit(…): terminates the calling thread
  • pthread_yield(…): calling thread passes control voluntarily to another thread (non-standard; sched_yield() is the portable equivalent)

  11. p is for POSIX: pthreads API I

  #include <pthread.h>
  int pthread_create(pthread_t *tid, const pthread_attr_t *attr,
                     void *(*func)(void *), void *arg);

  • Returns 0 if OK, a positive Exx error code on error
  • tid: thread ID
  • attr: thread priority, initial stack size, …; NULL for defaults
  • func: function to execute; the actual “thread”
  • arg: pointer to the argument for the function

  12. p is for POSIX: pthreads API IV
  • pthread_self(void): returns the calling thread’s ID
  • pthread_detach(pthread_t thread): indicates to the system that the thread’s storage can be reclaimed when it exits
  • There are many other pthread API calls; the above should suffice for our purposes

  13. Thread-based server

  void *thread(void *vargp);
  int *connfdp;

  int main(int argc, char **argv)
  {
      …
      pthread_t tid;
      …
      listenfd = socket(…);
      …
      listen(listenfd, …);
      // main server loop
      for ( ; ; ) {
          connfdp = malloc(sizeof(int));
          …
          *connfdp = accept(listenfd, (struct sockaddr *) &clientaddr, &clientlen);
          pthread_create(&tid, NULL, thread, (void *) connfdp);
      } // for
  } // main

  We create the thread to handle the connected client.

  14. The actual thread to handle the client

  void *thread(void *vargp)
  {
      int connfd;
      // detached to avoid a memory leak
      pthread_detach(pthread_self());
      connfd = *((int *) vargp);
      free(vargp);
      // do the work, service the client
      close(connfd);
      return NULL;
  }

  This is where the client gets serviced.

  15. Concurrent server template (process-based)

  listenfd = socket(AF_INET, SOCK_STREAM, 0);
  …
  bind(listenfd, …);
  listen(listenfd, …);
  for ( ; ; ) {
      …
      connfd = accept(listenfd, …);
      …
      if ( (childPID = fork()) == 0 ) {  // the child!
          close(listenfd);  // close listening socket
          do the work       // process the request
          exit(0);
      }
      …
      close(connfd);  // parent closes connfd
  }

  16. Issues with thread-based servers
  • Must be careful to avoid unintended sharing of variables
  • For example, what happens if we pass the address of connfd to the thread routine?
        pthread_create(&tid, NULL, thread, (void *) &connfd);
    connfd would be a shared variable: the main loop may overwrite it with the next accepted connection before the thread has read it
  • Must protect access to intentionally shared data
  • Here, we got around this by creating a new variable per connection, but in general …

  17. Complications
  • Imagine a global variable counter in the process
  • For example, the storage server’s in-memory cache (a more complex structure)
  • Or the connfd variable
  Let’s dissect the issue in detail.

  18. Shared data & synchronization
  What happens if multiple threads concurrently access shared process state (i.e., memory)?
  (Diagram: several threads operating on the same shared table.)

  19. Concurrently manipulating shared data
  • Two threads execute concurrently as part of the same process
  • Shared variable (e.g., a global variable): counter = 5
  • Thread 1 executes counter++
  • Thread 2 executes counter--
  • What are the possible values of counter after Thread 1 and Thread 2 have executed?

  20. Machine-level implementation
  Implementation of “counter++”:
      register1 = counter
      register1 = register1 + 1
      counter = register1
  Implementation of “counter--”:
      register2 = counter
      register2 = register2 - 1
      counter = register2

  21. Possible execution sequences
  (Diagram: counter++ and counter-- separated by context switches in different orders; a context switch can occur between any two of the machine-level steps above.)

  22. Interleaved execution
  Assume counter is 5 and an interleaved execution of counter++ (thread T1) and counter-- (thread T2), with a context switch mid-way:
      T1: r1 = counter    (register1 = 5)
      T1: r1 = r1 + 1     (register1 = 6)
      T2: r2 = counter    (register2 = 5)
      T2: r2 = r2 - 1     (register2 = 4)
      T1: counter = r1    (counter = 6)
      T2: counter = r2    (counter = 4)
  The value of counter may end up either 4 or 6, where the correct result should be 5.

  23. Race condition
  • Race condition: several threads manipulate shared data concurrently, and the final value of the data depends upon which thread finishes last
  • In our example (interleaved execution): if counter++ finishes last, the result is 6; if counter-- finishes last, the result is 4 (the correct result should be 5)
  • To prevent race conditions, concurrent threads must be synchronized

  24. The moral of this story
  • The statements counter++; and counter--; must each be executed atomically
  • An atomic operation is an operation that completes in its entirety without interruption
  • This is achieved through synchronization primitives (semaphores, locks, condition variables, monitors, disabling of interrupts, …)

  25. Synchronization primitives
  • Semaphore (cf. ECE344)
  • Monitor (cf. ECE344)
  • Condition variable (cf. ECE344)
  • Lock
    • Prevents data inconsistencies due to race conditions
    • A.k.a. mutex (mutual exclusion)
    • Used to protect shared data within a process
    • Cannot be used across processes; need to use a semaphore instead

  26. Mutex: mutual exclusion

  int pthread_mutex_lock(pthread_mutex_t *mptr);
  int pthread_mutex_unlock(pthread_mutex_t *mptr);

  • Return 0 if OK, a positive Exx error code on error
  • There are other abstractions, but the mutex should suffice for us
  • NB: In ECE344 we learn how to implement locks.

  27. The pthreads mutex (lock)

  pthread_mutex_t my_cnt_lock = PTHREAD_MUTEX_INITIALIZER;
  int counter = 0;
  …
  pthread_mutex_lock(&my_cnt_lock);
  counter++;
  pthread_mutex_unlock(&my_cnt_lock);
  …

  28. Mutex is for mutual exclusion

  pthread_mutex_t my_cnt_lock = PTHREAD_MUTEX_INITIALIZER;  // for statically allocated mutexes

  // Guaranteed to execute atomically:
  pthread_mutex_lock(&my_cnt_lock);
  counter++;
  pthread_mutex_unlock(&my_cnt_lock);

  // Guaranteed to execute atomically:
  pthread_mutex_lock(&my_cnt_lock);
  counter--;
  pthread_mutex_unlock(&my_cnt_lock);

  29. Possible execution sequences with locks
  (Diagram: whichever thread acquires the lock first runs its critical section, counter++ or counter--, to completion; a thread that attempts to lock an already-held mutex blocks until the holder unlocks, so the two critical sections can no longer interleave.)

  30. Watch out for I
  • For all shared-data accesses you must use a synchronization mechanism
  • For Milestone 4 based on threads, you can get by with mutexes
  • Other useful mechanisms in pthreads are
    • pthread_join(…)
    • pthread_cond_wait(…) & pthread_cond_signal(…)
  • Bugs due to race conditions are extremely difficult to track down
    • Non-deterministic behaviour of the code

  31. Watch out for II
  • You cannot make any assumptions about thread execution order or relative speed
  • Threaded code must use thread-safe functions
    • Functions that use no static variables, no global variables, and don’t return pointers to static variables
    • Function-local data is fine: it is allocated on the stack, so each thread gets its own copy
    • Otherwise, protect calls to non-thread-safe code with mutexes
    • Non-thread-safe code is also called non-reentrant code
  • Deadlocks
    • Code halts, as threads may wait indefinitely on locks
    • Cause is programmer error or poorly written code

  32. Pros & cons of thread-based servers
  • Probably the simplest option
    • No zombies, no signal handling, no onerous data structures
  • “Easy” to share data structures between threads
    • Logging information, data files, cache, …
  • Thread creation is more efficient than process creation
  • Enables concurrent processing of requests from multiple clients

  33. Pros & cons cont’d
  • Unintentional sharing can introduce subtle and hard-to-reproduce race conditions
    • malloc an argument (struct) for each thread, pass the pointer to the thread, and free it after use
  • Keep global variables to a minimum
  • If a thread references a global variable
    • protect it with a mutex, or
    • think carefully about whether the unprotected variable is safe
    • e.g., one writer thread vs. multiple readers is OK
