
T5-multithreading



  1. T5-multithreading SO-Grade 2013-2014-Q2

  2. Index • Processes vs. Threads • Thread libraries • Communication based on shared memory • Race condition • Critical section • Mutual exclusion access

  3. Processes vs. Threads • Until now… • Just one sequence of execution: just one program counter and just one stack • There is no support to execute different concurrent functions inside one process • But there can be independent functions that could exploit concurrency

  4. Example: client-server application
  • Each client (1..N) runs the same code: { … Send_request(); Wait_response(); Process_response(); … }
  • Server (with GLOBAL DATA): { while() { Wait_request(); Prepare_response(); Send_response(); } }
  • Single-process server: the server cannot serve more than one client at the same time
  • It is not possible to exploit concurrency and parallelism
  • Multi-process server: one process per simultaneous client to be served
  • Concurrent and/or parallel execution
  • But… resources are wasted: replication of data structures that keep the same information, replication of logical address spaces, inefficient communication mechanisms, …

  5. Example: client-server application (multi-process) • Clients 1..N are unchanged: { … Send_request(); Wait_response(); Process_response(); … } • The whole server, its GLOBAL DATA included, is replicated once per client served: Server { while() { START_process; Wait_request(); Prepare_response(); Send_response(); END_process } }

  6. Example: server application • Alternative: multithreaded server • Enables several concurrent executions associated to one process • What is needed to describe one execution sequence? • Stack • Program counter • Values of the general purpose registers • The rest of the process characteristics can be shared (rest of the logical address space, information about devices, signal management, etc.)

  7. Processes vs. Threads • Most resources are assigned to processes • Characteristics/resources per thread: • Next instruction to execute (PC value) • A memory region to hold its stack • Values of the general purpose registers • An identifier • The scheduling unit is the thread (each thread requires a CPU) • The rest of the resources/characteristics are shared by all threads in a process • Traditional process: contains just one execution thread

  8. Example: client-server application (multithreaded) • Clients 1..N are unchanged: { … Send_request(); Wait_response(); Process_response(); … } • One server process with shared GLOBAL DATA: Server { while() { START_thread; Wait_request(); Prepare_response(); Send_response(); END_thread } } • Only the per-request flow { Wait_request(); Prepare_response(); Send_response(); } is replicated, one thread per client being served

  9. Internals: Processes vs. Threads • 1 process with N threads → 1 PCB • N different code sequences can be executed concurrently • The PCB allocates space to store the execution context of all threads • Address space: • 1 code region • 1 data region • 1 heap region + N stack regions (1 per thread)

  10. Internals: Processes vs. Threads • Memory sharing • Between processes • All process memory is private by default: no other process can access it (there are system calls to ask explicitly for memory shared between processes) • Between threads • All threads in a process can access the whole process address space • Some considerations • Each thread has its own stack region to keep its local variables, parameters and the values that control its execution flow • However, all stack regions are also accessible by all threads in the process • Variable/parameter scope vs. permission to access memory

  11. Using concurrent applications • Potential scenarios for multithreaded or multiprocess applications • Exploiting parallelism and concurrency • Improving modularity • I/O-bound applications • Processes or threads dedicated just to implementing device accesses • Server applications

  12. Benefits from using threads • Benefits of threads compared to processes • Lower management costs: creation, destruction and context switch are cheaper • Better resource usage • The communication mechanism is very simple: shared memory

  13. User-level management: thread libraries • There is no standard interface common to all OS kernels: applications using a kernel interface directly are not portable • POSIX threads (Portable Operating System Interface, defined by IEEE) • Thread management interface at user level • Creation and destruction • Synchronization • Scheduling configuration • It uses the OS system calls as required • There exist implementations for all OSs: using this interface, applications become portable • The API is very complete, and on some OSs it is only partially implemented

  14. Pthread management services • Creation • Processes: fork() • Threads: pthread_create(out th_id, in NULL, in function_name, in param) • Identification • Processes: getpid() • Threads: pthread_self() • Ending • Processes: exit(exit_code) • Threads: pthread_exit(exit_code) • Synchronization with the end of execution • Processes: waitpid(pid, ending_status, FLAGS) • Threads: pthread_join(in thread_id, out exit_code) • Check the interfaces on the web (man pages are not installed in the labs)

  15. Thread creation • pthread_create • Creates a new thread that will execute start_routine with the arg parameter

  #include <pthread.h>
  int pthread_create(pthread_t *th, pthread_attr_t *attr, void *(*start_routine)(void *), void *arg);

  • th: will hold the thread identifier • attr: initial characteristics of the thread (if NULL, the thread starts execution with the default characteristics) • start_routine: address of the routine that the new thread will execute (in C, the name of a function represents its starting address); this routine can receive just one parameter of void * type • arg: routine parameter • Returns 0 if creation ends OK, or an error code otherwise

  16. Thread identification • pthread_self • Returns the identifier of the thread that executes this function

  #include <pthread.h>
  pthread_t pthread_self(void);

  • Returns the thread identifier

  17. Thread destruction • pthread_exit • It is executed by the thread that ends its execution • Its parameter is the thread ending code

  #include <pthread.h>
  void pthread_exit(void *status);

  • status: thread return value (ending code) • This function does not return to the caller

  18. Shared memory communication • Threads in a process can exchange information through memory (all memory is shared between all threads in a process) • Accessing the same variables • Risk: race condition • There is a race condition when the result of the execution depends on the relative execution order of the instructions of the threads (or processes)

  19. Example: race condition

  int first = 1;    /* shared variable */

  /* thread 1 and thread 2 both execute: */
  if (first) {
      first--;
      task1();
  } else {
      task2();
  }

  • WRONG RESULT: the programmer's goal is to use first as a boolean to distribute task1 and task2 between the two threads, but the check-and-decrement is built from non-atomic operations!

  20. Assembler code

  Do_task:
      pushl  %ebp
      movl   %esp, %ebp
      subl   $8, %esp
      movl   first, %eax      # the "if" check: more than 1 instruction
      testl  %eax, %eax
      je     .L2
      movl   first, %eax      # the subtraction: more than 1 instruction
      subl   $1, %eax
      movl   %eax, first
      call   task1
      jmp    .L5
  .L2:
      call   task2            # the "else" code
  .L5:
      leave
      ret

  • What are the effects if a context switch happens right after executing the movl instruction in the if section?

  21. What happens? … %eax is already set to 1 • Both threads execute the same Do_task code shown on the previous slide • Thread 1 executes up to movl first, %eax of the if check (loading 1 into %eax), then a context switch happens • Thread 2 runs the whole sequence: it sees first == 1, decrements it to 0 and calls task1 • Context switch back to thread 1: its restored %eax still holds 1, so the je is not taken, and thread 1 also decrements first and calls task1 • Result: both threads execute task1, and first ends up at -1

  22. Critical section • Critical section • A sequence of code lines that contains race conditions that may cause wrong results • A sequence of code lines that accesses shared, changing variables • Solution • Mutual exclusion access to those code regions • Avoid context switching?

  23. Mutual exclusion access • Ensures that access to a critical section is sequential • Only one thread can execute code in a critical section at a time (even if a context switch happens) • Programmer responsibilities: • Identify critical sections in the code • Mark the starting point and ending point of each critical section using the tools provided by the OS • The OS provides programmers with system calls to mark the starting point and ending point of a critical section: • Starting point: if no other thread has permission to access the critical section, this thread gets the permission and continues with the code execution; otherwise, this thread waits until access to the critical section is released • Ending point: the critical section is released, and permission is given to one thread waiting to access the critical section, if any

  24. Mutual exclusion: pthread interface • To consider: • Each critical section is identified through a global variable of type pthread_mutex_t; it is necessary to define one variable per type of critical section • It is necessary to initialize this variable before using it; ideally, this initialization should be performed before creating the pthreads that will use it

  25. Example: Mutex

  int first = 1;            // shared variable
  pthread_mutex_t rc1;      // new shared variable

  pthread_mutex_init(&rc1, NULL);    // INITIALIZE rc1 VARIABLE: JUST ONCE
  …
  pthread_mutex_lock(&rc1);          // BLOCK ACCESS
  if (first) {
      first--;
      pthread_mutex_unlock(&rc1);    // RELEASE ACCESS
      task1();
  } else {
      pthread_mutex_unlock(&rc1);    // RELEASE ACCESS
      task2();
  }

  26. Mutual exclusion: considerations • Programming considerations • Critical sections should be as small as possible in order to maximize concurrency • Mutual exclusion access is driven by the identifier (variable) used at the starting and ending points • Related critical sections do not need to contain the same code • If there exist several independent shared variables, it may be convenient to use different identifiers to protect them
