
Presentation Transcript


  1. Phones OFF Please
  Inter-Process Communication (IPC)
  Parminder Singh Kang
  Home: www.cse.dmu.ac.uk/~pkang
  Email: pkang@dmu.ac.uk

  2. IPC
  • Most modern operating systems are multi-tasking (e.g. Unix, OS/2, Linux, Windows NT, Windows 95).
  • In a distributed, client/server environment, the communicating processes may be on different CPUs.
  • IPC is a mechanism that allows processes to communicate and to synchronise their actions without sharing the same address space; in a distributed computing environment the processes may reside on different machines connected over a network.
  • Often processes want to communicate. We can split IPC into two types:
      Uniprocessor IPC
      Multiprocessor IPC, which divides into distributed systems (networked) and multi-CPU systems (shared bus)
  • The O/S provides most mechanisms, but some are language dependent.
  • Usually processes form a client/server or producer/consumer relationship.

  3. 2 Producer/Consumer relationship
  • One process generates data, the other receives it, i.e. producer/consumer.
  • Synchronisation: e.g. we have a loop where
      the producer generates some data,
      the consumer reads the data.
  • Conditions:
      The consumer needs to know when data has been produced.
      The producer needs to know when it can produce more data.
  • Communication is based on handshaking (as in hardware and I/O devices).
  • The producer/consumer arrangement is
      Producer --> Buffer --> Consumer
  • We need a variable, freeslots, which indicates whether anything is in the buffer (it counts the free slots remaining).

  4. Producer algorithm:
      loop
          if freeslots > 0 then
              add an item
              decrease freeslots by 1
      endloop
  Consumer algorithm:
      loop
          if freeslots < BUFSIZE then
              remove an item
              increase freeslots by 1
      endloop
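
  As a rough illustration, here is a minimal single-process C sketch of the two loops above. The buffer array, the BUFSIZE value and the in/out indices are additions for illustration only; running both roles in one thread sidesteps the synchronisation issues the following slides address.

      /* Single-process sketch of the producer/consumer loops above.   */
      #include <stdio.h>

      #define BUFSIZE 4

      static int buffer[BUFSIZE];
      static int freeslots = BUFSIZE;   /* empty slots remaining        */
      static int in = 0, out = 0;       /* next write / next read slot  */

      static void produce(int item)
      {
          if (freeslots > 0) {                 /* room in the buffer?   */
              buffer[in] = item;
              in = (in + 1) % BUFSIZE;
              freeslots--;                     /* one fewer empty slot  */
              printf("produced %d\n", item);
          }
      }

      static void consume(void)
      {
          if (freeslots < BUFSIZE) {           /* anything to take?     */
              int item = buffer[out];
              out = (out + 1) % BUFSIZE;
              freeslots++;                     /* one more empty slot   */
              printf("consumed %d\n", item);
          }
      }

      int main(void)
      {
          for (int i = 0; i < 6; i++) {        /* interleave the roles  */
              produce(i);
              consume();
          }
          return 0;
      }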

  5. 2 Peer to Peer model and client/server model
  • Peer to peer:
      Process A                 Process B
      send message      -->     receive message
      receive message   <--     send message
  • Client/Server:
      Client                    Server
                                loop
      send message      -->       wait for message
      receive message   <--       send message
                                end loop
  • Note the server runs continuously in a loop; the client runs and then finishes.
  • UNIX servers are called daemons, e.g. the clients telnet, ftp, ... have corresponding server daemons telnetd, ftpd, ...

  6. 3 Overview of IPC Mechanisms
  • Shared memory: only one process can access the memory at a time (mutual exclusion). To control access we can use:
      Signals - unreliable; signals aren't queued, so some can get lost.
      Semaphores - low level; make a mistake and everything goes to sleep.
      Event counters - simpler than semaphores; not very common, and availability is not guaranteed.
  • Message passing:
      Pipes or unbound sockets - only used by related processes; easy to use; synchronisation is controlled by the O/S.
      Sockets - similar to pipes/file access; can be used across a network; different varieties - guaranteed delivery (virtual circuits) and unreliable (datagrams).
      Rendezvous - similar to pipes/sockets but with no buffering; used in Occam and Ada.

  7. • Shared files - similar to shared memory in virtual memory systems, since the file can be memory mapped. Can include record locking to enable synchronising.
  • Named pipes - similar to pipes, but used by unrelated processes.
  • Message passing - similar to pipes, sockets, etc., but message boundaries are preserved.
  • Remote Procedure Call (RPC)

  8. 4. Shared Memory
  • The most efficient IPC mechanism is probably shared memory:
      Process A --> Shared Memory --> Process B
  • If we allow uncontrolled access we can get race conditions - the result depends on the order in which the processes access the memory.
  • For example, consider process A as a producer and process B as a consumer: process A is putting data into a buffer (the shared memory) and process B is removing it.
  • To control access we have a shared variable called ItIsFree, which is true when no process is accessing the buffer. The two processes might look like this:

  9. Process A                          Process B
      LOOP                               LOOP
        IF ItIsFree then                   IF ItIsFree then
          ItIsFree := FALSE                  ItIsFree := FALSE
          put data in buffer                 take data from buffer
          ItIsFree := TRUE                   ItIsFree := TRUE
        END                                END
      END                                END
  • Problems can occur if testing and setting ItIsFree is not a single operation:
      Process A                          Process B
      does test - ItIsFree is TRUE
                                         does test - ItIsFree is TRUE
      sets ItIsFree FALSE
                                         sets ItIsFree FALSE
      now both are accessing the buffer at the same time
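
  To make the race concrete, here is a rough two-thread C sketch of the flawed ItIsFree flag. The worker loop, iteration count and violation counter are illustrative assumptions, and the counters themselves are unsynchronised, so the printed number is only indicative.

      /* Two threads use the naive test-then-set flag from the slide.   */
      /* Because the test and the set are separate steps, both threads  */
      /* can see ItIsFree as TRUE and enter the buffer code together.   */
      #include <pthread.h>
      #include <stdio.h>

      static volatile int ItIsFree = 1;    /* 1 = buffer not in use     */
      static volatile int inside   = 0;    /* threads currently inside  */
      static volatile int violations = 0;  /* times both were inside    */

      static void *worker(void *arg)
      {
          (void)arg;
          for (int i = 0; i < 1000000; i++) {
              if (ItIsFree) {              /* test ...                  */
                  ItIsFree = 0;            /* ... then set: NOT atomic  */
                  inside++;
                  if (inside > 1)
                      violations++;        /* race observed             */
                  inside--;
                  ItIsFree = 1;
              }
          }
          return NULL;
      }

      int main(void)
      {
          pthread_t a, b;
          pthread_create(&a, NULL, worker, NULL);
          pthread_create(&b, NULL, worker, NULL);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          printf("violations observed: %d\n", violations); /* often > 0 */
          return 0;
      }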

  10. Note: the section of code which accesses the shared memory is called the critical section. To ensure race conditions cannot occur we need mutual exclusion - only one process is in its critical section at a time.
  • To avoid race conditions:
      No two processes may be simultaneously in their critical sections.
      No assumption should be made about the speed or number of CPUs.
      Processes outside their critical regions should not be able to block other processes.
      No process should have to wait forever to enter its critical section.

  11. 4.1 Access control - TAS instruction
  • To control access we want an indivisible instruction which can test and set a variable in one operation:
      label:  TAS   variable     ; test the access variable
              BNE   label        ; wait for zero
              :
              critical section of code using the shared region
              :
              CLR.B variable     ; set to 0 to allow access
  • This is of little use, since it relies on busy-waiting (the TAS loop), which wastes CPU time.
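
  For comparison, the same idea can be sketched in portable C using the C11 atomic_flag type, whose test-and-set is indivisible like TAS; the function names below are assumptions for illustration, and the sketch has the same busy-waiting drawback the slide describes.

      /* A user-level analogue of the TAS loop using C11 atomics.       */
      #include <stdatomic.h>

      static atomic_flag region_lock = ATOMIC_FLAG_INIT;

      void enter_critical(void)
      {
          /* spin until the flag was previously clear (we now own it)   */
          while (atomic_flag_test_and_set(&region_lock))
              ;                             /* busy-waiting wastes CPU   */
      }

      void leave_critical(void)
      {
          atomic_flag_clear(&region_lock);  /* like CLR.B: allow access  */
      }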

  12. 4.2 Access control - Sleep and Wakeup
  • A process should sleep until the region is clear, then be woken up. System calls:
      sleep     send the calling process to sleep
      wakeup    wake a process up (i.e. make it ready to run)
  • The mechanism: when the producer has produced some output, it wakes the consumer and goes to sleep itself; when the consumer has consumed the input, it wakes the producer and sends itself to sleep. So the two processes alternate and wake each other.
  • What if a wakeup is lost?
      the producer reads the flag and finds it zero,
      but before it can go to sleep the consumer is scheduled;

  13. the consumer sends a wakeup, which gets lost, since nothing is asleep yet;
      the consumer then goes to sleep, waiting for the producer to wake it;
      the producer resumes and completes its sleep operation;
      both then sleep forever.
  • The solution is to store the wakeups - we then have the semaphore.

  14. 4.3 Access control - Semaphores
  • A semaphore is a cardinal variable with an associated list of sleeping processes, plus two atomic operations which can be performed on it.
  • The two operations are DOWN(s) and UP(s); both are system calls (s is the semaphore). Definition:
      DOWN(s): If s = 0 then
                   send the process to sleep
               Else
                   decrement s
               End
      UP(s):   If any processes are asleep on s then
                   wake one (i.e. make it ready to run)
               Else
                   increment s
               End
  • If s only takes the values 0 and 1 it is a binary semaphore, otherwise it is a general semaphore.

  15. • Semaphores are only used to enable processes to synchronise themselves, i.e. to ensure only one process enters a critical region at a time.
  • In typical use each process contains the following:
      DOWN(s)
      :
      critical section
      :
      UP(s)
  • Initially s is set to 1.
  • If process A runs first and does a DOWN, s is set to 0. Once A has completed the DOWN it can be interrupted, even if it is still in its critical section. Suppose this happens and process B is set running: if B does the DOWN it will be sent to sleep. Process A will eventually be set running again and will do an UP on exiting its critical region, which will wake up B. Note that we can have more than two processes.
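
  On a POSIX system this pattern can be sketched with counting semaphores, where sem_wait() plays the role of DOWN(s) and sem_post() the role of UP(s); the two worker threads and the shared counter below are assumptions for illustration.

      /* DOWN/UP pattern with a POSIX semaphore initialised to 1.       */
      #include <semaphore.h>
      #include <pthread.h>
      #include <stdio.h>

      static sem_t s;
      static int shared_counter = 0;      /* the shared resource        */

      static void *worker(void *arg)
      {
          (void)arg;
          for (int i = 0; i < 100000; i++) {
              sem_wait(&s);               /* DOWN(s): may block         */
              shared_counter++;           /* critical section           */
              sem_post(&s);               /* UP(s): wake a sleeper      */
          }
          return NULL;
      }

      int main(void)
      {
          pthread_t a, b;
          sem_init(&s, 0, 1);             /* binary semaphore, s = 1    */
          pthread_create(&a, NULL, worker, NULL);
          pthread_create(&b, NULL, worker, NULL);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          printf("counter = %d\n", shared_counter);  /* always 200000   */
          sem_destroy(&s);
          return 0;
      }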

  16. 4.4 Access control - Event counters
  • Like the semaphore, an event counter is a cardinal variable with an associated list of sleeping processes.
  • Apart from initialisation, there are three operations which can be performed on it:
      READ(e)      read the value of e
      ADVANCE(e)   increment e
      AWAIT(e,v)   wait until e has a value of v or more
  • Like the semaphore operations, these are atomic system calls.
  • For example, in the producer/consumer situation we could have:
      ine = 0;  oute = 1;
      pseq = 0; cseq = 0;

  17. Then the producer would be
      LOOP
        :
        produce an item
        inc(pseq)
        AWAIT(oute, pseq)
        put the item in the buffer
        ADVANCE(ine)
        :
      END
  and the consumer would be
      LOOP
        :
        inc(cseq)
        AWAIT(ine, cseq)
        get an item from the buffer
        ADVANCE(oute)
        :
      END
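
  Event counters are not a standard UNIX facility, but the three operations can be sketched in C with a pthread mutex and condition variable; READ, ADVANCE and AWAIT here are illustrative helpers, not real system calls.

      #include <pthread.h>

      typedef struct {
          unsigned long   value;           /* the event count e          */
          pthread_mutex_t lock;
          pthread_cond_t  changed;
      } eventcounter;

      #define EVENTCOUNTER_INIT \
          { 0, PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER }

      unsigned long READ(eventcounter *e)    /* read the value of e      */
      {
          pthread_mutex_lock(&e->lock);
          unsigned long v = e->value;
          pthread_mutex_unlock(&e->lock);
          return v;
      }

      void ADVANCE(eventcounter *e)          /* increment e, wake waiters */
      {
          pthread_mutex_lock(&e->lock);
          e->value++;
          pthread_cond_broadcast(&e->changed);
          pthread_mutex_unlock(&e->lock);
      }

      void AWAIT(eventcounter *e, unsigned long v)  /* wait until e >= v  */
      {
          pthread_mutex_lock(&e->lock);
          while (e->value < v)
              pthread_cond_wait(&e->changed, &e->lock);
          pthread_mutex_unlock(&e->lock);
      }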

  18. 5 UNIX IPC
  5.1 Pipes
  • Unix originally had just the pipe as its IPC mechanism.
  • The pipe is a convenient high-level mechanism, created with the system call
      int pipe(int filedes[2]);
  • pipe creates a pair of file descriptors pointing to a pipe i-node and places them in the array pointed to by filedes:
      filedes[0] is for reading,
      filedes[1] is for writing.
  • On success zero is returned; on error, -1.
      #include <iostream>
      #include <unistd.h>

      int fd[2], rval;
      if ((rval = pipe(fd)) < 0)
          std::cout << "error creating pipe";

  19. Performing read and write:
  • A read on an empty pipe blocks until some data is put into the pipe.
  • A process that fills a pipe blocks until some bytes are removed from the pipe.
  • A read returns zero bytes when end of file is met, i.e. when the pipe has no writing processes left.
  • A single pipe can be used to send information one way or the other:
      +--------+  write(p[1])            read(p[0])  +-------+
      |        | ======>=====================>====== |       |
      | parent |                                     | child |
      |        | ======<=====================<====== |       |
      +--------+  read(p[0])            write(p[1])  +-------+

  20. • Either the child or the parent writes, but not both at the same time.
  • To have simultaneous two-way communication you can use two pipes, say p and q. It is usual to close the pipe ends you don't want: e.g. after the fork the parent can close q[0] and p[1], and the child can close q[1] and p[0]. Thus the parent writes into q and reads from p, and the child writes into p and reads from q. A sketch of this arrangement follows.
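
  A minimal sketch of the two-pipe arrangement on a POSIX system might look as follows; the message strings are assumptions for illustration and error handling is kept to a bare minimum.

      /* Two-way parent/child communication with two pipes, p and q,    */
      /* closing the unused ends as described above.                    */
      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>
      #include <sys/wait.h>

      int main(void)
      {
          int p[2], q[2];        /* p: child -> parent, q: parent -> child */
          char buf[64];
          ssize_t n;
          pid_t pid;

          if (pipe(p) < 0 || pipe(q) < 0) {
              perror("pipe");
              return 1;
          }

          pid = fork();
          if (pid < 0) {
              perror("fork");
              return 1;
          }

          if (pid == 0) {                 /* child                         */
              close(q[1]);                /* child only reads from q ...   */
              close(p[0]);                /* ... and only writes into p    */
              n = read(q[0], buf, sizeof buf - 1);
              buf[n > 0 ? n : 0] = '\0';
              printf("child got: %s\n", buf);
              write(p[1], "reply from child", 16);
              close(q[0]);
              close(p[1]);
              exit(0);
          }

          close(q[0]);                    /* parent only writes into q ... */
          close(p[1]);                    /* ... and only reads from p     */
          write(q[1], "hello from parent", 17);
          n = read(p[0], buf, sizeof buf - 1);
          buf[n > 0 ? n : 0] = '\0';
          printf("parent got: %s\n", buf);
          close(q[1]);
          close(p[0]);
          wait(NULL);
          return 0;
      }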

  21. 6. Message Passing System
  • Communication among user processes is accomplished by passing messages. At least two operations are required:
      send(message)
      receive(message)
  • Messages can be of fixed or variable size.
      Fixed size: system-level implementation is easy, but the programming task is more difficult.
      Variable size: more complex system-level implementation, but programming is much simpler.

  22. Several methods for the logical implementation:
  • Direct or indirect communication.
  • Symmetric or asymmetric communication.
  • Automatic or explicit buffering.
  • Send by copy or send by reference.
  • Fixed-size or variable-size messages.

  23. 6.1 Direct communication
  • Symmetric naming: the sender and receiver each name the other process.
      send(p, message)      send a message to process p
      receive(q, message)   receive a message from process q
  • Asymmetric naming: only the sender names the recipient; the receiver gets the sender's id rather than naming it.
      send(p, message)      send a message to process p
      receive(id, message)  receive a message from any process

  24. Conditions:
  • A link should be established automatically between the two processes.
  • A link should be associated with exactly two processes.
  • Exactly one link should exist between each pair of processes.

  25. 6.2 Indirect communication
  • Messages are sent to and received from mailboxes (or ports).
  • Implementation:
      create a new mailbox;
      send and receive messages through the mailbox;
      delete the mailbox.
      send(A, message)      send a message to mailbox A
      receive(A, message)   receive a message from mailbox A
  • Communication link properties:
      A link is only established if both processes are members of the same mailbox.
      A link can be associated with more than two processes.
      A number of different links can exist between each pair of communicating processes.
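
  On POSIX systems a mailbox can be sketched with a named message queue; the queue name "/mbox_A", the sizes, and the single-process send/receive below are assumptions for illustration (linking with -lrt may be needed on Linux).

      /* Mailbox-style send/receive via a POSIX message queue.          */
      #include <fcntl.h>
      #include <mqueue.h>
      #include <stdio.h>

      int main(void)
      {
          struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
          char buf[64];

          /* create (or open) mailbox A */
          mqd_t mbox = mq_open("/mbox_A", O_CREAT | O_RDWR, 0600, &attr);
          if (mbox == (mqd_t)-1) {
              perror("mq_open");
              return 1;
          }

          mq_send(mbox, "hello mailbox", 14, 0);     /* send(A, message)    */
          mq_receive(mbox, buf, sizeof buf, NULL);   /* receive(A, message) */
          printf("got: %s\n", buf);

          mq_close(mbox);
          mq_unlink("/mbox_A");                      /* delete mailbox      */
          return 0;
      }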

  26. 6.3 Synchronization and buffering
  • Synchronization options:
      blocking send
      non-blocking send
      blocking receive
      non-blocking receive
  • Buffering options:
      zero capacity
      finite capacity
      infinite capacity
