Distributed Operating Systems CS551
Colorado State University at Lockheed-Martin
Lecture 3 -- Spring 2001
CS551: Lecture 3
• Topics
  • Real Time Systems (and networks)
  • Interprocess Communication (IPC)
    • message passing
    • pipes
    • sockets
    • remote procedure call (RPC)
  • Memory Management
CS-551, Lecture 3
Real-time systems
• Real-time systems:
  • systems where “the operating system must ensure that certain actions are taken within specified time constraints” (Chow & Johnson, Distributed Operating Systems & Algorithms, Addison-Wesley, 1997)
  • systems that “interact with the external world in a way that involves time … When the answer is produced is as important as which answer is produced.” (Tanenbaum, Distributed Operating Systems, Prentice-Hall, 1995)
Real-time system examples
• Examples of real-time systems:
  • automobile control systems
  • stock trading systems
  • computerized air traffic control systems
  • medical intensive care units
  • robots
  • space vehicle computers (space and ground)
  • any system that requires bounded response time
Soft real-time systems
• Soft real-time systems:
  • “missing an occasional deadline is all right” (Tanenbaum, 1995)
  • “have deadlines but are judged to be in working order as long as they do not miss too many deadlines” (Chow & Johnson, 1997)
  • Example: a multimedia system
Hard real-time systems
• Hard real-time systems:
  • “only judged to be correct if every task is guaranteed to meet its deadline” (Chow & Johnson, 1997)
  • “a single missed deadline … is unacceptable, as this might lead to loss of life or an environmental catastrophe” (Tanenbaum, 1995)
Hard/soft real-time systems
• “A job should be completed before its deadline to be of use (in soft real-time systems) or to avert disaster (in hard real-time systems). The major issue in the design of real-time operating systems is the scheduling of jobs in such a way that a maximum number of jobs satisfy their deadlines.” (Singhal & Shivaratri, Advanced Concepts in Operating Systems, McGraw-Hill, 1994)
Firm real-time systems
• Firm real-time systems:
  • “similar to a soft real-time system, but tasks that have missed their deadlines are discarded” (Tanenbaum, 1995)
  • “where missing a deadline means you have to kill off the current activity, but the consequence is not fatal” (Chow & Johnson, 1997)
  • Example: a partially filled bottle on an assembly line, which is discarded rather than reprocessed
Types of real-time systems
• Reactive: interacts with the environment
• Embedded: controls specialized hardware
• Event-triggered: unpredictable, asynchronous
• Time-triggered: predictable, synchronous, periodic
Myths of real-time systems (Tanenbaum)
• “Real-time systems are about writing device drivers in assembly code.”
  • they involve much more than device drivers
• “Real-time computing is fast computing.”
  • not necessarily; computer-controlled telescopes are real-time but not fast
• “Fast computers will make real-time systems obsolete.”
  • no -- only more will be expected of them
Message passing
• A form of communication between two processes
• A physical copy of the message is sent from one process to the other
• Blocking vs non-blocking
Blocking message passing
• The sending process must wait after a send until the receiver acknowledges the message
• The receiving process must wait for the expected message from the sending process
• A form of synchronization
• Receipt is detected
  • by polling a common buffer
  • by interrupt
Figure 3.1 Blocking Send and Receive Primitives: No Buffer. (Galli, p.58)
Figure 3.2 Blocking Send and Receive Primitives with Buffer. (Galli, p.58)
Non-blocking message passing
• Asynchronous communication
• The sending process may continue immediately after sending a message -- no wait needed
• The receiving process accepts and processes the message, then continues on
• Control
  • buffer -- the receiver can tell whether the message is still there
  • interrupt
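The blocking and non-blocking semantics above can be sketched with a bounded buffer standing in for the kernel’s message queue. This is an illustrative Python sketch, not from the course materials: `queue.Queue` plays the role of the shared buffer, a blocking `get` models a blocking receive, and `get_nowait` models a non-blocking receive that reports an empty buffer immediately.

```python
import queue
import threading

mailbox = queue.Queue(maxsize=1)   # stand-in for a one-message kernel buffer
results = []

def receiver():
    msg = mailbox.get()            # blocking receive: waits until a message arrives
    results.append(msg)

t = threading.Thread(target=receiver)
t.start()
mailbox.put("request")             # blocking send: would wait if the buffer were full
t.join()

# non-blocking receive on an empty mailbox returns control immediately
try:
    mailbox.get_nowait()
except queue.Empty:
    results.append("empty")
```

The sender and receiver synchronize on the first message (the receiver is blocked until the `put`), while the final `get_nowait` shows the non-blocking case: the caller is told at once that no message is available instead of waiting.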
Process addressing
• One-to-one addressing
  • explicit
  • implicit
• Group addressing
  • one-to-many
  • many-to-one
  • many-to-many
One-to-one addressing
• Explicit addressing
  • a specific process must be given as a parameter
• Implicit addressing
  • the name of a service is used as the parameter
  • the server is willing to communicate with any client
  • acts like send_any and receive_any
Figure 3.3 Implicit Addressing for Interprocess Communication. (Galli, p.59)
Figure 3.4 Explicit Addressing for Interprocess Communication. (Galli, p.60)
Group addressing
• One-to-many
  • one sender, multiple receivers
  • broadcast
• Many-to-one
  • multiple senders, but only one receiver
• Many-to-many
  • difficult to assure the order in which messages are received
Figure 3.5 One-to-Many Group Addressing. (Galli, p.61)
Many-to-many ordering
• Incidental ordering
  • least structured, fastest
  • acceptable if related messages may be received in any order
• Uniform ordering
  • all receivers receive the messages in the same order
• Universal ordering
  • all messages must be received in exactly the same order as sent
Figure 3.6 Uniform Ordering. (Galli, p.62)
Pipes
• “interprocess communication APIs”
• “implemented by a finite-size, FIFO byte-stream buffer maintained by the kernel”
• “serves as a unidirectional communication link so that one process can write data into the tail end of the pipe while another process may read from the head end of the pipe”
• Chow & Johnson, Distributed Operating Systems & Algorithms, Addison-Wesley (1997)
Pipes, continued
• “created by a pipe system call, which returns two pipe descriptors (similar to a file descriptor), one for reading and the other for writing … using ordinary write and read operations” (C&J)
• “exists only for the time period when both reader and writer processes are active” (C&J)
• “the classical producer and consumer IPC problem” (C&J)
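The pipe system call described above can be sketched in Python on a Unix system (an illustrative sketch, not course code): `os.pipe` returns the read and write descriptors, and a forked child inherits both, giving the classic producer/consumer pair over a unidirectional kernel buffer.

```python
import os

r, w = os.pipe()                   # kernel returns two descriptors: read end, write end
pid = os.fork()                    # child inherits both descriptors

if pid == 0:                       # child: the producer
    os.close(r)                    # close the unused read end
    os.write(w, b"producer data")  # ordinary write into the tail of the pipe
    os.close(w)
    os._exit(0)
else:                              # parent: the consumer
    os.close(w)                    # close the unused write end
    data = os.read(r, 1024)        # ordinary read from the head of the pipe
    os.close(r)
    os.waitpid(pid, 0)             # reap the child
```

Closing the unused ends matters: the reader sees end-of-file only once every write descriptor is closed, which is how the consumer knows the producer is done.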
Figure 3.7 Interprocess Communication Using Pipes. (Galli, p.63)
Unnamed pipes
• “Pipe descriptors are shared by related processes” (e.g., parent and child) (Chow & Johnson, 1997)
• Such a pipe is considered unnamed
• It cannot be used by unrelated processes
  • a limitation
Named pipes
• “For unrelated processes, there is a need to uniquely identify a pipe since pipe descriptors cannot be shared. One solution is to replace the kernel pipe data structure with a special FIFO file. Pipes with a path name are called named pipes.” (C&J)
• “Since named pipes are files, the communicating processes need not exist concurrently” (C&J)
Named pipes, continued
• “Use of named pipes is limited to a single domain within a common file system.” (C&J)
  • a limitation
• This limitation motivates sockets
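A minimal named-pipe sketch (Python on a Unix system; the FIFO path is an arbitrary illustration): `os.mkfifo` creates the special FIFO file, whose path name lets unrelated processes find it. Here a thread stands in for a second process.

```python
import os
import tempfile
import threading

# create a FIFO special file; its path name identifies the pipe
fifo = os.path.join(tempfile.mkdtemp(), "fifo")
os.mkfifo(fifo)

def writer():
    with open(fifo, "w") as f:     # open-for-write blocks until a reader opens
        f.write("hello")

t = threading.Thread(target=writer)
t.start()
with open(fifo) as f:              # open-for-read blocks until a writer opens
    data = f.read()                # reads until the writer closes its end
t.join()
os.unlink(fifo)                    # remove the FIFO file when done
```

Note the rendezvous behavior: each side's `open` blocks until the other side opens, so the reader and writer synchronize at the FIFO even though they never share descriptors.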
Sockets
• “a communication endpoint of a communication link managed by the transport services” (C&J)
• “created by making a socket system call that returns a socket descriptor for subsequent network I/O operations, including file-oriented read/write and communication-specific send/receive” (C&J)
Figure 1.4 The ISO/OSI Reference Model. (Galli, p.9)
Sockets, continued
• “A socket descriptor is a logical communication endpoint (LCE) that is local to a process; it must be associated with a physical communication endpoint (PCE) for data transport. A physical communication endpoint is specified by a network host address and transport port pair. The association of a LCE with a PCE is done by the bind system call.” (C&J)
Types of socket communication
• Unix domain
  • local to a single system
• Internet domain
  • world-wide
  • an address includes a port and an IP address
Types, continued
• Connection-oriented
  • uses TCP, “a connection-oriented reliable stream transport protocol” (C&J)
• Connectionless
  • uses UDP, “a connectionless unreliable datagram transport protocol” (C&J)
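The two styles can be contrasted in a short sketch (Python; the loopback address and port 0 are illustrative choices, with port 0 asking the OS for any free port). The TCP half follows the socket/bind/listen/accept/connect sequence; the UDP half needs only bind plus sendto/recvfrom, with no connection.

```python
import socket
import threading

def echo_server(srv):
    conn, _ = srv.accept()             # rendezvous: accept the client's connection
    conn.sendall(conn.recv(1024))      # read the request, write it back as the reply
    conn.close()

# connection-oriented (TCP): socket -> bind -> listen -> accept
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))             # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=echo_server, args=(srv,))
t.start()

# client side: socket -> connect -> write -> read
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.sendall(b"ping")
tcp_reply = cli.recv(1024)
cli.close()
t.join()
srv.close()

# connectionless (UDP): socket -> bind -> sendto/recvfrom
udp_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_a.bind(("127.0.0.1", 0))
udp_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_b.sendto(b"datagram", udp_a.getsockname())
msg, _ = udp_a.recvfrom(1024)
udp_a.close()
udp_b.close()
```

The bind call in both halves is exactly the LCE-to-PCE association described above: it ties the process-local socket descriptor to a host-address/port pair.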
Connection-oriented socket communication
• Adapted from Chow & Johnson

  Server                        Client
  socket                        socket
  bind
  listen
  accept  <-- rendezvous -----  connect
  read    <-- request --------  write
  write   --- reply --------->  read
Connectionless socket communication
• Adapted from Chow & Johnson

  Peer Process                    Peer Process
  socket (LCE)                    socket (LCE)
  bind                            bind
  (PCE)  <-- sendto/recvfrom -->  (PCE)
Socket support
• Unix primitives
  • socket, bind, connect, listen, send, recv, shutdown
  • available through C libraries
• Java classes
  • Socket
  • ServerSocket
Figure 3.8 Socket Analogy. (Galli, p.66)
Figure 3.9 Remote Procedure Call Stubs. (Galli, p.73)
Figure 3.10 Establishing Communication for RPC. (Galli, p.74)
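The stub idea behind RPC can be illustrated with Python's standard xmlrpc modules, which stand in for the machinery Galli describes (a sketch under that substitution, not the book's mechanism): the server registers a procedure, and the client-side proxy object plays the role of the client stub, marshalling arguments into a request message and unmarshalling the reply.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# server side: export a procedure for remote invocation
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]            # port 0 above let the OS choose
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# client side: the proxy is the stub; a local-looking call becomes a
# request message, a remote execution, and an unmarshalled reply
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)
server.shutdown()
```

From the caller's point of view `proxy.add(2, 3)` looks like an ordinary procedure call; the blocking, the message exchange, and the marshalling are all hidden in the stub, which is precisely the transparency RPC aims for.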
Table 3.1 Summary of IPC Mechanisms and Their Use in Distributed System Components. (Galli, p.77)
Memory Management
• Review
• Simple memory model
• Shared memory model
• Distributed shared memory
• Memory migration
Virtual memory (pages, segments)
• Virtual memory
• Memory management unit
• Pages: uniform size
  • waste appears as internal fragmentation (unused space inside an allocated page)
• Segments: varying sizes
  • waste appears as external fragmentation (unusable holes between segments)
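The internal fragmentation of a paged allocation is simple arithmetic: round the request up to whole pages and subtract. A small sketch (Python; the 4 KB page size is an assumed, commonly used value):

```python
import math

PAGE_SIZE = 4096                       # bytes per page (assumed, common value)

def internal_fragmentation(request):
    """Wasted bytes inside the last page of a paged allocation."""
    pages = math.ceil(request / PAGE_SIZE)
    return pages * PAGE_SIZE - request

# a 10 000-byte request needs 3 pages; the tail of the third page is wasted
waste = internal_fragmentation(10_000)
```

A 10 000-byte request occupies 3 × 4096 = 12 288 bytes, wasting 2288 bytes of the last page; a request that is an exact multiple of the page size wastes nothing.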
Figure 4.1 Fragmentation in Page-Based Memory versus a Segment-Based Memory. (Galli, p.83)
Page replacement and placement algorithms
• Page fault
• Thrashing
• Page replacement: LRU (NRU), second chance
• Segment placement: first fit, best fit, worst fit
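LRU replacement can be sketched with an ordered dictionary tracking recency (an illustrative Python sketch, not the text's implementation): a hit moves the page to the most-recently-used end, and a fault with full frames evicts from the least-recently-used end.

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults for LRU replacement over a page reference string."""
    resident = OrderedDict()           # keys ordered least- to most-recently used
    faults = 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page) # hit: mark page most recently used
        else:
            faults += 1                # fault: page must be brought in
            if len(resident) == frames:
                resident.popitem(last=False)  # evict the least recently used page
            resident[page] = None
    return faults

# a classic reference string with 3 frames
faults = lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3)
```

Running the same reference string with more frames shows why fault counts, not raw speed, drive the choice of replacement policy; a rising fault rate under memory pressure is exactly the thrashing noted above.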
Figure 4.2 Algorithms for Choosing Segment Location. (Galli, p.84)
Simple memory model
• Parallel UMA systems
  • thrashing can occur when parallel processes each want their own pages in memory
  • servicing all memory requests is time-expensive unless memory is large
  • virtual memory is expensive
  • caching can be expensive
Shared memory model
• NUMA
• Memory bottleneck
• Cache consistency
  • snoopy cache
  • enforce critical regions
  • disallow caching of shared memory data
Figure 4.3 Snoopy Cache. (Galli, p.89)
Distributed shared memory
• Example machine: BBN Butterfly
• Reader-writer problems