
LINF 2345 Processes



  1. LINF 2345 Processes Seif Haridi Peter Van Roy S. Haridi and P. Van Roy, LINF2345

  2. Processes • Threads and processes • Threads in distributed systems • Code migration • Software agents

  3. Threads and Processes

  4. Basic Concepts (1) • Processor: An active entity that understands a set of instructions, that is, it is able to execute these instructions • Processors can be hardware (physical) or software (virtual) • Virtual processor: A processor built in software, on top of one or more physical processors • The operating system creates virtual processors, each called a process (often defined as “a program in execution”) • Great care is taken to ensure that these virtual processors cannot maliciously or inadvertently affect the correctness of each other’s behavior (“concurrency transparency”) • Process context: The state needed to specify a process execution. It is voluminous: address translation, resources in use, accounting information.

  5. Basic Concepts (2) • Thread: A minimal software processor in whose context a series of instructions can be executed • No effort is made to isolate threads from each other: they share data and resources directly • Threads implement a partial order of instruction execution, nothing more • Thread context: A small amount of state, usually CPU state (registers, pointer to thread stack) plus some extra state (e.g., currently blocked on a mutex variable) • The performance of a multithreaded application need not be worse than that of its single-threaded counterpart
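The sharing described above can be seen in a few lines of Python (a sketch of our own, not from the slides): all four threads run in one process and update the very same variable, and the lock is needed precisely because nothing isolates them from each other.

```python
import threading

# Cooperating threads sharing data directly: every thread sees the same
# `counter` in the process's shared address space.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # unsynchronized `counter += 1` could lose updates
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: all threads updated the same memory
```

With processes instead of threads, each worker would get its own copy of `counter` and explicit inter-process communication would be needed to combine the results.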

  6. Basic Concepts (3) • Competitive concurrency • Execution of processes in the context of a physical processor. Each process has its own goals to achieve, and processes compete for system resources. Usually they execute independently written programs. • Cooperative concurrency • Execution of threads in the context of a process. Threads are designed to work together to achieve a common goal. Usually they execute within a single program.

  7. Context Switching • The act of changing which concurrent entity (process/thread) is currently executing • Process context • The information needed to specify the execution of a process. Assumes that the processor does not change. • Thread context • The information needed to specify the execution of a thread. Assumes that the process does not change.

  8. Context Switching • Observation 1 • Threads share the same address space. Thread context switching can be done entirely independently of the operating system. • Observation 2 • Process switching is generally more expensive, as it involves getting the OS in the loop, i.e., trapping to the kernel • Observation 3 • Creating and destroying threads is much cheaper than doing so for processes.

  9. Threads and Operating Systems • Main issue: Should an OS kernel provide threads, or should they be implemented as user-level packages?

  10. User Space Solution • We’ll have nothing to do with the kernel, so all operations can be completely handled within a single process • Implementations can be extremely efficient • Observation: All services provided by the kernel are done on behalf of the process in which a thread resides • If the kernel decides to block a thread, the entire process will be blocked • Conclusion: All I/O and external events should be non-blocking

  11. User Space Solution • Threads are used when there are lots of external events • Threads block on a per-event basis • The process never blocks • If the kernel can’t distinguish threads, how can it support signaling events to them? • Solution: Central routine • Event requests are indexed by threads • Checks regularly for new external events • Moves threads from blocked to runnable and vice versa
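The central-routine idea can be sketched in Python, with generators standing in for user-level threads. All names here (`make_thread`, `scheduler`, the event names) are illustrative assumptions, not a real thread package: each "thread" yields the event it blocks on, and the central routine regularly polls for events and moves threads between blocked and runnable, so the process as a whole never blocks.

```python
from collections import deque

def make_thread(name, events_needed, log):
    def body():
        for ev in events_needed:
            yield ev          # block on a per-event basis
        log.append(name)      # record that this thread finished
    return body()

def scheduler(threads, poll_events):
    runnable = deque(threads)
    blocked = {}              # event name -> threads waiting on it
    while runnable or blocked:
        while runnable:
            t = runnable.popleft()
            try:
                ev = next(t)  # run the thread until it blocks
                blocked.setdefault(ev, []).append(t)
            except StopIteration:
                pass          # thread terminated
        for ev in poll_events():              # check for new external events
            for t in blocked.pop(ev, []):
                runnable.append(t)            # blocked -> runnable
        if not runnable and blocked:
            break             # no events arrived: give up rather than spin

log = []
t1 = make_thread("t1", ["disk"], log)
t2 = make_thread("t2", ["net"], log)
events = iter([["net"], ["disk"]])            # simulated external events
scheduler([t1, t2], lambda: next(events, []))
print(log)  # ['t2', 't1']: each thread ran once its event arrived
```

A production user-level package would poll real non-blocking I/O (e.g., via `select`) instead of a canned event list, but the scheduling structure is the same.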

  12. Kernel Space Solution • The whole idea is to have the kernel contain the implementation of a thread package. This means that all thread operations are implemented as system calls.

  13. Kernel Space Solution • Operations that block a thread are no longer a problem: the kernel schedules another available thread within the same process • Handling external events is simple: the kernel (which catches all events) schedules the thread associated with the event • If there are multiple physical processors, performance can be increased by mapping threads to different processors • One problem is the loss of efficiency due to the fact that each thread operation requires a trap to the kernel • Another problem is loss of portability: an implementation is limited to a single operating system, and porting it to another is difficult

  14. Threads and OS • Many systems mix user-level and kernel-level threads • Lightweight process (LWP): several per (heavyweight) process, supported by the kernel • Complemented by a user-level thread package • Others, like Mozart/Oz and Erlang, do not • www.mozart-oz.org • www.erlang.org • They support only user-level threads • This gives portability and efficiency

  15. Solaris Threads • Basic idea: Introduce a two-level threading approach: lightweight processes (LWP) that can execute user-level threads

  16. Solaris Threads

  17. Solaris Threads • When a user-level thread does a system call, the LWP that is executing that thread blocks. The thread remains bound to the LWP. • The kernel can simply schedule another LWP having a runnable thread bound to it. Note that this thread can switch to any other runnable thread currently in user space.

  18. Solaris Threads • When a thread calls a blocking user-level operation, we can simply do a context switch to a runnable thread, which is then bound to the same LWP (user-level context switch) • When there are no threads to schedule, an LWP may remain idle, and may even be removed (destroyed) by the kernel

  19. Threads in Distributed Systems

  20. Threads in Distributed Systems • Let’s take a look at threads in the context of a common distributed programming idiom, namely the client/server • Multithreaded clients and servers • Other aspects of clients and servers • Later on in the course, when we look at transparent distribution, we will see other ways to use threads • Dataflow execution • Decentralized (peer-to-peer) execution

  21. Multithreaded Clients • Main issue is hiding network latency

  22. Multithreaded Clients • Main issue is hiding network latency • Multithreaded Web client • Web browser scans an incoming HTML page, and finds that more files need to be fetched • Each file is fetched by a separate thread, each doing a (blocking) HTTP request • As files come in, the browser displays them

  23. Multithreaded Clients • Main issue is hiding network latency • Multiple RPCs/RMIs • A client does several RPCs at the same time, each one by a different thread • It then waits until all results have been returned • Note: if RPCs are to different servers, we may have a linear speed-up compared to doing RPCs one after the other
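The multiple-RPC pattern can be sketched as follows. `fake_rpc` is an assumption of ours that simulates a remote call with a fixed network delay; a real client would issue an actual RPC/RMI in its place. Because the three calls overlap, the total latency is roughly one delay, not three.

```python
import threading
import time

def fake_rpc(server, results, i):
    time.sleep(0.05)                       # stand-in for one network round trip
    results[i] = f"reply from {server}"    # each thread fills its own slot

servers = ["s1", "s2", "s3"]
results = [None] * len(servers)
start = time.monotonic()

# One thread per RPC, all issued at the same time:
threads = [threading.Thread(target=fake_rpc, args=(s, results, i))
           for i, s in enumerate(servers)]
for t in threads:
    t.start()
for t in threads:
    t.join()                               # wait until all results have returned
elapsed = time.monotonic() - start

print(results)
# The calls overlap, so elapsed is close to 0.05 s rather than 3 * 0.05 s.
```

Done sequentially, the same three calls would take the sum of the delays; this is the linear speed-up the slide mentions when the RPCs go to different servers.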

  24. Multithreaded Servers • Main issue is improved performance and better program structure

  25. Improved Performance in Servers • Starting a thread to handle an incoming request is much cheaper than starting a new process • Having a single-threaded server prohibits simply scaling the server to a multiprocessor system • As with clients: hide network latency by reacting to the next request while the previous one is being replied to

  26. Better Program Structure in Servers • Most servers have high I/O demands. Using simple, well-understood blocking kernel calls simplifies the overall structure. • Multithreaded programs tend to be smaller and easier to understand, since the simplified flow of control reflects the structure of the (concurrent) application

  27. Clients • User interfaces • Other client-side software

  28. User interfaces • Essence: A major part of client-side software is focused on (graphical) user interfaces • The X kernel and an X application may be on different machines

  29. User interfaces • Compound documents: Make the user interface application-aware to allow inter-application communication • drag-and-drop: move objects to other positions on the screen, possibly invoking interaction with other applications • in-place editing: integrate several applications at user-interface level (word processing + drawing facilities) • Illusion of single desktop, with network operations underneath • User interface transparency • This is one way to hide network operations; we will see many more!

  30. Client-Side Software • Essence: Often focused on providing distribution transparency

  31. Client-Side Software • Essence: Often focused on providing distribution transparency • access transparency • client-side stubs for RPCs and RMIs • location/migration transparency • let client-side software keep track of actual location
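A client-side stub for access transparency can be sketched in a few lines. Everything here is illustrative (the class name, the JSON wire format, and `fake_transport`, which stands in for the network): method calls on the stub look like local calls, but are marshalled into request messages behind the scenes.

```python
import json

class Stub:
    """Client-side stub: makes remote calls look like local method calls."""
    def __init__(self, transport):
        self._transport = transport       # the "network"; a function here

    def __getattr__(self, method):
        def call(*args):
            raw = json.dumps({"method": method, "args": list(args)})  # marshall
            reply = self._transport(raw)  # would be a network round trip
            return json.loads(reply)["result"]                        # unmarshall
        return call

# A local stand-in for the remote server, so the sketch runs by itself:
def fake_transport(raw):
    request = json.loads(raw)
    return json.dumps({"result": sum(request["args"])})

remote = Stub(fake_transport)
print(remote.add(1, 2, 3))  # looks like a local call; the stub did the marshalling
```

Swapping `fake_transport` for a real socket send/receive would give the client access transparency without changing any calling code.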

  32. Client-Side Software • Essence: Often focused on providing distribution transparency • replication transparency: multiple invocations handled by client stub • failure transparency: can often be placed only at client (we’re trying to mask server and communication failures)

  33. Servers • General server organization • Object servers

  34. General Organization Basic model • A server is a process that waits for incoming service requests at a specific transport address • In practice, there is a one-to-one mapping between a port and a service

  35. General Organization Basic model • In practice, there is a one-to-one mapping between a port and a service

  36. Super Servers • Servers that listen to several ports, i.e., provide several independent services. • In practice, when a service request comes in, they start a process to handle the request

  37. Iterative vs. concurrent servers • Iterative servers can handle only one client at a time, in contrast to concurrent servers • Iterative servers are sequential programs • Concurrent servers use multiple threads • New thread for each request (multithreaded) • New process for each request (typical Unix style) • S. Haridi and P. Van Roy, LINF2345
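A minimal concurrent (thread-per-request) server can be sketched as below; the "service" (upcasing the request) is our own toy example. The main loop only accepts connections and spawns a thread per client, so a slow client does not block the others. An iterative server would instead call `handle()` inline in the accept loop.

```python
import socket
import threading

def handle(conn):
    """Service one client: read the request, reply with it upcased."""
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())

def serve(sock, n_clients):
    for _ in range(n_clients):
        conn, _addr = sock.accept()
        # Concurrent server: new thread for each request.
        threading.Thread(target=handle, args=(conn,)).start()

server = socket.socket()
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen()
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server, 2), daemon=True).start()

# Two clients in the same process, to exercise the server:
replies = []
for msg in (b"hello", b"world"):
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(msg)
        c.shutdown(socket.SHUT_WR)   # signal end of request
        replies.append(c.recv(1024))
server.close()
print(replies)  # [b'HELLO', b'WORLD']
```

The "new process for each request" variant would call `os.fork()` (or spawn a process) where the thread is created, at a noticeably higher cost per request.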

  38. Out-of-Band Communication • Issue: Is it possible to interrupt a server once it has accepted (or is in the process of accepting) a service request?

  39. Out-of-Band Communication: Solution 1 • Issue: Is it possible to interrupt a server once it has accepted (or is in the process of accepting) a service request? • Use a separate port for urgent data (possibly per service request) • Server has a separate thread (or process) waiting for incoming urgent messages • When an urgent message comes in, the associated service request is put on hold • Note: this requires the OS to support high-priority scheduling of specific threads or processes

  40. Out-of-Band Communication: Solution 2 • Issue: Is it possible to interrupt a server once it has accepted (or is in the process of accepting) a service request? • Use the out-of-band communication facilities of the transport layer • Example: TCP allows urgent messages to be sent over the same connection • Urgent messages can be caught using OS signaling techniques

  41. Servers and State: Stateless Servers • Never keep accurate information about the status of a client after having handled a request • Don’t record whether a file has been opened (simply close it again after access) • Don’t promise to invalidate a client’s cache • Don’t keep track of your clients

  42. Servers and State: Stateless Servers: Consequences • Clients and servers are completely independent • State inconsistencies due to client or server crashes are reduced • Possible loss of performance because, e.g., a server cannot anticipate client behavior (think of prefetching file blocks)

  43. Servers and State: Stateless Servers: Consequences • Clients and servers are completely independent • State inconsistencies due to client or server crashes are reduced • Possible loss of performance because, e.g., a server cannot anticipate client behavior (think of prefetching file blocks) • Question: Does connection-oriented communication fit into a stateless design?

  44. Servers and State: Stateful Servers • Stateful servers keep track of the status of clients: • Record that a file has been opened, so that prefetching can be done • Know which data a client has cached, and allow clients to keep local copies of shared data • For a Web server, state stored at the client (a “cookie”) gives the effect of a stateful server while keeping the server itself stateless • Observation: The performance of stateful servers can be extremely high, provided clients are allowed to keep local copies. As it turns out, reliability is not a major problem.
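The cookie idea above can be sketched as follows. The function and field names are illustrative assumptions, not a real HTTP API: the server keeps no per-client state, and instead hands the state back to the client, which returns it with every request.

```python
def stateless_server(request, cookie=None):
    """Handle one request; all per-client state travels in the cookie."""
    visits = (cookie or {}).get("visits", 0) + 1
    reply = f"hello {request['user']}, visit #{visits}"
    return reply, {"visits": visits}   # updated cookie goes back to the client

# The client stores the cookie between requests:
cookie = None
replies = []
for _ in range(3):
    reply, cookie = stateless_server({"user": "alice"}, cookie)
    replies.append(reply)
print(replies[-1])  # "hello alice, visit #3"
```

The server can be restarted (or replicated) between any two requests without losing anything, because the client carries the state; that is exactly the stateless-server benefit with the stateful-server effect.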

  45. Object Servers • A server tailored to support distributed objects • Some concepts: servant, skeleton, object adapter • Servant: The actual implementation of an object, sometimes containing only method implementations • Java or C++ classes • Doesn’t need to be in an OO language • Collection of C or COBOL functions that act on structs, records, database tables, etc.

  46. Object Servers • Skeleton: Server-side stub for handling network I/O • Unmarshalls incoming requests, and calls the appropriate servant code • Marshalls results and sends reply message • Generated from interface specifications
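A hand-written skeleton sketch, to make the unmarshall/dispatch/marshall steps concrete. In practice the skeleton is generated from an interface specification; the `Counter` servant and the JSON wire format here are our own assumptions.

```python
import json

class Counter:
    """A toy servant: the actual implementation of the object."""
    def __init__(self):
        self.value = 0

    def add(self, n):
        self.value += n
        return self.value

def skeleton(servant, raw_request):
    """Server-side stub: unmarshall, dispatch to the servant, marshall."""
    request = json.loads(raw_request)              # unmarshall incoming request
    method = getattr(servant, request["method"])   # call appropriate servant code
    result = method(*request["args"])
    return json.dumps({"result": result})          # marshall the reply

servant = Counter()
reply = skeleton(servant, '{"method": "add", "args": [5]}')
print(reply)  # {"result": 5}
```

An object adapter would sit in front of this: it would pick (and if necessary activate) the right servant for the referenced object before passing the request to its skeleton.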

  47. Object Servers • Object adapter: The “manager” of a set of objects • Is the first to inspect incoming requests • Ensures the referenced object is activated (requires identification of the servant) • Passes the request to the appropriate skeleton, following a specific activation policy • Responsible for generating object references

  48. Object Servers • Observation: Object servers determine how their objects are constructed

  49. Code Migration

  50. Code Migration • Approaches to code migration • Migration and local resources • Migration in heterogeneous systems
