
Distributed Programs



Presentation Transcript


  1. Distributed Programs Clients and servers

  2. Distributed computations • Concurrent programs • Processes communicate by message passing • Execution typically on network architectures • Networks of workstations • Distributed memory parallel machines

  3. Paradigms for process interactions • Several models • Networks of filters, clients, servers • Heartbeat algorithms • Probe/echo algorithms • Broadcast algorithms • Token-passing algorithms • Decentralised servers • Bags of tasks

  4. Concurrent programming • Construct a program containing multiple processes that cooperate in performing some task • Design task given a problem: • How many processes? • How should they interact? • Application and hardware dependent • Critical problem: communication between processes

  5. Historical perspective • Operating systems first • Independent device controller tasks • User tasks • Single processor systems used multiprogramming • Processes executed in an interleaved manner • Multiprocessor systems • Shared memory multiprocessors • Multicomputers • Networks

  6. Historical perspective cont'd • Examples of concurrent programs • Operating systems • Window systems on PCs or workstations • Transaction processing in databases • File servers in networks • Scientific computations on large amounts of data • Often called parallel programs

  7. Distributed programs • Processes communicate by message passing • Execute typically on distributed architectures • Typically (but not exclusively) no shared memory

  8. Distributed programs • Four basic kinds of processes • Filters • Clients • Servers • Peers

  9. Filter • Data transformer • Receives streams of data from its input channels • Performs some computation on those values • Sends streams of results to its output channels • Many UNIX user-level commands are filters
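As a minimal sketch of a filter, Python threads and `queue.Queue` objects can stand in for processes and channels (the name `square_filter`, the `EOS` sentinel, and the squaring transformation are illustrative assumptions, not from the slides):

```python
import threading
import queue

EOS = None  # sentinel marking end of stream (assumed convention)

def square_filter(inp, out):
    """Filter: receive values from the input channel, transform them,
    and send results to the output channel."""
    while True:
        v = inp.get()
        if v is EOS:
            out.put(EOS)   # propagate end-of-stream downstream
            break
        out.put(v * v)

inp, out = queue.Queue(), queue.Queue()
threading.Thread(target=square_filter, args=(inp, out)).start()
for v in [1, 2, 3, EOS]:
    inp.put(v)

results = []
while (v := out.get()) is not EOS:
    results.append(v)
print(results)  # [1, 4, 9]
```

Like a UNIX pipeline stage, the filter knows nothing about who produces its input or consumes its output, so filters can be freely composed.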

  10. Client and server • Client is a triggering process, server is a reactive process • Clients make requests that trigger reactions from servers • Client initiates activity • At times of its choosing • Often delays until its request has been serviced • Server waits for requests • Reacts to them • Usually a nonterminating process • Provides service to several clients

  11. Client and server cont'd • File server in a distributed system • Manages a collection of files • Services requests from any client
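The file-server pattern can be sketched with one request channel shared by all clients, each request carrying a private reply channel (the file contents and operation names here are hypothetical):

```python
import threading
import queue

request_ch = queue.Queue()  # server's request channel, shared by all clients

def file_server():
    """Reactive, normally nonterminating process; a 'stop' op is added
    here only so the example terminates."""
    files = {"notes.txt": "hello"}  # hypothetical file contents
    while True:
        op, name, reply_ch = request_ch.get()
        if op == "read":
            reply_ch.put(files.get(name, ""))
        elif op == "stop":
            reply_ch.put("bye")
            break

threading.Thread(target=file_server).start()

# A client: triggers activity at a time of its choosing,
# then delays until its request has been serviced.
reply_ch = queue.Queue()
request_ch.put(("read", "notes.txt", reply_ch))
data = reply_ch.get()
request_ch.put(("stop", None, reply_ch))
reply_ch.get()
print(data)  # hello
```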

  12. Peer • A peer is one of a collection of identical processes • Interact to provide a service or • Compute a result • E.g. two peers each manage a copy of a file and interact to keep the copies consistent • E.g. several peers might interact to solve a parallel programming problem, each solving a piece of the problem

  13. Process-interaction paradigms • One-way data flow through networks of filters • Requests and replies between clients and servers • Back-and-forth (heartbeat) interaction between neighboring processes • Probes and echoes in graphs • Broadcasts between processes in complete graphs • Token passing along edges in graphs • Coordination between decentralised server processes • Replicated workers sharing a bag of tasks

  14. On notation • Channels • Abstraction of a physical communication network • Accessed by two primitives: send and receive • Global vs associated with processes • One-way vs two-way information flow • Asynchronous (nonblocking) vs synchronous (blocking) communication

  15. On notation cont'd • Asynchronous message passing • Channels have conceptually unbounded capacity => send is nonblocking • Synchronous message passing • Communication and synchronisation tightly coupled => sending process delays until receiver is ready to receive • Message exchange is a synchronisation point, no need to store messages in channels • Buffered message passing with fixed capacity
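The difference between asynchronous and fixed-capacity buffered channels can be illustrated with Python's `queue.Queue` (a sketch; the channel names are hypothetical, and `put_nowait` stands in for a blocking send so the example terminates):

```python
import queue

# Asynchronous message passing: conceptually unbounded channel,
# so send always returns immediately.
async_ch = queue.Queue()          # maxsize=0 means unbounded in Python's queue module
for i in range(1000):
    async_ch.put(i)               # never blocks

# Buffered message passing with fixed capacity: once the buffer is
# full, a sender must delay until a receiver removes a message.
buffered_ch = queue.Queue(maxsize=2)
buffered_ch.put("a")
buffered_ch.put("b")
try:
    buffered_ch.put_nowait("c")   # buffer full: a blocking put would delay here
except queue.Full:
    print("channel full")
```

Fully synchronous (rendezvous-style) message passing corresponds to the limiting case of capacity zero, where sender and receiver must meet at the exchange.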

  16. On notation cont'd • Generative communication • Processes share a single communication channel called a tuple space • Associative naming used to distinguish different kinds of messages, vs different channels in asynchronous message passing

  17. On notation cont'd • Remote procedure calls (RPC) and rendezvous • Combination of monitors and synchronous message passing • Process exports operations invoked by a call statement (as in monitors) • Execution of a call is synchronous • Calling process delays until the invocation has been serviced and results returned • Hence, a two-way communication channel

  18. Filters: a sorting network • Sort a list of n numbers into ascending order • Straightforward: one filter Chan input(int), output(int) Sort:: declaration of local variables; receive all numbers from input; sort the numbers; send the sorted numbers to output • SORT: for all i: 1 ≤ i < n: sent[i] ≤ sent[i+1], and the values sent to output are a permutation of the values received from input

  19. Filters: a sorting network cont'd • A practical consideration: • Receive is blocking, so how do we know when all numbers have been received? • Sort knows n in advance? • The first input is n? • End the input stream with a sentinel! • Often the most efficient approach to sorting: a network of processes!
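The single Sort filter with a sentinel can be sketched as follows (again using threads and queues as stand-ins for processes and channels; `EOS = None` is an assumed sentinel encoding):

```python
import threading
import queue

EOS = None  # sentinel marking end of the input stream

def sort_filter(inp, out):
    """Receive all numbers until EOS, sort them, then send them
    on in ascending order, terminated by EOS."""
    values = []
    while (v := inp.get()) is not EOS:
        values.append(v)
    values.sort()
    for v in values:
        out.put(v)
    out.put(EOS)

inp, out = queue.Queue(), queue.Queue()
threading.Thread(target=sort_filter, args=(inp, out)).start()
for v in [3, 1, 2, EOS]:
    inp.put(v)

result = []
while (v := out.get()) is not EOS:
    result.append(v)
print(result)  # [1, 2, 3]
```

The sentinel resolves the blocking-receive problem: the filter needs no advance knowledge of n.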

  20. Filters: a sorting network cont'd • A merge network • Merge repeatedly and in parallel two sorted lists into a larger sorted list • Merge filters • Receive values from two ordered input streams in1, in2 • Merge the values • Produce one output stream out • Use sentinel EOS to mark end of stream

  21. Filters: a sorting network cont'd • MERGE: in1 and in2 are empty and sent[n+1] = EOS and for all i: 1 ≤ i < n: sent[i] ≤ sent[i+1], and the values sent to out are a permutation of the values received from in1 and in2

  22. Filters: a sorting network cont'd • How to implement Merge? • Receive all input values, merge them, and send the merged list to out • Better: repeatedly compare the next two values from in1 and in2 and send the smaller to out • When all values from one input are consumed, append the remaining values from the other input to out
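The compare-and-forward strategy can be sketched like this. As a simplifying assumption, `EOS` is encoded as infinity so the "append the rest" case falls out of the ordinary comparison (the slides' EOS is an abstract marker):

```python
import threading
import queue

EOS = float("inf")  # sentinel larger than any value, so comparisons handle it uniformly

def merge_filter(in1, in2, out):
    """Repeatedly compare the next value from each sorted input stream
    and send the smaller one to out; terminate when both streams end."""
    v1, v2 = in1.get(), in2.get()
    while v1 != EOS or v2 != EOS:
        if v1 <= v2:
            out.put(v1)
            v1 = in1.get()
        else:
            out.put(v2)
            v2 = in2.get()
    out.put(EOS)

in1, in2, out = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=merge_filter, args=(in1, in2, out)).start()
for v in [1, 4, 9, EOS]:
    in1.put(v)
for v in [2, 3, 8, EOS]:
    in2.put(v)

merged = []
while (v := out.get()) != EOS:
    merged.append(v)
print(merged)  # [1, 2, 3, 4, 8, 9]
```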

  23. Filters: a sorting network cont'd • Sorting network • A collection of Merge processes and arrays of input and output channels connected together • Input and output channels need to be shared • Static vs dynamic naming • Filters can be connected in different ways • Output needs to meet the input assumptions • Sort can be replaced by Merge plus a process distributing the input values • Sort can be replaced by any sorting strategy
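Wiring Merge processes into a tree gives the full network. A minimal sketch with four single-value input streams (each trivially sorted) and three Merge processes; all channel names are illustrative:

```python
import threading
import queue

EOS = float("inf")  # sentinel larger than any value

def merge(in1, in2, out):
    """Merge two sorted streams onto out (same logic as the Merge filter)."""
    v1, v2 = in1.get(), in2.get()
    while v1 != EOS or v2 != EOS:
        if v1 <= v2:
            out.put(v1)
            v1 = in1.get()
        else:
            out.put(v2)
            v2 = in2.get()
    out.put(EOS)

# A tree of three Merge processes sorts four one-value input streams.
a, b, c, d = (queue.Queue() for _ in range(4))
ab, cd, final = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=merge, args=(a, b, ab)).start()
threading.Thread(target=merge, args=(c, d, cd)).start()
threading.Thread(target=merge, args=(ab, cd, final)).start()

for ch, v in zip((a, b, c, d), [7, 2, 9, 4]):
    ch.put(v)
    ch.put(EOS)

sorted_out = []
while (v := final.get()) != EOS:
    sorted_out.append(v)
print(sorted_out)  # [2, 4, 7, 9]
```

Note that each internal channel has exactly one writer and one reader; the network's correctness rests on every Merge output meeting the sortedness assumption of the next stage's input.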

  24. Clients and servers • A server is a process that repeatedly handles requests from clients • Two examples: • How to turn monitors into servers • How to implement a disk scheduler and a disk server

  25. Centralised servers: Active monitors • A centralised server is a resource manager: • Local variables record the state of the resource • It services requests to access that resource • Similar to a monitor • Server is active, monitor is passive

  26. Centralised servers: Active monitors cont'd • Duality between monitor-based and message-based programs: • Permanent variables ↔ Local server variables • Procedure identifiers ↔ Request channel and operation kinds • Procedure call ↔ Send request, receive reply • Monitor entry ↔ Receive request • Procedure return ↔ Send reply • Wait statement ↔ Save pending request • Signal statement ↔ Retrieve and process pending request • Procedure bodies ↔ Arms of case statement on operation kind
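The duality can be made concrete with a small active-monitor-style server. The counter resource, operation names, and `call` helper are illustrative assumptions:

```python
import threading
import queue

requests = queue.Queue()  # single request channel; operation kind tags each message

def counter_server():
    """Active monitor: the local variable `count` plays the role of a
    permanent monitor variable; the if/elif arms correspond to the arms
    of a case statement on the operation kind."""
    count = 0
    while True:
        op, arg, reply = requests.get()   # monitor entry = receive request
        if op == "inc":
            count += arg
            reply.put(count)              # procedure return = send reply
        elif op == "get":
            reply.put(count)
        elif op == "stop":                # added so the example terminates
            reply.put(count)
            break

threading.Thread(target=counter_server).start()

def call(op, arg=0):
    """Procedure call = send request, then receive reply."""
    reply = queue.Queue()
    requests.put((op, arg, reply))
    return reply.get()

call("inc", 5)
call("inc", 3)
total = call("get")
call("stop")
print(total)  # 8
```

Mutual exclusion comes for free: the server processes one request at a time, just as a monitor admits one caller at a time.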

  27. Heartbeat algorithms • Consider the problem of computing the topology of a network consisting of processors connected by bidirectional communication channels. • Each processor can communicate only with its neighbors and knows only about the links to its neighbors. • Each processor should determine the topology of the entire network

  28. Heartbeat algorithms cont'd • Each processor modelled as a process • Communication links modelled as shared channels • Two solutions • Shared memory solution • Distributed computation

  29. Probe/echo algorithms • How to be sure that a message reaches all nodes in a network? • Broadcast in a network • Using trees • Topology of a network • Cyclic/acyclic graphs

  30. Broadcast algorithms • In LANs, processors share a common communication channel (Ethernet, token ring) • Each processor can directly communicate with every other processor • How to implement broadcast? • Logical clocks and event ordering • Distributed semaphores
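The logical clocks mentioned here can be sketched with Lamport's update rules: increment on a local or send event, and take the maximum of the local clock and the message timestamp, plus one, on receive (the class and variable names below are illustrative):

```python
class LamportClock:
    """Minimal sketch of a Lamport logical clock for event ordering."""

    def __init__(self):
        self.time = 0

    def tick(self):
        """Local event or send: increment the clock."""
        self.time += 1
        return self.time

    def receive(self, msg_time):
        """Receive: advance past both the local clock and the message timestamp."""
        self.time = max(self.time, msg_time) + 1
        return self.time

p, q = LamportClock(), LamportClock()
t_send = p.tick()          # p sends a message stamped 1
q.tick(); q.tick()         # q performs two local events, clock = 2
t_recv = q.receive(t_send)
print(t_send, t_recv)      # 1 3
```

Timestamps give a total order on broadcast messages (ties broken by process id), which is the basis of the distributed-semaphore construction.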

  31. Token-passing algorithms • A token is a special message used to convey permission to perform some action • Two synchronisation problems: • Distributed mutual exclusion • Termination detection in a ring/graph
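Distributed mutual exclusion on a ring can be sketched as follows: a single token circulates, and only the holder may enter its critical section (the ring size, iteration count, and `in_cs` trace are illustrative):

```python
import threading
import queue

N = 3
channels = [queue.Queue() for _ in range(N)]  # channel i delivers the token to process i
in_cs = []  # trace of critical-section entries, in order

def process(i):
    for _ in range(2):                     # each process enters its critical section twice
        token = channels[i].get()          # wait for the token: permission to enter
        in_cs.append(i)                    # critical section (token holder only)
        channels[(i + 1) % N].put(token)   # pass the token along the ring

threads = [threading.Thread(target=process, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
channels[0].put("TOKEN")                   # inject the single token at process 0
for t in threads:
    t.join()
print(in_cs)  # [0, 1, 2, 0, 1, 2]
```

Because there is exactly one token, at most one process is ever in its critical section, and the fixed ring order also gives fairness.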

  32. Replicated servers • Multiple server processes that each do the same thing • Why replicate servers? • To increase accessibility of data or services • Multiple worker processes
