
Client-Server concept




Presentation Transcript


  1. Client-Server concept • Server program is shared by many clients • RPC protocol typically used to issue requests • Server may manage special data, run on an especially fast platform, or have an especially large disk • Client systems handle “front-end” processing and interaction with the human user

  2. Server and its clients

  3. Examples of servers • Network file server • Database server • Network information server • Domain name service • Microsoft Exchange • Kerberos authentication server

  4. Summary of typical split • Server deals with bulk data storage, high perf. computation, collecting huge amounts of background data that may be useful to any of several clients • Client deals with the “attractive” display, quick interaction times • Use of caching to speed response time

  5. Statefulness issues • Client-server system is stateless if: Client is independently responsible for its actions, server doesn’t track set of clients or ensure that cached data stays up to date • Client-server system is stateful if: Server tracks its clients, takes actions to keep their cached states “current”. Client can trust its cached data.

  6. Best known examples? • The UNIX NFS file system is stateless. • Database systems are usually stateful: Client reads database of available seats on plane, information stays valid during transaction

  7. Typical issues in design • Client is generally simpler than server: may be single-threaded, can wait for replies to RPCs • Server is generally multithreaded, designed to achieve extremely high concurrency and throughput; much harder to develop • Reliability issue: if the server goes down, all its clients may be “stuck”. Usually addressed with some form of backup or replication.

  8. Use of caching • In stateless architectures, cache is responsibility of the client. Client decides to remember results of queries and reuse them. Example: caching Web proxies, the NFS client-side cache. • In stateful architectures, cache is owned by server. Server uses “callbacks” to its clients to inform them if cached data changes, becomes invalid. Cache is “shared state” between them.
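
In the stateless style above, the cache logic lives entirely in the client, which decides what to remember and for how long. A minimal sketch (the `ClientCache` class, its TTL policy, and the `fetch` callback are all illustrative assumptions, not part of NFS or any real proxy):

```python
import time

class ClientCache:
    """Client-owned cache for a stateless server: the client alone
    decides what to remember and when an entry goes stale (TTL)."""
    def __init__(self, fetch, ttl=30.0):
        self.fetch = fetch        # function that actually queries the server
        self.ttl = ttl            # seconds before a cached entry expires
        self.entries = {}         # key -> (value, expiry time)

    def get(self, key):
        value, expires = self.entries.get(key, (None, 0.0))
        if time.monotonic() < expires:
            return value          # fresh: reuse the cached result
        value = self.fetch(key)   # stale or missing: ask the server again
        self.entries[key] = (value, time.monotonic() + self.ttl)
        return value

# Count real fetches to see the cache absorb repeated queries.
calls = []
cache = ClientCache(lambda k: calls.append(k) or k.upper(), ttl=60)
assert cache.get("a") == "A"
assert cache.get("a") == "A"
assert calls == ["a"]   # only one round trip reached the "server"
```

Note that nothing here keeps the cached data current; in the stateless model that risk is the client's to accept, which is exactly the trade-off the stateful callback design removes.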

  9. Example of stateless approach • NFS is stateless: clients obtain “vnodes” when opening files; server hands out vnodes but treats each operation as a separate event • NFS trusts: vnode information, user’s claimed machine id, user’s claimed uid • Client uses write-through caching policy

  10. Example of stateful approach • Transactional software structure: • Data manager holds database • Transaction manager does begin op1 op2 ... opn commit • Transaction can also abort; abort is default on failure • Transaction on database system: • Atomic: all or nothing effects • Concurrent: can run many transactions at same time • Independent: concurrent transactions don’t interfere • Durable: once committed, results are persistent
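
The begin/op/commit structure, with abort as the default on failure, can be sketched against a real transactional store. This is an illustrative example using SQLite's Python bindings (the seats table and `book` helper are assumptions of the sketch), showing the all-or-nothing property:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seats (seat TEXT PRIMARY KEY, holder TEXT)")
conn.execute("INSERT INTO seats VALUES ('12A', NULL)")
conn.commit()

def book(conn, seat, holder):
    """begin op1 op2 ... opn commit; any failure aborts the whole transaction."""
    try:
        with conn:  # sqlite3: commits on success, rolls back on exception
            conn.execute("UPDATE seats SET holder=? WHERE seat=?", (holder, seat))
            # Second op violates the primary key, forcing an abort.
            conn.execute("INSERT INTO seats VALUES (?, ?)", (seat, holder))
    except sqlite3.IntegrityError:
        return False   # aborted: neither operation takes effect
    return True

assert book(conn, "12A", "alice") is False
# Atomicity: the UPDATE was rolled back along with the failed INSERT.
assert conn.execute("SELECT holder FROM seats WHERE seat='12A'").fetchone()[0] is None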

  11. Why are transactions stateful? • Client knows what updates it has done, what locks it holds. Database knows this too • Client and database share the guarantees of the model. See consistent states • Approach is free of the inconsistencies and potential errors observed in NFS

  12. Current issues in client-server systems • Research is largely at a halt: we know how to build these systems • Challenges are in the applications themselves, or in the design of the client’s and servers for a specific setting • Biggest single problem is that client systems know the nature of the application, but servers have all the data

  13. Typical debate topic? • Ship code to the data (e.g. send a program from client to server)? • ... or ship data to the code (e.g. the client fetches the data it needs)? • We will see that Java, Tacoma and Telescript offer ways of trading off efficient use of channels against maximum flexibility

  14. Message Oriented Middleware • Emerging extension to client-server architectures • Concept is to weaken the link between the client and server but to preserve most aspects of the “model” • Client sees an asynchronous interface: request is sent independent of reply. Reply must be dequeued from a reply queue later

  15. MOMS: How they work • MOM system implements a queue in between clients and servers • Each sends to other by enqueuing messages on the queue or queues for this type of request/reply • Queues can have names, for “subject” of the queue • Client and server don’t need to be running at the same time.
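
The queue-in-the-middle idea can be sketched with in-process queues (a stand-in for a real MOM product; the shutdown marker and `server` loop are assumptions of this sketch). The point to notice is that the client enqueues before the server even starts:

```python
import queue, threading

request_q = queue.Queue()   # named queue for this request "subject"
reply_q   = queue.Queue()   # companion queue for replies

def server():
    """Dequeues and serves requests whenever it happens to run."""
    while True:
        msg = request_q.get()
        if msg is None:           # shutdown marker (an assumption of this sketch)
            break
        reply_q.put(msg.upper())  # enqueue the reply for later pickup

# Client enqueues first; the server starts only afterwards, showing
# that the two need not be running at the same time.
request_q.put("hello")
request_q.put(None)

t = threading.Thread(target=server)
t.start()
t.join()

result = reply_q.get()        # client dequeues the reply asynchronously
assert result == "HELLO"
```

In a real MOM the queues would be persistent and network-accessible, which is where the performance cost discussed below comes from.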

  16. MOMS: How they work • [Diagram] The client places a message into a “queue” without waiting for a reply; the MOMS acts as the “server”.

  17. MOMS: How they work • [Diagram] The server removes the message from the queue and processes it.

  18. MOMS: How they work • [Diagram] The server places any response in a “reply” queue for eventual delivery to the client; a timeout may be attached (“delete after xxx seconds”).

  19. MOMS: How they work • [Diagram] The client retrieves the response and resumes its computation.

  20. Pros and Cons of MOMS • Decoupling of sender and destination is a plus: can design the server and client without each knowing much about the other, can extend easily • Performance is poor, a (big) minus: overhead of passing through an intermediary • Priority, scheduling, recoverability are pluses .... use this approach if you can afford the performance hit, a factor of 10-100 compared to RPC

  21. Remote Procedure Call • Basic concepts • Implementation issues, usual optimizations • Where are the costs? • Firefly RPC, Lightweight RPC • Reliability and consistency • Multithreading debate

  22. A brief history of RPC • Introduced by Birrell and Nelson in 1985 • Idea: mask distributed computing system using a “transparent” abstraction • Looks like normal procedure call • Hides all aspects of distributed interaction • Supports an easy programming model • Today, RPC is the core of many distributed systems

  23. More history • Early focus was on RPC “environments” • Culminated in DCE (Distributed Computing Environment), standardizes many aspects of RPC • Then emphasis shifted to performance, many systems improved by a factor of 10 to 20 • Today, RPC often used from object-oriented “CORBA” systems. Reliability issues are more evident than in the past.

  24. The basic RPC protocol • [Diagram] Client: “binds” to server; prepares and sends request; unpacks reply • Server: registers with name service; receives request; invokes handler; sends reply

  25. Compilation stage • Server defines and “exports” a header file giving the interfaces it supports and the arguments expected. Uses an “interface definition language” (IDL) • Client includes this information • Client invokes server procedures through “stubs” • provides the same interface as the server • responsible for building the request messages and interpreting the reply messages

  26. Binding stage • Occurs when client and server program first start execution • Server registers its network address with name directory, perhaps with other information • Client scans directory to find appropriate server • Depending on how RPC protocol is implemented, may make a “connection” to the server, but this is not mandatory
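
Binding reduces to a register/lookup pair against the name directory. A toy sketch (the `directory` dict and the service name are assumptions standing in for a real name service):

```python
# Toy name service: the directory maps an interface name to a network address.
directory = {}

def register(service, address):
    directory[service] = address           # server registers at startup

def lookup(service):
    return directory.get(service)          # client scans the directory to bind

register("fileserver.v2", ("10.0.0.7", 2049))
assert lookup("fileserver.v2") == ("10.0.0.7", 2049)
assert lookup("no.such.service") is None   # this is how a bind operation fails
```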

  27. Request marshalling • Client builds a message containing arguments, indicates what procedure to invoke • Data representation a potentially costly issue! • Performs a send operation to send the message • Performs a receive operation to accept the reply • Unpacks the reply from the reply message • Returns result to the client program
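
Slides 25 and 27 together describe the stub's job: marshal the arguments into a message, hand it to the transport, and unpack the reply. A minimal in-process sketch using JSON as the (assumed) data representation; `add_stub` and `server_dispatch` are illustrative names, and the function call stands in for the send/receive pair:

```python
import json

def server_dispatch(message):
    """Server side: unpack the message, invoke the handler, pack the reply."""
    req = json.loads(message)
    handlers = {"add": lambda a, b: a + b}
    result = handlers[req["proc"]](*req["args"])
    return json.dumps({"result": result})

def add_stub(a, b):
    """Client stub: same interface as the server procedure; builds the
    request message and interprets the reply message."""
    message = json.dumps({"proc": "add", "args": [a, b]})  # marshal
    reply = server_dispatch(message)   # stands in for send + receive
    return json.loads(reply)["result"]                     # unmarshal

assert add_stub(2, 3) == 5   # looks like an ordinary local call to the caller
```

The JSON encode/decode on both sides is exactly the "data representation" cost the slide flags as potentially expensive.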

  28. Costs in basic protocol? • Allocation and marshalling data into message (costs more for a more general solution) • Two system calls, one to send, one to receive, hence context switching • Much copying all through the O/S: application to UDP, UDP to IP, IP to ethernet interface, and back up to application

  29. Typical optimizations? • Compile the stub “inline” to put arguments directly into the message • If sender and dest. have the same data representations, skip host-independent formatting • Use a special “send, then receive” system call • Optimize the O/S path itself to eliminate copying

  30. Fancy argument passing • RPC is transparent for simple calls with a small amount of data passed • What about complex structures, pointers, big arrays? These will be very costly, and perhaps impractical to pass as arguments • Most implementations limit size, types of RPC arguments. Very general systems less limited but much more costly.

  31. Overcoming lost packets • [Diagram] Client sends the request and retransmits until the server acks it; server then sends the reply.

  32. Overcoming lost packets • [Diagram] As in slide 31, but the client also acks the reply.

  33. Costs in fault-tolerant version? • Acks are expensive. Try to avoid them, e.g. if the reply will be sent quickly, suppress the initial ack • Retransmission is costly. Try to tune the delay to be “optimal” • For big messages, send packets in bursts and ack a burst at a time, not one by one
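
The retransmit-until-acknowledged loop from slides 31-32 can be simulated with a deliberately lossy channel (the drop probability, seed, and helper names are all assumptions of this sketch; the reply itself serves as the ack, as when the initial ack is suppressed):

```python
import random

def lossy_send(handler, message, drop_prob, rng):
    """Delivers the message to the handler unless the 'network' drops it."""
    if rng.random() < drop_prob:
        return None                 # packet lost: no reply ever arrives
    return handler(message)

def rpc_with_retransmit(handler, message, retries=10, drop_prob=0.5, seed=1):
    """Client side: retransmit after each loss until a reply (the ack) arrives."""
    rng = random.Random(seed)       # seeded so the sketch is deterministic
    for attempt in range(retries):
        reply = lossy_send(handler, message, drop_prob, rng)
        if reply is not None:
            return reply            # the reply doubles as the ack
    raise TimeoutError("request timed out")

assert rpc_with_retransmit(lambda m: m + "!", "ping") == "ping!"
```

Tuning `retries` and the (elided) delay between attempts is the cost trade-off the slide describes.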

  34. Big packets • [Diagram] Client sends the request as a burst of packets; server acks the entire burst, then sends the reply; client acks the reply.

  35. RPC “semantics” • At most once: request is processed 0 or 1 times • Exactly once: request is always processed exactly 1 time • At least once: request is processed 1 or more times ... Exactly once is impossible because we can’t distinguish packet loss from true failures! In both the at-most-once and at-least-once cases, the RPC protocol simply times out.

  36. Implementing at most/least once • Use a timer (clock) value and a unique id, plus the sender’s address • Server remembers recent ids and replies with the same data if a request is repeated • The id also identifies duplicates so they can be rejected • Very old requests are detected and ignored using the timer value.
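
The duplicate-suppression idea can be sketched directly: the server keys its reply table by (sender, request id) and replays the cached reply for retransmitted requests. The `AtMostOnceServer` class and its fields are illustrative assumptions (a real server would also garbage-collect old ids using the timer value):

```python
class AtMostOnceServer:
    """Remembers recent request ids; a duplicate gets the cached reply
    instead of being executed a second time."""
    def __init__(self, handler):
        self.handler = handler
        self.replies = {}   # (sender, request_id) -> cached reply

    def handle(self, sender, request_id, arg):
        key = (sender, request_id)
        if key in self.replies:
            return self.replies[key]   # duplicate: replay, don't re-run
        reply = self.handler(arg)
        self.replies[key] = reply
        return reply

executions = []
srv = AtMostOnceServer(lambda x: executions.append(x) or x * 2)
assert srv.handle("client-1", 7, 21) == 42
assert srv.handle("client-1", 7, 21) == 42   # retransmitted request
assert executions == [21]                    # processed at most once
```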

  37. RPC versus local procedure call • Restrictions on argument sizes and types • New error cases: • Bind operation failed • Request timed out • Argument “too large” can occur if, e.g., a table grows • Costs may be very high ... so RPC is actually not very transparent!

  38. RPC costs in case of local dest • Caller builds message • Issues send system call, blocks, context switch • Message copied into kernel, then out to dest. • Dest is blocked... wake it up, context switch • Dest computes result • Entire sequence repeated in reverse direction • If scheduler is a process, context switch 6 times!

  39. RPC example • [Diagram] Source and dest on the same site; source invokes xyz(a, b, c) through the O/S.

  40. RPC in normal case • [Diagram] Source does xyz(a, b, c); destination and O/S are blocked.

  41. RPC in normal case • [Diagram] Source and dest both block; the O/S runs its scheduler and copies the message from the source out-queue to the dest in-queue.

  42. RPC in normal case • [Diagram] Dest runs and copies in the message. The same sequence is needed to return results.

  43. Important optimizations: LRPC • Lightweight RPC (LRPC): for the case of sender and dest on the same machine (Bershad et al.) • Uses memory mapping to pass data • Reuses the same kernel thread to reduce context switching costs (user suspends and server wakes up on the same kernel thread or “stack”) • Single system call: send_rcv or rcv_send

  44. LRPC • [Diagram] O/S and dest are initially idle; source does xyz(a, b, c).

  45. LRPC • [Diagram] Control passes directly to dest; arguments are directly visible through remapped memory.

  46. LRPC performance impact • On same platform, offers about a 10-fold improvement over a hand-optimized RPC implementation • Does two memory remappings, no context switch • Runs about 50 times faster than standard RPC by same vendor (at the time of the research) • Semantics stronger: easy to ensure exactly once

  47. Broad comments on RPC • RPC is not very transparent • Failure handling is not evident at all: if an RPC times out, what should the developer do? • Performance work is producing enormous gains: from the old 75ms RPC to RPC over U/Net with a 75usec round-trip time: a factor of 1000!

  48. Contents of an RPC environment • Standards for data representation • Stub compilers, IDL databases • Services to manage server directory, clock synchronization • Tools for visualizing system state and managing servers and applications

  49. Examples of RPC environments • DCE: From OSF, developed in 1987-1989. Widely accepted, runs on many platforms • ONC: Proposed by Sun Microsystems, used in the NFS architecture and in many UNIX services • OLE, CORBA: next-generation “object-oriented” environments.

  50. Multithreading debate • Three major options: • Single-threaded server: only does one thing at a time, uses send/recv system calls and blocks while waiting • Multi-threaded server: internally concurrent, each request spawns a new thread to handle it • Upcalls: event dispatch loop does a procedure call for each incoming event, as in X11 or PCs running Windows.
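
The second option, thread-per-request, can be sketched in a few lines (the `handle` body and the shared `results` dict are assumptions of the sketch; the lock protects the shared state the concurrent handlers touch):

```python
import threading

results = {}
lock = threading.Lock()

def handle(request_id):
    """Handler body: each incoming request runs in its own thread."""
    with lock:                      # guard shared server state
        results[request_id] = request_id ** 2

# Multi-threaded option: spawn one thread per incoming request.
threads = [threading.Thread(target=handle, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert results == {i: i * i for i in range(5)}
```

A single-threaded server would instead loop over requests calling `handle` directly, trading concurrency for simplicity; the upcall style inverts control, with the dispatch loop invoking a registered handler per event.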
