
Evaluation of Fluke IPC Semantics for Remote IPC


Presentation Transcript


  1. Evaluation of Fluke IPC Semantics for Remote IPC. Linus Kamb. Master's Thesis Defense, December 16th, 1997.

  2. IPC: Inter-process communication • Communication between two applications. [Diagram: client and server communicating through the kernel]

  3. Motivation • Distributed computing • Components • Client-Server applications • Communication is fundamental! [Diagram: multiple clients and servers]

  4-7. Motivation • Communication issues: • simple • efficient • transparent • optimized for client-server architectures. [Diagram: multiple clients and servers]

  8. Outline • Fluke architecture and IPC semantics. • Implementation of a network IPC system for Fluke. • Evaluation of issues affecting remote IPC. • Performance numbers. • Future research and other issues.

  9. Client-Server IPC • Communication between client and server. [Diagram: client and server communicating through the kernel]

  10. Remote IPC • How is this different from local IPC? [Diagram: the client holds a reference on one node, the server's port sits on another, and the two kernels are joined by the network]

  11. Thesis Work • Implementation of remote IPC for Fluke. • Analysis of the issues affecting remote IPC. • Fluke architecture • Fluke IPC semantics and mechanisms • Identify elements suited to a distributed environment and those which complicate the remote IPC implementation.

  12. Fluke IPC • Modular architecture • IPC is critical • Capability-based: ports, opaque references • Interposition

  13. Fluke IPC • Client connects to server. [Diagram: the client's reference names the server's port, via the kernel]

  14. Fluke IPC • Thread-to-thread connection established.

  15. Fluke IPC • Server replies.

  16. Fluke IPC • Client can repeat, if desired, and so on.
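To make the connect/reply sequence of slides 13-16 concrete, here is a minimal C sketch of one reliable round trip. All names in it (ipc_client_connect_send_receive, ipc_server_wait_receive, ipc_server_reply, the buffer and reference types) are illustrative assumptions rather than the actual Fluke API; the sketch only mirrors the pattern described above, in which the client connects and sends, and the server receives and reverses the thread-to-thread connection to reply.

```c
#include <stddef.h>

typedef struct port_ref port_ref_t;     /* opaque reference to a server port */
typedef struct ipc_buf { void *data; size_t len; } ipc_buf_t;

/* Illustrative prototypes; not the real Fluke interface. */
extern int  ipc_client_connect_send_receive(port_ref_t *dest,
                                            ipc_buf_t *req, ipc_buf_t *reply);
extern int  ipc_server_wait_receive(port_ref_t *port, ipc_buf_t *req);
extern int  ipc_server_reply(ipc_buf_t *reply);
extern void handle_request(const ipc_buf_t *req, ipc_buf_t *reply);

/* Client: connect to the server's port, send the request over the new
 * thread-to-thread connection, then block until the reply arrives. */
int client_call(port_ref_t *server, ipc_buf_t *req, ipc_buf_t *reply)
{
    return ipc_client_connect_send_receive(server, req, reply);
}

/* Server: accept a connection on the port, service the request, and
 * reverse the connection to send the reply; then wait for the next client. */
void server_loop(port_ref_t *port)
{
    ipc_buf_t req, reply;
    for (;;) {
        ipc_server_wait_receive(port, &req);
        handle_request(&req, &reply);
        ipc_server_reply(&reply);
    }
}
```

The repeated exchange on slide 16 corresponds to the client calling client_call again over the same connection rather than tearing it down.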

  17. Remote IPC • Network IPC system: NetIPC. • Proxy ports. [Diagram: the client's reference points at a local proxy port; the NetIPC systems on the two nodes forward the IPC across the network to the server's real port]

  18. NetIPC Architecture [Diagram: on each node, the NetIPC system holds proxy ports, proxy server threads, and proxy client threads; clients and servers reach it through local refs via the kernel, and the NetIPC systems exchange network messages]

  19. Proxy Ports • Server sends a reference to its port. • The NetIPC system keeps this local reference. [Diagram: client, the NetIPC systems on both nodes, and the server port, joined by the kernels and the network]

  20. Proxy Ports • NetIPC system sends a remote reference. • The remote NetIPC system creates a proxy port.

  21. Proxy Ports • A reference to the proxy port is sent to the client.
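The following sketch shows what the receiving NetIPC system might do when a port reference arrives from the network, under the simplifying assumption of a linked-list table mapping remote reference IDs to local proxy ports. Every name here (import_remote_ref, port_create, spawn_proxy_server, the wire ID format) is illustrative, not the thesis code.

```c
#include <stdint.h>
#include <stdlib.h>

typedef struct port     port_t;       /* local Fluke port (receive point) */
typedef struct port_ref port_ref_t;   /* local reference to a port */

typedef struct proxy_entry {
    uint32_t   remote_node;           /* node that holds the real port */
    uint32_t   remote_ref_id;         /* wire identifier for the reference */
    port_t    *proxy_port;            /* local port standing in for it */
    struct proxy_entry *next;
} proxy_entry_t;

static proxy_entry_t *proxy_table;

extern port_t     *port_create(void);                              /* illustrative */
extern port_ref_t *port_ref_create(port_t *);                      /* illustrative */
extern void        spawn_proxy_server(port_t *, uint32_t, uint32_t);

/* Called when a message from 'node' carries remote reference 'ref_id':
 * reuse an existing proxy, or create a new proxy port whose server
 * threads forward incoming IPC over the network to the real port. */
port_ref_t *import_remote_ref(uint32_t node, uint32_t ref_id)
{
    proxy_entry_t *e;
    for (e = proxy_table; e != NULL; e = e->next)
        if (e->remote_node == node && e->remote_ref_id == ref_id)
            return port_ref_create(e->proxy_port);

    e = malloc(sizeof *e);
    if (e == NULL)
        return NULL;
    e->remote_node   = node;
    e->remote_ref_id = ref_id;
    e->proxy_port    = port_create();
    e->next          = proxy_table;
    proxy_table      = e;
    spawn_proxy_server(e->proxy_port, node, ref_id);
    return port_ref_create(e->proxy_port);
}
```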

  22. Bootstrapping Communication • Server mounts its port in the file system. • A lookup IPC returns a reference to the server's port. • The NetIPC system exports the local file system. • A remote lookup returns the reference. • This creates a local proxy for the server's port.

  23. Evaluation • Ports and References. • General use of references. • IPC Flavors. • Connections. • Buffers.

  24. Fluke IPC • Capability-based messaging. • Ports: receive points. • Port references: the ability to send to a port. • Three "flavors" of IPC with different semantics: fully reliable connection with exactly-once delivery; at-least-once delivery of the request; connectionless with at-most-once delivery. • Thread-to-thread connections. • Persistent connections.
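The three flavors might surface to clients as three separate entry points rather than one call parameterized by option flags, in line with the "separate code paths" point made on slide 29. A hedged C sketch with purely illustrative names:

```c
typedef struct port_ref port_ref_t;               /* opaque port reference */
typedef struct ipc_buf { void *data; unsigned len; } ipc_buf_t;

/* Reliable flavor: connection-based request/reply, exactly-once delivery. */
extern int ipc_reliable_call(port_ref_t *dest, ipc_buf_t *req, ipc_buf_t *reply);

/* Idempotent flavor: the request may be retransmitted, so it is delivered
 * at least once; only safe for operations that can be repeated. */
extern int ipc_idempotent_call(port_ref_t *dest, ipc_buf_t *req, ipc_buf_t *reply);

/* One-way flavor: connectionless send, at-most-once delivery, no reply. */
extern int ipc_oneway_send(port_ref_t *dest, ipc_buf_t *msg);

/* Example: a status query tolerates replays, so the cheaper idempotent
 * flavor suffices; an operation with side effects would use the reliable one. */
int query_status(port_ref_t *server, ipc_buf_t *reply)
{
    ipc_buf_t req = { "status", 6 };
    return ipc_idempotent_call(server, &req, reply);
}
```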

  25. Ports and References • Proxy port mechanism worked for NetIPC. • References are "opaque". • Interposition. [Diagram: client ref to local proxy port, forwarded by NetIPC across the network to the server port]

  26. Capability and Port Transfer • No explicit port migration in Fluke: good, since port migration is difficult in a distributed system. • But its absence makes process migration difficult. • Reference counting, for garbage collection of ports: difficult in a distributed system.

  27. Remote File Lookup • An IPC lookup is performed for each component in the path. • NetIPC creates a proxy for each lookup result. • Each proxy is used only for the next lookup, except the last, which is the final result.
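A sketch of how this per-component loop might look from the client side, assuming a strtok-based path walk and illustrative lookup_one/ref_drop helpers. Each intermediate result may be a proxy port that is dropped as soon as the next component has been resolved, matching the slide's "only used for the next lookup" point.

```c
#include <string.h>

typedef struct port_ref port_ref_t;

extern port_ref_t *lookup_one(port_ref_t *dir, const char *name); /* one lookup IPC */
extern void        ref_drop(port_ref_t *);                        /* illustrative */

port_ref_t *lookup_path(port_ref_t *root, char *path)
{
    port_ref_t *cur = root;
    char *comp;
    for (comp = strtok(path, "/"); comp != NULL; comp = strtok(NULL, "/")) {
        port_ref_t *next = lookup_one(cur, comp);  /* may come back as a proxy */
        if (cur != root)
            ref_drop(cur);           /* intermediate proxy no longer needed */
        if (next == NULL)
            return NULL;             /* component not found */
        cur = next;
    }
    return cur;    /* final reference: the only proxy the client keeps */
}
```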

  28. Use of References • Most Fluke kernel objects can be referenced. • References of all types are passed in IPC. • Supporting this in a distributed environment requires additional external services; the NetIPC system cannot do it alone. • This is an inherent limitation on semantically equivalent remote IPC.

  29. Fluke’s IPC Flavors • Narrow interfaces. • Not “option-based” • Separate code paths. • Cleaner implementation of each path. • Still provides flexibility.

  30. Connections [Diagram: proxy ports, proxy server threads, proxy client threads, local refs, network messages]

  31. Connections • Between specific threads: required additional demultiplexing of packets. • Persistent connections: considered as an optimization, but added a lot of complication. [Diagram: connected threads exchange IPC messages through proxy client and server threads, local refs, and network messages]
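The extra demultiplexing mentioned above might look like the following sketch: each incoming packet carries a connection ID that selects the proxy thread bound to that thread-to-thread connection. The header layout, table shape, and function names are illustrative assumptions, not the thesis protocol.

```c
#include <stdint.h>
#include <stddef.h>

typedef struct proxy_thread proxy_thread_t;

typedef struct pkt_header {
    uint32_t src_node;    /* sending node */
    uint32_t conn_id;     /* identifies the thread-to-thread connection */
    uint32_t length;      /* payload bytes that follow the header */
} pkt_header_t;

#define MAX_CONN 256
static proxy_thread_t *conn_table[MAX_CONN];

extern void proxy_deliver(proxy_thread_t *, const void *payload, uint32_t len);
extern void start_new_connection(const pkt_header_t *, const void *payload);

/* Dispatch one received packet to its connection's proxy thread. */
void demux_packet(const pkt_header_t *h, const void *payload)
{
    proxy_thread_t *pt = NULL;
    if (h->conn_id < MAX_CONN)
        pt = conn_table[h->conn_id];
    if (pt != NULL)
        proxy_deliver(pt, payload, h->length);    /* existing connection */
    else
        start_new_connection(h, payload);         /* first packet: set up */
}
```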

  32. IPC Buffer Management • Scatter/gather support. • Avoids a copy for marshaling/unmarshaling. [Diagram: headers and client send buffers gathered into one network packet, then scattered into server receive buffers]
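A sketch of how a scatter/gather send can avoid the marshaling copy, assuming an iovec-style descriptor array and an illustrative net_sendv gathering call (analogous to POSIX writev); the real NetIPC buffer scheme will differ in detail.

```c
#include <stddef.h>

typedef struct { void *base; size_t len; } iovec_t;

/* Illustrative network-layer call that gathers several pieces into one
 * packet on the wire. */
extern int net_sendv(int dest_node, const iovec_t *iov, int iovcnt);

#define MAX_IOV 16

int send_ipc_message(int dest_node, void *header, size_t hdr_len,
                     const iovec_t *client_bufs, int nbufs)
{
    iovec_t iov[1 + MAX_IOV];
    int i;

    if (nbufs > MAX_IOV)
        return -1;                    /* sketch: fixed-size descriptor array */

    iov[0].base = header;             /* NetIPC header goes first */
    iov[0].len  = hdr_len;
    for (i = 0; i < nbufs; i++)
        iov[1 + i] = client_bufs[i];  /* client buffers used in place: no
                                         marshaling copy on the send side */
    return net_sendv(dest_node, iov, 1 + nbufs);
}
```

On the receiving side, the symmetric scatter operation places the payload straight into the server's receive buffers, so neither end pays for an intermediate contiguous copy.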

  33. Summary • Pluses: ports, no port migration, narrow interfaces, buffer management. • Minuses: general references, long connections, no reference counting. • Port-based model works well for transparent remote IPC. • Reference counting issue is not resolved. • Fluke's generalized references do not extend through NetIPC. • Keep to a simple interface geared to client-server requirements.

  34. Round Trip IPC Times [performance chart]

  35. Reliable IPC [performance chart]

  36. Future Research • Handling non-port reference transfer • Work in conjunction with external servers • Call-out mechanism • Location transparency • Process migration • Reference counting • Optimizations • network access • reliable packet protocol

  37. Conclusions • Implemented a remote IPC system extending Fluke local IPC into a distributed environment. • Evaluated the mechanisms and semantics of Fluke and its IPC system with respect to their effects on the implementation of remote IPC.

  38. Contributions • Implemented a remote IPC system extending Fluke local IPC into a distributed environment. • Evaluated the mechanisms and semantics of Fluke and its IPC system with respect to their effects on the implementation of remote IPC. • Showed that designing and implementing a network protocol is not trivial.
