TCP Mobility/Splicing


Presentation Transcript


  1. TCP Mobility/Splicing Francis Chang <francis@cse.ogi.edu> Systems Software Lab OGI

  2. The Papers Divided into two groups: TCP Splicing and TCP Mobility. [1] Snoeren, “Fine-Grained Failover Using Connection Migration” [2] Snoeren, “An End-to-End Approach to Host Mobility” [3] Maltz, “TCP Splicing for Application Layer Proxy Performance” [4] Spatscheck, “Optimizing TCP Forwarder Performance”

  3. TCP Splicing Let’s first look at TCP Splicing…

  4. TCP Splicing – The Issue Def’n: Split Connection Proxy - a proxy machine is interposed between the server and the client machine in order to mediate the communication between them. e.g. HTTP cache, security firewall, encryption servers, SOCKS proxy. [Diagram: Client <--TCP--> Proxy <--TCP--> Server]

  5. TCP Splicing – The Problem Split Connection Proxies are slow! • The proxy must maintain 2 TCP connections. [Diagram: unoptimized TCP forwarder - the Proxy process terminates a TCP connection on each side; Kernel: IP; Hardware: Net1 and Net2; sitting between Client and Server]

  6. TCP Splicing – The Problem What’s wrong with split connection proxies? • The proxy must maintain 2 TCP connections. Needs a lot of memory & CPU processing • Performance is often CPU limited • Performance degradation due to lost packets (one connection must wait for the other to recover) • Potentially violates the end-to-end semantics of TCP

  7. TCP Splicing – The Ideal • How can we remove these problems from the proxy concept? • Ideally, we’d like to be able to eliminate the middle-man. [Diagram: Client <--TCP--> Proxy <--TCP--> Server]

  8. TCP Splicing – The Ideal • How can we remove these problems from the proxy concept? • Ideally, we’d like to be able to eliminate the middle-man. [Diagram: Client <--TCP--> Server] • But there is no way to “join” 2 connections in TCP. • What’s the next best thing? Splicing!

  9. TCP Splicing – The Solution Q. What is TCP Splicing? A. Joining 2 TCP connections so that the endpoints can communicate as peers. [Diagram: unoptimized TCP forwarder - the Proxy process terminates a TCP connection on each side; Kernel: IP; Hardware: Net1 and Net2; between Client and Server]

  10. TCP Splicing – The Solution Q. What is TCP Splicing? A. Joining 2 TCP connections so that the endpoints can communicate as peers. [Diagram: TCP forwarder with splicing - a kernel FWD module joins the two connections at the IP layer, bypassing the proxy process; Hardware: Net1 and Net2; between Client and Server]

  11. TCP Splicing – How it works • Use an IP forwarder, but munge the packets as they travel between the Server and Client • Simply have to renumber the IP src/dest addresses and ports, as well as the sequence numbers • Spatscheck[4] introduces the idea that if we carefully choose the TCP src/dest ports as well as initial sequence numbers, we don’t even have to touch the TCP portion of the packet
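To make the renumbering concrete, here is a minimal sketch in plain Python of the per-packet rewrite a splicing forwarder performs. The TcpPacket class, the field names, and the delta values are hypothetical illustrations for this presentation, not code from [3] or [4].

# Sketch of the rewrite applied when relaying a packet from the
# client-side connection onto the server-side connection (or vice versa).
from dataclasses import dataclass

@dataclass
class TcpPacket:              # simplified view of a parsed IP/TCP header
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    seq: int
    ack: int

MOD = 2 ** 32                 # TCP sequence numbers wrap at 32 bits

def splice_rewrite(pkt, new_src, new_dst, seq_delta, ack_delta):
    # seq_delta / ack_delta are the fixed offsets between the initial
    # sequence numbers negotiated on the two spliced connections.  If the
    # proxy chooses its ports and initial sequence numbers carefully
    # (Spatscheck [4]), both deltas are zero and only the IP fields change.
    pkt.src_ip, pkt.src_port = new_src
    pkt.dst_ip, pkt.dst_port = new_dst
    pkt.seq = (pkt.seq + seq_delta) % MOD
    pkt.ack = (pkt.ack + ack_delta) % MOD
    # A real forwarder would also recompute the IP and TCP checksums here.
    return pkt

# Example: relay a client packet toward the server.
p = TcpPacket("10.0.0.2", "10.0.0.1", 1065, 8080, seq=1000, ack=500)
splice_rewrite(p, ("10.0.0.1", 40000), ("10.0.0.3", 80),
               seq_delta=12345, ack_delta=67890)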

  12. TCP Splicing – Trace A SOCKS Packet Trace

  13. TCP Splicing – Data Now, let’s look at some data to see what TCP Splicing buys us…

  14. TCP Splicing – CPU Usage CPU utilization under a varying number of connections

  15. TCP Splicing - Throughput Throughput of SOCKS and TCP splicing compared against IP forwarding throughput

  16. TCP Splicing - Latency Distribution of forwarding latency for TCP splicing, SOCKS and IP forwarding

  17. TCP Splicing - Issues TCP Options • The client/proxy connection is negotiated before the proxy/server connection - it is possible that the 2 connections have negotiated incompatible options • 1. Don’t splice incompatible connections • 2. Proxy advertises minimal option set • 3. Strip or map certain options as they pass through the TCP splice See Maltz[3] for a fuller explanation on option mapping (MSS, SACK, etc.)
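As a small illustration of the first strategy, a splice controller could simply compare the critical options negotiated on each side before agreeing to splice. A rough Python sketch under assumed option names (not code from Maltz [3]):

# Sketch only: refuse to splice two connections whose negotiated TCP
# options disagree (strategy 1 above).  Option names are illustrative.
CRITICAL_OPTIONS = ("sack_permitted", "timestamps", "window_scale")

def can_splice(client_opts, server_opts):
    # Splice only if every critical option was negotiated identically.
    return all(client_opts.get(o) == server_opts.get(o)
               for o in CRITICAL_OPTIONS)

print(can_splice({"sack_permitted": True}, {"sack_permitted": False}))  # False

MSS is handled separately, e.g. by advertising a conservative value up front or by mapping it as packets pass through, as Maltz [3] discusses.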

  18. Host Mobility On to host mobility…

  19. Host Mobility - Intro Host mobility is the idea that a computer can reattach itself to the network, without losing its connections.

  20. Host Mobility – Mobile IP Mobile IP uses the idea of a home agent, as a reflector to direct connections to the mobile host. Packets are sent using triangle routing. [Diagram: the Client’s packets go to the Home Agent, which forwards them to the Mobile Host on a foreign network]

  21. Host Mobility – A New Take Snoeren [2] proposes a transport-layer solution to this problem: a new Migrate-Permitted TCP option.

  22. Host Mobility – Locating Snoeren’s Solution [2]: The first problem – How do you locate a machine’s IP address if it’s mobile? No Problem – just use secure DNS updates to announce new location. (This is nothing new) This is only an issue if the computer accepts passive connections.

  23. Host Mobility - Migrating But how do we migrate existing connections? • Snoeren introduces a new Migrate-Permitted TCP option • The mobile host will announce its new address to existing connections, and re-establish them.

  24. Host Mobility – Migrating In existing TCP/IP, connections are uniquely identified by <src addr, src port, dest addr, dest port> With the Migrate TCP option, connections are identified by <src addr, src port, token>
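A rough sketch of what this change means for connection lookup, in Python. The table layout and field names here are my own illustration, not Snoeren’s implementation [2]:

# Established connections are still found by the classic 4-tuple; a SYN
# carrying a Migrate option is instead matched by its token, so the same
# connection survives a change of the peer's address or port.
connections_by_4tuple = {}   # (peer_ip, peer_port, local_ip, local_port) -> conn
connections_by_token = {}    # migrate token -> conn

def demultiplex(pkt):
    key = (pkt["src_ip"], pkt["src_port"], pkt["dst_ip"], pkt["dst_port"])
    conn = connections_by_4tuple.get(key)
    if conn is not None:
        return conn
    if pkt.get("syn") and "migrate_token" in pkt:
        conn = connections_by_token.get(pkt["migrate_token"])
        if conn is not None:
            # Re-home the connection to the peer's new address and port.
            del connections_by_4tuple[conn["key"]]
            conn["key"] = key
            connections_by_4tuple[key] = conn
            return conn
    return None              # otherwise: normal SYN handling, or RST

# Example: a connection established with the peer at 1.2.3.4:1065 ...
conn = {"key": ("1.2.3.4", 1065, "5.6.7.8", 80), "token": 42}
connections_by_4tuple[conn["key"]] = conn
connections_by_token[42] = conn
# ... later, the peer reconnects from a new address with a Migrate SYN:
pkt = {"src_ip": "9.9.9.9", "src_port": 2001, "dst_ip": "5.6.7.8",
       "dst_port": 80, "syn": True, "migrate_token": 42}
print(demultiplex(pkt) is conn)   # True: same connection, new 4-tuple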

  25. Host Mobility – In Action [Diagram: the Client opens a TCP connection to the Mobile Host - “TCP Connect”]

  26. Host Mobility – In Action [Diagram: TCP session established between the Client and the Mobile Host]

  27. Host Mobility – In Action [Diagram: the Mobile Host is migrating to a new network attachment point]

  28. Host Mobility – In Action [Diagram: from its new address, the Mobile Host sends “TCP Connect & Migrate” to the Client]

  29. Host Mobility – In Action [Diagram: the TCP connection is re-established between the Client and the Mobile Host]

  30. Host Mobility - Proxy What if we want to use a proxy? Well, that’s alright. In this way, only the TCP stacks of the proxy and the mobile host need to be modified to deploy this scheme. Of course, we’re back to the triangle routing/home agent approach…

  31. Host Mobility – Issues • This is an end-to-end solution, so both client and server must support the Migrate TCP option. There needs to be financial incentive for wide-scale deployment. • Simultaneous migrations will not work. • Existing applications have often made assumptions about the stability of their network addresses.

  32. Host Mobility Now that we’ve covered all these cool ideas, what can we do with them all? On to the title paper – Snoeren, Andersen, Balakrishnan, “Fine-Grained Failover Using Connection Migration”, Proc. of the Third Annual USENIX Symposium on Internet Technologies and Systems (USITS), March 2001

  33. The Problem Servers Fail. More often than users want to know… This slide shamelessly copied from - http://nms.lcs.mit.edu/talks/usits01-migrate/sld002.htm

  34. Solution: Server Redundancy Use a healthy one at all times. This slide shamelessly copied from - http://nms.lcs.mit.edu/talks/usits01-migrate/sld003.htm

  35. Server Failover – In Action [Diagram: the Client opens a TCP connection to Server A (“TCP Connect”); a Health Monitor watches Server A and Server B]

  36. Server Failover – In Action [Diagram: TCP session established between the Client and Server A]

  37. Server Failover – In Action [Diagram: untimely death… Server A fails]

  38. Server Failover – In Action [Diagram: the Health Monitor issues a death notification for Server A to Server B]

  39. Server Failover – In Action [Diagram: Server B sends “TCP Connect & Migrate” to the Client]

  40. Server Failover – In Action [Diagram: the TCP connection is re-established between the Client and Server B]

  41. So What Can We Do? • So, now that we have this idea in mind, how can we integrate this failover mechanism with existing services? (Incidentally, this architecture can also be used for load balancing) • We need a way to synchronize state across many machines. This idea works best if there is not much data to synchronize, and if our stream maps directly into a file. • A great example: static content on a web server farm

  42. Web Farm Design • We want to keep compatibility with existing web servers • Let’s use a stream mapper to control this fail-over mechanism (aka. the Wedge) [Diagram: the Client connects to Server A and Server B, each fronted by a Stream Mapper]

  43. The Wedge Let’s take a closer look at the wedge. What a great place to use a TCP splice!

  44. Synchronizing the Wedges What data do we need to synchronize between the wedges? • We need enough information to restart the TCP connection • Client addr/port • Takeover sequence # • TCP Migrate Fields (connection token) • Application specific object parameters (The URL) But we don’t need to sync last-sent packets!
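A sketch of the per-connection record the wedges might replicate, with hypothetical field names; this is just a restatement of the list above as Python, not the paper’s data structure:

from dataclasses import dataclass

@dataclass
class WedgeSyncState:
    client_ip: str         # client addr/port of the original connection
    client_port: int
    takeover_seq: int      # sequence number at which a backup can resume
    migrate_token: int     # token negotiated via the Migrate TCP option
    request_url: str       # application-level object being served

state = WedgeSyncState("192.0.2.7", 1065, takeover_seq=1953,
                       migrate_token=42, request_url="/index.html")
# Note what is absent: no copies of recently sent data.  The exact takeover
# point is recovered from the client's own ACK (next slide).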

  45. Syncing the Streams Why don’t we need to sync last-sent packets? The client already has that information. All we need to do is elicit an ACK from the client, and it will tell us the last data it received. Let’s see how this happens:
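The arithmetic behind this is tiny. A hedged Python sketch (the helper name and the use of trace-style relative sequence numbers are my own assumptions, not the paper’s code):

def resume_offset(client_ack, initial_seq, mod=2 ** 32):
    # The SYN consumes one sequence number, so data byte n carries
    # sequence number initial_seq + n; the client's cumulative ACK
    # therefore tells us how many bytes it has already received.
    return (client_ack - initial_seq - 1) % mod

# Example, using relative sequence numbers as in the trace on the next
# slide: the client answers the Migrate SYN with "ack 1953", so 1952 bytes
# of the object have been delivered and server B resumes at relative
# sequence number 1953.
print(resume_offset(1953, 0))   # -> 1952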

  46. Network Trace

Initial Data Transmission:
0.00000 cl.1065 > sA.8080: . ack 0505 win 31856
--(Erroneous) sA Death Pronouncement Issued--
0.08014 sA.8080 > cl.1065: P 0505:1953(1448) ack 1 win 31856

Successful Connection Migration to sB:
0.09515 sB.1033 > cl.1065: S 0:0(0) win 0 <migrate PRELOAD 1>
0.09583 cl.1065 > sB.1033: S 0:0(0) ack 1953 win 32120
0.14244 sB.1033 > cl.1065: . ack 1 win 32120

Continued Data Transmission from sA:
0.17370 sA.8080 > cl.1065: P 0505:1953(1448) ack 1 win 31856
0.17376 cl.1065 > sA.8080: R 1:1(0) win 0

Failed Connection Migration Attempt by sC:
0.17423 sC.1499 > cl.1065: S 0:0(0) win 0 <migrate PRELOAD 1>
0.17450 cl.1065 > sC.1499: R 0:0(0) ack 1 win 0

Resumed Data Transmission from sB:
0.24073 sB.1033 > cl.1065: P 1953:3413(1460) ack 1 win 32120
0.25663 cl.1065 > sB.1033: . ack 3413 win 31856
0.33430 sB.1033 > cl.1065: P 3413:4873(1460) ack 1 win 32120
0.42776 sB.1033 > cl.1065: P 4873:6333(1460) ack 1 win 32120
0.42784 cl.1065 > sB.1033: . ack 6333 win 31856
. . .

An annotated failover trace (collected at the client) depicting the migration of a connection to one of two candidate servers.

  47. Wedge Overhead The Request overhead of the wedge as a function of request size.

  48. Issues • Dual simultaneous migration is still not possible • The wedge definitely adds some overhead • Application-level sync is still non-trivial - What if content changes? - Still doesn’t solve interactive streams - Some applications will require modifications to take advantage of mobility. • Some apps still assume static IP addresses • What if a server dies before announcing its connections? • Needs wide-scale deployment
