
15-441 Computer Networking


Presentation Transcript


  1. 15-441 Computer Networking Lecture 29 – Final Review

  2. Fun • TCP/IP drinking game • (http://infohost.nmt.edu/~val/tcpip.html) • Internet map • (http://xkcd.com/c195.html)

  3. Outline – Lec 14 • The recurring IP address space problem • IPv6 • NAT • Tunneling / Overlays • Network Management • Autoconfiguration • SNMP (notes only)

  4. IPv6 • “Next generation” IP. • Most urgent issue: increasing address space • 128 bit addresses • Simplified header for faster processing: • No checksum (why not?) • No fragmentation (?) • Support for guaranteed services: priority and flow id • Options handled as “next header” • reduces overhead of handling options [Header diagram: Version/Priority, Flow label, Length, Next header, Hop limit, Source IP address, Destination IP address]

  5. Network Address Translation • NAT maps (private source IP, source port) onto (public source IP, unique source port) • reverse mapping on the way back • destination host does not know that this process is happening • Very simple working solution. • NAT functionality fits well with firewalls [Diagram: a packet from private host A to B has its source rewritten from (Priv A IP, A Port) to (Publ A IP, A Port’); replies to (Publ A IP, A Port’) are mapped back to (Priv A IP, A Port)]
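
To make the mapping concrete, here is a minimal sketch of the translation table a NAT might keep. The class and method names (NatTable, translate_outbound, translate_inbound) and the port range are illustrative, not any particular implementation.

```python
# Minimal sketch of a NAT translation table (illustrative names, not a real implementation).
# Outbound: (private IP, private port) -> (public IP, allocated public port).
# Inbound: reverse lookup on the allocated public port.
import itertools

class NatTable:
    def __init__(self, public_ip, first_port=20000):
        self.public_ip = public_ip
        self.next_port = itertools.count(first_port)  # next unique public port to hand out
        self.out_map = {}   # (priv_ip, priv_port) -> pub_port
        self.in_map = {}    # pub_port -> (priv_ip, priv_port)

    def translate_outbound(self, priv_ip, priv_port):
        key = (priv_ip, priv_port)
        if key not in self.out_map:                 # first packet of this flow: allocate a port
            pub_port = next(self.next_port)
            self.out_map[key] = pub_port
            self.in_map[pub_port] = key
        return self.public_ip, self.out_map[key]    # rewrite source address/port to these

    def translate_inbound(self, pub_port):
        return self.in_map[pub_port]                # rewrite destination back to the private host

nat = NatTable("128.2.1.1")
print(nat.translate_outbound("10.0.0.5", 4321))    # ('128.2.1.1', 20000)
print(nat.translate_inbound(20000))                # ('10.0.0.5', 4321)
```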

  6. Tunneling • Force a packet to go to a specific point in the network. • Path taken is different from the regular routing • Achieved by adding an extra IP header to the packet with a new destination address. • Similar to putting a letter in another envelope • preferable to using the IP source routing option • Used increasingly to deal with special routing requirements or new features. • Mobile IP, ... • Multicast, IPv6, research, ... [Diagram: the original packet is wrapped in an outer IP header addressed between the tunnel endpoints IP1 and IP2]
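
A rough sketch of the encapsulate/decapsulate idea, using a made-up Packet record rather than real IP headers; the tunnel endpoint addresses are placeholders.

```python
# Minimal sketch of IP-in-IP encapsulation (field names are illustrative, not a packet format).
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: object   # for the outer packet, the payload is the entire inner packet

def encapsulate(pkt, tunnel_entry, tunnel_exit):
    # Add an extra IP header: the network now routes on (tunnel_entry -> tunnel_exit).
    return Packet(src=tunnel_entry, dst=tunnel_exit, payload=pkt)

def decapsulate(outer):
    # At the tunnel exit, strip the outer header and forward the original packet normally.
    return outer.payload

inner = Packet("10.1.0.7", "10.2.0.9", "data")
outer = encapsulate(inner, tunnel_entry="128.2.1.1", tunnel_exit="192.5.6.2")
assert decapsulate(outer) == inner
```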

  7. DHCP • DHCPOFFER carries: • IP addressing information • Boot file/server information (for network booting) • DNS name servers • Lots of other stuff - the protocol is extensible; half of the options are reserved for local site definition and use. [Exchange: DHCPDISCOVER (broadcast) → DHCPOFFER → DHCPREQUEST → DHCPACK]

  8. Outline – Lec 15 • Exam discussion • Layering review (bridges, routers, etc.) • Exam section C. • Circuit switching refresher • Virtual Circuits - general • Why virtual circuits? • How virtual circuits? -- tag switching! • Two modern implementations • ATM - telco-style virtual circuits • MPLS - IP-style virtual circuits

  9. Virtual Circuit Switching: Label (“Tag”) Swapping • Global VC ID allocation -- ICK! Solution: Per-link uniqueness. Change VCI each hop.
     Router  Input Port  Input VCI  Output Port  Output VCI
     R1      1           5          3            9
     R2      2           9          4            2
     R4      1           2          3            5
     [Diagram: hosts A and B reach Dst through switches R1–R4; the VCI is rewritten at each hop]
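
The per-hop rewrite can be expressed directly as a table lookup. The entries below come from the slide's table; the assumption that R1's output port 3 feeds R2's input port 2, and R2's output port 4 feeds R4's input port 1, follows the pictured topology.

```python
# Sketch of per-hop label (VCI) swapping using the table from the slide.
# Each switch looks up (input port, input VCI) and rewrites to (output port, output VCI),
# so VCIs only need to be unique per link, not globally.
tables = {
    "R1": {(1, 5): (3, 9)},
    "R2": {(2, 9): (4, 2)},
    "R4": {(1, 2): (3, 5)},
}

def forward(switch, in_port, in_vci):
    out_port, out_vci = tables[switch][(in_port, in_vci)]
    return out_port, out_vci

# A packet from host A enters R1 on port 1 with VCI 5 and is relabeled at every hop.
print(forward("R1", 1, 5))   # (3, 9)
print(forward("R2", 2, 9))   # (4, 2)
print(forward("R4", 1, 2))   # (3, 5) -- delivered toward Dst
```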

  10. Multi Protocol Label Switching - MPLS • Selective combination of VCs + IP • Today: MPLS useful for traffic engineering, reducing core complexity, and VPNs • Core idea: Layer 2 carries VC label • Could be ATM (which has its own tag) • Could be a “shim” on top of Ethernet/etc.: • Existing routers could act as MPLS switches just by examining that shim -- no radical re-design. Gets flexibility benefits, though not cell switching advantages [Diagram: the MPLS shim label sits between the Layer 2 header and the Layer 3 (IP) header]

  11. MPLS + IP • Map packet onto Forward Equivalence Class (FEC) • Simple case: longest prefix match of destination address • More complex if QoS or policy routing is used • In MPLS, a label is associated with the packet when it enters the network and forwarding is based on the label in the network core. • Label is swapped (as with ATM VCIs) • Potential advantages: • Packet forwarding can be faster • Routing can be based on ingress router and port • Can use more complex routing decisions • Can force packets to follow a pinned route
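
A small sketch of FEC classification by longest-prefix match for the simple case above; the prefixes and label names are made-up examples, and real ingress routers do this lookup in hardware.

```python
# Sketch of mapping a packet to a Forwarding Equivalence Class by longest-prefix match.
import ipaddress

fec_table = {
    ipaddress.ip_network("128.2.0.0/16"): "label-17",
    ipaddress.ip_network("128.2.4.0/24"): "label-42",
    ipaddress.ip_network("0.0.0.0/0"):    "label-default",
}

def classify(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    # Among matching prefixes, pick the most specific (longest) one.
    matches = [net for net in fec_table if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return fec_table[best]

print(classify("128.2.4.9"))    # label-42 (the /24 wins over the /16)
print(classify("128.2.9.1"))    # label-17
print(classify("8.8.8.8"))      # label-default
```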

  12. Take Home Points • Costs/benefits/goals of virtual circuits • Cell switching (ATM) • Fixed-size pkts: Fast hardware • Packet size picked for low voice jitter. Understand trade-offs. • Beware packet shredder effect (drop entire pkt) • Tag/label swapping • Basis for most VCs. • Makes label assignment link-local. Understand mechanism. • MPLS - IP meets virtual circuits • MPLS tunnels used for VPNs, traffic engineering, reduced core routing table sizes

  13. Outline – Lec 16 • Transport introduction • Error recovery & flow control

  14. Important Lessons • Transport service • UDP → mostly just IP service • TCP → congestion controlled, reliable, byte stream • Types of ARQ protocols • Stop-and-wait → slow, simple • Go-back-n → can keep link utilized (except w/ losses) • Selective repeat → efficient loss recovery • Sliding window flow control • Addresses buffering issues and keeps link utilized

  15. Sender/Receiver State [Diagram: the sender window spans from the max ACK received to the next seqnum, dividing the stream into Sent & Acked / Sent Not Acked / OK to Send / Not Usable; the receiver window spans from the next expected seqnum to the max acceptable, dividing it into Received & Acked / Acceptable Packet / Not Usable]
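
A minimal sketch of the send-side bookkeeping in the diagram, assuming per-packet sequence numbers and a cumulative ACK; the class name and fields are illustrative.

```python
# Sketch of send-side sliding-window state: sequence numbers split into
# "sent & ACKed", "sent not ACKed", "OK to send", and "not usable" by a fixed window.
class SlidingWindowSender:
    def __init__(self, window_size):
        self.window = window_size
        self.max_ack_received = -1    # highest cumulative ACK seen so far
        self.next_seqnum = 0          # next sequence number to send

    def can_send(self):
        # "OK to send": the next seqnum must stay within one window of the last ACK.
        return self.next_seqnum <= self.max_ack_received + self.window

    def send(self):
        assert self.can_send()
        seq = self.next_seqnum
        self.next_seqnum += 1
        return seq

    def on_ack(self, acknum):
        # A cumulative ACK slides the window forward, allowing more packets out.
        self.max_ack_received = max(self.max_ack_received, acknum)

s = SlidingWindowSender(window_size=4)
while s.can_send():
    s.send()                         # sends seqnums 0..3, then the window is full
s.on_ack(1)                          # ACK covering 0 and 1 slides the window forward
print(s.can_send(), s.next_seqnum)   # True 4
```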

  16. Outline – Lec 17 • TCP flow control • Congestion sources and collapse • Congestion control basics

  17. Important Lessons • Why is congestion control needed? • How to evaluate congestion control algorithms? • Why is AIMD the right choice for congestion control? • TCP flow control • Sliding window → mapping to packet headers • 32-bit sequence numbers (bytes)

  18. Window Flow Control: Send Side [Diagram: TCP header fields (source/dest port, sequence number, acknowledgment, header length/flags, window, checksum, urgent pointer, options) on the packets sent and received; the send-side byte stream is divided into acknowledged / sent / to be sent / outside window as the application writes data]

  19. Causes & Costs of Congestion • When a packet is dropped, any “upstream” transmission capacity used for that packet was wasted!

  20. Phase Plots • Simple way to visualize behavior of competing connections over time [Plot axes: User 1’s allocation x1 vs. User 2’s allocation x2]

  21. Phase Plots • What are desirable properties? • What if flows are not equal? [Plot: the fairness line and efficiency line on the x1/x2 plane, with the optimal point at their intersection, overload above the efficiency line and underutilization below it]

  22. Outline – Lec 18 • TCP connection setup/data transfer • TCP reliability

  23. Establishing Connection: Three-Way Handshake • Each side notifies the other of the starting sequence number it will use for sending • Why not simply choose 0? • Must avoid overlap with earlier incarnation • Security issues • Each side acknowledges the other’s sequence number • SYN-ACK: Acknowledge sequence number + 1 • Can combine second SYN with first ACK [Exchange: client sends SYN with SeqC; server replies SYN with SeqS and ACK SeqC+1; client replies ACK SeqS+1]
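
A tiny sketch of the sequence-number arithmetic in the handshake, with randomly chosen initial sequence numbers as the slide motivates; the dictionaries below stand in for real TCP segments.

```python
# Sketch of the three-way handshake sequence-number exchange.
# ISNs are chosen randomly rather than starting at 0, to avoid overlap with an
# earlier incarnation of the same connection (and for security).
import random

seq_c = random.randrange(2**32)          # client's initial sequence number
seq_s = random.randrange(2**32)          # server's initial sequence number

syn     = {"flags": "SYN",     "seq": seq_c}
syn_ack = {"flags": "SYN+ACK", "seq": seq_s, "ack": seq_c + 1}   # second SYN combined with first ACK
ack     = {"flags": "ACK",     "seq": seq_c + 1, "ack": seq_s + 1}

assert syn_ack["ack"] == syn["seq"] + 1      # each side acknowledges the other's seq + 1
assert ack["ack"] == syn_ack["seq"] + 1
```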

  24. Tearing Down Connection • Either side can initiate tear down • Send FIN signal • “I’m not going to send any more data” • Other side can continue sending data • Half open connection • Must continue to acknowledge • Acknowledging FIN • Acknowledge last sequence number + 1 [Exchange: A sends FIN SeqA; B replies ACK SeqA+1 and may keep sending data, which A ACKs; later B sends FIN SeqB and A replies ACK SeqB+1]

  25. Fast Retransmit [Plot: sequence number vs. time for packets and ACKs; one lost packet (X) produces a run of duplicate ACKs, which triggers a retransmission before any timeout]
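
A sketch of the duplicate-ACK rule behind fast retransmit, assuming the usual threshold of three duplicate ACKs; the helper below only counts duplicates, it does not model the rest of loss recovery.

```python
# Sketch of the fast-retransmit rule: three duplicate ACKs for the same sequence
# number trigger a retransmission without waiting for a timeout.
DUP_ACK_THRESHOLD = 3

def process_acks(acks):
    last_ack, dup_count = None, 0
    retransmitted = []
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == DUP_ACK_THRESHOLD:
                retransmitted.append(ack)     # resend the segment the receiver is missing
        else:
            last_ack, dup_count = ack, 0
    return retransmitted

# The receiver keeps ACKing 5 because segment 5 was lost; the 3rd duplicate triggers a retransmit.
print(process_acks([3, 4, 5, 5, 5, 5, 9]))    # [5]
```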

  26. TCP (Reno variant) [Plot: sequence number vs. time; several losses (X) in one window leave Reno unable to recover with duplicate ACKs alone ("Now what?" - timeout)]

  27. Important Lessons • TCP state diagram → setup/teardown • TCP timeout calculation → how is RTT estimated? • Modern TCP loss recovery • Why are timeouts bad? • How to avoid them? → e.g. fast retransmit
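
For the timeout calculation, here is a sketch of the standard EWMA estimator (Jacobson/Karels style, as standardized in RFC 6298) with the usual gains of 1/8 and 1/4; the minimum-RTO and clock-granularity details are omitted.

```python
# Sketch of smoothed RTT estimation and the retransmission timeout.
class RttEstimator:
    def __init__(self):
        self.srtt = None      # smoothed RTT
        self.rttvar = None    # RTT variation

    def sample(self, r):
        if self.srtt is None:                       # first measurement
            self.srtt, self.rttvar = r, r / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - r)
            self.srtt = 0.875 * self.srtt + 0.125 * r
        return self.rto()

    def rto(self):
        return self.srtt + 4 * self.rttvar          # timeout set well above the typical RTT

est = RttEstimator()
for r in [0.100, 0.110, 0.300, 0.105]:              # RTT samples in seconds
    print(round(est.sample(r), 3))
```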

  28. Outline – Lec 20 • TCP congestion avoidance • TCP slow start • TCP modeling

  29. TCP Congestion Control • Changes to TCP motivated by ARPANET congestion collapse • Basic principles • AIMD • Packet conservation • Reaching steady state quickly • ACK clocking

  30. Congestion Avoidance Sequence Plot [Plot: sequence number vs. time for packets and ACKs during congestion avoidance]

  31. TCP Packet Pacing • Congestion window helps to “pace” the transmission of data packets • In steady state, a packet is sent when an ack is received • Data transmission remains smooth, once it is smooth • Self-clocking behavior [Diagram: sender and receiver with packet spacing Pb, Pr and ACK spacing As, Ar, Ab illustrating self-clocking]

  32. Slow Start Packet Pacing • How do we get this clocking behavior to start? • Initialize cwnd = 1 • Upon receipt of every ack, cwnd = cwnd + 1 • Implications • Window actually increases to W in RTT * log2(W) • Can overshoot window and cause packet loss
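
A sketch tying the last few slides together: slow start until ssthresh, additive increase afterwards, multiplicative decrease on loss. Window units are whole segments and the ssthresh value is an arbitrary example; real TCP counts bytes and has more cases.

```python
# Sketch of congestion-window growth: exponential growth (slow start) up to ssthresh,
# then additive increase (congestion avoidance), with multiplicative decrease on loss.
class CongestionWindow:
    def __init__(self, ssthresh=16):
        self.cwnd = 1.0
        self.ssthresh = ssthresh

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1               # slow start: +1 per ACK, so cwnd doubles each RTT
        else:
            self.cwnd += 1 / self.cwnd   # congestion avoidance: roughly +1 per RTT overall

    def on_loss(self):
        self.ssthresh = max(self.cwnd / 2, 2)   # multiplicative decrease
        self.cwnd = self.ssthresh               # (fast recovery; a timeout would reset cwnd to 1)

w = CongestionWindow()
for _ in range(40):
    w.on_ack()
print(round(w.cwnd, 1))   # well past ssthresh, now growing slowly
w.on_loss()
print(round(w.cwnd, 1))   # cut in half
```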

  33. Slow Start Sequence Plot [Plot: sequence number vs. time for packets and ACKs during slow start]

  34. TCP Saw Tooth Behavior [Plot: congestion window vs. time: initial slow start, then the fast retransmit and recovery sawtooth; timeouts may still occur, followed by slow start to pace packets again]

  35. TCP Performance • Can TCP saturate a link? • Congestion control • Increase utilization until… link becomes congested • React by decreasing window by 50% • Window is proportional to rate * RTT • Doesn’t this mean that the network oscillates between 50 and 100% utilization? • Average utilization = 75%?? • No…this is *not* right!

  36. TCP Performance • If we have a large router queue → can get 100% utilization • But, router queues can cause large delays • How big does the queue need to be? • Windows vary from W → W/2 • Must make sure that link is always full • W/2 > RTT * BW • W = RTT * BW + Qsize • Therefore, Qsize > RTT * BW • Ensures 100% utilization • Delay? • Varies between RTT and 2 * RTT
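
The arithmetic on the slide, worked with assumed example numbers (a 100 Mbit/s bottleneck link and a 100 ms round-trip time):

```python
# Worked example of the buffer-sizing rule: Qsize >= RTT * BW keeps the link full.
bw_bits_per_s = 100e6        # assumed link bandwidth
rtt_s = 0.100                # assumed round-trip time

bdp_bytes = bw_bits_per_s * rtt_s / 8          # RTT * BW, the bandwidth-delay product
qsize_bytes = bdp_bytes                        # queue sized to one BDP
window_bytes = bdp_bytes + qsize_bytes         # W = RTT * BW + Qsize

print(f"BDP      = {bdp_bytes / 1e6:.2f} MB")      # 1.25 MB
print(f"Queue    = {qsize_bytes / 1e6:.2f} MB")    # 1.25 MB
print(f"Window W = {window_bytes / 1e6:.2f} MB")   # 2.50 MB; after a loss W/2 still covers the BDP
# Queueing delay varies between 0 and Qsize/BW = RTT, so total delay varies between RTT and 2*RTT.
```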

  37. Single TCP Flow: Router with large enough buffers for full link utilization

  38. TCP (Summary) • General loss recovery • Stop and wait • Selective repeat • TCP sliding window flow control • TCP state machine • TCP loss recovery • Timeout-based • RTT estimation • Fast retransmit • Selective acknowledgements

  39. TCP (Summary) • Congestion collapse • Definition & causes • Congestion control • Why AIMD? • Slow start & congestion avoidance modes • ACK clocking • Packet conservation • TCP performance modeling • How does TCP fully utilize a link? • Role of router buffers

  40. Outline – Lec 19 • HTTP review and details (more in notes) • Persistent HTTP review • HTTP caching • Content distribution networks

  41. Persistent HTTP (review) • Nonpersistent HTTP issues: • Requires 2 RTTs per object • OS must allocate host resources for each TCP connection • But browsers often open parallel TCP connections to fetch referenced objects • Persistent HTTP: • Server leaves connection open after sending response • Subsequent HTTP messages between same client/server are sent over that connection • Persistent without pipelining: • Client issues new request only when previous response has been received • One RTT for each referenced object • Persistent with pipelining: • Default in HTTP/1.1 • Client sends requests as soon as it encounters a referenced object • As little as one RTT for all the referenced objects
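
A client-side sketch of persistent HTTP without pipelining: several objects fetched over a single TCP connection. The host name and paths are placeholders; Python's http.client reuses the connection but does not pipeline requests.

```python
# Sketch of persistent HTTP: multiple requests reuse one TCP connection instead of
# paying a connection setup (and slow start) per object.
import http.client

conn = http.client.HTTPConnection("www.example.com")   # one TCP connection
for path in ["/index.html", "/logo.png", "/style.css"]:
    conn.request("GET", path)        # HTTP/1.1 keeps the connection alive by default
    resp = conn.getresponse()
    body = resp.read()               # must drain the response before reusing the connection
    print(path, resp.status, len(body))
conn.close()
```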

  42. Web Proxy Caches • User configures browser: Web accesses via cache • Browser sends all HTTP requests to cache • Object in cache: cache returns object • Else: cache requests object from origin server, then returns object to client [Diagram: clients send HTTP requests to the proxy server, which answers from its cache or forwards the request to the origin server]

  43. Content Distribution Networks (CDNs) • The content providers are the CDN customers • Content replication: • CDN company installs hundreds of CDN servers throughout the Internet, close to users • CDN replicates its customers’ content in CDN servers • When a provider updates content, the CDN updates its servers [Diagram: origin server in North America pushes content through a CDN distribution node to CDN servers in S. America, Asia, and Europe]

  44. Consistent Hash – Example • Construction • Assign each of C hash buckets to random points on a mod 2^n circle, where n is the hash key size • Map each object to a random position on the circle • Hash of object = closest clockwise bucket • Smoothness → addition of a bucket does not cause movement between existing buckets • Spread & Load → small set of buckets lie near an object • Balance → no bucket is responsible for a large number of objects [Diagram: a mod 16 circle marked at 0, 4, 8, 12, and 14, showing an object mapping to its closest clockwise bucket]
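
A sketch of the construction: bucket names and object names are hashed onto one circle, and an object belongs to the closest bucket clockwise. The bucket names, the SHA-1 hash, and the 32-bit ring size are illustrative choices, not part of the original example.

```python
# Sketch of consistent hashing: buckets and objects on a mod 2^n circle.
import bisect
import hashlib

RING_BITS = 32                                  # positions live on a mod 2^32 circle

def position(key):
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** RING_BITS)

class ConsistentHash:
    def __init__(self, buckets):
        self.ring = sorted((position(b), b) for b in buckets)

    def lookup(self, obj):
        points = [p for p, _ in self.ring]
        # First bucket at or after the object's position, wrapping around the circle.
        i = bisect.bisect_left(points, position(obj)) % len(self.ring)
        return self.ring[i][1]

ch = ConsistentHash(["cache-a", "cache-b", "cache-c"])
print(ch.lookup("foo.jpg"))
# Adding a bucket only moves the objects that now fall between it and its predecessor
# (smoothness); every other object keeps its bucket.
```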

  45. How Akamai Works [Diagram: the end-user gets index.html from cnn.com (the content provider), then resolves the embedded foo.jpg through the DNS root server, the Akamai high-level DNS server, and the Akamai low-level DNS server to a nearby matching Akamai server, from which it gets /cnn.com/foo.jpg (steps 1–12)]

  46. HTTP (Summary) • Simple text-based file exchange protocol • Support for status/error responses, authentication, client-side state maintenance, cache maintenance • Workloads • Typical document structure, popularity • Server workload • Interactions with TCP • Connection setup, reliability, state maintenance • Persistent connections • How to improve performance • Persistent connections • Caching • Replication

  47. Outline • p2p file sharing techniques • Downloading: Whole-file vs. chunks • Searching • Centralized index (Napster, etc.) • Flooding (Gnutella, etc.) • Smarter flooding (KaZaA, …) • Routing (Freenet, etc.) • Uses of p2p - what works well, what doesn’t? • servers vs. arbitrary nodes • Hard state (backups!) vs soft-state (caches) • Challenges • Fairness, freeloading, security, …

  48. Next Topic... • Centralized Database • Napster • Query Flooding • Gnutella • Intelligent Query Flooding • KaZaA • Swarming • BitTorrent • Unstructured Overlay Routing • Freenet • Structured Overlay Routing • Distributed Hash Tables

  49. Napster: Publish [Diagram: a peer at 123.2.21.23 tells the central server “I have X, Y, and Z!”; the server records insert(X, 123.2.21.23), ...]

  50. Napster: Search [Diagram: a client asks the central server “Where is file A?”; the server replies search(A) --> 123.2.0.18, and the client then fetches the file directly from 123.2.0.18]
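
A sketch of the centralized index behind both Napster slides: publish adds a peer's files to the index, search returns the peers that have a file, and the download then happens peer-to-peer. The class is illustrative; the addresses and file names are the ones on the slides.

```python
# Sketch of Napster-style centralized indexing.
class CentralIndex:
    def __init__(self):
        self.index = {}                      # file name -> set of peer addresses

    def publish(self, peer, files):          # "I have X, Y, and Z!"
        for f in files:
            self.index.setdefault(f, set()).add(peer)

    def search(self, filename):              # "Where is file A?"
        return self.index.get(filename, set())

napster = CentralIndex()
napster.publish("123.2.21.23", ["X", "Y", "Z"])
napster.publish("123.2.0.18", ["A"])
print(napster.search("A"))                   # {'123.2.0.18'}; the client then fetches A directly
```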
