
The Role of “Awareness” in Internet Protocol Performance


Presentation Transcript


  1. The Role of “Awareness” in Internet Protocol Performance Carey Williamson Professor/iCORE Senior Research Fellow Department of Computer Science University of Calgary

  2. Introduction • It is an exciting time to be an Internet researcher (or even a user!) • The last 10 years of Internet development have brought many advances: • World Wide Web (WWW) • Media streaming applications • “Wi-Fi” wireless LANs • Mobile computing • E-Commerce, mobile commerce • Pervasive/ubiquitous computing

  3. The Wireless Web • The emergence and convergence of these technologies enable the “wireless Web” • the wireless classroom (CRAN) • the wireless workplace • the wireless home • My iCORE mandate: design, build, test, and evaluate wireless Web infrastructures • Holy grail: “anything, anytime, anywhere” access to information (when we want it, of course!)

  4. Some Challenges • Portable computing devices: no problem (cell phones, PDAs, notebooks, laptops…) • Wireless access: not much of a problem (WiFi, 802.11b, BlueTooth, Pringles…) • Security: still an issue, but being addressed • Services: the next big growth area??? • Performance transparency: providing an end-user experience that is hopefully no worse than that in traditional wired Internet desktop environments (my focus!)

  5. Research Theme • Existing layered Internet protocol stack does not lend itself well to providing optimal performance for diversity of service demands and environments • Who should bend: users or protocols? • Explore the role of “awareness” in Internet protocol performance • Identify tradeoffs, evaluate performance

  6. Talk Overview • Introduction • Background • Internet Protocol Stack • TCP 101 • Motivating Examples • Our Work on CATNIP • Concluding Remarks

  7. Internet Protocol Stack • Application: supporting network applications and end-user services (FTP, SMTP, HTTP, DNS, NTP) • Transport: end-to-end data transfer (TCP, UDP) • Network: routing of datagrams from source to destination (IPv4, IPv6, BGP, RIP, other routing protocols) • Data Link: hop-by-hop frames, channel access, flow/error control (PPP, Ethernet, IEEE 802.11b) • Physical: raw transmission of bits (001101011...)

  8. Viewpoint • “Layered design is good; layered implementation is bad” -Anon. • Good: • unifying framework for describing protocols • modularity, black-boxes, “plug and play” functionality, well-defined interfaces (good SE) • Bad: • increases overhead (interface boundaries) • compromises performance (ignorance)

  9. Tutorial: TCP 101 • The Transmission Control Protocol (TCP) is the protocol that gets your data reliably • Used for email, Web, ftp, telnet, … • Makes sure that data is received correctly: right data, right order, exactly once • Detects and recovers from any problems that occur at the IP network layer • Mechanisms for reliable data transfer: sequence numbers, acknowledgements, timers, retransmissions, flow control...
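
As a toy illustration of the mechanisms listed on this slide (sequence numbers, acknowledgements, a retransmission timer, in-order exactly-once delivery), here is a minimal stop-and-wait sketch over a simulated lossy channel. It is not how TCP itself is implemented; the loss rate, retry limit, and channel model are all made up for the example.

```python
# Toy stop-and-wait sketch of reliable delivery over a lossy channel.
# Sequence numbers, ACKs, and a bounded retransmission loop (standing in
# for a timer) give in-order, exactly-once delivery. Loss rate and retry
# limit are arbitrary; this illustrates the ideas, not TCP's actual design.
import random

def unreliable_channel(packet, loss_rate):
    """Delivers the packet, or drops it and returns None."""
    return None if random.random() < loss_rate else packet

def reliable_transfer(data_items, loss_rate=0.3, max_tries=20):
    received = {}                                          # receiver state: seq -> item (deduplicated)
    for seq, item in enumerate(data_items):                # sender side, one segment at a time
        for _ in range(max_tries):                         # retransmit until ACKed ("timer" stand-in)
            if unreliable_channel((seq, item), loss_rate) is not None:
                received.setdefault(seq, item)             # receiver ignores duplicate data
                ack_ok = unreliable_channel(seq, loss_rate) is not None
            else:
                ack_ok = False                             # data lost: no ACK will come back
            if ack_ok:
                break                                      # ACK arrived; move to the next segment
        else:
            raise RuntimeError(f"gave up on segment {seq}")
    return [received[s] for s in sorted(received)]         # right data, right order, exactly once

if __name__ == "__main__":
    print(reliable_transfer(["right data,", "right order,", "exactly once"]))
```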

  10. TCP 101 (Cont’d) • TCP is a connection-oriented protocol [Figure: connection setup (SYN, SYN/ACK, ACK), data transfer (GET URL, “YOUR DATA HERE”), and teardown (FIN, FIN/ACK, ACK)]
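
The whole exchange in the figure is handled by the operating system's TCP implementation; application code just opens a connection, writes, reads, and closes. A minimal Python sketch follows, where the host name and request are placeholders:

```python
# Minimal sketch (not from the slides) of a TCP client. connect() triggers
# the SYN / SYN-ACK / ACK handshake shown above, sendall()/recv() carry the
# "GET URL" request and the data, and closing the socket starts the
# FIN / FIN-ACK / ACK teardown. Host and port are placeholders.
import socket

def fetch(host="example.com", port=80):
    with socket.create_connection((host, port)) as sock:    # three-way handshake
        request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        sock.sendall(request.encode())                       # data over the reliable byte stream
        chunks = []
        while True:
            data = sock.recv(4096)                           # bytes arrive in order, exactly once
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)                                  # leaving the block closes the connection

if __name__ == "__main__":
    print(fetch()[:200])
```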

  11–13. TCP 101 (Cont’d) • TCP slow-start and congestion avoidance [Animation: the congestion window grows as ACKs return, allowing more data packets per round trip]

  14. TCP 101 (Cont’d) • This (exponential growth) “slow start” process continues until either of the following happens: • packet loss: after a brief recovery phase, you enter a (linear growth) “congestion avoidance” phase based on slow-start threshold found • all done: terminate connection and go home
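
A rough sketch of the window dynamics just described, counting the congestion window in segments per round-trip time: exponential doubling during slow start, linear growth in congestion avoidance, and a halved threshold after a loss. This is a simplified, Reno-flavoured model of my own for illustration; the loss reaction here resumes at the new threshold, whereas a retransmit timeout would restart slow start from one segment.

```python
# Simplified model of slow start and congestion avoidance, in segments per
# RTT. On loss, ssthresh is set to half the current window and sending
# resumes at ssthresh with linear growth (a fast-recovery-style reaction);
# after a timeout, real TCP would instead restart slow start from 1 segment.
def simulate_cwnd(rounds=12, ssthresh=8, loss_rounds=frozenset()):
    cwnd, history = 1, []
    for rtt in range(rounds):
        history.append(cwnd)
        if rtt in loss_rounds:                      # a loss is detected in this round
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)          # slow start: exponential growth up to ssthresh
        else:
            cwnd += 1                               # congestion avoidance: linear growth
    return history

if __name__ == "__main__":
    print(simulate_cwnd(rounds=12, ssthresh=8, loss_rounds={6}))
    # [1, 2, 4, 8, 9, 10, 11, 5, 6, 7, 8, 9]
```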

  15. Simple Observation • Consider a big file transfer download: • brief startup period to estimate network bandwidth; most time spent sending data at the “right rate”; small added penalty for lost packet(s) • Consider a typical Web document transfer: • median size about 6 KB, mean about 10 KB • most time is spent in startup period; as soon as you find out the network capacity, you’re done! • if you lose a packet or two, it hurts a lot!!!
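
To make the observation concrete, a back-of-the-envelope sketch: count the round trips slow start needs to deliver a transfer, assuming a 1460-byte segment, an initial window of one segment, no losses, and no receiver-window limit (all assumptions of mine, not figures from the talk).

```python
# Back-of-the-envelope sketch: round trips needed to deliver a transfer
# when the congestion window starts at 1 segment and doubles each RTT
# (loss-free slow start; segment size, initial window, and the absence of
# receiver-window limits are simplifying assumptions).
import math

def slow_start_rtts(transfer_bytes, mss=1460):
    segments = math.ceil(transfer_bytes / mss)
    sent, cwnd, rtts = 0, 1, 0
    while sent < segments:
        sent += cwnd      # one window of segments per round trip
        cwnd *= 2         # window doubles each RTT during slow start
        rtts += 1
    return rtts

if __name__ == "__main__":
    for size in (6 * 1024, 10 * 1024, 10 * 1024 * 1024):   # 6 KB doc, 10 KB doc, 10 MB file
        print(f"{size:>10} bytes -> {slow_start_rtts(size)} RTTs of slow start")
```

Under these assumptions a 6 to 10 KB document finishes in about three or four RTTs, so the entire transfer is the startup phase, while for a 10 MB file the same ramp-up is a negligible slice of the total sending time.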

  16. The Problem (Restated) • TCP doesn’t realize this dichotomy between optimizing throughput (the classic file transfer model) versus optimizing transfer time (the Web document download model) • Wouldn’t it be nice if it did? (i.e., how much data it was sending, and over what type of network) • Some research starting to explore this...

  17. Motivating Example #1 • Wireless TCP Performance Problems [Figure: a wired Internet path (high capacity, low error rate) connected to a wireless access link (low capacity, high error rate)]

  18. Motivating Example #1 • Solution: “wireless-aware TCP” (I-TCP, ProxyTCP, Snoop-TCP, ...)
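
As one concrete flavour of "wireless-aware", here is a heavily simplified sketch of the Snoop-TCP idea: an agent at the base station caches segments heading to the mobile host, and when duplicate ACKs suggest a loss on the wireless hop it retransmits locally and suppresses those duplicates so the wired sender never shrinks its window. The class and its interfaces are illustrative only (segment-granularity sequence numbers, an assumed wireless_link.send() method); the real Snoop agent also keeps local timers and considerably more state.

```python
# Heavily simplified sketch of the Snoop-TCP idea. Toy semantics: seq and
# ack are segment indices, and ack means "next segment expected". This is
# an illustration of the approach, not the actual Snoop implementation.
class SnoopAgent:
    def __init__(self, wireless_link):
        self.link = wireless_link      # assumed to expose a .send(segment) method
        self.cache = {}                # seq -> cached, not-yet-acknowledged segment
        self.last_ack = 0

    def on_data_from_sender(self, seq, segment):
        self.cache[seq] = segment      # keep a copy in case the wireless hop drops it
        self.link.send(segment)

    def on_ack_from_mobile(self, ack):
        if ack > self.last_ack:                        # new data acknowledged
            for seq in list(self.cache):
                if seq < ack:
                    del self.cache[seq]                # drop segments the mobile has received
            self.last_ack = ack
            return "forward"                           # pass the ACK on to the wired sender
        if self.last_ack in self.cache:                # duplicate ACK: mobile is missing this segment
            self.link.send(self.cache[self.last_ack])  # retransmit locally over the wireless hop
        return "suppress"                              # hide the duplicate from the TCP sender
```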

  19–28. Motivating Example #2 • Multi-hop “ad hoc” networking [Animation: packets forwarded hop by hop across intermediate nodes between two endpoints, labelled Haruna and Carey]

  29. Motivating Example #2 • Two interesting subproblems: • Dynamic ad hoc routing: node movement can break the IP routing path at any time, disrupting the TCP connection; yet another way to lose packets!!! Possible solution: Explicit Loss Notification (ELN) • TCP flow control: the bursty nature of TCP packet transmissions can create contention for the shared wireless channel among forwarding nodes; possible solution: rate-based flow control (sketched below)
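
For the second subproblem, a minimal sketch of what rate-based pacing means (my illustration, not ELN or any specific ad hoc TCP variant): rather than sending a full window back to back, spread the segments evenly across one RTT so forwarding nodes on the shared channel see a smoother load.

```python
# Minimal sketch of rate-based pacing (illustrative only): instead of
# blasting a full congestion window as one burst, spread the segments
# evenly over one RTT so forwarding nodes see a smoother channel load.
import time

def paced_send(segments, cwnd, rtt_s, send_fn):
    interval = rtt_s / max(cwnd, 1)          # one segment every RTT/cwnd seconds
    for seg in segments[:cwnd]:              # at most one window per round trip
        send_fn(seg)
        time.sleep(interval)                 # pacing gap instead of a burst

if __name__ == "__main__":
    paced_send([f"seg{i}" for i in range(8)], cwnd=4, rtt_s=0.2, send_fn=print)
```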

  30. Our Work • Context-Aware Transport/Network Internet Protocol (CATNIP) • Motivation: “Like kittens, TCP connections are born with their eyes shut” - CLW • Research Question: How much better could TCP perform if it knew what it was trying to accomplish (e.g., Web document transfer)?

  31. Some Key Observations (I think) • Not all packet losses are created equal • TCP sources have relatively little control • IP routers have all the power!!!

  32. Tutorial: TCP 201 • There is a beautiful way to plot and visualize the dynamics of TCP behaviour • Called a “TCP Sequence Number Plot” • Plot packet events (data and acks) as points in 2-D space, with time on the horizontal axis, and sequence number on the vertical axis
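
A sketch of how such a plot can be produced with matplotlib from a trace of (time, sequence number, kind) events; the tiny trace below is invented, and in practice the events would come from a tool such as tcpdump.

```python
# Sketch of drawing a TCP sequence number plot with matplotlib from a
# packet trace of (time, sequence_number, kind) tuples. The trace here is
# made up; in practice it would be extracted from a tcpdump capture.
import matplotlib.pyplot as plt

trace = [
    (0.00, 1, "data"), (0.10, 1, "ack"),
    (0.12, 2, "data"), (0.12, 3, "data"),
    (0.22, 2, "ack"),  (0.23, 3, "ack"),
]

def seqnum_plot(events):
    for kind, marker in (("data", "x"), ("ack", "+")):
        pts = [(t, seq) for t, seq, k in events if k == kind]
        plt.scatter([t for t, _ in pts], [s for _, s in pts],
                    marker=marker, label=f"{kind} packet")
    plt.xlabel("Time (s)")
    plt.ylabel("Sequence number")
    plt.legend()
    plt.show()

if __name__ == "__main__":
    seqnum_plot(trace)
```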

  33. [Figure: TCP sequence number plot, with data packets (X) and ACK packets (+) plotted as sequence number versus time]

  34. TCP 201 (Cont’d) • What happens when a packet loss occurs? • Quiz Time... • Consider a 14-packet Web document • For simplicity, consider only a single packet loss

  35–48. [Figures: sequence number plots for the 14-packet transfer, stepping through a single packet loss at different points (early, middle, late) and the resulting retransmission and recovery behaviour]

  49. TCP 201 (Cont’d) • Main observation: • “Not all packet losses are created equal” • Losses early in the transfer have a huge adverse impact on the transfer latency • Losses near the end of the transfer always cost at least a retransmit timeout • Losses in the middle may or may not hurt, depending on congestion window size at the time of the loss
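
A crude way to quantify this observation (entirely my own simplification, not the talk's analysis): assume pure slow start from an initial window of one segment, and say a loss can be repaired by fast retransmit, costing roughly one extra RTT, only if at least three segments sent after it in the same window arrive and generate duplicate ACKs; otherwise the sender stalls for a retransmission timeout. The RTT and RTO values are placeholders.

```python
# Crude model (my assumptions): under pure slow start with an initial
# window of 1 segment, round r carries segments 2**(r-1) .. 2**r - 1.
# A loss is repaired by fast retransmit (~1 RTT) only if >= 3 later
# segments from the same window arrive and generate duplicate ACKs;
# otherwise the sender waits out a retransmission timeout.
def slow_start_round(segment):
    return segment.bit_length()                 # segment 1 -> round 1, segments 4-7 -> round 3, ...

def loss_penalty(lost, total, rtt=0.1, rto=1.0):
    last_in_round = min(2 ** slow_start_round(lost) - 1, total)
    dup_acks = last_in_round - lost             # segments after the lost one still in flight
    return rtt if dup_acks >= 3 else rto        # fast retransmit vs. timeout stall

if __name__ == "__main__":
    for lost in (2, 8, 14):                     # early, middle, and late loss in a 14-segment document
        print(f"segment {lost:2d} lost: ~{loss_penalty(lost, 14):.1f} s added")
    # early and late losses cost a timeout here; the mid-transfer loss is repaired in about one RTT
```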
