
Data Communications vs. Distributed Computing


Presentation Transcript


  1. Data Communications vs. Distributed Computing Dr. Craig Partridge Chief Scientist, BBN Technologies Chair, ACM SIGCOMM

  2. A Quick History • In the 1980s, the data comm community largely stopped leading in network application development • Overwhelmed by lower layer research problems • Other communities stepped in: • OS and distributed systems • Supercomputing and physics

  3. An unfortunate side effect • The two fields most expert in networking don’t talk as much as they should • Indeed, I was invited to talk here because it was considered nice to have a networking perspective...

  4. What’s new in networking • So what have those networking guys been up to for the past ten years or so??? • One person’s perspective • I’ve tried to focus on fun topics • So nothing on TCP performance • Most problems there are configuration issues

  5. Self Similarity • Trouble with queueing theory • By late 1980s, clear that classic models didn’t work for data traffic • Off by factors of 10 or 100 in queue size estimates • Enter Leland, Taqqu, Willinger & Wilson (‘93) • Data traffic is self-similar (fractal)

  6. Self Similarity Example

  7. More Self Similarity • Self-similarity means traffic smooths very slowly • Traffic sampled at 100-second intervals looks much like traffic sampled at 0.01-second intervals • High peak-to-mean ratios (see the sketch below)
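
  A minimal sketch (my illustration, not from the talk) of the "smooths very slowly" point: aggregate Poisson traffic and a crude heavy-tailed ON/OFF surrogate over larger and larger windows and compare their peak-to-mean ratios. The source count, rates, and Pareto parameters are arbitrary.

    # Sketch: Poisson counts smooth quickly as the window grows (variance of the
    # mean falls like 1/m); traffic built from heavy-tailed ON/OFF sources keeps
    # a high peak-to-mean ratio much longer.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2**18  # number of base time slots

    # Poisson arrivals: 10 packets per slot on average.
    poisson = rng.poisson(10, size=n)

    # Crude long-range-dependent surrogate: an ON/OFF source whose ON and OFF
    # period lengths are Pareto distributed (infinite variance when shape < 2).
    def pareto_on_off(n_slots, shape=1.5, scale=5, rate=20):
        traffic = np.zeros(n_slots)
        t, on = 0, True
        while t < n_slots:
            length = int(scale * (rng.pareto(shape) + 1))
            if on:
                traffic[t:t + length] = rate
            t += length
            on = not on
        return traffic

    heavy = sum(pareto_on_off(n) for _ in range(20))  # superpose 20 sources

    def peak_to_mean(x, window):
        m = x[: len(x) // window * window].reshape(-1, window).mean(axis=1)
        return m.max() / m.mean()

    for w in (1, 10, 100, 1000):
        print(f"window {w:5d}: Poisson {peak_to_mean(poisson, w):5.2f}, "
              f"heavy-tailed {peak_to_mean(heavy, w):5.2f}")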

  8. Self Similarity in practice • Since 1993, we’ve been working to reduce self similarity to practice • Confirming it exists on various types of networks • Creating generator functions for modeling • Understanding why it exists
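
  On the "confirming it exists" point, a small sketch (my illustration, using the standard variance-time method from the self-similarity literature, not code from the talk) that estimates the Hurst parameter H from a trace of per-slot counts:

    # For a self-similar process the aggregated variance scales as
    # Var(X^(m)) ~ m^(2H - 2), so the slope of log-variance vs. log-m gives H.
    # H near 0.5 means short-range dependence (Poisson-like); H near 1 means
    # strong self-similarity.
    import numpy as np

    def hurst_variance_time(counts, windows=(1, 2, 4, 8, 16, 32, 64, 128, 256)):
        """Estimate H from per-slot packet (or byte) counts."""
        log_m, log_var = [], []
        for m in windows:
            trimmed = counts[: len(counts) // m * m]
            aggregated = trimmed.reshape(-1, m).mean(axis=1)
            log_m.append(np.log(m))
            log_var.append(np.log(aggregated.var()))
        slope = np.polyfit(log_m, log_var, 1)[0]   # slope = 2H - 2
        return 1 + slope / 2

    # Sanity check: Poisson counts should come out near H = 0.5.
    rng = np.random.default_rng(1)
    print(hurst_variance_time(rng.poisson(10, size=2**16)))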

  9. Quality of Service • A term whose definition is evolving • Bandwidth guarantee? • Loss guarantee? • Delay guarantee? • All three?

  10. The QoS Challenge • How to do QoS in a self-similar world? • Old style Poisson aggregation doesn’t work unless the network loads are very very large • QoS Triumph • Weighted Fair Queuing (Demers, Keshav, Shenker) • PGPS by Parekh

  11. Weighted Fair Queuing • A delightful insight • Transform bit-wise sharing of links into packetized sharing • Work conserving! • Nicely enough, all other work conserving schemes have been shown to be variants of WFQ
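
  A simplified sketch of that insight (my illustration, not the talk's code), for the easy case where every packet is already queued at time 0: stamp each packet with the virtual time at which it would finish under idealized bit-by-bit sharing, F = F_previous_packet_of_same_flow + length / weight, and transmit in increasing F. Real WFQ also has to track virtual time as flows come and go, which is the hard part.

    import heapq

    def wfq_order(packets, weights):
        """packets: list of (flow_id, length_bits), all backlogged at time 0.
        Returns (flow_id, arrival_index) in simplified-WFQ transmission order."""
        last_finish = {flow: 0.0 for flow in weights}
        heap = []
        for seq, (flow, length) in enumerate(packets):
            finish = last_finish[flow] + length / weights[flow]
            last_finish[flow] = finish
            heapq.heappush(heap, (finish, seq, flow))
        return [(flow, seq) for _, seq, flow in
                (heapq.heappop(heap) for _ in range(len(packets)))]

    # Two flows share a link 2:1; "a" sends small packets, "b" large ones.
    pkts = [("a", 500), ("b", 1500), ("a", 500), ("b", 1500), ("a", 500)]
    print(wfq_order(pkts, weights={"a": 2, "b": 1}))

  Even though "b" sends bigger packets, the finish-time stamps keep "a" from being starved, which is exactly the packetized approximation of sharing the link bit by bit.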

  12. Bit by Bit Fair Q’ing [fair queuing diagram]

  13. Bit by Bit WFQ [WFQ diagram]

  14. PGPS • Packetized Generalized Processor Sharing • Work by Parekh • If traffic conforms to a (general) arrival model, we can derive an upper bound on queuing delay • At high speeds, the bound is nearly independent of the number of queues in the path
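
  For reference, a commonly quoted form of the Parekh-Gallager end-to-end delay bound (my paraphrase of the published result, not a slide from the talk): if flow i is leaky-bucket constrained with burst size sigma_i and token rate rho_i, is guaranteed a rate g_i >= rho_i at each of K hops, has maximum packet size L_i, and each hop m has link rate r_m and maximum packet size L_max, then roughly

    D_i^* \le \frac{\sigma_i}{g_i} + \frac{(K-1)\,L_i}{g_i} + \sum_{m=1}^{K} \frac{L_{\max}}{r_m}

  As the link rates r_m grow large, the last two terms shrink toward zero and the bound approaches sigma_i / g_i, which is the sense in which it becomes nearly independent of the number of queues in the path.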

  15. What Next for QoS? • WFQ is expensive to implement • Though good approximations exist • General feeling that WFQ+PGPS is overkill • Something simpler should be possible • The community is working through various statistical guarantees

  16. High Performance • Around 1991, the accepted wisdom was that IP was dead because routers couldn’t go fast • Now, widely accepted that routers can achieve petabit speeds

  17. What Happened? • Mostly, good engineering • Router innards re-engineered for speed • But also some new prefix lookup algorithms • Luleå algorithm • WashU algorithm
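
  To make the lookup problem concrete, here is a plain binary trie doing IPv4 longest-prefix match (my illustration). This is not the Luleå or WashU scheme itself; those compress the trie and cut memory accesses so lookups run at line rate, but the naive structure shows what "longest prefix" means.

    class TrieNode:
        __slots__ = ("children", "next_hop")
        def __init__(self):
            self.children = [None, None]
            self.next_hop = None        # set if a prefix ends here

    def bits(addr, length):
        value = sum(int(o) << (24 - 8 * i) for i, o in enumerate(addr.split(".")))
        return [(value >> (31 - i)) & 1 for i in range(length)]

    def insert(root, prefix, next_hop):
        addr, plen = prefix.split("/")
        node = root
        for b in bits(addr, int(plen)):
            if node.children[b] is None:
                node.children[b] = TrieNode()
            node = node.children[b]
        node.next_hop = next_hop

    def lookup(root, addr):
        node, best = root, None
        for b in bits(addr, 32):
            node = node.children[b]
            if node is None:
                break
            if node.next_hop is not None:
                best = node.next_hop      # remember the longest match so far
        return best

    root = TrieNode()
    insert(root, "10.0.0.0/8", "if1")
    insert(root, "10.1.0.0/16", "if2")
    print(lookup(root, "10.1.2.3"))   # -> if2 (the longer prefix wins)
    print(lookup(root, "10.9.9.9"))   # -> if1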

  18. Ad-Hoc Networks • A new and exciting area • Imagine thousands or millions of wireless nodes in a room • They’re moving • They need to discover and federate (securely) • Managing signal-to-noise ratio is vital for performance

  19. More on Ad-Hoc Networks • Odd desire to say we’re done • Jini • Existing ad-hoc routing protocols • Yet the problems remain huge • Device location hard (user interface harder) • Density challenges existing protocols • Clashes over spectrum

  20. Robustness • To keep the Internet robust we must • Improve device reliability by a factor of 10 every two years; OR • Improve our protocols to be more resilient • Assuming something is always going up or down • How to minimize impact • In traffic • In performance • Can the PODC community help here?
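
  A back-of-the-envelope sketch (the numbers are mine and purely illustrative) of why "something is always going up or down" at Internet scale:

    # Steady-state unavailability of one box is MTTR / (MTBF + MTTR); multiply
    # by the number of boxes to get the expected count down at any instant.
    def expected_down(n_devices, mtbf_hours, mttr_hours):
        unavailability = mttr_hours / (mtbf_hours + mttr_hours)
        return n_devices * unavailability

    # Hypothetical fleet: 100,000 routers, 50,000-hour MTBF, 4-hour repairs.
    down = expected_down(100_000, 50_000, 4)
    print(f"~{down:.0f} routers down at any given moment")   # ~8

    # Probability that nothing is down right now (treating boxes as independent):
    p_all_up = (1 - 4 / (50_000 + 4)) ** 100_000
    print(f"chance the whole fleet is up: {p_all_up:.2e}")   # roughly e^-8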

  21. Lots of other initiatives • Simulation • How do you simulate something 100 times bigger than anything ever built? • Measurement • How much can you learn just from the edge of the network? • Errors • Packets damaged frequently, what to do? • Anycast • Nice idea, how do we make it real?

  22. The Last Slide • There’s lots of fun work in networking • A lot has been happening • A lot will happen • Some of the problems are also of interest to the PODC community • I look forward to talking with you about them.
