
nice traffic management without new protocols


Presentation Transcript


  1. nice traffic management without new protocols
     Bob Briscoe, Chief Researcher, BT, Oct 2012

  2. vertical stripes: this season’s colour
     • increasingly, access is no longer the bottleneck
       • PON, FTTP, FTTdp, FTTC
       • the bottleneck is moving deeper, becoming similar to a campus LAN or data centre
     • each customer’s average access utilisation is very low
       • 1-3% average during the peak hour (some reach 100%, but rarely at the same time)
     • if the bottleneck is provisioned for the worst-case load seen, that leaves a lot of leeway for much worse cases
     • traditionally the bottleneck solves this with
       • (weighted) fair queuing / round robin*
       • which is about isolation from random bad performance
       • not about skimping on capacity
     [chart: rate vs time, viewed at the second mile]
     * or policed per-customer bandwidth limits (for higher-utilisation customers, e.g. data centres)

  3. fair queuing: so 1990s
     • enforces 1/N shares*, so that’s fine? No:
       • when the average N is so low, a few more long-running customers than planned can increase N significantly
       • thereby greatly decreasing everyone else’s 1/N share
     • the problem:
       • N depends heavily on the presence of high-utilisation customers
       • usually few, but when there are many, the service seems crap
       • large buffers make this much worse, but they are not the only problem
       • 1/N is ‘fair’ at each instant, but not over time
     * as does WRR (and TCP, sort of)
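The slide’s point is easy to check arithmetically: when the planned-for N at the bottleneck is tiny, a handful of extra long-running customers slashes everyone’s 1/N share. A minimal sketch (the 1 Gb/s link and the customer counts are hypothetical, not figures from the slides):

```python
def fair_share(capacity_mbps, n_active):
    """Instantaneous per-customer share under fair queuing / WRR."""
    return capacity_mbps / n_active

LINK = 1000.0  # hypothetical 1 Gb/s second-mile link

planned = fair_share(LINK, 3)  # capacity planned for 3 long-running customers
worse = fair_share(LINK, 9)    # just 6 more than planned

print(planned, worse)  # each customer's share falls from ~333 to ~111 Mb/s
```

With N in the thousands the same 6 extra customers would be noise; with N of a few, they cut the service everyone sees to a third.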

  4. ConEx: so y=2014 (y = y(now) + 2)
     the ConEx rationale
     • FQ + volume limits? a high-volume customer is only a problem when together with other high-volume customers
     • the lack of a complete solution led to non-neutral solutions, then...
       • Comcast fair-share: limit the highest-volume customer(s) only if the cable is congested
       • better, but it penalises high volume even when the transport yields [LEDBAT]
     • research goal: we ain’t seen nothing yet on the Internet... if we designed the network for un-lame transports
     • the ConEx rationale is actually in two parts:
       • a rationale for using congestion-volume as a metric
       • the need for a change to IP (ConEx) to see congestion-volume
     • is there an 80% solution without changing IP?

  5. bottleneck congestion policer: next season’s colour
     simpler to code than draw:

       foreach pkt {
         i = classify_user(pkt)
         di += wi * (tnow - ti)   // fill
         ti = tnow
         di -= s * p              // drain
         if (di < 0) { drop(pkt) }
       }

     s: packet size; p: drop probability of the AQM
     [diagram: in the backhaul & access network, the incoming packet stream passes the policer and an AQM-metered FIFO buffer (drop probability p(t)) to the outgoing stream; one congestion token bucket per customer (fill rate wi, depth ci, level di(t)) feeds the policer]
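The pseudocode above translates almost line for line into a runnable sketch. Here is one hedged Python rendering: the per-user state (w_i, d_i, t_i) and the fill/drain steps follow the slide, while the class name, the choice to let the bucket go negative (the pseudocode never clamps; depth limiting is the spare slide’s job), and the example numbers are illustrative assumptions:

```python
class BottleneckCongestionPolicer:
    """Sketch of the slide's per-customer congestion token bucket.

    Each user i has a bucket level d[i] that fills at the configured
    congestion allowance w[i] (a congestion-bit-rate) and drains by
    s * p per packet: packet size s weighted by the AQM's current
    drop probability p. A packet is policed when the bucket is empty.
    """

    def __init__(self, allowances):
        self.w = dict(allowances)            # per-user allowance w[i]
        self.d = {u: 0.0 for u in self.w}    # bucket levels d[i]
        self.t = {u: 0.0 for u in self.w}    # last-update times t[i]

    def police(self, user, size_bytes, drop_prob, now):
        """Return True to forward the packet, False to drop (police) it."""
        self.d[user] += self.w[user] * (now - self.t[user])  # fill
        self.t[user] = now
        self.d[user] -= size_bytes * drop_prob               # drain: s * p
        return self.d[user] >= 0                             # drop if empty
```

With no congestion (p = 0) the drain is zero, so the policer never drops anything regardless of volume; only bytes pushed through a congested queue spend tokens:

```python
bcp = BottleneckCongestionPolicer({"alice": 100.0})
bcp.police("alice", 1500, 0.0, now=1.0)  # uncongested: always passes
bcp.police("alice", 1500, 0.1, now=2.0)  # congested but within allowance
```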

  6. bottleneck congestion policer (BCP): features
     • predictable quality for the many
       • keeps the queue very short, by focusing discard on those who most push at the queue (relative to their allowance to do so)
       • tends to WRR if customer traffic becomes continuous
       • app-neutral
     • applicability
       • same as any per-customer limiter
       • state: per-customer configured allowance and usage level
       • drop-in to: BRAS / MSE, RNC, OLT, DC access
       • few simultaneous customers (or many)
       • where the bottleneck location varies, there is still a need to evolve to ConEx
     [chart: anomalous vs typical customers]

  7. next steps (next season’s shoes)
     • plan to open-source BCP
       • not yet fully agreed internally
     • a baby step: it gets the industry used to congestion-volume as the metric

  8. nice traffic management without new protocols
     Q&A / discussion
     (spare slide follows)

  9. measuring contribution to congestion
     • bytes weighted by congestion level = bytes dropped (or ECN-marked) = ‘congestion-volume’ ≈ marginal cost of capacity
     • as simple to measure as volume
     [chart: e.g. 10GB of volume through 0.01% congestion and 100MB through 1% each contribute 1MB of congestion-volume; 300MB through 1% contributes 3MB]
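The slide’s example figures can be reproduced directly from the definition: congestion-volume is just the user’s byte volume weighted by the congestion level it was sent through. A minimal sketch (the two-user framing is illustrative; the volumes and congestion levels are the slide’s):

```python
def congestion_volume(volume_bytes, congestion_level):
    """Bytes weighted by congestion level, i.e. the bytes that were
    dropped (or ECN-marked): the user's contribution to congestion."""
    return volume_bytes * congestion_level

GB, MB = 10**9, 10**6

# a heavy but well-behaved user: huge volume, almost all uncongested
heavy = congestion_volume(10 * GB, 0.0001)  # 10 GB at 0.01% -> 1 MB

# a lighter user pushing through a congested period
light = congestion_volume(300 * MB, 0.01)   # 300 MB at 1%   -> 3 MB
```

Note the inversion: the user with 30x more volume contributes 3x less congestion-volume, which is why the metric is a fairer basis for policing than volume alone.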

  10. actually, each bucket needs to be two buckets, to limit bursts of congestion
      • police if either bucket empties
      [diagram: the AQM meter p(t) drains two buckets per customer: the main congestion token bucket (fill rate wi, depth ci, level di(t)) and a congestion burst limiter (fill rate k·wi, depth C)]
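The dual-bucket idea can be sketched the same way as the single policer: a deep main bucket filling at w bounds long-run congestion-volume, while a shallow bucket filling faster (k·w, k > 1) catches short bursts of congestion that the deep bucket would absorb. The structure follows the slide’s diagram; the class, the clamping at each bucket’s depth, and all numeric values are illustrative assumptions:

```python
class DualCongestionPolicer:
    """Sketch of the spare slide's two-bucket policer for one customer.

    Both buckets drain by the same congestion contribution s * p per
    packet; the packet is policed if either bucket has run empty.
    """

    def __init__(self, w, k, main_depth, burst_depth):
        self.w, self.k = w, k                  # allowance and burst factor
        self.main_cap, self.burst_cap = main_depth, burst_depth
        self.main, self.burst = main_depth, burst_depth  # start full
        self.t = 0.0

    def police(self, size_bytes, drop_prob, now):
        """Return True to forward the packet, False to police it."""
        dt, self.t = now - self.t, now
        self.main = min(self.main_cap, self.main + self.w * dt)
        self.burst = min(self.burst_cap, self.burst + self.k * self.w * dt)
        cost = size_bytes * drop_prob          # s * p, drains both buckets
        self.main -= cost
        self.burst -= cost
        return self.main >= 0 and self.burst >= 0  # police if either empties
```

A deep main bucket alone would let a customer save up allowance and then inject a damaging burst of congestion all at once; the shallow fast-filling bucket empties first in that case and triggers policing even while the main bucket is still well in credit.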
