QoS Architecture

  1. QoS Architecture

  2. Components (Technologies) of a QoS Network • At routers: • Packet Classification • Packet Scheduling or Queuing method • At the network entrance: • Traffic conditioning: Policing, Marking, Shaping • At routers or somewhere in the network: • Admission Control • Between hosts and routers: • Signalling • Link Efficiency Mechanisms

  3. QoS Architecture Traffic Conditioning

  4. Traffic Classification

  5. Traffic Classification • Classify arriving packets according to their QoS requirements • Packet classification is done before routing table lookup

  6. Traffic Classification • Subscriber and application identification may be required

  7. Classification techniques/rules • Can be done at Layer 2 or Layer 3 • Use CoS bits in the 802.1p/q header • PPP (Point-to-Point Protocol) username, or the IP interface on the router that the subscriber is using • Values contained in the IP, TCP, or UDP headers • source and destination IP address • source and destination TCP/UDP port number • DSCP • Logical entities represented by the IP interface, such as IP subnets, Virtual Circuits (VCs), paths, tunnels, or Virtual Local Area Networks (VLANs) • A classification rule may require several different conditions to be met in order for a particular packet to be included in a given traffic class
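
A minimal sketch of multi-condition classification along these lines; the packet fields, rule format, and class names are illustrative, not any vendor's actual API:

```python
# Illustrative multi-field classifier: a rule matches only if every condition it
# specifies (5-tuple fields, DSCP, ingress VLAN, ...) is satisfied.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Packet:                      # hypothetical parsed-header view of a packet
    src_ip: str
    dst_ip: str
    proto: str                     # "tcp" / "udp"
    src_port: int
    dst_port: int
    dscp: int = 0
    vlan: Optional[int] = None

@dataclass
class Rule:
    traffic_class: str
    conditions: dict = field(default_factory=dict)   # field name -> required value

    def matches(self, pkt: Packet) -> bool:
        return all(getattr(pkt, k) == v for k, v in self.conditions.items())

RULES = [
    Rule("voice",       {"proto": "udp", "dscp": 46}),           # e.g. EF-marked RTP
    Rule("interactive", {"proto": "tcp", "dst_port": 22}),
    Rule("best-effort", {}),                                      # default: matches everything
]

def classify(pkt: Packet) -> str:
    return next(r.traffic_class for r in RULES if r.matches(pkt))

print(classify(Packet("10.0.0.1", "10.0.0.2", "udp", 5004, 5004, dscp=46)))  # -> voice
```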

  8. Classification techniques/rules • But • The IP address of a subscriber can change dynamically • It is not a particularly secure means of identification • So • New subscriber identification methods, which improve the ability to securely target specific devices or groups of devices on a LAN or WAN, are becoming available using IPsec authentication • An example of a more application-aware approach is NBAR, Network Based Application Recognition

  9. NBAR, Network Based Application Recognition

  10. Traffic Classification, Issues • Use of the DSCP for traffic classification is a very simple and direct approach for implementing QoS-based services • But how do packets obtain a non-zero code point in the first place? • An upstream system applied a DSCP code, having knowledge of the application type carried in the packets and the associated QoS requirements • In most cases, it is unwise for a carrier to rely only on the DSCP • Traffic classification is further complicated when packets are encrypted • Then the classification process must either rely on what is contained in the outer header of the IPsec-encrypted packets (such as a DSCP value) • Or first decrypt the packets to expose the inner header, inspect it, and then re-encrypt the packets • Needs to operate without adding latency

  11. Traffic Classification, Issues • Packets are marked with a specific priority denoting a requirement for special service from the network • Traffic arriving at network devices is separated into distinct flows • The number of priority levels must be large enough to provide adequate differentiation between classes, but not so large that the difference between levels becomes negligible • Four to seven levels are generally considered ideal

  12. Traffic Classification, Issues • Classify as far out towards the edge as possible

  13. Packet Scheduling or Queuing method

  14. To offer QoS, delay, jitter and loss should be bounded • For high-performance networks, such as those used by Internet2, delay is mostly propagation delay, which is determined by fiber length and the speed of light in fiber, about 201,000 km/s • For common networks, the component of delay and jitter that matters and can be minimized is the queuing delay/jitter • Special queuing mechanisms have an effect only when it is not possible to transmit a packet immediately over the output interface • The most important component of loss is loss due to congestion
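
As a rough illustration of the propagation-delay component (the 4,000 km path length is just an example):

```python
# Rough one-way propagation delay; light travels roughly 201,000 km/s in fiber.
FIBER_SPEED_KM_PER_S = 201_000

def propagation_delay_ms(path_km: float) -> float:
    """One-way propagation delay in milliseconds."""
    return path_km / FIBER_SPEED_KM_PER_S * 1000

print(propagation_delay_ms(4_000))   # -> ~19.9 ms, unaffected by any queuing scheme
```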

  15. Packet Scheduling or Queuing method • Isolate traffic flows and provide the requested QoS • The packet scheduling algorithm determines the order in which backlogged packets are transmitted on an output link • Allocates output bandwidth • Controls packet delay

  16. Packet Scheduling or Queuing method • Determines which packet gets the resource • Enforces resource allocation to each flow • To be “fair”, scheduling must: • Keep track of how many packets each flow has sent • Consider the resources reserved for each flow

  17. Queuing techniques & Congestion management techniques • FIFO • PQ, Priority Queuing • CBQ, Class Based Queuing or Custom Queuing • FQ, Fair Queuing • WFQ, Weighted Fair Queuing • LLQ, Low Latency Queuing • TBF, Token Bucket Filter • WRR, RED, GRED • WRED, Weighted Random Early Detection • D-WFQ, Distributed Weighted Fair Queueing • FB-WRED, Flow Based Weighted Random Early Detection

  18. First-Come-First-Served (FCFS or FIFO) • Does not provide any mechanism to prioritize packets, so it cannot support different QoS levels

  19. First-Come-First-Served (FCFS or FIFO) • Used so extensively because it is simple • People perceive it as very fair • Works reasonably well in the Internet, together with TCP congestion control

  20. First-Come-First-Served (FCFS or FIFO) • Can be very poor at minimizing the wait time of individual customers • A slow customer can stall the queue considerably • If all flows but one perform congestion control, the one that does not (e.g. IP telephony, multicasting) may grab much of the bandwidth • Used by default at high-bit-rate interfaces • But generally not the best option • Waiting time can be high

  21. Priority Queuing • Needed when some service classes require absolute priority over others • Or, alternatively, whenever there is traffic that is either strictly higher or lower priority than some other traffic

  22. Priority Queuing • Packets are tagged with a priority level • It is a static priority algorithm • Can define 4 priorities (high, medium, normal, and low) • Tagging is done at the edge equipment and may be based on where the stream originates, where it is displayed, or who requested it • Switches or routers use multiple FIFO queues, one per priority level, on their output interfaces, and packets are placed in the queue that maps to their priority level • The output scheduler always services the queue with the highest priority level first, effectively guaranteeing that streams running at this priority level will never experience degradation unless the network is overloaded by video streams running at the same or a higher priority level • Can support delay differentiation • In the presence of high-priority flows, lower-priority queues tend to starve
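
A minimal sketch of the strict-priority service rule described above; the four levels mirror the slide, the rest is illustrative:

```python
from collections import deque

# One FIFO per priority level; 0 = highest priority (high, medium, normal, low).
queues = {0: deque(), 1: deque(), 2: deque(), 3: deque()}

def enqueue(packet, priority):
    queues[priority].append(packet)

def dequeue():
    # Always serve the highest non-empty priority level; lower levels can starve.
    for level in sorted(queues):
        if queues[level]:
            return queues[level].popleft()
    return None

enqueue("bulk-1", 3); enqueue("voice-1", 0); enqueue("bulk-2", 3)
print(dequeue(), dequeue(), dequeue())   # -> voice-1 bulk-1 bulk-2
```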

  23. Earliest-Deadline-First • Assigns each packet a deadline (deadline = arrival time + delay guarantee of the flow) • Requires maintaining a queue sorted by deadline
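
A minimal sketch of Earliest-Deadline-First using a heap as the sorted queue; names and numbers are illustrative:

```python
import heapq, itertools

queue = []                       # min-heap ordered by deadline
counter = itertools.count()      # tie-breaker so heapq never compares packet payloads

def enqueue(packet, arrival_time, delay_guarantee):
    deadline = arrival_time + delay_guarantee
    heapq.heappush(queue, (deadline, next(counter), packet))

def dequeue():
    return heapq.heappop(queue)[2] if queue else None

enqueue("video", arrival_time=0.0,   delay_guarantee=0.050)
enqueue("voice", arrival_time=0.010, delay_guarantee=0.020)
print(dequeue())                 # -> voice (earliest deadline: 0.030 < 0.050)
```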

  24. Round-Robin • Works in fundamentally the same way as FCFS, but each customer is only given a constant, limited time quantum with the server, and if that customer is not finished being processed it is sent back to the queue to await more time • Ensures that each customer will eventually be processed • Customers that have long processing times will take a very long time to complete • Another problem is the overhead associated with switching between customers

  25. Class Based or Custom Queuing • Classify incoming packets into groups called classes

  26. Class-Based or Custom Queuing • Up to 16 queues are permitted • Queues are served in a round-robin fashion • The major difference from the priority queuing algorithm is that a maximum number of packets that can be served before the scheduler proceeds to the next queue is defined • This algorithm does not cause a starvation problem for low-priority queues • Guarantees a minimum amount of bandwidth for certain traffic types • Makes the bandwidth that is left unused available to other traffic types • On the other hand, for applications like voice, performance degradation can occur • More complicated to implement, since it requires detailed knowledge of the traffic pattern (packet size, application window size, sensitivity to delay, etc.)
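
A minimal sketch of the round-robin-with-per-queue-limit idea described above; the class names and per-pass limits are illustrative (real implementations often count bytes rather than packets):

```python
from collections import deque

# One queue per class plus a per-pass packet limit; the ratio of the limits
# approximates the minimum bandwidth share each class receives.
classes = {
    "voice": {"queue": deque(), "limit": 2},
    "web":   {"queue": deque(), "limit": 3},
    "bulk":  {"queue": deque(), "limit": 1},
}

def serve_one_round(send):
    """One round-robin pass over all classes; `send(pkt)` transmits a packet."""
    for cls in classes.values():
        for _ in range(cls["limit"]):
            if not cls["queue"]:
                break
            send(cls["queue"].popleft())

classes["web"]["queue"].extend("web-%d" % i for i in range(5))
classes["bulk"]["queue"].extend("bulk-%d" % i for i in range(5))
sent = []
serve_one_round(sent.append)
print(sent)   # -> ['web-0', 'web-1', 'web-2', 'bulk-0']: web gets 3 slots per pass, bulk 1
```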

  27. Fair Queuing • One FIFO queue for each flow • Packets are taken in round-robin order from each queue that has them • Attempts to implement a scheduler that serves all backlogged flows at the same rate • Not completely trivial to implement Fair Queuing in a packet network • Large packets are counted the same as small packets

  28. Fair Queuing with different-size packets • The virtual clock ticks once for each bit sent from all the queues (so more active queues means a slower virtual clock) • The virtual finish time for a packet is its start time plus the size of the packet • The start time is the larger of the (computed) finish time of the previous packet in the queue and the arrival time • Select the packet with the lowest finish time
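
A simplified sketch of this bookkeeping; the virtual clock is approximated by the finish time of the last transmitted packet rather than tracked bit by bit, so this illustrates the idea rather than giving a faithful implementation:

```python
# Simplified fair-queuing bookkeeping: per-flow virtual finish times.
last_finish = {}          # flow id -> virtual finish time of its last packet
backlog = []              # list of (finish_time, flow, size)
virtual_now = 0.0

def arrive(flow, size):
    start = max(last_finish.get(flow, 0.0), virtual_now)
    finish = start + size                 # equal weight per flow
    last_finish[flow] = finish
    backlog.append((finish, flow, size))

def transmit():
    global virtual_now
    backlog.sort()                        # pick the smallest virtual finish time
    finish, flow, size = backlog.pop(0)
    virtual_now = finish
    return flow, size

arrive("A", 1500); arrive("A", 1500); arrive("B", 300); arrive("B", 300)
print([transmit() for _ in range(4)])
# -> both of B's small packets go out before A's large packets
```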

  29. WFQ, Weighted Fair Queuing

  30. WFQ, Weighted Fair Queuing • Packets are classified by flow • Packets with the same source IP address, destination IP address, source TCP or UDP port, and destination TCP or UDP port belong to the same flow • WFQ allocates an equal share of the bandwidth to each flow • Needed when different service classes require some minimum amount of the output link bandwidth, with the option to use more (in a fair fashion) if there is available capacity at that moment

  31. WFQ, Weighted Fair Queuing • By default, up to 256 queues are defined • Low-volume flows get better treatment, while high-volume flows share the remaining bandwidth • The default queuing mechanism for interfaces at rates lower than 2 Mbps (HDLC, PPP, Frame Relay) • Considered a necessary mechanism for voice applications • WFQ is aware of (and used with) the RSVP protocol and the IP Precedence mechanism • Provides consistent response time to heavy and light network users without requiring excessive bandwidth • WFQ is designed to minimize configuration effort and automatically adapts to changing network traffic conditions
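
Relative to the fair-queuing sketch above, the only change in WFQ is that a packet's size is scaled by its flow's weight when computing the virtual finish time; the weights below are arbitrary examples:

```python
weights = {"voice": 4.0, "bulk": 1.0}    # illustrative weights, not real defaults

def finish_time(start, size, flow):
    # A larger weight shrinks the packet's virtual length, so the flow is
    # scheduled more often and receives a larger share of the link.
    return start + size / weights[flow]

print(finish_time(0.0, 1500, "voice"), finish_time(0.0, 1500, "bulk"))  # -> 375.0 1500.0
```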

  32. WFQ, Weighted Fair Queuing • Supports bandwidth allocation and delay bounds • Widely implemented in routers for supporting QoS • Weighted Fair Queuing variations: • Make it more “fair” • Deal with more situations • Simplify the calculation • Worst-case Fair WFQ (WF^2Q) • Hierarchical WFQ • Self-clocking Fair Queuing (SCFQ) • Weighted Round Robin (WRR) • Deficit Round Robin (DRR)

  33. Token Bucket Filter • Outgoing packets will be sent according to the size of the token buffer and the rate
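
A minimal token-bucket sketch; the rate and burst values are illustrative:

```python
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_size):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_size <= self.tokens:
            self.tokens -= packet_size
            return True          # send (or mark as conforming)
        return False             # queue, drop, or mark as exceeding

tb = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=15_000)   # ~1 Mbit/s
print(tb.allow(1500), tb.allow(20_000))   # -> True False
```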

  34. PWFQ, Priority Weighted Fair Queuing • Supplies the granularity required to enable application-appropriate PHBs

  35. PWFQ, Priority Weighted Fair Queuing • Combining the previous two approaches, we get the advantages of both • Allows the use of either approach in its pure form • Also provides increased flexibility in the construction of PHBs by allowing multiple queues at the same priority level, each with a different weight • This flexibility provides more granularity, which enables PHBs to easily meet the diverse range of application performance requirements that exist

  36. CBWFQ, Class-Based Weighted Fair Queuing • Extends the standard WFQ functionality to provide support for user-defined traffic classes • One defines traffic classes based on match criteria including protocols, access control lists, and input interfaces • One can configure up to 64 classes and control distribution among them, which is not the case with WFQ • A queue is reserved for each class • The characteristics for a class consist of a bandwidth, weight, and maximum packet limit • The bandwidth assigned to a class is the minimum bandwidth delivered to the class during congestion • If a default class is configured, all unclassified traffic is treated as belonging to the default class • If no default class is configured, then by default the traffic that does not match any of the configured classes is flow classified and given best-effort treatment

  37. CBWFQ, Class-Based Weighted Fair Queuing • Configuring CBWFQ basically means defining and configuring a class policy • The following steps are necessary: • defining traffic classes to specify the classification policy (class maps); this determines how many types of packets are to be differentiated from one another • associating policies (class characteristics) with each traffic class (policy maps); this configures the policies to be applied to packets belonging to one of the classes previously defined through a class map • attaching policies to interfaces (service policies); this associates an existing policy map, or service policy, with an interface in order to apply that map's particular set of policies to the interface

  38. LLQ, Low Latency Queuing • Adds a strict priority queue to CBWFQ, Class-Based Weighted Fair Queuing • When there is no congestion, the behavior is FIFO • Configuration options: • priority (kbps) • priority percent (%) • bandwidth (kbps) • bandwidth percent (%) • bandwidth remaining percent (%)

  39. Congestion Avoidance

  40. Global Synchronization • If a queue fills up, all packets at the tail end of the queue get dropped • Tail drop causes the TCP window to shrink on a large number of sessions at once, giving the effect of global synchronization • An intelligent drop decision is needed when the queue exceeds a threshold

  41. (Suppressive) Congestion Avoidance Mechanisms • Achieved through dropping • Prevent buffer exhaustion, and future congestion, by dropping packets • The congestion avoidance mechanisms available are RED, Random Early Detection, and WRED, Weighted RED

  42. RED, Random Early Detection • If a queue's effective (average) length passes a set threshold, RED increases the drop probability • This averts a sudden hard limit on the queue size • RED starts to drop packets as the output queue fills up, in order to trigger congestion avoidance in TCP • Some switch vendors implement RED as a means to actively manage system buffer resources as those resources grow scarce • The sessions with the most traffic are the most likely to experience a dropped packet, so those are the ones that slow down the most

  43. RED, Random Early Detection • With RED we have: • Efficiency • Allows individual queues to grow quite large via dynamic sharing of system resources • No single queue can grow out of control and adversely affect the others • By gradually increasing the drop probability, TCP sessions are able to throttle themselves back gracefully, and sudden large bursts of retransmissions across many sessions are avoided
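
A minimal sketch of the RED drop decision described above, omitting refinements such as the inter-drop count of the full algorithm; the thresholds and averaging weight are illustrative:

```python
import random

MIN_TH, MAX_TH, MAX_P = 20, 60, 0.10    # illustrative thresholds (packets) and max drop prob.
W = 0.002                                # weight for the moving average of the queue length
avg_qlen = 0.0

def on_arrival(current_qlen):
    """Return True if the arriving packet should be dropped."""
    global avg_qlen
    avg_qlen = (1 - W) * avg_qlen + W * current_qlen   # smoothed (effective) queue length
    if avg_qlen < MIN_TH:
        return False                                    # no drops below the min threshold
    if avg_qlen >= MAX_TH:
        return True                                     # tail-drop region
    # Drop probability rises linearly from 0 to MAX_P between the two thresholds.
    p = MAX_P * (avg_qlen - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```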

  44. Why simple RED is not enough? • Simple RED is inadequate for robust DiffServ and QoS implementations • An AF service class, for example, requires a set of three PHBs that differ only in their queue length thresholds and drop probabilities (but not in the queue they are using)

  45. WRED, Weighted Random Early Detection • Takes the priority value in the IP header into account and starts dropping low-priority packets earlier than their higher-priority counterparts • Enables multiple classes of RED treatment per queue (each with different thresholds and drop probabilities)

  46. WRED, Weighted Random Early Detection

  47. WRED, Weighted Random Early Detection • In AF, for example, Green packets can be treated at a base level, while Yellow packets are evaluated against a shorter effective queue length and use a steeper drop probability function • Red packets are evaluated against a still more aggressive RED treatment • A single queue supports all three PHBs within a single service class and prevents packets from getting out of sequence
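
A sketch of how WRED extends the RED decision above: the same averaged queue length is checked against a per-color RED profile, so Yellow and Red packets are dropped earlier and more aggressively; the profile numbers are illustrative:

```python
import random

# Per-color RED profiles: (min threshold, max threshold, max drop probability).
PROFILES = {
    "green":  (40, 80, 0.05),
    "yellow": (25, 60, 0.10),
    "red":    (10, 40, 0.20),
}

def wred_drop(avg_qlen, color):
    """Return True if an arriving packet of this color should be dropped."""
    min_th, max_th, max_p = PROFILES[color]
    if avg_qlen < min_th:
        return False
    if avg_qlen >= max_th:
        return True
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < p
```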

  48. WRED, Weighted Random Early Detection • Example: • A service for streaming video or audio (UDP) needs a PHB that prevents this high-volume traffic from overwhelming queuing resources • In creating a PHB for UDP traffic, use a WRED treatment that drops packets much more aggressively (than that for TCP traffic) when queue length thresholds are exceeded • This conserves system resources and prevents the static that would otherwise be caused by late-arriving voice and video packets

  49. Preemption • Occurs when a server stops processing the current customer, sends it back to the queue, and begins processing a different customer • This is used to stop customers with long service times from keeping the server busy indefinitely, and it can also guarantee that important customers get immediate attention; if used properly, preemption can improve efficiency • Starvation • Priority- and service-time-based queuing methods can keep certain customers at the end of the queue indefinitely, so they never get processed • Occurs when new, high-priority customers continually enter the queue • Deadlocking • Occurs when one customer needs another to finish processing in a server in order to free or create some resource that the first customer needs in order to be processed, but the first customer has higher priority • The result is that the system freezes and nothing can enter the servers

  50. Delay Bound Discard • Part of some more advanced queuing implementations • When a packet from a latency-sensitive application sits in a queue for longer than a specified time limit (the delay bound), the packet is simply dropped • Makes efficient use of backbone resources and improves perceived QoS by flushing out old and stale packets (which can cause congestion and are useless for these applications) • Each queue can be configured with a different delay bound • Or, when a queue is not designed to carry latency-sensitive traffic, this feature can be disabled
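
A minimal sketch of a delay-bound check applied at dequeue time; the 50 ms bound is an arbitrary example:

```python
import time
from collections import deque

DELAY_BOUND_S = 0.050                     # example bound for a latency-sensitive queue
queue = deque()                           # entries are (enqueue_time, packet)

def enqueue(packet):
    queue.append((time.monotonic(), packet))

def dequeue():
    # Discard packets that have already waited longer than the delay bound;
    # they would arrive too late to be useful to the application anyway.
    while queue:
        enqueued_at, packet = queue.popleft()
        if time.monotonic() - enqueued_at <= DELAY_BOUND_S:
            return packet
    return None
```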
