
Off-Piste QoS-aware Routing Protocol


Presentation Transcript


  1. Off-Piste QoS-aware Routing Protocol By Yigal Eliaspur

  2. Problem Background • The main challenge in QoS routing is to be able to respond to online requests with • Reasonable response time. • Minimal network overhead (messages, memory, processing time). • Minimal probability of blocking (request failures). • A typical request is to reserve a certain resource along a path from a transaction source to a transaction destination. • The resource may be an additive resource (e.g. delay) and/or a non-additive resource (e.g. bandwidth). • The request may apply to unicast or multicast traffic.

  3. Available Solutions • Today's solutions can be partitioned into three broad classes: • Source routing algorithms • Transform a distributed problem into a centralized one. • Maintain a complete global state of dynamic network resources in each node. • Distributed routing algorithms • The path computation is distributed among the intermediate nodes between the source and the destination. • Single-path search – usually assumes a global state in each node. • Multi-path search (flooding) – uses only the local state in each node. • Hierarchical routing algorithms • Each node maintains only a partial global state • To cope with the scalability problem of global state in large internetworks.

  4. Related Work • Our OPsAR protocol (Off-Piste QoS-aware Routing) belongs to the multi-path, distributed routing algorithms family. • Other works related to this family are: • Selective flooding – the multi-path search is performed only on pre-computed routes. • Ticket-based probing – every probing (search) message must carry at least one ticket, and thus the total number of tickets limits the multi-path search. • QMRP and S-QMRP (Scalable Distributed QoS Multicast Routing Protocol): • The unicast route toward the destination is checked first. • If that fails, a selective scanning mechanism is applied. • The scanning is controlled by Maximum Branching Degree and Maximum Branching Level parameters. • QoSMIC – best suited to a multicast environment, since it looks for a point on a multicast tree to ``hook'' on the new receiver.

  5. The OPsAR Protocol • The main motivation of OPsAR is to improve the tradeoff between the overhead of the protocol and the success ratio it produces. • In OPsAR, a node keeps track of recent QoS messages to learn about resource availability to and from various target points. • The learning is reflected in the node's ``knowledge-state''. • Efficient path selection is done by leveraging the knowledge-state at the nodes. • The OPsAR protocol is built of 3 main stages: • Try Phase – in which a single-path search and reservation is applied from the transaction source to the transaction destination. • Scan Phase – in which a multi-path search without reservation is applied from the transaction destination back to the source. • Try 2 Phase – in which the Scan phase results are evaluated and the best candidate path is reserved from the transaction source to the destination.
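A minimal sketch of this three-phase control flow, assuming the phase implementations are supplied as callables; all names and return values below are illustrative and not taken from the protocol specification:

```python
# Illustrative sketch of the OPsAR three-phase flow (Try -> Scan -> Try 2).
def run_transaction(source, dest, required_bw, try_phase, scan_phase, try2_phase):
    # Try phase: single-path search with reservation from source to destination.
    if try_phase(source, dest, required_bw):
        return "reserved"
    # Scan phase: multi-path search without reservation, from the destination
    # back to the source; returns candidate explicit routes.
    candidates = scan_phase(dest, source, required_bw)
    # Try 2 phase: evaluate the scan results and reserve the best candidate
    # path from the source toward the destination, falling back if needed.
    for path in candidates:
        if try2_phase(source, dest, required_bw, explicit_route=path):
            return "reserved"
    return "blocked"
```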

  6. Try Phase • A path search from the transaction source toward the transaction target. • The Try phase follows the shortest path as long as it has the required resources. • A deviation from the shortest path takes an ``off-piste'' route that leverages the knowledge-state to optimize the routing protocol. • The deviation from the shortest path is bounded. • If resources cannot be reserved within that boundary, the resources that have already been reserved are released, and a request is sent to the transaction target to begin the Scan phase.
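A sketch of the per-node forwarding decision in the Try phase, under the assumption that the shortest-path next hop, link bandwidth lookup, and knowledge-state suggestion are provided as helper callables (their names are hypothetical):

```python
def try_phase_next_hop(node, dest, required_bw, offpiste_left,
                       shortest_next_hop, link_free_bw, ks_best_alternative):
    """Return (next_hop, remaining_offpiste_budget), or (None, None) if the
    Try-phase search fails at this node and the Scan phase should be started.
    - shortest_next_hop(node, dest): next hop on the unicast shortest path
    - link_free_bw(node, hop): available bandwidth on the link node -> hop
    - ks_best_alternative(node, dest, bw): best knowledge-state suggestion, or None
    """
    on_piste = shortest_next_hop(node, dest)
    if link_free_bw(node, on_piste) >= required_bw:
        return on_piste, offpiste_left          # stay on the shortest path
    if offpiste_left > 0:
        alt = ks_best_alternative(node, dest, required_bw)
        if alt is not None:
            return alt, offpiste_left - 1       # bounded off-piste deviation
    # Deviation budget exhausted: the caller releases the reservations made so
    # far and asks the transaction target to begin the Scan phase.
    return None, None
```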

  7. Scan Phase • The scan process is based on a limited Breadth-First-Search (BFS) from the transaction target toward the transaction source. • We neither reserve resources in the Scan phase nor keep any state that relates to the specific scan. • As in the Try phase, the scanning process takes advantage of the knowledge-state to optimize the search. • The branching is limited by: • A ticketing scheme, to bound the total number of paths. • A maximum branching degree (MBD) at each node, in order to increase the variety of potential paths to traverse. • An off-piste counter, to limit the distance from the shortest path, similar to the Try phase. • A branch is terminated during the scanning process if: • The off-piste limit is reached and the unicast route does not have the resources, • Or when no outgoing link has the requested resources.
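A sketch of the branching control at a node during the Scan phase, assuming the eligible outgoing links (those with the requested resources, best candidates first) have already been selected from the local state and knowledge-state:

```python
def scan_branches(eligible_links, tickets, mbd):
    """Split an incoming scan message across at most `mbd` outgoing links,
    dividing the available tickets among the branches so the total number of
    concurrent scan paths stays bounded by the ticket count."""
    chosen = eligible_links[:mbd]
    if not chosen or tickets <= 0:
        return []                                # the branch terminates here
    base, extra = divmod(tickets, len(chosen))
    branches = []
    for i, link in enumerate(chosen):
        t = base + (1 if i < extra else 0)       # distribute tickets fairly
        if t > 0:
            branches.append((link, t))           # forward a scan with t tickets
    return branches
```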

  8. Try 2 Phase • If the transaction source receives several successful scan messages, it initiates the Try 2 phase. • It chooses the ``best'' route from the successful scan messages and asks to reserve the resources along that path. • If a reservation failure along the explicit route is detected, • OPsAR tries to route the reservation request message to the transaction destination using alternative routes that the off-piste mechanism offers. • If that fails, a nack message is returned to the transaction source, indicating the need to choose another explicit route from the previous scan results.
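A sketch of how the transaction source might rank the successful scan results before reserving; the scoring used here (shorter path first, larger bottleneck bandwidth as a tie-breaker) is an assumption, since the slides only say the ``best'' route is chosen:

```python
def rank_scan_results(scan_results):
    """scan_results: list of (path, bottleneck_bw), where path is a node list.
    Returns the candidates ordered from most to least preferred."""
    return sorted(scan_results, key=lambda r: (len(r[0]), -r[1]))
```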

  9. Knowledge State - definition • Each node maintains • A local state in which it holds its links' status and the resource availability on them. • A bounded list of records • <target node, outgoing-link> • For each record, the resource availability is maintained with respect to that outgoing link: • Max BW toward the target node. • Max BW from the target node. • This information is updated occasionally and is marked to identify the time of its last update. • This time is used by the aging mechanism.
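A minimal data-structure sketch of the per-node knowledge-state as described above; the record limit and field names are assumptions made for illustration:

```python
import time

class KSRecord:
    """Resource availability seen via one outgoing link toward one target node."""
    def __init__(self, max_bw_to_target, max_bw_from_target):
        self.max_bw_to_target = max_bw_to_target      # Max BW toward the target node
        self.max_bw_from_target = max_bw_from_target  # Max BW from the target node
        self.last_update = time.time()                # used by the aging mechanism

class KnowledgeState:
    def __init__(self, max_records=1000):             # the record list is bounded
        self.records = {}                              # (target_node, outgoing_link) -> KSRecord
        self.max_records = max_records

    def update(self, target_node, outgoing_link, bw_to, bw_from):
        key = (target_node, outgoing_link)
        rec = self.records.get(key)
        if rec is None:
            if len(self.records) >= self.max_records:
                # evict the record with the oldest update time (aging)
                oldest = min(self.records, key=lambda k: self.records[k].last_update)
                del self.records[oldest]
            self.records[key] = KSRecord(bw_to, bw_from)
        else:
            rec.max_bw_to_target = bw_to
            rec.max_bw_from_target = bw_from
            rec.last_update = time.time()
```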

  10. Knowledge State - usage • Any OPsAR protocol message traversing a node is used to update the knowledge-state (KS). • Each OPsAR protocol message includes the following relevant fields: • Max BW To Origin • Max BW From Origin • There are three main operations the KS is involved with: • KS record creation/update • OPsAR message fields update • Routing decision • The routing choice is made according to the resource availability along the various links toward the target, and according to how recent that information is. • This is based on three levels of outgoing links (neighbors), maintained per target node: • Fresh • Stale • Old
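A sketch of the age-based classification and of a routing choice that prefers fresher knowledge; the concrete freshness thresholds are assumptions, not values from the protocol:

```python
import time

FRESH_SECS = 5.0    # hypothetical thresholds for the Fresh/Stale/Old levels
STALE_SECS = 30.0

def classify(last_update, now=None):
    age = (now or time.time()) - last_update
    if age <= FRESH_SECS:
        return "fresh"
    if age <= STALE_SECS:
        return "stale"
    return "old"

def choose_outgoing_link(candidates, required_bw):
    """candidates: list of (link, max_bw_toward_target, last_update).
    Prefer links whose knowledge is fresher, breaking ties by larger bandwidth."""
    usable = [c for c in candidates if c[1] >= required_bw]
    order = {"fresh": 0, "stale": 1, "old": 2}
    usable.sort(key=lambda c: (order[classify(c[2])], -c[1]))
    return usable[0][0] if usable else None
```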

  11. Knowledge State – Routing decision (cont.)

  12. Simulation Model • NS-2 simulator • Power-Law network topology • As the node degree increases, the number of nodes with that degree decreases exponentially. • Used the topology generator described in Osnat’s work (On the tomography of networks and multicast trees). • The generator was extended to support BW allocation. • The bandwidth on the links was uniformly distributed over {10, 34, 45, 100} Mb/s. • In order to make sure that congestion would first occur in the core network, we re-assigned the bandwidth of the endpoints to 1000 Mb/s. • We also conducted tests with a hierarchical bandwidth assignment chosen from {10 Mb/s, 100 Mb/s, 1 Gb/s, 10 Gb/s}. • This backbone/metro type of over-provisioned BW allocation showed almost no congestion for BW reservation requests. • Therefore, the topologies simulated were only large edge networks and ISP-like networks.
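A small sketch of the link-bandwidth assignment described above: core links draw a capacity uniformly from {10, 34, 45, 100} Mb/s, while links touching endpoint (edge) nodes are re-assigned 1000 Mb/s so that congestion first occurs in the core. The function name and interface are illustrative:

```python
import random

CORE_BW_MBPS = (10, 34, 45, 100)   # uniform choice for core links
EDGE_BW_MBPS = 1000                # endpoints re-assigned to 1000 Mb/s

def assign_link_bw(u, v, edge_nodes, rng=random):
    """Return the capacity (Mb/s) for the link u-v in the generated topology."""
    if u in edge_nodes or v in edge_nodes:
        return EDGE_BW_MBPS
    return rng.choice(CORE_BW_MBPS)
```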

  13. Simulation Basis • 600 nodes were used. • Transaction endpoints were chosen out of 120 edge nodes. • Most of the graphs are the result of 10,000 transactions performed on six different generated topologies. • We ran each simulation with 5 different protocol types: • Traditional RSVP – allocates the QoS requirement along the unicast route toward the transaction destination. • S-QMRP* – an adaptation of S-QMRP to unicast routing • Basically it is the same as OPsAR but without KS and off-piste counter support. • S-QMRP*D – S-QMRP* with off-piste counter support. • OPsAR • OPT – implemented as a BFS which finds the shortest path that fulfills the bandwidth QoS requirements.

  14. Simulation Basis (cont.) • We performed the following simulations and evaluated their relationship to the reservation success ratio: • Memory usage • Amount of concurrent transactions • Message overhead • Number of edge nodes • Number of destination nodes • The Cost and Performance Gain of Using Try&Scan Phases • Gradual Deployment within RSVP Framework

  15. Memory Usage vs. Success Ratio

  16. Memory Usage vs. Success Ratio (cont.) • The amount of memory sufficient to achieve about an 85% success ratio is very reasonable. • The memory is theoretically bounded by • The out-degree of a core node (or by the age-out threshold, which is 9 in our case) • Times the number of possible transaction-destination nodes. • In the largest simulation done, this theoretical number was 160KB. • The average memory consumption was about 10% of the theoretical bound, where 60KB was the actual limit set in the simulation code.
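A back-of-the-envelope form of the bound stated above, interpreting the binding factor as the smaller of the core node's out-degree and the age-out threshold; the per-record size is a hypothetical value used only for illustration:

```python
def ks_memory_bound(out_degree, age_out_threshold, num_destinations,
                    record_bytes=128):
    """Upper bound on knowledge-state memory at a node:
    (records kept per destination) * (possible destinations) * (record size).
    record_bytes is an assumed per-record size, not a value from the paper."""
    return min(out_degree, age_out_threshold) * num_destinations * record_bytes
```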

  17. Concurrent Transactions vs. Success Ratio

  18. Message Overhead vs. Success Ratio

  19. Message Overhead vs. Success Ratio (cont.) • We studied all possible parameter combinations within a specific range: • branching degree • scanning deviation • and number of tickets. • Each simulation result generated one point in the graph. • OPsAR vs. S-QMRP* • For the same amount of message overhead, OPsAR improves the success ratio by up to 30% over S-QMRP*. • OPsAR vs. RSVP • Increasing the overhead five times yields about three times the success ratio. • Another point to consider is that the average path length is about 8 hops when deviation is allowed and 4 hops when deviation is forbidden (e.g. RSVP). • OPsAR vs. OPT • The overhead/success ratio of OPT is 20.6, while the overhead/success ratio of OPsAR at 200K messages is 29.8, which is only about 30% more than OPT.

  20. Number of Edge Nodes vs. Success Ratio

  21. Number of Destination Nodes vs. Success Ratio

  22. Number of Destination Nodes vs. Success Ratio (cont.) • We ran the simulation with a constant share of 25% edge nodes (as opposed to the 20% we usually used). • The number of candidate destination nodes varied from 1% up to the whole set of edge nodes (25%). • The candidate set of source nodes was always the whole set of edge nodes. • Only the links from those destination nodes were assigned a bandwidth capacity of 1000 Mb/s.

  23. The Cost and Performance Gain of Using Try&Scan Phases

  24. The Cost and Performance Gain of Using Try&Scan Phases (cont.) • The Scan phase uses extra time and messages on top of the Try phase. • Our simulations showed that the time to complete a Try followed by a Scan is three times the time it takes to complete the Try phase alone.

  25. Gradual Deployment within RSVP Framework

  26. Gradual Deployment within RSVP Framework (cont.) • At a glance, there is no inherent limitation in the protocol that prohibits its use in an incremental manner. • The ``RSVP only'' routers were selected based on their distance from the core. • The edge routers have a better chance to be chosen as ``RSVP only'' routers. • From the learning mechanism perspective, the available capacity of the links between ``RSVP only'' routers is ignored. • Future work can focus on deployment methods for the OPsAR protocol that will maintain the gain obtained from the learning mechanism.

  27. Future Work • Machine learning improvements • The overall scheme of our protocol is an intelligent choice of routes out of a full Breadth-First-Search (BFS). • Future research can focus on improving the educated choice of routes while limiting the overhead in memory. • We expect to find ways to use machine learning techniques to achieve that goal. • KS aggregation • Save memory by aggregating the information, using techniques like longest-prefix matching on transaction destinations. • Packet losses, link/node failures • Should be relatively easy to handle using timers and retries for messages, and using soft-state reservation. • Additive resources • Handling additive resources, like delay, requires minor changes to the protocol presented. • Tuning the KS parameters • Linearly increasing the fresh neighbor group did not increase the performance and sometimes caused it to degrade. • Increasing the age-out threshold does not improve the performance either, even though it increases the total memory requirements. • Further research must be conducted in order to explore the inter-dependencies among the various variables of OPsAR, and to automatically learn and choose the optimal values, possibly using machine learning techniques.
