Routing in Rapidly Fluctuating Stochastic Networks
Arka Bhattacharya and Shaunak Chatterjee, UC Berkeley

Problem Definition and Challenges

PROBLEM STATEMENT
A network is represented as a graph G = (V, E, P), where
V : the set of vertices (nodes)
E : the set of edges
P : the matrix of edge availability probabilities
The objective is to devise routing protocols for the setting where edge availability (i.e., the P matrix) fluctuates rapidly.

WHY CONVENTIONAL ROUTING WILL FAIL
• Link state algorithms: too many updates, since fluctuations are rapid
• Any other best-path-based algorithm will perform erratically, since the best path fluctuates heavily. In the worst case:
  • The current best path can deteriorate drastically
  • Updates about the new best path will not arrive quickly enough

MULTIPLE PATHS?
Maintaining information about multiple best paths would be useful.
• Pros:
  • At least one good path survives despite fluctuations
  • Dense regions available for routing
  • Routes fluctuate less than their constituent links
  • Larger temporal validity
• Cons:
  • More critical links, hence more updates
  • Increased complexity of route computation and maintenance

Why we chose XL?
XL is a link state algorithm that suppresses certain updates while still maintaining performance guarantees. We wanted our implementation to provide similar performance guarantees. The multipath metrics are also compatible with other algorithms, such as distance vector and various hierarchical algorithms.

Key Design Choices – What and Why?
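As background for the design choices, the stochastic network model from the problem statement, a graph whose edge availabilities fluctuate rapidly, can be sketched as follows. This is a minimal illustration; the class and method names are hypothetical, not from the paper:

```python
import random

class StochasticNetwork:
    """Sketch of G = (V, E, P): each edge carries an availability
    probability that fluctuates over time."""

    def __init__(self, num_nodes):
        self.V = list(range(num_nodes))
        self.P = {}  # frozenset({u, v}) -> current availability probability

    def add_edge(self, u, v, p):
        self.P[frozenset((u, v))] = p

    def fluctuate(self, jitter=0.2):
        """Randomly perturb every edge's availability, clamped to [0, 1],
        modeling the rapid fluctuations the protocol must cope with."""
        for e in self.P:
            self.P[e] = min(1.0, max(0.0, self.P[e] + random.uniform(-jitter, jitter)))

net = StochasticNetwork(4)
net.add_edge(0, 1, 0.6)
net.add_edge(1, 3, 0.6)
net.fluctuate()
```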
“Connectivity” metric
Successful routing depends more on the existence of some path than on the quality of the single best path.

ROUTING THROUGH HYPERLINKS
• Hyperlinks capture the redundancy information between a pair of nodes
• Short-path hyperlinks capture local redundancies
• Thus, we route using hyperlinks instead of links to exploit these redundancies

PROXY FOR CONNECTIVITY
• We consider the k shortest edge-disjoint paths between a pair of nodes and define a hyperlink whose “connectivity” is
• 1 − ∏(1 − pathval(S, D, i)), i = 1, 2, …, k,
• where pathval(S, D, i) is the availability (success probability) of the i-th shortest edge-disjoint path between S and D; the connectivity is then the probability that at least one of the k paths is available.

SHORT PATHS IN HYPERLINKS
• The hyperlink source is sent updates about every constituent link
• Long paths mean temporally stale updates and large control overhead
• Thus, we limit each path to at most c hops

[Figure: example network with nodes S, X, Y, P, Q, D and edge availability probabilities between 0.3 and 0.6]

Other multipath metrics
1. Edge-wise sampling: the connectivity metric is analogous to path-wise sampling; it can be modified into a metric representing edge-wise sampling.
2. Expected time: the expected time to successfully send a packet across a link with availability probability p is 1/p. This can be used as the link cost for routing. The multipath metric then reflects the expected time for a packet to cross a hyperlink under a decision criterion that is itself based on expected transmission time. Mathematical details are outlined in the accompanying report.

Caveats: Loops!

Algorithm Overview
Step 1: Forming hyperlinks. For each node pair that has a c-hop or shorter path between them, form the hyperlink by finding the k shortest edge-disjoint paths, and assign it a value using a multipath metric.
Step 2: Updating.
2.1. When a link's value changes, all nodes within its c-hop neighborhood are informed of the change and update the relevant hyperlinks.
2.2. Hyperlink value updates are sent or suppressed using XL; hyperlink values are less sensitive to link value changes.
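The connectivity proxy used for hyperlink values can be computed directly from the edge availability probabilities. A minimal sketch, with pathval(S, D, i) taken as the probability that every edge on the i-th edge-disjoint path is simultaneously up; the edge probabilities and node labels below are hypothetical examples, not figures from the experiments:

```python
from math import prod

# Hypothetical edge availability probabilities for two
# edge-disjoint S-D paths: S-X-D and S-Q-D.
P = {frozenset(e): p for e, p in
     [(("S", "X"), 0.6), (("X", "D"), 0.6),
      (("S", "Q"), 0.5), (("Q", "D"), 0.5)]}

def path_availability(path):
    """pathval: probability that all edges on the path are up at once."""
    return prod(P[frozenset(e)] for e in zip(path, path[1:]))

def hyperlink_connectivity(paths):
    """1 - prod_i(1 - pathval_i): probability that at least one of the
    k edge-disjoint paths is fully available."""
    return 1.0 - prod(1.0 - path_availability(p) for p in paths)

# Path availabilities are 0.36 and 0.25,
# so connectivity = 1 - 0.64 * 0.75 = 0.52.
c = hyperlink_connectivity([["S", "X", "D"], ["S", "Q", "D"]])
```

Note how the hyperlink value (0.52) exceeds either single path's availability, which is the redundancy the metric is designed to capture.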
Step 3: Routing. The route is determined by running Dijkstra's algorithm over the hyperlinks. When a packet arrives at a node, the node forwards it to the next node on the best currently available constituent path of the next hyperlink along the shortest hyperlink path to the packet's destination.

Experiments
Random graphs with 10, 20, 30, and 40 nodes were generated with varying amounts of redundancy. Similar trends were observed across sizes, so we report results for n = 40. The gains are not significant, but the “expected time” metric could yield much better results. The reported “overhead” metric is the number of update messages per node per link value change.

[Figure: example topology A – {B1, B2, B3} – {C1, C2, C3} – D. B1 is the first hop from A to D along the best path in the A–D hyperlink; A is the first hop from B1 to D along the best path using the B1–A and A–D hyperlinks.]

Fig 1. Avg. transmission time vs. overhead (c=2, n=40)
Fig 2. Avg. transmission time vs. overhead (c=3, n=40)
Fig 3. Redundancy is helpful; higher values of k benefit more (n=40)
Fig 4. Overhead is insensitive to the value of epsilon (c=3, n=40)
Fig 5. Avg. transmission time is also insensitive to epsilon (c=3, n=40)

Conclusions
• The benefits are small and not significant, even for a high overhead
• The hyperlink cost metric used does not directly optimize the performance metric
• The existence of more redundant paths improves transmission time
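The route computation in Step 3 is ordinary Dijkstra run over the hyperlink graph rather than the link graph. A sketch under one hypothetical cost choice, taking each hyperlink's cost as -log(connectivity) so that maximizing the product of connectivities along a route becomes a shortest-path problem (the poster does not specify this particular cost; the topology below is also illustrative):

```python
import heapq
from math import log

def dijkstra(cost, source):
    """Standard Dijkstra over a dict-of-dicts graph; here the 'edges'
    are hyperlinks and cost[u][v] is the hyperlink's routing cost."""
    dist, prev = {source: 0.0}, {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, c in cost.get(u, {}).items():
            if d + c < dist.get(v, float("inf")):
                dist[v], prev[v] = d + c, u
                heapq.heappush(pq, (d + c, v))
    return dist, prev

# Hypothetical hyperlink connectivities; -log turns products into sums.
conn = {"A": {"B": 0.9, "C": 0.5}, "B": {"D": 0.8}, "C": {"D": 0.9}}
cost = {u: {v: -log(p) for v, p in nbrs.items()} for u, nbrs in conn.items()}
dist, prev = dijkstra(cost, "A")
# A-B-D has joint connectivity 0.72, beating A-C-D's 0.45, so the
# packet's first hyperlink out of A is A-B.
```

At each hop, the node would then forward the packet along the best currently available constituent path of that first hyperlink, as described in Step 3.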