
CS 473 – Algorithms I




Presentation Transcript


  1. CS473 – Algorithms I: All Pairs Shortest Paths

  2. All Pairs Shortest Paths (APSP) • given: directed graph G = (V, E), weight function ω : E → R, |V| = n • goal: create an n × n matrix D = (d_ij) of shortest-path distances, i.e., d_ij = δ(v_i, v_j) • trivial solution: run an SSSP algorithm n times, once with each vertex as the source.

  3. All Pairs Shortest Paths (APSP) ► all edge weights nonnegative: use Dijkstra's algorithm • PQ = linear array: O(V³ + VE) = O(V³) • PQ = binary heap: O(V² lg V + EV lg V) = O(V³ lg V) for dense graphs; better only for sparse graphs • PQ = Fibonacci heap: O(V² lg V + EV) = O(V³) for dense graphs; better only for sparse graphs ► negative edge weights: use the Bellman-Ford algorithm • O(V²E) = O(V⁴) on dense graphs. A sketch of the trivial approach for nonnegative weights follows below.
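  A minimal Python sketch of the trivial approach for nonnegative weights: run a binary-heap Dijkstra once from every source. The adjacency-dict representation and the names dijkstra / apsp_by_dijkstra are illustrative assumptions, not from the slides.

    import heapq

    def dijkstra(adj, n, s):
        """SSSP distances from s; adj = {u: [(v, w), ...]}, weights >= 0."""
        dist = [float('inf')] * n
        dist[s] = 0
        pq = [(0, s)]                          # binary-heap priority queue
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:                    # stale queue entry, skip
                continue
            for v, w in adj.get(u, []):
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        return dist

    def apsp_by_dijkstra(adj, n):
        """n x n distance matrix: one Dijkstra run per source vertex."""
        return [dijkstra(adj, n, s) for s in range(n)]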

  4. Adjacency Matrix Representation of Graphs ► n × n matrix W = (ω_ij) of edge weights: ω_ij = ω(v_i, v_j) if (v_i, v_j) ∈ E, and ω_ij = ∞ if (v_i, v_j) ∉ E ► assume ω_ii = 0 for all v_i ∈ V, because • no negative-weight cycle ⇒ a shortest path from a vertex to itself has no edges, i.e., δ(v_i, v_i) = 0. A small sketch of this construction follows below.
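  A small Python sketch of building W from an edge list under the convention above (variable names and the 0-based indexing are assumptions):

    INF = float('inf')

    def build_weight_matrix(n, edges):
        """edges: iterable of (i, j, weight) triples, 0-based vertices."""
        W = [[INF] * n for _ in range(n)]
        for i, j, w in edges:
            W[i][j] = w
        for i in range(n):
            W[i][i] = 0        # enforce w_ii = 0 (no negative-weight cycles)
        return W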

  5. Dynamic Programming (1) Characterize the structure of an optimal solution. (2) Recursively define the value of an optimal solution. (3) Compute the value of an optimal solution in a bottom-up manner. (4) Construct an optimal solution from the information computed in (3).

  6. Shortest Paths and Matrix Multiplication Assumption: negative edge weights may be present, but no negative-weight cycles. (1) Structure of a Shortest Path: • consider a shortest path p_ij^(m) from v_i to v_j such that |p_ij^(m)| ≤ m, ► i.e., path p_ij^(m) has at most m edges • no negative-weight cycle ⇒ all shortest paths are simple ⇒ m is finite, m ≤ n - 1 • i = j: |p_ii| = 0 and ω(p_ii) = 0 • i ≠ j: decompose path p_ij^(m) into p_ik^(m-1) followed by edge v_k → v_j, where |p_ik^(m-1)| ≤ m - 1 ► p_ik^(m-1) must be a shortest path from v_i to v_k, by the optimal-substructure property ► therefore δ(v_i, v_j) = δ(v_i, v_k) + ω_kj

  7. Shortest Paths and Matrix Multiplication (2) A Recursive Solution to the All Pairs Shortest Paths Problem: • d_ij^(m) = minimum weight of any path from v_i to v_j that contains at most m edges • m = 0: there exists a shortest path from v_i to v_j with no edges ↔ i = j ► d_ij^(0) = 0 if i = j, ∞ if i ≠ j • m ≥ 1: d_ij^(m) = min{ d_ij^(m-1), min_{1≤k≤n, k≠j} { d_ik^(m-1) + ω_kj } } = min_{1≤k≤n} { d_ik^(m-1) + ω_kj }, since ω_jj = 0 for all v_j ∈ V.

  8. Shortest Paths and Matrix Multiplication • to consider all possible shortest paths with ≤ m edges from v_i to v_j ► consider every shortest path with ≤ m - 1 edges from v_i to v_k, where v_k is reachable from v_i and (v_k, v_j) ∈ E • note: δ(v_i, v_j) = d_ij^(n-1) = d_ij^(n) = d_ij^(n+1) = ..., since m ≤ n - 1 = |V| - 1 [figure: paths from v_i through candidate vertices v_k into v_j]

  9. Shortest Paths and Matrix Multiplication (3) Computing the shortest-path weights bottom-up: • given W = D^(1), compute a series of matrices D^(2), D^(3), ..., D^(n-1), where D^(m) = (d_ij^(m)) for m = 1, 2, ..., n-1 ► the final matrix D^(n-1) contains the actual shortest-path weights, i.e., d_ij^(n-1) = δ(v_i, v_j)
      SLOW-APSP(W)
        D^(1) ← W
        for m ← 2 to n-1 do
          D^(m) ← EXTEND(D^(m-1), W)
        return D^(n-1)

  10. Shortest Paths and Matrix Multiplication (see the Python sketch below)
      EXTEND(D, W)  ► D = (d_ij) is an n × n matrix; result goes into D' = (d'_ij)
        for i ← 1 to n do
          for j ← 1 to n do
            d'_ij ← ∞
            for k ← 1 to n do
              d'_ij ← min{ d'_ij, d_ik + ω_kj }
        return D'
      MATRIX-MULT(A, B)  ► C = (c_ij) is an n × n result matrix
        for i ← 1 to n do
          for j ← 1 to n do
            c_ij ← 0
            for k ← 1 to n do
              c_ij ← c_ij + a_ik × b_kj
        return C
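  A direct Python transcription of EXTEND and SLOW-APSP (a sketch; INF and the list-of-lists matrix representation are assumptions). EXTEND computes one "min-plus product", extending every shortest path by at most one edge:

    INF = float('inf')

    def extend(D, W):
        """Min-plus product: D'[i][j] = min over k of D[i][k] + W[k][j]."""
        n = len(W)
        Dp = [[INF] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                for k in range(n):            # try every last edge (vk, vj)
                    Dp[i][j] = min(Dp[i][j], D[i][k] + W[k][j])
        return Dp

    def slow_apsp(W):
        n = len(W)
        D = W                                  # D^(1) = W
        for _ in range(2, n):                  # m = 2 .. n-1
            D = extend(D, W)                   # D^(m) = D^(m-1) (min-plus) W
        return D                               # D^(n-1): shortest-path weights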

  11. Shortest Paths and Matrix Multiplication • relation to matrix multiplication C = A · B: c_ij = ∑_{1≤k≤n} a_ik × b_kj ► D^(m-1) ↔ A & W ↔ B & D^(m) ↔ C, with "min" ↔ "+" & "+" ↔ "×" & "∞" ↔ "0" • thus, we compute the sequence of matrix products D^(1) = D^(0) · W = W (note: D^(0) is the identity matrix of this algebra, i.e., d_ij^(0) = 0 if i = j, ∞ if i ≠ j), D^(2) = D^(1) · W = W², D^(3) = D^(2) · W = W³, ..., D^(n-1) = D^(n-2) · W = W^(n-1) • running time: Θ(n⁴) = Θ(V⁴) ► each matrix product: Θ(n³) ► number of matrix products: n-1

  12. Shortest Paths and Matrix Multiplication • Example [figure: a 5-vertex directed graph with edge weights 3, 4, 8, 2, -5, 1, 7, -4, 6]

  13. Shortest Paths and Matrix Multiplication • D^(1) = D^(0) · W [figure: the example graph and matrix D^(1)]

  14. Shortest Paths and Matrix Multiplication • D^(2) = D^(1) · W [figure: the example graph and matrix D^(2)]

  15. Shortest Paths and Matrix Multiplication • D^(3) = D^(2) · W [figure: the example graph and matrix D^(3)]

  16. Shortest Paths and Matrix Multiplication • D^(4) = D^(3) · W [figure: the example graph and matrix D^(4)]

  17. SSSP and Matrix-Vector Multiplication • relation of APSP to one step of matrix multiplication: W ↔ B, D^(m-1) ↔ A, D^(m) ↔ C [figure: row r_i of A times column c_j of B gives entry d_ij^(m) of C]

  18. SSSP and Matrix-Vector Multiplication • d_ij^(n-1) at row r_i and column c_j of the product matrix = δ(v_i = s, v_j) for j = 1, 2, 3, ..., n • row r_i of the product matrix = solution to the single-source shortest path problem for s = v_i ► row r_i of C = row r_i of A multiplied by matrix B: D_i^(m) = D_i^(m-1) × W

  19. SSSP and Matrix-Vector Multiplication • let d_i^(0) be the row vector whose j-th entry is d_j^(0) = 0 if j = i, ∞ otherwise • we compute a sequence of n-1 "matrix-vector" products: d_i^(1) = d_i^(0) × W, d_i^(2) = d_i^(1) × W, d_i^(3) = d_i^(2) × W, ..., d_i^(n-1) = d_i^(n-2) × W

  20. SSSP and Matrix-Vector Multiplication • this sequence of matrix-vector products is ► the same as the Bellman-Ford algorithm: ► vector d_i^(m) ↔ the d values of the Bellman-Ford algorithm after the m-th relaxation pass ► d_i^(m) ← d_i^(m-1) × W ↔ the m-th relaxation pass over all edges.

  21. SSSP and Matrix-Vector Multiplication (see the sketch below)
      BELLMAN-FORD(G, v_i)  ► perform RELAX(u, v) for every edge (u, v) ∈ E
        for j ← 1 to n do
          for k ← 1 to n do
            RELAX(v_k, v_j)
      RELAX(u, v)
        d_v ← min{ d_v, d_u + ω_uv }
      EXTEND(d, W)  ► d is an n-vector; result goes into d'
        for j ← 1 to n do
          d'_j ← ∞
          for k ← 1 to n do
            d'_j ← min{ d'_j, d_k + ω_kj }
        return d'
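  A short Python sketch of the correspondence on this slide: one min-plus matrix-vector product is exactly one relaxation pass over all edges, and n-1 passes give the Bellman-Ford distances (the function names are illustrative assumptions):

    INF = float('inf')

    def extend_vector(d, W):
        """One pass: d'[j] = min over k of d[k] + w_kj (k = j keeps d[j], since w_jj = 0)."""
        n = len(W)
        return [min(d[k] + W[k][j] for k in range(n)) for j in range(n)]

    def bellman_ford_by_products(W, s):
        """SSSP from s via n-1 matrix-vector min-plus products."""
        n = len(W)
        d = [INF] * n
        d[s] = 0                     # d^(0): 0 at the source, infinity elsewhere
        for _ in range(n - 1):       # n-1 passes suffice without negative cycles
            d = extend_vector(d, W)
        return d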

  22. Improving Running Time Through Repeated Squaring • idea: the goal is not to compute all D^(m) matrices ► we are interested only in matrix D^(n-1) • recall: no negative-weight cycles ⇒ D^(m) = D^(n-1) for all m ≥ n-1 • we can compute D^(n-1) with only ⌈lg(n-1)⌉ matrix products, as D^(1) = W, D^(2) = W² = W · W, D^(4) = W⁴ = W² · W², D^(8) = W⁸ = W⁴ · W⁴, ... • this technique is called repeated squaring.

  23. Improving Running Time Through Repeated Squaring
      FASTER-APSP(W)
        D^(1) ← W
        m ← 1
        while m < n-1 do
          D^(2m) ← EXTEND(D^(m), D^(m))
          m ← 2m
        return D^(m)
      • the final iteration computes D^(2m) for some n-1 ≤ 2m ≤ 2n-2 ⇒ D^(2m) = D^(n-1) • running time: Θ(n³ lg n) = Θ(V³ lg V) ► each matrix product: Θ(n³) ► number of matrix products: ⌈lg(n-1)⌉ ► simple code, no complex data structures, small hidden constants in the Θ-notation. A Python sketch follows below.
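  A Python sketch of FASTER-APSP; extend is the min-plus product from the earlier sketch, repeated here so the block is self-contained:

    INF = float('inf')

    def extend(D, A):
        n = len(D)
        Dp = [[INF] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    Dp[i][j] = min(Dp[i][j], D[i][k] + A[k][j])
        return Dp

    def faster_apsp(W):
        n = len(W)
        D, m = W, 1
        while m < n - 1:
            D = extend(D, D)      # D^(2m) = D^(m) (min-plus) D^(m): squaring
            m *= 2
        return D                  # D^(m) = D^(n-1), since m >= n-1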

  24. Idea Behind Repeated Squaring • decompose p_ij^(2m) as p_ik^(m) followed by p_kj^(m), where p_ij^(2m): v_i ⇝ v_j, p_ik^(m): v_i ⇝ v_k, p_kj^(m): v_k ⇝ v_j [figure: paths from v_i through candidate midpoints v_k to v_j]

  25. Floyd-Warshall Algorithm • assumption: negative-weight edges may be present, but no negative-weight cycles (1) The Structure of a Shortest Path: • definition: an intermediate vertex of a path p = <v_1, v_2, v_3, ..., v_k> is ► any vertex of p other than v_1 or v_k • p_ij^(m): a shortest path from v_i to v_j with all intermediate vertices from V_m = { v_1, v_2, ..., v_m } • relationship between p_ij^(m) and p_ij^(m-1) ► depends on whether v_m is an intermediate vertex of p_ij^(m) - case 1: v_m is not an intermediate vertex of p_ij^(m) ⇒ all intermediate vertices of p_ij^(m) are in V_(m-1) ⇒ p_ij^(m) = p_ij^(m-1)

  26. Floyd-Warshall Algorithm - case 2: v_m is an intermediate vertex of p_ij^(m) - decompose the path as v_i ⇝ v_m ⇝ v_j, with p1: v_i ⇝ v_m and p2: v_m ⇝ v_j - by the optimal-substructure property, both p1 and p2 are shortest paths - v_m is not an intermediate vertex of p1 or p2 ⇒ p1 = p_im^(m-1) and p2 = p_mj^(m-1) [figure: path from v_i to v_j through v_m, with all intermediate vertices in V_m]

  27. Floyd-Warshall Algorithm (2) A Recursive Solution to the APSP Problem: • d_ij^(m) = ω(p_ij^(m)): weight of a shortest path from v_i to v_j with all intermediate vertices from V_m = { v_1, v_2, ..., v_m } • note: d_ij^(n) = δ(v_i, v_j) since V_n = V ► i.e., all vertices are considered as candidate intermediate vertices of p_ij^(n).

  28. Floyd-Warshall Algorithm • compute d_ij^(m) in terms of d_ij^(k) with smaller k < m • m = 0: V_0 = ∅ ⇒ a path from v_i to v_j with no intermediate vertices, i.e., a v_i-to-v_j path with at most one edge ⇒ d_ij^(0) = ω_ij • m ≥ 1: d_ij^(m) = min{ d_ij^(m-1), d_im^(m-1) + d_mj^(m-1) }

  29. Floyd-Warshall Algorithm (3) Computing Shortest Path Weights Bottom Up:
      FLOYD-WARSHALL(W)  ► D^(0), D^(1), ..., D^(n) are n × n matrices
        D^(0) ← W
        for m ← 1 to n do
          for i ← 1 to n do
            for j ← 1 to n do
              d_ij^(m) ← min{ d_ij^(m-1), d_im^(m-1) + d_mj^(m-1) }
        return D^(n)

  30. Floyd-Warshall Algorithm
      FLOYD-WARSHALL(W)  ► D is an n × n matrix
        D ← W
        for m ← 1 to n do
          for i ← 1 to n do
            for j ← 1 to n do
              if d_ij > d_im + d_mj then
                d_ij ← d_im + d_mj
        return D

  31. Floyd-Warshall Algorithm • maintaining n distinct D matrices can be avoided by dropping all superscripts • the m-th iteration of the outermost for-loop begins with D = D^(m-1) and ends with D = D^(m) • the computation of d_ij^(m) depends on d_im^(m-1) and d_mj^(m-1): no problem if d_im and d_mj have already been updated to d_im^(m) and d_mj^(m), since d_im^(m) = d_im^(m-1) and d_mj^(m) = d_mj^(m-1) • running time: Θ(n³) = Θ(V³); simple code, no complex data structures, small hidden constants. A Python version of the in-place algorithm follows below.
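  A minimal Python version of the in-place algorithm from slide 30 (a sketch; the input is assumed to be the weight matrix W built as on slide 4):

    def floyd_warshall(W):
        n = len(W)
        D = [row[:] for row in W]          # D starts as a copy of W (= D^(0))
        for m in range(n):                 # allow v_m as an intermediate vertex
            for i in range(n):
                for j in range(n):
                    if D[i][m] + D[m][j] < D[i][j]:
                        D[i][j] = D[i][m] + D[m][j]
        return D                           # D^(n): d_ij = delta(v_i, v_j)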

  32. Transitive Closure of a Directed Graph • G' = (V, E'): transitive closure of G = (V, E), where ► E' = { (v_i, v_j) : there exists a path from v_i to v_j in G } • trivial solution: assign W such that ω_ij = 1 if (v_i, v_j) ∈ E, ∞ otherwise ► run the Floyd-Warshall algorithm on W ► d_ij^(n) < n ⇒ there exists a path from v_i to v_j, i.e., (v_i, v_j) ∈ E' ► d_ij^(n) = ∞ ⇒ no path from v_i to v_j, i.e., (v_i, v_j) ∉ E' ► running time: Θ(n³) = Θ(V³)

  33. Transitive Closure of a Directed Graph • better Θ(V³) algorithm: saves time and space ► W = adjacency matrix: ω_ij = 1 if i = j or (v_i, v_j) ∈ E, 0 otherwise ► run the Floyd-Warshall algorithm after replacing "min" → "∨" (logical OR) and "+" → "∧" (logical AND) • define t_ij^(m) = 1 if there is a path from v_i to v_j with all intermediate vertices from V_m, 0 otherwise ► t_ij^(n) = 1 ⇔ (v_i, v_j) ∈ E' and t_ij^(n) = 0 ⇔ (v_i, v_j) ∉ E' • recursive definition: t_ij^(m) = t_ij^(m-1) ∨ (t_im^(m-1) ∧ t_mj^(m-1)), with t_ij^(0) = ω_ij

  34. Transitive Closure of a Directed Graph
      T-CLOSURE(G)  ► T = (t_ij) is an n × n boolean matrix
        for i ← 1 to n do
          for j ← 1 to n do
            if i = j or (v_i, v_j) ∈ E then t_ij ← 1
            else t_ij ← 0
        for m ← 1 to n do
          for i ← 1 to n do
            for j ← 1 to n do
              t_ij ← t_ij ∨ (t_im ∧ t_mj)
        return T
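  A compact Python sketch of T-CLOSURE using booleans, with OR in place of "min" and AND in place of "+" (representing E as a set of 0-based vertex-index pairs is an assumption):

    def transitive_closure(n, edges):
        """edges: set of (i, j) pairs; returns the boolean matrix T."""
        T = [[i == j or (i, j) in edges for j in range(n)] for i in range(n)]
        for m in range(n):
            for i in range(n):
                for j in range(n):
                    T[i][j] = T[i][j] or (T[i][m] and T[m][j])
        return T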

  35. Johnson's Algorithm for Sparse Graphs (1) Preserving shortest paths by edge reweighting: • L1: given G = (V, E) with ω : E → R ► let h : V → R be any weighting function on the vertex set ► define ω̂ = ω̂(ω, h) : E → R as ω̂(u, v) = ω(u, v) + h(u) - h(v) ► let p_0k = <v_0, v_1, ..., v_k> be a path from v_0 to v_k (a) ω̂(p_0k) = ω(p_0k) + h(v_0) - h(v_k) (b) ω(p_0k) = δ(v_0, v_k) in (G, ω) ⇔ ω̂(p_0k) = δ̂(v_0, v_k) in (G, ω̂) (c) (G, ω) has a negative-weight cycle ⇔ (G, ω̂) has a negative-weight cycle

  36. Johnson's Algorithm for Sparse Graphs • proof (a): ω̂(p_0k) = ∑_{1≤i≤k} ω̂(v_(i-1), v_i) = ∑_{1≤i≤k} ( ω(v_(i-1), v_i) + h(v_(i-1)) - h(v_i) ) = ∑_{1≤i≤k} ω(v_(i-1), v_i) + h(v_0) - h(v_k), since the h terms telescope, = ω(p_0k) + h(v_0) - h(v_k) • proof (b): (⇒) show ω(p_0k) = δ(v_0, v_k) ⇒ ω̂(p_0k) = δ̂(v_0, v_k), by contradiction ► suppose there is a shorter path p'_0k from v_0 to v_k in (G, ω̂); then ω̂(p'_0k) < ω̂(p_0k) • due to (a) we have ω(p'_0k) + h(v_0) - h(v_k) = ω̂(p'_0k) < ω̂(p_0k) = ω(p_0k) + h(v_0) - h(v_k) ⇒ ω(p'_0k) < ω(p_0k), which contradicts that p_0k is a shortest path in (G, ω)

  37. Johnson's Algorithm for Sparse Graphs • proof (b): (⇐) similar • proof (c): consider a cycle c = <v_0, v_1, ..., v_k = v_0>; due to (a) ► ω̂(c) = ∑_{1≤i≤k} ω̂(v_(i-1), v_i) = ω(c) + h(v_0) - h(v_k) = ω(c) + h(v_0) - h(v_0) = ω(c), since v_k = v_0 ► so ω̂(c) = ω(c): c is negative in (G, ω̂) iff it is negative in (G, ω). QED

  38. Johnson's Algorithm for Sparse Graphs (2) Producing nonnegative edge weights by reweighting: • given (G, ω) with G = (V, E) and ω : E → R, construct a new graph (G', ω') with G' = (V', E') and ω' : E' → R ► V' = V ∪ { s } for some new vertex s ∉ V ► E' = E ∪ { (s, v) : v ∈ V } ► ω'(u, v) = ω(u, v) for all (u, v) ∈ E, and ω'(s, v) = 0 for all v ∈ V • vertex s has no incoming edges ⇒ s is not reachable from any v in V ► no shortest path from u ≠ s to v in G' contains vertex s ► (G', ω') has no negative-weight cycle ⇔ (G, ω) has no negative-weight cycle

  39. Johnson's Algorithm for Sparse Graphs • suppose that G and G' have no negative-weight cycle • L2: if we define h(v) = δ(s, v) for all v ∈ V in G', and ω̂ according to L1, ► we will have ω̂(u, v) = ω(u, v) + h(u) - h(v) ≥ 0 for every edge (u, v) ∈ E • proof: for every edge (u, v) ∈ E, δ(s, v) ≤ δ(s, u) + ω(u, v) in G' due to the triangle inequality ⇒ h(v) ≤ h(u) + ω(u, v) ⇒ 0 ≤ ω(u, v) + h(u) - h(v) = ω̂(u, v)

  40. Johnson's Algorithm for Sparse Graphs • example: (G', ω') [figure: the 5-vertex example graph with edge weights 3, 4, 8, 2, -5, 1, 7, -4, 6, plus a new source v_0 = s joined to every vertex v_1, ..., v_5 by zero-weight edges]

  41. Johnson's Algorithm for Sparse Graphs • Edge Reweighting: (G', ω') with h(v) [figure: the same graph labeled with the vertex values h(v) = δ(s, v), here 0, -1, -5, 0, -4, computed by Bellman-Ford from s]

  42. Johnson's Algorithm for Sparse Graphs • Edge Reweighting: (G, ω̂) [figure: the reweighted graph; all edge weights, e.g., 4, 13, 0, 0, 10, 0, 2, 0, 2, are now nonnegative]

  43. Johnson's Algorithm for Sparse Graphs: Computing All-Pairs Shortest Paths • adjacency-list representation of G • returns the n × n matrix D = (d_ij) where d_ij = δ(v_i, v_j), or reports the existence of a negative-weight cycle.

  44. Johnson's Algorithm for Sparse Graphs (see the Python sketch below)
      JOHNSON(G, ω)  ► D = (d_ij) is an n × n matrix
        construct (G' = (V', E'), ω') s.t. V' = V ∪ {s}; E' = E ∪ { (s, v) : v ∈ V }
          ► ω'(u, v) = ω(u, v) for (u, v) ∈ E, and ω'(s, v) = 0 for all v ∈ V
        if BELLMAN-FORD(G', ω', s) = FALSE then
          return "negative-weight cycle"
        else
          for each vertex v ∈ V' - {s} = V do
            h[v] ← d'[v]  ► d'[v] = δ'(s, v) computed by BELLMAN-FORD(G', ω', s)
          for each edge (u, v) ∈ E do
            ω̂(u, v) ← ω(u, v) + h[u] - h[v]  ► edge reweighting
          for each vertex u ∈ V do
            run DIJKSTRA(G, ω̂, u) to compute d̂[v] = δ̂(u, v) for all v ∈ V in (G, ω̂)
            for each vertex v ∈ V do
              d_uv ← d̂[v] - (h[u] - h[v])
          return D
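  A compact Python sketch of the whole pipeline (not the textbook's exact code): Bellman-Ford from the added source computes h, edges are reweighted to be nonnegative, and Dijkstra runs from every vertex. The adjacency-dict representation and helper names are assumptions; dijkstra is the heap-based routine sketched after slide 3, repeated here for self-containment.

    import heapq

    def dijkstra(adj, n, s):
        """Heap-based SSSP for nonnegative weights."""
        dist = [float('inf')] * n
        dist[s] = 0
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj.get(u, []):
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        return dist

    def johnson(adj, n):
        """APSP; adj = {u: [(v, w), ...]}, vertices 0..n-1. None if neg-wgt cycle."""
        INF = float('inf')
        # G': a new source s = n with a zero-weight edge to every vertex
        edges = [(u, v, w) for u in adj for (v, w) in adj[u]]
        edges += [(n, v, 0) for v in range(n)]
        h = [INF] * (n + 1)
        h[n] = 0
        for _ in range(n):                     # Bellman-Ford from s on G'
            for u, v, w in edges:
                if h[u] + w < h[v]:
                    h[v] = h[u] + w
        if any(h[u] + w < h[v] for u, v, w in edges):
            return None                        # negative-weight cycle detected
        # reweight: w_hat(u, v) = w(u, v) + h[u] - h[v] >= 0 by L2
        adj_hat = {u: [(v, w + h[u] - h[v]) for (v, w) in adj[u]] for u in adj}
        D = []
        for u in range(n):                     # one Dijkstra run per source
            d_hat = dijkstra(adj_hat, n, u)
            D.append([d_hat[v] - (h[u] - h[v]) if d_hat[v] < INF else INF
                      for v in range(n)])
        return D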

  45. Johnson's Algorithm for Sparse Graphs • running time: O(V² lg V + EV) ► edge reweighting: BELLMAN-FORD(G', ω', s): O(EV); computing the ω̂ values: O(E) ► |V| runs of DIJKSTRA: |V| × O(V lg V + E) = O(V² lg V + EV), with PQ = Fibonacci heap
