On the ICP Algorithm


Presentation Transcript


  1. On the ICP Algorithm Esther Ezra, Micha Sharir, Alon Efrat

  2. The problem: Pattern Matching Input: A = {a1, …, am}, B = {b1, …, bn}, with A, B ⊂ Rd. Goal: Translate A by a vector t ∈ Rd s.t. Φ(A+t, B) is minimized. The (1-directional) cost function Φ measures the resemblance of A+t to B via the nearest neighbor NB(a+t) of each point a+t in B; under the RMS measure, Φ(A+t, B) = (1/m) Σ_{a∈A} ||a + t − NB(a+t)||².

  3. The algorithm [Besl & McKay 92] Use local improvements. Start with t0 = 0 and repeat: at each iteration i, with cumulative translation ti-1 (the overall translation vector previously computed), • Assign each point a + ti-1 ∈ A + ti-1 to its NN b = NB(a + ti-1). • Compute the relative translation Δti that minimizes Φ with respect to this fixed (frozen) NN-assignment, and translate the points of A by Δti (to be aligned to B): ti = ti-1 + Δti. Some of the points of A may then acquire a new NN in B. • Stop when the value of Φ does not decrease.
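A minimal Python sketch of this translation-only ICP loop under the RMS cost (the measure used in the following slides); the brute-force nearest-neighbor search, the iteration cap, and the termination test are illustrative choices, not part of the original description.

```python
import numpy as np

def icp_translation(A, B, max_iter=1000):
    """Translation-only ICP under the RMS cost (a sketch).

    A: (m, d) array, B: (n, d) array.  Returns the cumulative translation t.
    """
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    t = np.zeros(A.shape[1])                 # t0 = 0
    prev = None
    for _ in range(max_iter):
        # Assign each a + t to its nearest neighbor in B (brute force).
        d2 = ((A + t)[:, None, :] - B[None, :, :]) ** 2
        nn = d2.sum(axis=2).argmin(axis=1)
        if prev is not None and np.array_equal(nn, prev):
            break                            # frozen cost cannot decrease further
        # Relative translation minimizing the frozen RMS cost:
        # the mean of the residuals NB(a + t) - (a + t).
        dt = (B[nn] - (A + t)).mean(axis=0)
        t = t + dt                           # ti = ti-1 + Δti
        prev = nn
    return t
```

The loop stops when the NN-assignment repeats: at that point the frozen minimization yields Δt = 0 and the cost can no longer decrease, matching the stopping rule above.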

  4. The algorithm: Convergence [Besl & McKay 92]. • The value of Φ (the RMS cost) decreases at each iteration. • The algorithm monotonically converges to a local minimum. • This applies for the Hausdorff cost as well. (Figure: a local minimum of the ICP algorithm for A = {a1, a2, a3} and B = {b1, b2, b3, b4}; the global minimum is attained when a1, a2, a3 are aligned on top of b2, b3, b4.)

  5. NN and Voronoi diagrams Each NN-assignment can be interpreted via the Voronoi diagram V(B) of B: b is the NN of a ⟺ a is contained in the Voronoi cell V(b) of V(B).

  6. The problem under the RMS measure

  7. The algorithm: An example in R1. Take m = 4, n = 2, with a1 = −3−ε (for a small ε > 0), a2 = −1, a3 = 1, a4 = 3 and b1 = 0, b2 = 4, so the bisector is (b1+b2)/2 = 2. The cost is Φ(A+t, B) = RMS(t) = (1/4) Σi (ai + t − NB(ai + t))². Iteration 1: with the initial NN-assignment (a1, a2, a3 → b1, a4 → b2), solving RMS’(t) = 0 gives Δt1 = 1 (up to ε-terms); Φ(A, B) = 3.

  8. An example in R1: Cont. Iteration 2: a3 + t1 has crossed the bisector (b1+b2)/2 = 2, so now a1, a2 → b1 and a3, a4 → b2; Φ(A+t1, B) = 2 and Δt2 = 1. Iteration 3: Φ(A+t1+t2, B) = 1, the NN-assignment does not change, and Δt3 = 0, so the algorithm stops.
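A quick numeric replay of this example (a sketch; the small ε is there only to break the tie of a3 + t1 at the bisector, as in the construction above):

```python
# Replay the 1-D example: A = {-3-eps, -1, 1, 3}, B = {0, 4}.
eps = 1e-6
A, B = [-3 - eps, -1.0, 1.0, 3.0], [0.0, 4.0]

t, prev = 0.0, None
while True:
    nn = [min(B, key=lambda b: abs(b - (a + t))) for a in A]     # frozen NN-assignment
    cost = sum((a + t - b) ** 2 for a, b in zip(A, nn)) / len(A)  # Phi(A + t, B)
    dt = sum(b - (a + t) for a, b in zip(A, nn)) / len(A)         # minimizer of the frozen RMS cost
    print(f"cost = {cost:.3f}   relative translation = {dt:.3f}")
    if nn == prev:               # assignment frozen: dt is (essentially) 0, stop
        break
    t, prev = t + dt, nn
# Prints (up to eps): costs 3, 2, 1 with relative translations 1, 1, 0, as on slides 7-8.
```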

  9. Number of iterations: An upper bound The value of Φ is reduced at each iteration, since • the relative translation Δt minimizes Φ with respect to the frozen NN-assignment, and • the value of Φ can only decrease with respect to the new NN-assignment (after translating by Δt). Hence no NN-assignment arises more than once! #iterations ≤ #NN-assignments (that the algorithm reaches).

  10. Number of iterations: A quadratic upper bound in R1 Critical event: the NN of some point ai changes, i.e., ai crosses into a new Voronoi cell as the translation t grows. Two “consecutive” NN-assignments differ by one pair (ai, bj), so #NN-assignments ≤ |{(ai, bj) | ai ∈ A, bj ∈ B}| = nm.

  11. The bound in R1 Is the quadratic upper bound tight? A linear, Ω(m), lower bound construction for n = 2: m = 2k+2, with a1 = −2k−1, a2 = −2k+1, …, am-1 = 2k−1, am = 2k+1 (consecutive odd integers), and b1 = 0, b2 = 4k, whose bisector is (b1+b2)/2 = 2k. The proof uses induction on i = 1, …, k/2. So the bound is tight when n = 2.

  12. Structural properties in R1 Lemma: At each iteration i ≥ 2 of the algorithm, … . Corollary (Monotonicity): The ICP algorithm always moves A in the same (left or right) direction: either Δti ≥ 0 for each i ≥ 0, or Δti ≤ 0 for each i ≥ 0. (Proof: induction on i.)

  13. A super-linear lower bound construction in R1. The construction (m = n): a1 = −nε − (n−1), ai = (i−1)/n − 1/2 + ε for i = 2, …, n, and bi = i−1 for i = 1, …, n (so b1 = 0, b2 = 1, and the first bisector is π1 = (b1+b2)/2 = 1/2). Theorem: The number of iterations of the ICP algorithm under the above construction is Ω(n log n).
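A small harness for experimenting with such constructions: it runs 1-D translation-only ICP (RMS cost) and counts the iterations. The instance generator encodes the reading of the coordinates given above; the ε-terms and the exact placement of a1 are assumptions recovered from a partly garbled slide, so treat the printed counts as an experiment rather than a verification of the theorem.

```python
def icp_iterations_1d(A, B, cap=10**6):
    """Count the iterations of translation-only ICP (RMS cost) on 1-D point sets."""
    t, prev = 0.0, None
    for count in range(cap):
        nn = [min(B, key=lambda b: abs(b - (a + t))) for a in A]
        if nn == prev:
            return count
        t += sum(b - (a + t) for a, b in zip(A, nn)) / len(A)   # frozen RMS minimizer
        prev = nn
    return cap

def lower_bound_instance(n, eps=1e-9):
    # One reading of the slide's coordinates (an assumption): an "anchor" a1 far to
    # the left, the remaining points of A spaced 1/n apart, and B = {0, 1, ..., n-1}.
    A = [-n * eps - (n - 1)] + [(i - 1) / n - 0.5 + eps for i in range(2, n + 1)]
    B = [float(j) for j in range(n)]
    return A, B

for n in (8, 16, 32, 64):
    print(n, icp_iterations_1d(*lower_bound_instance(n)))
```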

  14. (Figure: snapshots of the construction at iterations 2 and 3, with the points of A advancing past b2 = 1, b3 = 2, b4 = 3.) At iteration 4, the NN of a2 remains unchanged (b3): only n−2 points of A cross into V(b4).

  15. The lower bound construction in R1 Round j: the n−1 right points of A are assigned to b_{n-j+1} and b_{n-j+2}; the round consists of the steps in which points of A cross the bisector π_{n-j+1} = (b_{n-j+1} + b_{n-j+2}) / 2. Claim: at each such step i, the number of points of A that cross π_{n-j+1} is exactly j (except for the last step); at the last step of the round, the number of points that cross π_{n-j+1} or π_{n-j+2} is exactly j−1. Hence the number of steps in the round is ≥ n/j, and in the next round it is ≥ n/(j−1).

  16. The lower bound construction in R1 The overall number of steps is therefore at least the sum of n/j over the rounds, a harmonic-type sum, which is Ω(n log n).

  17. The lower bound construction in R1: Proof of the claim Induction on j: Consider the last step i of round j: l < j points have not yet crossed π_{n-j+1} (in the figure, the black points still remain in V(b_{n-j+2})). The points a2, …, a_{l+1} cross π_{n-j+1}, and a_{n-(j-l-2)}, …, an cross π_{n-j+2}. The overall number of crossing points is l + (j − l − 1) = j − 1. QED

  18. General structural properties in Rd Theorem: Let A = {a1, …, am}, B = {b1, …, bn} be two point sets in Rd. Then the overall number of NN-assignments, over all translated copies of A, is O(m^d n^d), and this is worst-case tight. Proof: A critical event occurs when the NN of ai + t changes from b to b’, i.e., when t lies on the common boundary of the cells V(b − ai) and V(b’ − ai) of the shifted Voronoi diagram V(B − ai).

  19. # NN Assignments in Rd Critical translations t for a fixed ai = cell boundaries in V(B − ai). All critical translations = union of all Voronoi edges = cell boundaries in the overlay M(A,B) of V(B − a1), …, V(B − am). Hence #NN-assignments = #cells in M(A,B): each cell of M(A,B) consists of translations with a common NN-assignment.

  20. # NN Assignments in Rd Claim: The overall number of cells of M(A,B) is O(m^d n^d). Proof: • Charge each cell of M(A,B) to some vertex v of it (v is charged at most 2^d times). • v arises as a vertex in the overlay Md of only d of the diagrams. • The complexity of Md is O(n^d) [Koltun & Sharir 05]. • There are O(m^d) d-tuples of diagrams V(B − a1), …, V(B − ad). QED
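The counting argument can be probed empirically: the sketch below samples translations on a grid and counts the distinct NN-assignments they induce, which is a crude lower estimate of the number of cells of M(A,B). The random instance, grid range, and resolution are arbitrary choices.

```python
import random

def nn_assignment(A, B, t):
    """Index of NB(a + t) for each a in A (d = 2, brute force)."""
    return tuple(
        min(range(len(B)), key=lambda j: sum((a[k] + t[k] - B[j][k]) ** 2 for k in range(2)))
        for a in A
    )

random.seed(0)
A = [(random.random(), random.random()) for _ in range(3)]   # m = 3
B = [(random.random(), random.random()) for _ in range(4)]   # n = 4

# Sample translations on a grid; each distinct assignment observed corresponds
# to a cell of the overlay M(A,B), so this undercounts the number of cells.
grid = [i / 50.0 - 1.0 for i in range(101)]                  # [-1, 1] in steps of 0.02
cells = {nn_assignment(A, B, (tx, ty)) for tx in grid for ty in grid}
print(len(cells), "distinct NN-assignments sampled; the bound above is O(m^2 n^2) for d = 2")
```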

  21. Lower bound construction for M(A,B), d = 2 Vertical cells: Minkowski sums of vertical cells of V(B) and the (reflected) horizontal row of points of A. Horizontal cells: Minkowski sums of horizontal cells of V(B) and the (reflected) vertical row of points of A. M(A,B) then contains Ω(nm) horizontal cells that intersect Ω(nm) vertical cells: Ω(n²m²) cells in total.

  22. # NN Assignments in Rd The lower bound construction can be extended to any dimension d. Open problem: Can the ICP algorithm step through all (or many) NN-assignments in a single execution? Probably not. (For d = 1: upper bound O(n²), lower bound Ω(n log n).)

  23. Monotonicity d = 1: The ICP algorithm always moves A in the same (left or right) direction. Generalization to d ≥ 2: let Π be the connected path obtained by concatenating the ICP relative translations Δti (Π starts at the origin). Monotonicity: Π does not intersect itself.

  24. Monotonicity Theorem: Let Δt be a move of the ICP algorithm from translation t0 to t0 + Δt. Then RMS(t0 + λΔt) is a strictly decreasing function of λ ∈ [0, 1]. That is, the cost function is monotone decreasing along Δt, and consequently Π does not intersect itself.
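A small numerical illustration of the theorem (not a proof): for a random planar instance and a single ICP move from t0 = 0, the RMS cost is sampled along t0 + λΔt for λ ∈ [0, 1] and checked to be non-increasing. The instance, seed, and grid are arbitrary.

```python
import random

def rms(A, B, t):
    """RMS cost of A translated by t against B: (1/m) * sum of squared NN distances."""
    d = len(t)
    return sum(min(sum((a[k] + t[k] - b[k]) ** 2 for k in range(d)) for b in B) for a in A) / len(A)

random.seed(1)
d = 2
A = [tuple(random.uniform(-1, 1) for _ in range(d)) for _ in range(5)]
B = [tuple(random.uniform(-1, 1) for _ in range(d)) for _ in range(7)]

# One ICP move from t0 = 0: freeze the NN-assignment and take the relative
# translation minimizing the frozen RMS cost (the mean residual).
t0 = (0.0,) * d
nn = [min(B, key=lambda b: sum((a[k] + t0[k] - b[k]) ** 2 for k in range(d))) for a in A]
dt = tuple(sum(b[k] - (a[k] + t0[k]) for a, b in zip(A, nn)) / len(A) for k in range(d))

# RMS(t0 + lam * dt) for lam in [0, 1] should be non-increasing, per the theorem.
vals = [rms(A, B, tuple(t0[k] + lam * dt[k] for k in range(d)))
        for lam in [i / 100 for i in range(101)]]
print(all(x >= y - 1e-12 for x, y in zip(vals, vals[1:])))   # expected: True
```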

  25. Proof of the theorem For a ∈ A, let S_{B-a}(t) = min_{b∈B} ||a + t − b||² = min_{b∈B} (||t||² + 2t·(a−b) + ||a−b||²) be the Voronoi surface whose minimization diagram is V(B−a); RMS(t) is the average of the m Voronoi surfaces S_{B-a}. S_{B-a}(t) − ||t||² is the lower envelope of n hyperplanes, i.e., the boundary of a concave polyhedron. Hence Q(t) = RMS(t) − ||t||², the average of {S_{B-a}(t) − ||t||²}_{a∈A}, is also the boundary of a concave polyhedron.

  26. Proof of the theorem The NN-assignment at t0 defines a facet f(t) of Q(t), which contains the point (t0, Q(t0)). Replace f(t) by the hyperplane h(t) containing it; h(t) corresponds to the frozen NN-assignment. The value of h(t) + ||t||² along Δt is monotone decreasing. Since Q(t) is concave and h(t) is tangent to Q(t) at t0, the value of Q(t) + ||t||² = RMS(t) along Δt decreases faster than the value of h(t) + ||t||². QED

  27. More structural properties of Π Corollary: The angle between any two consecutive edges of Π is obtuse. Proof: Let Δtk, Δtk+1 be two consecutive edges of Π. Claim: Δtk+1 · Δtk ≥ 0. (In the figure: b = NB(a + tk-1), b’ = NB(a + tk); when the NN of a changes, a must cross the bisector of b and b’, so b’ lies further ahead in the direction of Δtk.) QED

  28. More structural properties: Potential Lemma: At each iteration i ≥ 1 of the algorithm, (i) … (ii) … . Corollary: Let Δt1, …, Δtk be the relative translations computed by the algorithm. Then …, by (i), (ii), and the Cauchy-Schwarz inequality.

  29. More structural properties: Potential Corollary: d = 1: trivial. d ≥ 2: given that ||Δti|| ≥ … for each i ≥ 1, then tk …, since Π does not get too close to itself.

  30. The problem under the Hausdorff measure

  31. The relative translation Lemma: Let Di-1 be the smallest enclosing ball of {a + ti-1 − NB(a + ti-1) | a ∈ A}. Then Δti moves the center of Di-1 to the origin. Proof: Any infinitesimal move of D increases the (frozen) cost; the radius of D determines the (frozen) cost after the translation. QED

  32. No monotonicity (d ≥ 2) Lemma: In dimension d ≥ 2 the cost function does not necessarily decrease along Δt. Proof: Initially, ||a1 − b|| = max_{i=1,2,3} ||ai − b|| > r. Final distances: ||a2 + Δt − b|| = ||a3 + Δt − b|| = r and ||a1 + Δt − b’|| < r. The distance of a1 from its NN always decreases, while the distances of a2, a3 from b start by increasing. QED

  33. The one-dimensional problem Lemma (Monotonicity): The algorithm always moves A in the same (left or right) direction. Proof: Let |a* − b*| = max_{a∈A} |NB(a) − a|, and suppose a* < b*. Then a* − b* is the left endpoint of the initial minimum enclosing “ball” D0, and its center c < 0, so Δt1 > 0 and |Δt1| < |a* − b*|. Hence a* + Δt1 < b*, so b* is still the NN of a* + Δt1, and |a* + Δt1 − b*| = max_{a∈A} |a + Δt1 − NB(a)| ≥ max_{a∈A} |a + Δt1 − NB(a + Δt1)|; thus a* + Δt1 − b* is the left endpoint of D1, and a*, b* still determine the next relative translation. The lemma follows by induction. QED
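In one dimension the smallest enclosing “ball” of the residuals is just the interval spanned by their minimum and maximum, so each relative translation moves the midpoint of that interval to the origin. A minimal sketch of this 1-D Hausdorff variant (the example instance is arbitrary):

```python
def icp_hausdorff_1d(A, B, max_iter=1000):
    """Translation-only ICP under the (one-directional) Hausdorff cost in 1-D (a sketch)."""
    t, prev = 0.0, None
    for _ in range(max_iter):
        nn = [min(B, key=lambda b: abs(b - (a + t))) for a in A]
        if nn == prev:
            break
        residuals = [(a + t) - b for a, b in zip(A, nn)]
        # Smallest enclosing interval of the residuals is [min, max];
        # per the lemma above, the move sends its center to the origin.
        dt = -(min(residuals) + max(residuals)) / 2.0
        t += dt
        prev = nn
    return t

# Example: align A to B under the one-directional Hausdorff cost.
print(icp_hausdorff_1d([-3.1, -1.0, 1.0, 3.0], [0.0, 4.0]))
```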

  34. Number of iterations: An upper bound Theorem: Let ΔB be the spread of B, i.e., the ratio between the diameter of B and the distance between its closest pair. Then the number of iterations that the ICP algorithm executes is O((m+n) log ΔB / log n). Corollary: The number of iterations is O(m+n) when ΔB is polynomial in n.

  35. The upper bound: Proof sketch Use the fact that the pair a*, b* satisfying |a* − b*| = max_{a∈A} |NB(a) − a| always determines the left endpoint of Di. Put Ik = |b* − (a* + tk)|, and classify each relative translation Δtk as • short if …, or • long otherwise (the easy case). Overall: O(n log ΔB / log n).

  36. The upper bound: Proof sketch Short relative translations: if a ∈ A is involved in a short relative translation at the k-th iteration, then |a + tk-1 − NB(a + tk-1)| is large. If a is involved in an additional short relative translation, a has to be moved further by |a + tk-1 − NB(a + tk-1)|, so Ik = b* − (a* + tk) must decrease significantly.

  37. Number of iterations: A lower bound construction Theorem: The number of iterations of the ICP algorithm under the following construction (m = n) is Ω(n). At the i-th iteration, for i = 1, …, n−2: • Δti = 1/2^i. • Only ai+1 crosses into the next cell V(bi+2). • The overall translation length is < 1.

  38. Many open problems: • Bounds on # iterations • How many local minima? • Other cost functions • More structural properties • Analyze / improve running time, vs. running time of directly computing minimizing translation And so on…
