
Research directions on Visual Inspection Planning

This presentation provides an overview and short review of visual inspection planning, discussing open issues and possible solutions. It focuses on determining optimal camera poses for inspecting geometric entities while minimizing measurement errors and the number of camera sensors needed.


Presentation Transcript


  1. Research directions on Visual Inspection Planning By Alexis H Rivera

  2. Overview • Short review of visual inspection planning • Discussion of issues • Discussion of possible solutions and research directions

  3. Visual Inspection Problem • Visual Inspection • Given an object, determine if it satisfies the design specification using a camera as the measurement tool • Visual Inspection Planning • Where should the camera be located to optimally inspect the desired geometric entities?

  4. Visual Inspection Problem (cont.) • Optimality criterion • minimize the inherent measurement errors (quantization and displacement) • such that the desired entities are: • resolvable • in focus • within the field of view • visible • satisfying their dimensional tolerances • minimize the number of camera sensors needed

  5. Visual Inspection Problem (cont.) • Finding optimal camera pose: • For a given set of entities, S • minimize F(tx, ty, tz, Φ, θ, Ψ, S) • subject to: • g1j <= 0 (resolution), for j=1 to k • g2a <= 0 (focus) • g2b <= 0 (focus) • g3 <= 0 (field of view) • g4i <= 0 (visibility), for i=1 to m
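A minimal sketch of how such a constrained pose optimization could be set up with SciPy is shown below; the objective and constraint functions are hypothetical placeholders standing in for the planner's actual F and g functions, and the entity set, initial pose, and solver choice are assumptions made only for illustration.

```python
# Minimal sketch of the camera-pose optimization set up with SciPy.
# The objective and constraint functions below are hypothetical placeholders,
# not the planner's actual F, g1, g2, g3, g4.
import numpy as np
from scipy.optimize import minimize

def objective(pose, entities):
    # pose = (tx, ty, tz, phi, theta, psi); stand-in for the error measure F
    return float(np.sum(pose[:3] ** 2))

def g_resolution(pose, entities):
    # One value per entity j; feasible when g1j <= 0 (placeholder returns zeros)
    return np.zeros(len(entities))

entities = ["e1", "e2"]  # hypothetical entity set S

# SciPy expects inequality constraints written as c(pose) >= 0,
# so each g(pose) <= 0 constraint is passed in as -g(pose) >= 0.
constraints = [
    {"type": "ineq", "fun": lambda p: -g_resolution(p, entities)},
    # focus (g2a, g2b), field-of-view (g3), and visibility (g4i)
    # constraints would be added here in the same way
]

pose0 = np.array([0.0, 0.0, -1.0, 0.0, 0.0, 0.0])  # initial pose guess
result = minimize(objective, pose0, args=(entities,),
                  constraints=constraints, method="SLSQP")
print(result.x, result.success)
```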

  6. Resolution • For each entity j, there is a constraint g1j() • Map smallest line entity to l pixels • Example: l = 2
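As a worked sketch of what the resolution constraint can look like, assuming a simple pinhole model with focal length f and pixel pitch px (parameters the slide does not spell out), an entity j of length Lj at depth zj is required to span at least l pixels:

```latex
% Sketch of a resolution constraint under an assumed pinhole model
g_{1j} = l - \frac{f\,L_j}{p_x\,z_j} \le 0
\qquad \text{(e.g. } l = 2 \text{ pixels)}
```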

  7. Focus • Two constraints: g2a(), g2b() • Require the closest and furthest entity vertices from the camera position to lie within the near and far limits of the depth of field • (Figure: camera with near limit rc and far limit rf of the depth of field)
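Written out explicitly as a sketch, with Dmin and Dmax denoting the closest and furthest entity-vertex distances from the camera (an assumption of this note) and rc, rf the near and far depth-of-field limits from the figure, the two constraints take the form:

```latex
% Sketch of the two focus (depth-of-field) constraints
g_{2a} = r_c - D_{\min} \le 0,
\qquad
g_{2b} = D_{\max} - r_f \le 0
```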

  8. Field Of View • One constraint: g3() • The bounding cone of the entities must be contained within the camera's viewing cone • (Figure: bounding cone vs. viewing cone)
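One common way to express the cone-containment test, offered here only as a sketch and not necessarily the exact formulation used in this work, is through the half-angles of the two cones measured from the optical axis:

```latex
% Sketch: the bounding-cone half-angle must not exceed the viewing-cone half-angle
g_{3} = \alpha_{\text{bound}} - \alpha_{\text{view}} \le 0
```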

  9. Visibility • Many equations: g4i() for i=1 to m • Plane equations that bound the visibility of the desired entities • Example (Figure: entities e1, e2, e3, e4 in the x–y plane): to see entities e1, e2, e3, e4, the camera must satisfy the equation y < 0
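Each such plane constraint is a half-space test on the camera position t = (tx, ty, tz); a generic sketch of its form, with ni and di introduced here only as an illustrative plane normal and offset:

```latex
% Sketch of a visibility constraint as a half-space on the camera position
g_{4i} = \mathbf{n}_i^{\top}\mathbf{t} + d_i \le 0, \qquad i = 1, \dots, m
```

The example on this slide corresponds to ni = (0, 1, 0) and di = 0, i.e. ty < 0.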

  10. Visual Inspection Planning Solution: Finding optimal plan

  11. Issues • The generation of the visibility constraints is incomplete • The software used to generate such constraints is obsolete • The current nonlinear optimization algorithm is very sensitive to the initial conditions • The process of generating a plan is not automated and is very tedious • The test cases used to validate the software were very simple

  12. Plan Generation

  13. Possible Experiments • Optimization Process • Error Models • Inspection Planning Strategies

  14. Possible Experiments (cont.) • Optimization Process • Alternative objective functions: Minimum Mean Square Error vs. Robustness Approach • Alternative optimization algorithm: Is there a better optimization algorithm? Does it matter? • Choice of initial pose: Is it possible to find a good initial pose that guarantees an optimal solution?

  15. Possible Experiments (cont.) • Error Models • Sub-pixel quantization vs. pixel quantization: will it make a difference? • Inspection Planning Strategies • Characterizing sub-node strategies • Observations on more complicated objects • How to evaluate the system? What criteria can be used to validate it?

  16. Building Blocks

  17. Automation • How do I plan to automate this process?

  18. Plan Generation

  19. Plan Verification

  20. Optimization Process

  21. Alternative objective functions • Idea: How does the choice of objective function affect the inspection plan? • Explain the straightforward objective function • Explain the derivation of the MMSE • Explain the robustness approach

  22. Optimization Problem • Finding optimal camera pose: • For a given set of entities, S • minimize F(tx, ty, tz, Φ, θ, Ψ, S) • subject to: • g1j <= 0 (resolution), for j=1 to k • g2a <= 0 (focus) • g2b <= 0 (focus) • g3 <= 0 (field of view) • g4i <= 0 (visibility), for i=1 to m

  23. Optimization Problem • Different choices for the objective function: • (Tarabanis) Maximize a weighted sum of sensor constraints • (Crosby) Minimize Mean Square Error • (Gu) Maximize the robustness • The objective function characterizes the quality of the solution

  24. Maximizing weighted sum of sensor constraints • Max F(tx, ty, tz, Φ, θ, Ψ, S) = α1g1 + α2ag2a + α2bg2b + α3g3 + α4g4 • The higher the value of the objective function, the better the constraints are satisfied • The weights indicate how much each constraint contributes to the objective function
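A minimal sketch of this weighted-sum objective; the constraint functions and the weights below are illustrative placeholders, not the values used in the cited work:

```python
# Minimal sketch of the weighted-sum objective; g1..g4 and the weights
# are illustrative placeholders, not the planner's actual functions.
def weighted_sum_objective(pose, weights, constraint_fns):
    # F(pose) = sum_k alpha_k * g_k(pose); larger means the constraints
    # are satisfied with more margin
    return sum(alpha * g(pose) for alpha, g in zip(weights, constraint_fns))

# Dummy usage: five constraint "scores" with hand-picked weights
gs = [lambda p: 1.0, lambda p: 0.5, lambda p: 0.2, lambda p: 0.8, lambda p: 0.3]
alphas = [1.0, 1.0, 1.0, 2.0, 0.5]
print(weighted_sum_objective(None, alphas, gs))
```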

  25. Maximizing weighted sum of sensor constraints • Issues: • Weights are based on experimental results • Optimization can be biased because of scaling issues • Assumes that multiple and coupled objectives can be combined in an additive sense into a single objective

  26. Minimizing Mean Square Error • Minimize the total expected error • Displacement errors • Quantization errors • MSE: E[ε²] = E[εd²] + E[εq²]

  27. Next Slides • Give a feel for the derivation of the MSE • The statistics are found by defining and relating random variables • Some of these results are used in other objective function definitions

  28. Imaging process • (Figure: world, camera, and image coordinate systems, showing a world point p and its image point q)

  29. Imaging process • A point (x, y, z) in the WCS is related to a point (u, v) in image-plane coordinates by: u = Fu(x, y, z, tx, ty, tz, Φ, θ, Ψ), v = Fv(x, y, z, tx, ty, tz, Φ, θ, Ψ) • Fu, Fv are coordinate transformation functions • (tx, ty, tz) is the vector representing the camera location • (Φ, θ, Ψ) is the camera orientation

  30. Imaging with displacement error • Let dx, dy, dz, dΦ, dθ, dΨ be the displacement errors in position and orientation • Assume they are i.i.d. Gaussian random variables with zero mean and variances (σx², σy², σz², σΦ², σθ², σψ²) • The mapping is now: u’ = Fu(x, y, z, tx, ty, tz, Φ, θ, Ψ, dx, dy, dz, dΦ, dθ, dΨ), v’ = Fv(x, y, z, tx, ty, tz, Φ, θ, Ψ, dx, dy, dz, dΦ, dθ, dΨ)
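The Monte Carlo sketch below illustrates the idea: perturb the camera pose with zero-mean Gaussian errors and observe the induced image-plane errors εdu, εdv. The simple pinhole projection, the Euler-angle convention, and all numeric values are assumptions made for illustration, not the planner's actual Fu and Fv.

```python
# Monte Carlo sketch of displacement error; the pinhole projection and all
# parameter values are illustrative assumptions, not the planner's Fu, Fv.
import numpy as np

def rotation_matrix(phi, theta, psi):
    """Rotation from world to camera coordinates (Z-Y-X Euler angles, assumed)."""
    cz, sz = np.cos(psi), np.sin(psi)
    cy, sy = np.cos(theta), np.sin(theta)
    cx, sx = np.cos(phi), np.sin(phi)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def project(point, pose, f=0.016):
    """Pinhole projection of a world point to image coordinates (u, v)."""
    t, angles = pose[:3], pose[3:]
    p_cam = rotation_matrix(*angles) @ (point - t)
    return f * p_cam[0] / p_cam[2], f * p_cam[1] / p_cam[2]

rng = np.random.default_rng(0)
point = np.array([0.1, 0.05, 0.0])                      # world point (x, y, z)
pose = np.array([0.0, 0.0, -1.0, 0.0, 0.0, 0.0])        # nominal (tx,ty,tz,phi,theta,psi)
sigma = np.full(6, 1e-3)                                # pose-error std deviations

u0, v0 = project(point, pose)
errors = []
for _ in range(10000):
    noisy_pose = pose + rng.normal(0.0, sigma)          # i.i.d. Gaussian pose error
    u1, v1 = project(point, noisy_pose)
    errors.append((u1 - u0, v1 - v0))                   # (eps_du, eps_dv)

errors = np.array(errors)
print("mean:", errors.mean(axis=0), "std:", errors.std(axis=0))
```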

  31. Displacement error of single point • The displacement errors for each end point are new Gaussian RVs: εdu = u’ – u, εdv = v’ – v • (Figure: image plane showing (u, v), (u’, v’), and the error components εdu, εdv)

  32. Displacement error of single point • The displacement errors for each end point are new RVs • Let ξ, χ, ς be functions of the RVs dx, dy, dz, dΦ, dθ, dΨ; these are Gaussian RVs • Let εdu = u’ – u = ς / χ and εdv = v’ – v = ξ / χ • Then εdu and εdv are functions of the form g(x, y) = x / y, whose mean and variance can be approximated
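The approximation itself is not shown on the slide; a sketch of the standard first-order (delta-method) result for a ratio g(x, y) = x / y, which is presumably the kind of approximation intended here:

```latex
% First-order (delta-method) approximation for g(x, y) = x / y
\mathrm{E}\!\left[\frac{x}{y}\right] \approx \frac{\mu_x}{\mu_y},
\qquad
\mathrm{Var}\!\left[\frac{x}{y}\right] \approx
\left(\frac{\mu_x}{\mu_y}\right)^{\!2}
\left(
\frac{\sigma_x^2}{\mu_x^2} + \frac{\sigma_y^2}{\mu_y^2}
- \frac{2\,\mathrm{Cov}(x, y)}{\mu_x \mu_y}
\right)
```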

  33. Displacement error of individual components of line • The displacement errors for a line are new RVs: εdx = εdu1 – εdu2, εdy = εdv1 – εdv2 • (Figure: image plane showing (u, v), (u’, v’), and the error components εdu1, εdv1, εdu2, εdv2)

  34. Displacement error of line • The dimensional error is geometrically approximated as εd ≈ εdx cos(α) + εdy sin(α), where α is the angle between the line and the image x-axis

  35. Displacement error of k lines • Total dimensional error for k lines is:

  36. Quantization Error 1D • Actual Length: L = l·rx + u + v, where u and v are uniform random variables • Quantized Length:
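A short worked step, consistent with the triangular distributions and the 1/6 factor that appear on later slides, under the assumption that the net quantization error εqx = Lq − L behaves as the difference of two independent uniform endpoint errors over one pixel of width rx:

```latex
% Sketch: quantization error as the difference of two independent
% uniform endpoint errors over a pixel of width r_x
f_{\varepsilon_{qx}}(\varepsilon) = \frac{r_x - |\varepsilon|}{r_x^2},
\quad \varepsilon \in [-r_x, r_x],
\qquad
\mathrm{E}[\varepsilon_{qx}] = 0,
\qquad
\mathrm{E}[\varepsilon_{qx}^2] = \frac{r_x^2}{6}
```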

  37. Quantization Error 2D • The actual and quantized lengths define new RVs:

  38. Quantization Error 2D The quantization error for the horizontal and vertical components:

  39. Quantization Error for a line • The total quantization error is determined by a geometric approximation: εq ≈ εqx cos(α) + εqy sin(α) • zero mean • E[εq²] = σq² = (1/6)(rx² cos²α + ry² sin²α)

  40. Total quantization error for k lines Total dimensional error due to quantization in all lines:

  41. Mean Square Error • Total error: ε = εd – εq • MSE: E[ε²] = E[εd²] + E[εq²]
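Spelling out the step, under the assumption (suggested by the preceding slides) that the displacement and quantization errors are independent and zero-mean so the cross term vanishes:

```latex
\mathrm{E}[\varepsilon^2]
= \mathrm{E}\!\left[(\varepsilon_d - \varepsilon_q)^2\right]
= \mathrm{E}[\varepsilon_d^2] - 2\,\mathrm{E}[\varepsilon_d]\,\mathrm{E}[\varepsilon_q] + \mathrm{E}[\varepsilon_q^2]
= \mathrm{E}[\varepsilon_d^2] + \mathrm{E}[\varepsilon_q^2]
```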

  42. Dimensional Tolerances • The dimensional tolerance is satisfied if: • fε(ε) is the probability density function of the dimensional inspection error
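The condition itself does not appear in the transcript; a plausible reconstruction, assuming ΔL denotes the dimensional tolerance (the symbol used on the robustness slide) and Pth the required probability threshold:

```latex
% Plausible form of the dimensional-tolerance condition
\int_{-\Delta L}^{\Delta L} f_{\varepsilon}(\varepsilon)\, d\varepsilon \;\ge\; P_{\mathrm{th}}
```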

  43. Dimensional Tolerances • The condition can be rewritten in terms of characteristic functions • where Φε(w) = Φεd(w)·Φεq(w)

  44. Finding Φεd(w) • Recall the following relations between the random variables: • εd = f(εdx, εdy) ≈ εdx cos(α) + εdy sin(α) • εdx = f(εdu1, εdu2) = εdu1 – εdu2 • εdy = f(εdv1, εdv2) = εdv1 – εdv2 • εdu = f(ς, χ) = ς / χ • εdv = f(ξ, χ) = ξ / χ • Problem: we do not know the distribution of εd because we do not know the distributions of εdu and εdv

  45. Finding Φεd(w) • Solution: approximate the distributions of εdu and εdv with Gaussians • It is shown that this is an acceptable approximation if the camera pose is within the feasible region

  46. Finding Φεd(w) • Consequence: εd is Gaussian because of the linear relationships between the RVs

  47. Finding Φεq(w) • Recall the following relations between the random variables: • εq = f(εqx, εqy) ≈ εqx cos(α) + εqy sin(α) • εqx = f(Lqx, Lx) = Lqx – Lx • εqy = f(Lqy, Ly) = Lqy – Ly • fεqx(εqx) and fεqy(εqy) are triangular distributions defined on the ranges [-rx, rx] and [-ry, ry] respectively. Their characteristic functions are:

  48. Finding Φεq(w) • Finally Φεq(w) is given by:
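The formula itself does not appear in the transcript; a plausible reconstruction, using the standard characteristic function of a symmetric triangular density on [-r, r] and the combination εq ≈ εqx cos(α) + εqy sin(α) from slide 39, and assuming the horizontal and vertical components are independent:

```latex
% Characteristic function of a symmetric triangular density on [-r, r]
\Phi(w) = \left(\frac{\sin(r w / 2)}{r w / 2}\right)^{2}
% so that
\Phi_{\varepsilon_q}(w)
= \Phi_{\varepsilon_{qx}}(w \cos\alpha)\,\Phi_{\varepsilon_{qy}}(w \sin\alpha)
= \left(\frac{\sin(r_x w \cos\alpha / 2)}{r_x w \cos\alpha / 2}\right)^{2}
\left(\frac{\sin(r_y w \sin\alpha / 2)}{r_y w \sin\alpha / 2}\right)^{2}
```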

  49. Why do I care about all that derivation?? Next objective function: Robustness

  50. Maximizing the robustness • Recall: • Define δ*² as the maximum permissible inspection variance • (Figure: two inspection-error densities over the tolerance interval [-ΔL, ΔL], comparing the cases δ*² > δ (probability above the threshold) and δ*² = δ (probability equal to the threshold))
