
Photo Navigator



Presentation Transcript


  1. Photo Navigator Chi-Chang Hsieh1, Wen-Huang Cheng2, Chia-Hu Chang2, Yung-Yu Chuang1, Ja-Ling Wu2 1Department of Computer Science and Information Engineering 2Graduate Institute of Networking and Multimedia National Taiwan University MM’08

  2. Outline • Introduction • System Overview • 3D SCENE MODEL CONSTRUCTION • Feature matching • 3D scene modeling • Camera alignment • Photo ordering • Music beat alignment and speed control • Evaluation • Conclusion

  3. Introduction A single trip can yield thousands of photographs. Taking a trip is fun, but organizing the photos afterwards is tedious and painful. Slideshows from tools such as ACDSee and Picasa are dull; vivid and eye-catching presentations remain the privilege of professionals.

  4. Introduction • Photo Navigator • enhancing the photo browsing experience • taking a trip back in time to revisit the place • revealing the spatial relations between photos • well-routed browsing path • fully automatic • only requires users to input photos

  5. System Overview

  6. 3D SCENE MODEL CONSTRUCTION • Given a source image I and a destination image J, an automatic procedure is proposed to obtain the following: • a cuboid model M_{I→J} for the source image I • initial extrinsic camera parameters (rotation and translation) • final extrinsic camera parameters

  7. 3D SCENE MODEL CONSTRUCTION Feature matching, 3D scene modeling, Camera alignment

  8. Feature matching SIFT (Scale-Invariant Feature Transform) features are matched between the two images. To reduce the adverse effect of falsely matched features, we estimate the fundamental matrix F between I and J, which satisfies x'^T F x = 0 for corresponding points x and x'. We use the normalized 8-point algorithm and RANSAC to compute F.
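The epipolar constraint x'^T F x = 0 can be checked numerically, which is exactly the residual a RANSAC inlier test measures. A minimal sketch, assuming normalized coordinates (K = I) so that for a purely translating camera F reduces to the essential matrix [t]_x; `skew` and `epipolar_residuals` are hypothetical helper names, not from the paper:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x such that [t]_x v = t x v."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_residuals(F, pts1, pts2):
    """|x'^T F x| for each correspondence; pts are homogeneous Nx3 arrays."""
    return np.abs(np.einsum('ni,ij,nj->n', pts2, F, pts1))

# Synthetic check: a camera translating by t with identity rotation and
# K = I has F = [t]_x.  The epipole t lies on every epipolar line in the
# second image, so the residual for (x1, t) is ~0.
t = np.array([1.0, 0.5, 0.2])
F = skew(t)
x1 = np.array([[0.3, 0.1, 1.0]])
print(epipolar_residuals(F, x1, np.array([t])))  # ~0 for a true inlier
```

In a RANSAC loop, correspondences whose residual (or, better, their Sampson distance) exceeds a threshold would be rejected as outliers before F is re-estimated from the inliers.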

  9. Feature matching

  10. 3D scene modeling

  11. 3D scene modeling

  12. 3D scene modeling

  13. 3D scene modeling However, when the road is trapezoidal, or the detected ground area is inaccurate because of people in the image, its bottom edge may not be a good boundary. Thus, when the distance between the top and the bottom of the detected ground area is larger than 10% of the image height, we instead set the top of the detected ground area as the bottom of the rear wall.
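The 10% rule above is a simple threshold test. A minimal sketch (the function name and the row-coordinate interface are assumptions for illustration; rows grow downward, as in image coordinates):

```python
def rear_wall_bottom(ground_top, ground_bottom, image_height):
    """Pick the bottom row of the rear wall from the detected ground area.

    If the ground area spans more than 10% of the image height (e.g. a
    trapezoidal road, or a region distorted by people), its bottom edge
    is unreliable, so the top of the ground area is used instead.
    """
    if (ground_bottom - ground_top) > 0.10 * image_height:
        return ground_top
    return ground_bottom

# Ground area spanning 80 px of a 480-px image exceeds the 10% threshold,
# so the top of the ground area becomes the rear-wall bottom.
print(rear_wall_bottom(400, 480, 480))  # 400
```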

  14. 3D scene modeling Rear wall (Ω)

  15. 3D scene modeling

  16. Camera alignment The initial camera pose can be estimated using the method proposed by Cao et al. [4]. Since the focal length is assumed to be 1, we can also obtain the corresponding camera projection matrix Π.

  17. Camera alignment Final camera pose: to speed up the estimation, instead of using all pixels, we measure the discrepancy only between matched features.

  18. Camera alignment To make the camera path more similar to a walk through the scene, two camera parameters (θz and ty) are kept fixed.

  19. Photo ordering wtz > wtx > wry > wrx
• wtz: weight for translation along the z-axis, encouraging camera motion that spends more time "moving forward".
• wtx: translation along the x-axis (panning).
• wry: rotation about the y-axis (panoramic motion).
• wrx: rotation about the x-axis (the camera motion of looking up and down).
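One plausible reading of the weight ordering is a per-transition preference score, a weighted sum of the four motion magnitudes. A minimal sketch; the concrete weight values and the `transition_score` interface are assumptions, since the slide only states the ordering w_tz > w_tx > w_ry > w_rx:

```python
# Hypothetical weight values respecting the stated ordering
# w_tz > w_tx > w_ry > w_rx, with forward motion preferred most.
WEIGHTS = {'tz': 4.0, 'tx': 3.0, 'ry': 2.0, 'rx': 1.0}

def transition_score(motion):
    """Preference score for a photo-to-photo transition.

    `motion` maps a camera parameter ('tz', 'tx', 'ry', 'rx') to the
    magnitude of that motion component between the two photos.  A
    transition dominated by forward translation (tz) scores highest.
    """
    return sum(WEIGHTS[k] * abs(v) for k, v in motion.items())

forward = transition_score({'tz': 1.0})
panning = transition_score({'tx': 1.0})
print(forward > panning)  # True: forward motion is preferred
```

A photo ordering would then be chosen to maximize the accumulated score (or, equivalently, minimize the complementary cost) over consecutive photo pairs.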

  20. Photo ordering Assume that the preferred velocities for the four camera parameters tx, tz, θx, θz are vx, vz, ωx, ωz, respectively. The x-offset, z-offset, x-rotation, and y-rotation of the source and destination cameras are then compared against these preferred velocities.

  21. Photo ordering

  22. Music beat alignment

  23. speed control

  24. Evaluation

  25. Evaluation Reality. How do they feel about the realism of the virtual walk-through? Visual perception. How do they like the novel views between photographs? Smoothness. What do they think about the smoothness of the transitions?

  26. Evaluation Spatiality. How strong a sense of space does the sequence offer them after watching the slideshow? Acceptance. How do they feel about the overall system? Experience. Do they think that the slideshow helps them experience this travel and encourages them to visit?

  27. Evaluation

  28. Conclusion Compared to Photo Story, our system conveys a stronger sense of space and makes slideshows more enjoyable to watch. Compared to Photo Tourism, our system can work with a sparse set of photographs and is more suitable for personal travel photo slideshows.

  29. Conclusion Many aspects of our system can be improved. For example, automatic algorithms for creating the "pop-up" foregrounds are worth further investigation, and more efficient algorithms for feature matching would greatly speed up our system.

  30. SIFT SIFT transforms image data into feature-point coordinates that are invariant to scale, so that images or objects taken from different viewpoints can still be matched. To sample the scale space in a scale-invariant way, SIFT builds it with a Difference-of-Gaussian (DoG) filter.

  31. SIFT Why DoG? Mikolajczyk (2002) found that the maxima and minima of the Difference-of-Gaussian function produce the most stable image features compared to a range of other possible image functions, such as the gradient, Hessian, or Harris corner function.

  32. SIFT • The SIFT algorithm consists of the following main steps: • Scale-space extrema detection • Accurate keypoint localization • Orientation assignment • Keypoint descriptor

  33. Scale-space extrema detection • Convolve the input image with Gaussian filters: • L(x, y, σ) = G(x, y, σ) ∗ I(x, y) • DoG: D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) ∗ I(x, y) = L(x, y, kσ) − L(x, y, σ) • An octave has s intervals, with k = 2^(1/s)
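The DoG definition above can be sketched directly: blur the image at two adjacent scales and subtract. A minimal numpy sketch assuming s = 3 (so k = 2^(1/3)); the function names are illustrative, not Lowe's implementation:

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """L(x, y, sigma) = G(x, y, sigma) * I(x, y), via a separable
    convolution (rows, then columns)."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)

def dog(img, sigma, k=2 ** (1 / 3)):
    """D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)."""
    return blur(img, k * sigma) - blur(img, sigma)
```

On a constant image the interior DoG response is zero, as expected for a band-pass filter; real pipelines stack D at successive scales within each octave to search for extrema.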

  34. Scale-space extrema detection (figure: Gaussian scale pyramid with s = 4, scales σ, 2^(1/4)σ, 4^(1/4)σ, 8^(1/4)σ, 2σ in the first octave, continuing with 32^(1/4)σ, 64^(1/4)σ, 128^(1/4)σ in the next)

  35. Scale-space extrema detection: local extrema
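In SIFT, a local extremum is a DoG sample larger (or smaller) than all of its 26 neighbours in the 3×3×3 scale-space neighbourhood. A minimal sketch, with `dog_stack` an assumed (scales × rows × cols) array of DoG layers:

```python
import numpy as np

def is_local_extremum(dog_stack, s, y, x):
    """True if D[s, y, x] is the max or min of its 3x3x3 scale-space
    neighbourhood: 8 neighbours in its own DoG layer plus 9 in each of
    the two adjacent layers (26 comparisons in total)."""
    patch = dog_stack[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    v = dog_stack[s, y, x]
    return v == patch.max() or v == patch.min()

# A lone peak in the middle DoG layer is detected as a local extremum.
stack = np.zeros((3, 5, 5))
stack[1, 2, 2] = 1.0
print(is_local_extremum(stack, 1, 2, 2))  # True
```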

  36. Keypoint localization • Remove low-contrast points • Represent the DoG function with its Taylor expansion • The location of the extremum, x̂, is determined by taking the derivative of this function with respect to x and setting it to zero

  37. Keypoint localization Substituting x̂ back into the Taylor expansion gives the function value at the extremum; if its magnitude is < 0.03, the point is low contrast and is discarded.

  38. Keypoint localization • Remove points that may lie on edges • The difference-of-Gaussian function will have a strong response along edges, even if the location along the edge is poorly determined and therefore unstable to small amounts of noise. • The principal curvatures can be computed from a 2×2 Hessian matrix, H, computed at the location and scale of the keypoint.

  39. Keypoint localization Let α be the eigenvalue with the largest magnitude and β be the smaller one. A keypoint is kept only when Tr(H)²/Det(H) = (α + β)²/(αβ) is below the threshold (r + 1)²/r.
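The eigenvalue-ratio test above needs no eigendecomposition: trace and determinant of the 2×2 Hessian suffice. A minimal sketch, using r = 10 as in Lowe's SIFT paper (the function name is illustrative):

```python
import numpy as np

def passes_edge_test(H, r=10.0):
    """Reject edge-like keypoints via the principal-curvature ratio.

    With alpha/beta the eigenvalues of the 2x2 Hessian H (alpha the
    larger), Tr(H)^2 / Det(H) = (alpha + beta)^2 / (alpha * beta) grows
    with the ratio alpha/beta.  Keep the keypoint only when
    Tr(H)^2 / Det(H) < (r + 1)^2 / r.
    """
    tr = H[0, 0] + H[1, 1]
    det = H[0, 0] * H[1, 1] - H[0, 1] * H[1, 0]
    if det <= 0:  # eigenvalues of opposite sign: not an extremum, discard
        return False
    return tr * tr / det < (r + 1) ** 2 / r

# Blob-like point (equal curvatures) passes; edge-like (alpha >> beta) fails.
print(passes_edge_test(np.diag([2.0, 2.0])))   # True
print(passes_edge_test(np.diag([50.0, 0.5])))  # False
```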

  40. Orientation assignment Use the gradients of the pixels around a keypoint to decide its orientation and magnitude. For each keypoint, we consider the gradient magnitudes and orientations of the points in a surrounding region; the keypoint's dominant orientation is decided by a histogram vote.

  41. Orientation assignment Neighbouring points are weighted by a Gaussian mask with σ equal to 1.5 times the keypoint's scale. If some other orientation's gradient magnitude exceeds 80% of the dominant orientation's, the keypoint is assigned multiple dominant orientations.
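The 80% rule can be sketched as a threshold over the orientation histogram. A minimal sketch assuming the usual 36-bin (10° per bin) histogram; the function name and the bin-center convention are assumptions (real SIFT also fits a parabola to refine each peak):

```python
import numpy as np

def dominant_orientations(hist, bin_width=10.0, threshold=0.8):
    """Return the orientations (in degrees) assigned to a keypoint.

    `hist` is the gradient-orientation histogram.  Every bin whose value
    reaches 80% of the highest bin yields an orientation, so a keypoint
    may receive multiple dominant orientations.
    """
    peak = hist.max()
    bins = np.nonzero(hist >= threshold * peak)[0]
    return [(b + 0.5) * bin_width for b in bins]

hist = np.zeros(36)
hist[0] = 10.0   # dominant direction
hist[9] = 9.0    # above 80% of the peak: a second orientation
hist[18] = 2.0   # below the threshold: ignored
print(dominant_orientations(hist))  # [5.0, 95.0]
```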

  42. Keypoint descriptor Find a descriptor that can represent each keypoint.

  43. Fundamental matrix The epipolar geometry theorem.

  44. Parameterized camera projection P = KR[Id | −C] • K: intrinsic calibration • R: extrinsic calibration • C: camera center • δx and δy: scaling factors • (px, py): image center • s: skew factor
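The parameterization P = KR[Id | −C] can be assembled directly. A minimal numpy sketch (the helper names are illustrative; the intrinsic matrix uses the scaling factors, image center, and skew listed above):

```python
import numpy as np

def intrinsic_matrix(dx, dy, px, py, s=0.0):
    """K from scaling factors (dx, dy), image center (px, py), skew s."""
    return np.array([[dx, s, px],
                     [0.0, dy, py],
                     [0.0, 0.0, 1.0]])

def projection_matrix(K, R, C):
    """P = K R [Id | -C], mapping world points to homogeneous pixels."""
    return K @ R @ np.hstack([np.eye(3), -C.reshape(3, 1)])

def project(P, X):
    """Project a 3D point X to inhomogeneous image coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# With K = I (focal length 1, as the paper assumes), an axis-aligned
# camera at the origin sees (0.2, 0.4, 2) at (0.1, 0.2).
P = projection_matrix(np.eye(3), np.eye(3), np.zeros(3))
print(project(P, np.array([0.2, 0.4, 2.0])))  # [0.1 0.2]
```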

  45. Post-processing
