
A High Resolution 3D Tire and Footprint Impression Acquisition Device for Forensics Applications
Ruwan Egoda Gamage, Abhishek Joshi, Jiang Yu Zheng, Mihran Tuceryan
Computer Science Department, IUPUI


Presentation Transcript


  1. A High Resolution 3D Tire and Footprint Impression Acquisition Device for Forensics Applications
  Ruwan Egoda Gamage, Abhishek Joshi, Jiang Yu Zheng, Mihran Tuceryan
  Computer Science Department, IUPUI

  2. Acknowledgement This project was supported by Award No. 2010-DN-BX-K145, awarded by the National Institute of Justice, Office of Justice Programs, U.S. Department of Justice. The opinions, findings, and conclusions or recommendations expressed in this exhibition are those of the author(s) and do not necessarily reflect those of the Department of Justice. Also thanks to Mr. Alan Kainuma for providing the test shoe.

  3. Objectives
  • Acquire 3D depth images of tire and shoeprint impressions in a non-destructive way
  • Simultaneously obtain a high-resolution registered 2D color image
  • Acquire long tire impressions in one scan (no stitching of multiple images necessary)
  • Portable design and easy to use for outdoor capture
  • Also scan shoes and tires themselves for matching purposes

  4. Basic design of the hardware
  Components (labels from the figure): actuator rail (1.75m long), HD camcorder, green stripe laser light, red stripe laser light, power supply, control box; the actuator moves along the rail at 1-3mm/sec.
  • The camera assembly moves at a precise and fixed speed along the rail.
  • The slower it moves, the better the resolution along the rail direction (slowest speed: 1.3 mm/sec at 30 fps); see the sketch below.
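
The rail-direction sampling interval follows directly from the carriage speed and the camera frame rate; the minimal calculation below simply plugs in the numbers quoted on this slide.

```python
# Rail-direction sampling interval = carriage speed / frame rate.
# Values from the slide: slowest speed 1.3 mm/s, camcorder capturing at 30 fps.
speed_mm_per_s = 1.3
fps = 30.0

sample_spacing_mm = speed_mm_per_s / fps
print(f"One laser profile every {sample_spacing_mm:.3f} mm along the rail")
# -> roughly 0.043 mm between consecutive profiles at the slowest speed
```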

  5. Why two lasers?
  To be able to handle occlusions: as the figure illustrates, near an occluding surface the green light can be blocked in a region where the red light is still visible, as the scanner moves. (A sketch of combining the two laser views follows.)
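
To illustrate the occlusion handling, here is a minimal sketch of merging two per-laser depth maps so that pixels occluded for one laser are filled from the other. The array layout and the NaN-for-occluded convention are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def merge_depth_maps(depth_red, depth_green):
    """Combine two per-laser depth maps (NaN = laser occluded at that pixel).

    Where both lasers saw the surface, average the two estimates;
    where only one did, keep the visible one.
    """
    merged = np.where(np.isnan(depth_red), depth_green, depth_red)
    both = ~np.isnan(depth_red) & ~np.isnan(depth_green)
    merged[both] = 0.5 * (depth_red[both] + depth_green[both])
    return merged
```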

  6. Basic design of depth extraction
  • One laser line per video frame per laser: one for the red laser, one for the green laser.
  • Depth variations on the surface result in deformations of the laser line.
  • The amount of deformation is used to estimate the depth values; depth values are calculated along the detected laser line (see the sketch below).
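
As a concrete illustration of the per-frame processing, the sketch below detects the laser line as the brightest row in each image column and converts its shift from a flat-surface reference into a depth offset. The column-wise peak detection and the linear deformation-to-depth factor are simplifying assumptions; the actual device uses the calibrated camera/laser geometry described on the following slides.

```python
import numpy as np

def detect_laser_line(channel):
    """Return, for each image column, the row with the strongest laser response.

    `channel` is the red (or green) channel of one video frame as a 2D array.
    """
    return np.argmax(channel, axis=0).astype(float)

def deformation_to_depth(line_rows, reference_rows, mm_per_pixel):
    """Convert the laser line's shift from its flat-surface position into depth.

    A purely linear scale factor is assumed here for illustration; the real
    conversion comes from the calibrated camera and laser-plane geometry.
    """
    return (line_rows - reference_rows) * mm_per_pixel
```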

  7. Imaging geometry In order to calculate depth from laser deformation in the image, we need to know the configuration of the camera and laser lights with respect to each other. This can be estimated via a calibration procedure. We do not want to perform calibration ahead of time, because the configuration can change during transportation or setup of the device.

  8. Device Calibration
  • Our device has a calibration procedure that is integrated with the scan.
  • Put a calibration object (with known dimensions) into the scene and scan.
  • The calibration is done as part of the post-processing.
  • No need to worry about the imaging geometry changing during transportation and set-up.

  9. Calibration
  [Figure: the camera coordinate system (X, Y, Z) and the rail coordinate system (X', Y', Z'), shown before and after correcting roll and tilt; all depth values are computed with respect to the rail coordinate system.]
  • Ideally the camera should be set orthogonal to the rail, but this is not realistic.
  • We correct the camera's roll and tilt as part of calibration.
  • Coordinate systems defined for calibration are shown in the figure.

  10. Calibration
  [Figure: L-shaped calibration object with points labeled in the two selected frames (A1, B1, C1, D1, E1, F1 and A2, B2, C2, D2, E2, F2), lines L1, L2, L3, and the vanishing point p(x0, y0).]
  • Place an L-shaped calibration object with known dimensions in the scene.
  • From the recorded video, select two images with the lasers on the calibration object.
  • Because of the linear motion along the rail, the motion vectors of corner points in the FOV have a vanishing point p(x0, y0).
  • Compute the vanishing point using corners of the calibration object to estimate the tilt and roll of the camera.
  • This is used to correct for the tilt and roll of the camera (see the sketch below).
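
One way to realize the vanishing-point step is a least-squares intersection of the corner motion lines between the two selected frames. The sketch below assumes the corner correspondences are already available; it is an illustration of the idea rather than the authors' exact method.

```python
import numpy as np

def vanishing_point(corners_frame1, corners_frame2):
    """Least-squares intersection p(x0, y0) of the corner motion lines.

    Each motion line passes through a corner's positions in the two frames.
    Solve  n_i . p = n_i . a_i  over all lines, where n_i is the line normal.
    """
    a = np.asarray(corners_frame1, dtype=float)   # Nx2 corner points, frame 1
    b = np.asarray(corners_frame2, dtype=float)   # Nx2 corner points, frame 2
    d = b - a                                     # motion directions
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)     # normals to the motion lines
    rhs = np.sum(n * a, axis=1)
    p, *_ = np.linalg.lstsq(n, rhs, rcond=None)
    return p                                      # (x0, y0)
```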

  11. Calibration
  • Laser on the L-shaped object.
  • The baseline length between two frames is known from the speed of the rail motion and the frame numbers (30 fps capture).
  • 3D positions of corner points on the calibration object are captured using a user interface.
  • Camera-laser relation: find the 3D laser lines on the calibration object to estimate the normal vector of the laser plane (see the sketch below).
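
Estimating the laser plane from 3D points sampled along the laser line on the calibration object amounts to a plane fit. The sketch below uses a standard SVD fit and assumes the 3D points (in rail coordinates) have already been computed; it stands in for whatever fitting procedure the authors actually use.

```python
import numpy as np

def fit_laser_plane(points_3d):
    """Fit a plane to 3D points on the laser line (rail coordinates).

    Returns (normal, centroid): the plane normal is the right singular vector
    associated with the smallest singular value of the centered points.
    """
    pts = np.asarray(points_3d, dtype=float)      # Nx3 array of points
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                               # unit normal of best-fit plane
    return normal, centroid
```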

  12. Some example scans
  Depth scan (depth color-coded) and the 2D color image acquired registered with the depth image.

  13. Shoe scan
  3D scan (pseudo-color) and color-texture image.

  14. Scan of a Shoe
  Detecting fine details: the shoe and the desired features provided by an expert, alongside the features detected in our scan.

  15. Long tire impression results
  z range: 307.1mm-259.2mm; brighter points are closer to the camera.

  16. Accuracy
  We did systematic testing of the accuracy achieved in scanned images by designing and building a test object (see the sketch below).
  Accuracy summary:
  • Horizontal resolution: 0.23mm (perpendicular to rail motion)
  • Rail-direction resolution: ≥0.043mm
  • Depth resolution: ~0.5mm
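
One simple way to summarize such a test is to compare measured depths against the test object's known dimensions. The sketch below reports RMS and maximum absolute error; the measured and ground-truth arrays are placeholders and would come from the scan and the object's design, respectively.

```python
import numpy as np

def depth_error_summary(measured_mm, ground_truth_mm):
    """RMS and maximum absolute depth error against a known test object."""
    diff = np.asarray(measured_mm, dtype=float) - np.asarray(ground_truth_mm, dtype=float)
    return {"rms_mm": float(np.sqrt(np.mean(diff ** 2))),
            "max_abs_mm": float(np.max(np.abs(diff)))}
```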

  17. Software User Interface
  We built a software interface to perform metric measurements on the acquired image (see the sketch below).
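
Inside such an interface, a pixel distance on the registered image maps to millimeters through the per-axis resolution. The tiny sketch below plugs in the resolutions from the accuracy slide as default values; it only illustrates the conversion, not the actual measurement tool.

```python
import math

def pixel_to_mm_distance(p1, p2, mm_per_px_x=0.23, mm_per_px_y=0.043):
    """Metric distance between two image points (x, y), assuming per-axis
    resolutions from the accuracy slide: 0.23 mm across the rail,
    0.043 mm along the rail direction."""
    dx_mm = (p2[0] - p1[0]) * mm_per_px_x
    dy_mm = (p2[1] - p1[1]) * mm_per_px_y
    return math.hypot(dx_mm, dy_mm)
```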

  18. Interfaces to operate the device
  Button interface: the motor is programmed to perform the following operations:
  • Stop
  • Homing
  • Short Run
  • Long Run

  19. Tablet-based wireless interface
  • Bluetooth wireless interface via tablet: more configuration options possible.

  20. Conclusion
  • Achieved 0.5mm depth accuracy.
  • The prototype device is promising.
  • Accuracy can be improved by improving the subcomponents of the system or by improving the algorithms.
  • Shoes and tires themselves can be scanned to be used for automated matching against a database or for computing degree-of-match results.
