
Towards real-time camera based logos detection

Towards real-time camera based logos detection. Mathieu Delalandre, Laboratory of Computer Science, RFAI group, Tours, France. Osaka Prefecture Partnership meeting, Tours, France, Friday 9th of September 2011.



Presentation Transcript


  1. Towards real-time camera based logos detection
  Mathieu Delalandre, Laboratory of Computer Science, RFAI group, Tours, France
  Osaka Prefecture Partnership meeting, Tours, France, Friday 9th of September 2011

  2. Towards real-time camera based logos detection
  • Introduction
  • Devices synchronization for 3D frame tagging
  • Frame partitioning and selection

  3. Towards real-time camera based logos detection, “Introduction” (1)
  Logo detection from video capture using handheld interactions, to display context-based information (tourist check points, bus stops, meals, etc.). This constitutes a hard computer vision application, due to the complexity of the recognition task and the real-time constraints.
  To support real time, two basic paths could be considered:
  • To reduce the complexity of the algorithms
  • To reduce the amount of data
  [Pipeline diagram: Camera → Frames → Selection → Frames → Pattern Recognition]

  4. Towards real-time camera based logos detection, “Introduction” (2)
  “With static objects, one capture (in time and space) could be enough for recognition, if recognition is perspective, scale and rotation invariant and if no occlusions appear.” Static object: without motion, hence without appearance modification.
  “Capture instants could be detected if the embedded system can track its own positioning, and if objects are static.” Dynamic object: with motion, hence with appearance modification.
  “Then, a self-tracking embedded system can be set up for single capture of static objects. It can support real-time recognition by reducing the amount of data to process, without miss cases (i.e. at least one capture is taken).”
  [Diagram: object and camera positions at times t0, t1, t2]

  5. Towards real-time camera based logos detection
  • Introduction
  • Devices synchronization for 3D frame tagging
  • Frame partitioning and selection

  6. Towards real-time camera based logos detection, “Devices synchronization for 3D frame tagging” (1)
  The combination of these devices allows frames to be tagged in 3D space:
  • Camera device, to capture images
  • Accelerometer device, that measures proper acceleration (embedded system positioning, from the root)
  • Gyroscope device, for measuring or maintaining orientation (embedded system orientation)
  [Diagram: frame coordinates (x, y, z) and distance d of the frame]

  7. Towards real-time camera based logos detection, “Devices synchronization for 3D frame tagging” (2)
  Most commercial wearable systems (e.g. smartphones) can support frame tagging, but the multimodality is designed in a separate way, not in the sense of a combination of these modalities. Device synchronization is not done at the hardware level, and must be achieved at the operating system level. How to do it?
  Polling exchange with a device (accelerometer, gyroscope): the CPU drives the device controller through control instructions and copies the data to memory itself. The offset between the real-life event (tE) and the memory writing (tw) depends on the device, considering:
  • Acquisition delay of the device
  • Data transfer time on the bus
  • Execution time of control instructions
  • Interrupt execution time
  • Etc.
  DMA exchange with a device (camera): the device controller writes the data to memory through DMA, and the CPU is notified by an interrupt. Here the offset value is an estimation; it depends on:
  • Mean bus access rate
  • Operating system scheduling and interrupt queuing
  • Etc.
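The offset correction above can be sketched in code. A minimal Python illustration, under the assumption that the latency components can be measured or bounded; the names `DeviceLatency` and `estimate_event_time` are hypothetical, not taken from a real driver API:

```python
from dataclasses import dataclass

@dataclass
class DeviceLatency:
    """Hypothetical latency budget of one device (all values in seconds)."""
    acquisition: float    # acquisition delay of the device
    bus_transfer: float   # data transfer time on the bus
    control: float        # execution time of control instructions
    interrupt: float      # interrupt execution/queuing time (0 for pure polling)

    def total(self) -> float:
        return self.acquisition + self.bus_transfer + self.control + self.interrupt

def estimate_event_time(tw: float, latency: DeviceLatency) -> float:
    """Estimate the real-life event time tE from the memory-writing time tw
    by subtracting the summed latency components."""
    return tw - latency.total()
```

For a polled accelerometer the offset is largely deterministic, while for a DMA-driven camera it remains an estimation, as the slide notes.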

  8. Towards real-time camera based logos detection, “Devices synchronization for 3D frame tagging” (3)
  Synchronization will be done using a two-timer framework:
  • The “coarse” timer (Ti0) will be scheduled on the root device (e.g. at I0, run Ti0; every T0, run Ti1)
  • The “finer” timer (Ti1) will be used within an “upstream” frame, opened prior to the next “coarse” timer period; it will allow the events (tE0, tE1) of the device to be synchronized to be caught
  [Timeline diagram: root device with devices D0, D1; events tE0, tE1; coarse ticks at I0, I0+T0, I0+2T0; fine timer Ti1 running within each period T0]

  9. Towards real-time camera based logos detection, “Devices synchronization for 3D frame tagging” (4)
  General synchronization algorithm of the Ti1 timer: set k = 0; every T1 period, k = k+1; when the event Ii occurs, tag it with s + kT1, where s = I0 + T0.
  [Timeline diagram: root device with devices D0, D1; events tE0, tE1; fine-timer ticks at s = I0+T0, s+(k=1)T1, s+(k=2)T1, s+(k=3)T1; event I1 caught on this grid]
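The Ti1 algorithm above can be condensed into a small sketch. A minimal Python illustration, assuming event times are available as floats and that the fine-timer frame opens at s = I0 + T0 (the function name `tag_event` is illustrative):

```python
def tag_event(t_event: float, I0: float, T0: float, T1: float) -> float:
    """Tag a device event with s + k*T1, where s = I0 + T0 is the opening
    of the fine-timer frame and k counts the T1 periods elapsed (k = 0 at s,
    incremented at every fine-timer tick) when the event Ii occurs."""
    s = I0 + T0                    # fine-timer frame opens here
    k = int((t_event - s) // T1)   # number of T1 periods elapsed since s
    return s + k * T1              # event timestamp quantised to the fine grid
```

In the real system k would be incremented by the timer interrupt itself rather than computed arithmetically; the sketch only shows the resulting tag.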

  10. Towards real-time camera based logos detection
  • Introduction
  • Devices synchronization for 3D frame tagging
  • Frame partitioning and selection

  11. Towards real-time camera based logos detection, “Frame partitioning and selection” (1)
  Device synchronization can support 3D image tagging. The open problems now are how to detect overlapping between frames, how to achieve the frame selection in case of overlapping, and how to access the obtained partition.
  [Diagram: frames F1 to F4, each with a positioning and orientation, producing a partition with regions P1 to P7]

  12. Towards real-time camera based logos detection, “Frame partitioning and selection” (2)
  To detect the overlapping, frames can be projected onto a plane D, to be computed with line intersection and closed polygon detection algorithms at complexity kO(nlog(n)). To do this, it is necessary to fix the position of the plane in 3D space and to define an updating protocol. The plane can be obtained by averaging the positionings of the frames. Updating of the positioning is not necessary at every frame capture, only when important differences start to appear between the current plane and recent frame captures.
  [Example: at t1, D1 is computed from the current frames F1, F2; at t2, the difference between D1 and D2 (corresponding to recent frame captures) is too important, so D1 is shifted to D3]
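Once frames are projected onto the plane, testing whether two projected (convex) frames overlap can be sketched with a separating-axis test; a production version would rather use the sweep-line line-intersection and closed-polygon detection algorithms at kO(nlog(n)) mentioned above. A minimal Python sketch (function names are illustrative):

```python
def _axes(poly):
    """Yield the edge normals of a convex polygon given as [(x, y), ...]."""
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        yield (-(y2 - y1), x2 - x1)

def _project(poly, axis):
    """Project all vertices onto an axis; return the (min, max) interval."""
    dots = [x * axis[0] + y * axis[1] for x, y in poly]
    return min(dots), max(dots)

def polygons_overlap(p, q):
    """True iff convex polygons p and q intersect, i.e. no edge normal
    of either polygon separates their projected intervals."""
    for axis in list(_axes(p)) + list(_axes(q)):
        pmin, pmax = _project(p, axis)
        qmin, qmax = _project(q, axis)
        if pmax < qmin or qmax < pmin:
            return False  # separating axis found: no overlap
    return True
```

This pairwise test only decides overlap; extracting the partition regions (P1, P2, ...) still requires the closed-polygon detection step of the slide.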

  13. Towards real-time camera based logos detection, “Frame partitioning and selection” (3)
  Once overlappings are detected, at every overlap a region (coming from the overlapping frames) must be selected using a selection method. This selection can be done using a spatial criterion (e.g. with c1, c2 the projected gravity centers of the frames, and d1, d2 the distances to the adjacent side).
  Video frame processing is a producer/consumer synchronization problem, where the producer (i.e. the frame capture) is blocked on a memory constraint, and the consumer (i.e. the image process) is blocked when the frame stack is empty. Here, we are working “up” to the frame with a partition object. Intelligent access must be driven with a RAG (Region Adjacency Graph) structure and graph coloring techniques.
  [Diagram: regions R1F1, R2F1, R3F1 of frame F1 and R4F2, R5F2 of frame F2, with partition points P1, P2, P3; frames F1, F2 to process together]
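The RAG-driven access can be sketched as a greedy graph coloring: adjacent regions receive different colors, so regions sharing a color have no common side and can be handled in the same consumer pass. A minimal Python sketch, assuming the RAG is given as an adjacency dictionary (names are illustrative):

```python
def greedy_coloring(rag):
    """Greedily color a Region Adjacency Graph.
    rag: dict mapping a region id to the set of its adjacent regions.
    Returns a dict mapping each region to the smallest color index not
    already used by one of its previously colored neighbors."""
    colors = {}
    for region in rag:
        used = {colors[n] for n in rag[region] if n in colors}
        c = 0
        while c in used:   # pick the smallest free color
            c += 1
        colors[region] = c
    return colors
```

For instance, regions of the slide such as R1F1 and R4F2, if adjacent, are forced into different color classes, and each color class gives one batch of regions the consumer can process together.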

  14. Towards real-time camera based logos detection
  • Introduction
  • Devices synchronization for 3D frame tagging
  • Frame partitioning and selection
