
Head-tracking virtual 3-D display for mobile devices

Head-tracking virtual 3-D display for mobile devices. Miguel Bordallo López*, Jari Hannuksela*, Olli Silvén* and Lixin Fan**. * University of Oulu, Finland; ** Nokia Research Center, Tampere, Finland.



Presentation Transcript


  1. Head-tracking virtual 3-D display for mobile devices Miguel Bordallo López*, Jari Hannuksela*, Olli Silvén* and Lixin Fan**, * University of Oulu, Finland ** Nokia Research Center, Tampere, Finland

  2. Contents • Introduction • Head-tracking 3D virtual display • Interaction design • Face tracking for mobile devices • Mobile device constraints • Field of view • Energy efficiency • Implementation • Latency considerations • Performance • Summary

  3. Introduction: 3D virtual displays • Calculate the position of the user relative to the screen • Calculate the angle of the user's point of view • Render an image according to that point of view • The result is a virtual window: it shows realistic 3D objects, based on the parallax effect (* video from Johnny Lee's Wiimote head-tracking project) • The position information is used to render the 3D UI/content as if the user watched it from different angles • The technology enables users to watch the content from different angles and become more immersed
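The position-to-angle step above can be sketched in a few lines. This is a minimal sketch assuming a linear pinhole approximation with the optical axis at the frame centre; the function name `viewing_angles` and its parameters are illustrative, not from the slides.

```python
def viewing_angles(face_x, face_y, frame_w, frame_h, fov_h_deg, fov_v_deg):
    """Map the tracked face centre (in pixels) to horizontal/vertical
    viewing angles relative to the camera axis, in degrees.

    Assumes the angular offset scales linearly with the pixel offset
    across the field of view (a reasonable approximation for narrow FoV).
    """
    # Normalised offset from the frame centre, in [-0.5, 0.5]
    nx = face_x / frame_w - 0.5
    ny = face_y / frame_h - 0.5
    # Linear approximation: offset times the field of view
    return nx * fov_h_deg, ny * fov_v_deg
```

The renderer would then use these two angles to set up the virtual camera for the parallax effect.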

  4. Introduction: mobile 3D virtual displays • A mobile head-coupled display can take advantage of the small size • Movement of either the user or the device • Mobile devices have cameras and sensors integrated: no need for external peripherals; can increase UI functionalities • New applications and concepts: realistic 3D objects can be rendered and perceived; new interaction methods can be developed • We know what the user looks at, and we can use that information

  5. Demo

  6. Head-tracking mobile virtual 3D display A simple use case

  7. Interaction design

  8. Introduction: mobile face tracking • Head-coupled displays require robust and fast face tracking • Based on multiscale LBP, a cascade classifier and AdaBoost • Excellent results in face recognition and authentication, face detection, facial expression recognition, and gender classification

  9. Introduction: evaluating the distance to the screen • Essential to compute the relative angle • Ground truth determined with a Kinect • Two methods evaluated: • Face size obtained with face tracking: no extra computations needed; good accuracy; flickering between frames • Motion estimation library (Harris corners + BLUE): computes changes of scale between frames; about 10% better accuracy; less flickering between frames; needs extra computations, which introduce latency, decrease the framerate, and worsen the input sequence for tracking (more differences between frames)
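The first method above, distance from tracked face size, follows from the pinhole model: apparent size is inversely proportional to distance, so a single calibration pair fixes the scale. The calibration constants below are illustrative assumptions, not values from the slides.

```python
def distance_from_face_size(face_width_px, ref_width_px=100.0, ref_distance_cm=50.0):
    """Estimate user-to-screen distance from the tracked face width.

    Under a pinhole model, face_width_px * distance is constant, so one
    calibration pair (ref_width_px observed at ref_distance_cm) is enough.
    The reference values here are hypothetical and would come from a
    per-device calibration step in practice.
    """
    return ref_distance_cm * ref_width_px / face_width_px
```

A face that appears half as wide as at the calibration distance is estimated to be twice as far away.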

  10. Mobile constraints: field of view • The front camera is on the device's corner and not pointing at the user: • Reduced field of view (< 45°) • Asymmetric FoV • Even more reduced effective FoV • Considerable minimum distance to the screen • User often outside the field of view • Tracking sometimes lost • Need to show the viewfinder on the screen

  11. Mobile constraints: field of view • Implemented solution: a wide-angle lens • Dramatically increases the effective field of view (< 160°) • Requires a calibrated lens • Requires a de-warping routine, implemented with lookup tables • Problems when several faces are in the field of view
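The slides say the de-warping routine is implemented with lookup tables: each output pixel stores which source pixel to fetch, so the per-frame cost is one table lookup per pixel. A minimal sketch is shown below, assuming a single-coefficient radial (barrel) distortion model; the model, the coefficient `k`, and the function name are illustrative, since a real lens needs proper calibration.

```python
def build_dewarp_lut(width, height, k=0.2):
    """Build a per-pixel lookup table mapping each output (undistorted)
    pixel to a source (distorted) pixel, for the simple radial model
    r_d = r_u * (1 + k * r_u^2).  Built once; applying it per frame is
    just an indexed copy.
    """
    cx, cy = width / 2.0, height / 2.0
    lut = []
    for y in range(height):
        row = []
        for x in range(width):
            # Normalised undistorted coordinates relative to the centre
            nx, ny = (x - cx) / cx, (y - cy) / cy
            r2 = nx * nx + ny * ny
            scale = 1.0 + k * r2
            sx = int(round(cx + nx * scale * cx))
            sy = int(round(cy + ny * scale * cy))
            # Clamp to the source image bounds
            sx = min(max(sx, 0), width - 1)
            sy = min(max(sy, 0), height - 1)
            row.append((sx, sy))
        lut.append(row)
    return lut
```

De-warping a frame is then `out[y][x] = src[sy][sx]` for each table entry, which suits a mobile CPU or a GPU texture fetch equally well.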

  12. Mobile constraints: energy efficiency • A practical challenge of a camera-based UI is keeping the camera always active • Lower framerate -> high UI starting latencies • Higher framerate -> lower energy efficiency • The application processor (even in mobile) is power hungry; specific processors closer to the sensors are needed • Current devices include HW codecs and GPUs: better energy efficiency due to small EPI (energy per instruction) • Mobile GPUs are already programmable: OpenGL ES, OpenCL Embedded Profile

  13. Energy efficiency: GPU-accelerated face tracking • The GPU can be treated as an independent entity • Can be used concurrently with the CPU • The GPU is used for feature extraction (format conversion + multiscaling + LBP) • Mobile GPUs are still not very efficient for certain tasks • Computational and energy costs per VGA frame of feature extraction

  14. Implementation • Demo platform: N900 (Qt + GStreamer + OpenGL ES) • Based on an external face-tracking library • Implementation details: • Input image resolution: 320x240 • Framerate: 16-20 fps • Base latency: 90-100 ms • Accepted field of view: < 45° horizontal & < 35° vertical • User's distance range: 25-300 cm

  15. Implementation: simple block diagram

  16. Implementation: task distribution

  17. Implementation: task distribution [diagram: camera module, application processor (CPU), graphics processor (GPU), touchscreen display]

  18. Implementation: task distribution [diagram: camera module, application processor, graphics processor, touchscreen]

  19. Mobile constraints: latency • User interface latency is a critical issue • Latency > 100 ms is very disturbing • Realistic 3D rendering is even more sensitive • The view is not realistic if it shows where the user was a while ago!

  20. Mobile constraints: latency hiding • A possible solution: latency hiding • Requires good knowledge of the system's timing • Predict the current position based on the motion vector
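The prediction step above can be sketched as a constant-velocity extrapolation: use the motion vector between the last two tracked frames to project the face position forward by the known system latency. The function name and the constant-velocity assumption are illustrative; the slides only say the prediction uses the motion vector and the system's timing.

```python
def predict_position(pos, prev_pos, frame_dt, latency):
    """Extrapolate the tracked face position forward by the system
    latency, assuming constant velocity over the last frame interval.

    pos, prev_pos : (x, y) positions from the two most recent frames
    frame_dt      : time between those frames, in seconds
    latency       : known end-to-end latency to compensate, in seconds
    """
    # Motion vector per second between the last two tracked frames
    vx = (pos[0] - prev_pos[0]) / frame_dt
    vy = (pos[1] - prev_pos[1]) / frame_dt
    # Project forward by the latency so the render matches "now"
    return (pos[0] + vx * latency, pos[1] + vy * latency)
```

With the slides' base latency of 90-100 ms, even a rough prediction like this keeps the rendered viewpoint much closer to where the user actually is.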

  21. Performance • Demo platform: Nokia N900 (ARM Cortex-A8, 600 MHz + PowerVR535 GPU) • Comparison platform: Nokia N9 (ARM Cortex-A8, 1 GHz + PowerVR535 GPU)

  22. Remaining problems • Face-tracking based 3D user interfaces provide support for new concepts • Face tracking can be offered at the platform level • Current mobile platforms still present several shortcomings: • Energy efficiency compromises battery life • The camera is not designed for UI purposes • A single camera implies difficult 3D context recognition

  23. Thank you. Any questions?

  24. LBP fragment shader implementation • Uses the OpenGL ES interface • Two versions: • Version 1: calculates the LBP map in one grayscale channel • Version 2: calculates 4 LBP maps in the RGBA channels • Access the image via texture lookup • Fetch the selected pixel • Fetch the neighbours' values • Compute the binary vector • Multiply by a weighting factor
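The per-pixel operation the fragment shader performs (fetch neighbours, threshold against the centre, multiply by weighting factors) is the basic 8-neighbour LBP code. A plain-Python reference version of that computation is sketched below; the actual implementation is a GLSL fragment shader using texture lookups, and the neighbour ordering here is an assumption.

```python
def lbp_8_1(img, x, y):
    """Compute the basic 8-neighbour, radius-1 LBP code of pixel (x, y).

    Each neighbour is thresholded against the centre value; the
    resulting bits are packed with power-of-two weighting factors,
    mirroring the shader's per-pixel work.  `img` is a 2-D list of
    grey values; (x, y) must not lie on the image border.
    """
    center = img[y][x]
    # Neighbours as (dx, dy) offsets, clockwise from the top-left
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (1, 1), (0, 1), (-1, 1), (-1, 0)]
    code = 0
    for bit, (dx, dy) in enumerate(offsets):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit  # weighting factor 2^bit
    return code
```

The RGBA version on the slide runs this same computation four times per fragment, one scale of the image pyramid per colour channel.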

  25. Preprocessing • Create a quad • Render each piece in one channel • Divide the texture & convert to grayscale

  26. GPU-assisted face analysis process
