
Interacting With Dynamic Real Objects in a Virtual Environment

Presentation Transcript


  1. Interacting With Dynamic Real Objects in a Virtual Environment Benjamin Lok February 14th, 2003

  2. Outline • Why we need dynamic real objects in VEs • How we get dynamic real objects in VEs • What good are dynamic real objects? • Applying the system to a driving real-world problem • Motivation • Incorporation of Dynamic Real Objects • Managing Collisions Between Virtual and Dynamic Real Objects • User Study • NASA Case Study • Conclusion

  3. Assembly Verification • Given a model, we would like to explore: • Can it be readily assembled? • Can repairers service it? • Example: • Changing an oil filter • Attaching a cable to a payload

  4. Current Immersive VE Approaches • Most objects are purely virtual • User • Tools • Parts • Most virtual objects are not registered with a corresponding real object. • System has limited shape and motion information of real objects.

  5. Ideally • Would like: • Accurate virtual representations, or avatars, of real objects • Virtual objects responding to real objects • Haptic feedback • Correct affordances • Constrained motion • Example: Unscrewing a virtual oil filter from a car engine model

  6. Dynamic Real Objects • Tracking and modeling dynamic objects would: • Improve interactivity • Enable visually faithful virtual representations • Dynamic objects can: • Change shape • Change appearance

  7. Thesis Statement Naturally interacting with real objects in immersive virtual environments improves task performance and presence in spatial cognitive manual tasks.

  8. Previous Work: Incorporating Real Objects into VEs • Non-Real Time • Virtualized Reality (Kanade, et al.) • Real Time • Image Based Visual Hulls [Matusik00, 01] • 3D Tele-Immersion [Daniilidis00] • Augment specific objects for interaction • Doll’s head [Hinkley94] • Plate [Hoffman98] • How important is it to get real objects into a virtual environment?

  9. Previous Work: Avatars • Self-Avatars in VEs • What makes avatars believable? [Thalmann98] • What avatar components are necessary? [Slater93, 94, Garau01] • VEs currently have: • Choices from a library • Generic avatars • No avatars • Generic avatars > no avatars [Slater93] • Are visually faithful avatars better than generic avatars?

  10. Visual Incorporation of Dynamic Real Objects in a VE

  11. Motivation • Handle dynamic objects (generate a virtual representation) • Interactive rates • Bypass an explicit 3D modeling stage • Inputs: outside-looking-in camera images • Generate an approximation of the real objects (visual hull)

  12. Reconstruction Algorithm 1. Start with live camera images 2. Perform image subtraction to find object pixels 3. Use the silhouette images to calculate the volume intersection 4. Composite the result with the VE
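
Step 2 of the pipeline, image subtraction, classifies each camera pixel as object or background. A minimal NumPy sketch, assuming RGB frames as arrays and an illustrative difference threshold (the slides do not specify the system's actual camera handling or thresholds):

```python
import numpy as np

def object_mask(frame, background, threshold=25):
    """Label a pixel as 'object' when it differs from the stored
    empty-scene background image by more than a threshold."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff.max(axis=-1) > threshold  # per-pixel boolean silhouette

# Hypothetical 4x4 RGB frame with a bright "object" in the middle:
background = np.zeros((4, 4, 3), dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200
mask = object_mask(frame, background)
print(mask.sum())  # 4 object pixels
```

The resulting boolean masks are the object-pixel silhouettes that the following steps intersect into a visual hull.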

  13. Visual Hull Computation • Visual hull - tightest volume given a set of object silhouettes • Intersection of the projection of object pixels

  15. Volume Querying • A point inside the visual hull projects onto an object pixel from each camera
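
This membership test can be sketched directly. A hedged illustration: the 3x4 projection matrices and boolean silhouette masks below are stand-ins for the system's calibrated cameras and image-subtraction output, which the slides do not detail:

```python
import numpy as np

def project(point, camera):
    """Pinhole projection of a 3D point through a 3x4 camera matrix
    (an assumed representation for this sketch)."""
    x, y, w = camera @ np.append(point, 1.0)
    return int(round(x / w)), int(round(y / w))

def inside_visual_hull(point, cameras, masks):
    """A point lies inside the visual hull only if it projects onto
    an object pixel in *every* camera's silhouette mask."""
    for cam, mask in zip(cameras, masks):
        u, v = project(point, cam)
        if not (0 <= v < mask.shape[0] and 0 <= u < mask.shape[1]):
            return False  # outside this camera's view
        if not mask[v, u]:
            return False  # background pixel: this view carves the point away
    return True
```

Given the throughput figures on the implementation slide (on the order of gigapixels), the actual system presumably runs this query in bulk on graphics hardware rather than point by point on the CPU.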

  16. Implementation • 1 HMD-mounted and 3 wall-mounted cameras • SGI Reality Monster – handles up to 7 video feeds • Computation • Image subtraction is the most work • ~16000 triangles/sec, 1.2 gigapixels • 15-18 fps • Estimated error: 1 cm • Performance will increase as graphics hardware continues to improve

  17. Results

  18. Managing Collisions Between Virtual and Dynamic Real Objects

  19. Approach • We want virtual objects to respond to real object avatars • This requires detecting when real and virtual objects intersect • If intersections exist, determine plausible responses

  20. Assumptions • Only virtual objects can move or deform at collision. • Both real and virtual objects are assumed stationary at collision. • We catch collisions soon after a virtual object enters the visual hull, and not as it exits the other side.

  21. Detecting Collisions
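
The slide's figures are not in the transcript. One way to realize detection with the volume query described earlier, offered here as a hedged sketch rather than the dissertation's exact procedure, is to sample points on each virtual-object triangle and flag a collision when any sample falls inside the visual hull (the sampling density and the sphere standing in for the hull are illustrative assumptions):

```python
import numpy as np

def sample_triangle(a, b, c, n=4):
    """Evenly sample barycentric points over a triangle's surface."""
    pts = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            u, v = i / n, j / n
            pts.append(a + u * (b - a) + v * (c - a))
    return pts

def triangles_collide(triangles, inside_hull):
    """A virtual object collides with a real object's avatar when any
    sampled surface point lies inside the visual hull. 'inside_hull'
    is a callable point -> bool (the volume query)."""
    return any(inside_hull(p)
               for tri in triangles
               for p in sample_triangle(*tri))

# Hypothetical hull: a unit sphere at the origin stands in for the
# camera-based volume query.
inside = lambda p: float(np.linalg.norm(p)) < 1.0
tri_far = tuple(np.array(v, float) for v in [(2, 0, 0), (3, 0, 0), (2, 1, 0)])
tri_near = tuple(np.array(v, float) for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0)])
print(triangles_collide([tri_far], inside))   # False
print(triangles_collide([tri_near], inside))  # True
```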

  22. Resolving Collisions Approach 1. Estimate point of deepest virtual object penetration 2. Define plausible recovery vector 3. Estimate point of collision on visual hull
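
The three steps can be sketched as a search along the reversed motion direction. Everything here beyond the three named steps, including the step size, the march-until-exit loop, and the default back-out distance, is an illustrative assumption, not the system's documented implementation:

```python
import numpy as np

def resolve_collision(inside_hull, penetrating_pts, motion_dir,
                      step=0.002, max_dist=0.05):
    """Sketch of the three-step response:
    1. deepest penetration point = the penetrating sample farthest
       along the virtual object's direction of motion;
    2. plausible recovery vector = the reverse of that motion;
    3. collision point on the hull = found by marching the deepest
       point along the recovery vector until it exits the hull.
    Returns the translation that backs the virtual object out."""
    motion = motion_dir / np.linalg.norm(motion_dir)
    deepest = max(penetrating_pts, key=lambda p: float(p @ motion))  # step 1
    recovery = -motion                                               # step 2
    p = deepest.copy()
    moved = 0.0
    while inside_hull(p) and moved < max_dist:                       # step 3
        p = p + step * recovery
        moved += step
    return p - deepest  # back-out translation for the virtual object
```

With the hull approximated by, say, a sphere and samples that penetrated along one axis, the returned translation pushes the virtual object back toward where it entered; the small default `max_dist` reflects the earlier assumption that collisions are caught soon after entry.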

  27. Results

  28. Results

  29. Collision Detection / Response Performance • Volume-query about 5000 triangles per second • Error of collision points is ~0.75 cm. • Depends on average size of virtual object triangles • Tradeoff between accuracy and time • Plenty of room for optimizations

  30. Spatial Cognitive Task Study

  31. Study Motivation • Effects of • Interacting with real objects • Visual fidelity of self-avatars • On • Task Performance • Presence • For spatial cognitive manual tasks

  32. Spatial Cognitive Manual Tasks • Spatial Ability • Visualizing a manipulation in 3-space • Cognition • Psychological processes involved in the acquisition, organization, and use of knowledge

  33. Hypotheses • Task Performance: Participants will complete a spatial cognitive manual task faster when manipulating real objects, as opposed to virtual objects only. • Sense of Presence: Participants will report a higher sense of presence when their self-avatars are visually faithful, as opposed to generic.

  34. Task • Manipulated identical painted blocks to match target patterns • Each block had six distinct patterns. • Target patterns: • 2x2 blocks (small) • 3x3 blocks (large)

  35. Measures • Task performance • Time to complete the patterns correctly • Sense of presence • (After experience) Steed-Usoh-Slater Sense of Presence Questionnaire (SUS) • Other factors • (Before experience) spatial ability • (Before and after experience) simulator sickness

  36. Conditions • All participants did the task in a real space environment. • Each participant then did the task in one of three VEs: Purely Virtual, Hybrid, or Visually Faithful Hybrid.

  37. Conditions [diagram comparing the conditions on sense of presence and task performance]

  38. Real Space Environment • Task was conducted within a draped enclosure • Participant watched monitor while performing task • RSE performance was a baseline to compare against VE performance

  39. Purely Virtual Environment • Participant manipulated virtual objects • Participant was presented with a generic avatar

  40. Hybrid Environment • Participant manipulated real objects • Participant was presented with a generic avatar

  41. Visually-Faithful Hybrid Env. • Participant manipulated real objects • Participant was presented with a visually faithful avatar

  42. Task Performance Results

  43. Task Performance Results * significant at the α = 0.05 level ** significant at the α = 0.01 level *** significant at the α = 0.001 level

  44. Sense of Presence Results

  45. Sense of Presence Results

  46. Debriefing Responses • They felt almost completely immersed while performing the task. • They felt the virtual objects in the virtual room (such as the painting, plant, and lamp) improved their sense of presence, even though they had no direct interaction with these objects. • They felt that seeing an avatar added to their sense of presence. • PVE and HE participants commented on the fidelity of motion, whereas VFHE participants commented on the fidelity of appearance. • VFHE and HE participants felt the tactile feedback of working with real objects improved their sense of presence. • VFHE participants reported getting used to manipulating and interacting in the VE significantly faster than PVE participants.

  47. Study Conclusions • Interacting with real objects provided a substantial task performance improvement over interacting with virtual objects for spatial cognitive manual tasks • Debriefing quotes show that the visually faithful avatar was preferred, though reported sense of presence was not significantly different. • Kinematic fidelity of the avatar is more important than visual fidelity for sense of presence. • Handling real objects makes task performance and interaction in the VE more like the actual task.

  48. Case Study: NASA Langley Research Center (LaRC) Payload Assembly Task

  49. NASA Driving Problems • Given payload models, designers and engineers want to evaluate: • Assembly feasibility • Assembly training • Repairability • Current Approaches • Measurements • Design drawings • Step-by-step assembly instruction list • Low fidelity mock-ups

  50. Task • Wanted a plausible task given common assembly jobs. • Abstracted a payload layout task • Screw in tube • Attach power cable
