The Magic Window
Jaron Lanier
Visiting Scientist, SGI; Lead Scientist, National Tele-immersion Initiative
NSF Lake Tahoe Workshop on Collaborative Virtual Reality and Visualization
October 26, 2003
Research supported by:
An attempt to solve five problems with one (hopefully simple) design…
1) Navigation and manipulation difficulties in virtual worlds emphasizing complex volumetric data.
2) Loss of simplicity in UI design when moving from desktop to VR.
3) Imprecise points of view in typical available autostereo displays.
4) Interpersonal geometric conundrum in tele-immersion.
5) Difficulty understanding the points of view and agency of other participants in a shared virtual reality system.
1) Navigation and manipulation difficulties in virtual worlds emphasizing complex volumetric data. In practice, six-degree-of-freedom navigation, especially in dense, non-rectilinear environments, is tough even for seasoned users.
Moving the display through stationary, 1:1-scale (when possible) data as a solution to problem #1. Mobile display; Fakespace Boom.
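The display-as-viewport idea above can be sketched in a few lines of Python. This is an illustrative fragment, not code from the project: the function name, the yaw-only orientation, and the fixed eye offset behind the screen are all simplifying assumptions. The point is that the virtual camera simply adopts the tracked pose of the mobile display, so the stationary 1:1 data is seen "through" the screen.

```python
import math

def view_through_display(display_pos, display_yaw, eye_offset=0.5):
    """Place the virtual camera at an assumed eye point a fixed
    distance behind the tracked display, looking through the
    screen centre. The world data stays fixed at 1:1 scale;
    only the camera (i.e. the window) moves."""
    dx = math.sin(display_yaw)   # display facing direction, x component
    dz = math.cos(display_yaw)   # display facing direction, z component
    eye = (display_pos[0] - eye_offset * dx,
           display_pos[1],
           display_pos[2] - eye_offset * dz)
    target = display_pos         # look through the screen centre
    return eye, target

# Moving the display forward moves the camera with it;
# the data itself never moves.
eye, target = view_through_display((0.0, 1.2, 2.0), 0.0)
```

Because navigation is now just walking or reaching with the display, the six-degree-of-freedom problem collapses into ordinary physical motion.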
Comparison to “Boom Chameleon” (Tsang, Fitzmaurice, Kurtenbach, Khan, Buxton; all Alias/SGI, U Toronto)
Boom Chameleon emphasizes a “projective” user interface. Art+Com version
Reasonable to assume that an experimental advanced user interface design is best suited to a certain set of problems. Boom Chameleon was tested mostly on 1:1-scale automobile-design CAD data. Magic Window is intended to be most useful on: 1) Non-rectilinear volumetric data derived from real-world measurements. 2) Virtual objects that fit on a desk (1:1-scale medical models, collaborative design models that fit between seated people).
Non-rectilinear volumetric data is generally harder to navigate. But recently it has at least become computationally affordable to do so at interactive speeds… Linsen, Scorzelli, Pascucci, Frank, Hamann, and Joy; UC Davis 2003. GigaViz navigating seismic data on SGI Altix, 2003.
“Breadbox sized” virtual stuff because: NTII, Oct ‘00 It matches (seated) human scale.
2) Loss of simplicity in UI design when moving from desktop to VR. In this picture, we have a better view of the action than the participants because we can see it more from the side and see their ‘z’ better than they can. This becomes a more serious problem for volumetric data…
What’s like a “one-button mouse” simplifying principle for navigating and manipulating this kind of data? One answer that will be tested is this: • No projective UI; the cut plane of the mobile display IS the selection. • A very limited number of “modes” of action given a selection. • A spring-loaded hardware interface element for gaining momentary points of view that diverge from display position and scale.
The Proposed Hardware UI Design • Cut plane is the selection. Might or might not include roll as a degree of freedom. • Dial to select the action of the button. Example action set: slice, remove, stretch, copy, drag, attach, stain similar, get info. • Crosshairs at the center of the display allow precise point selection. • Single button to perform the action; will have a pressure degree of freedom. • Ambidextrous design. • Two-way spring-loaded trigger to momentarily gain overview or magnified views. • Also: sound effects while passing through stuff! • Handle both moves the display and subtly helps constrain the user’s point of view.
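The "cut plane IS the selection" principle can be sketched as a plane/data intersection test. This is a hypothetical Python illustration (the function and its parameters are not from the actual system): data points within a thin slab around the display's plane are selected directly, so no projective picking (ray casting from a cursor) is needed.

```python
def cutplane_select(points, plane_point, plane_normal, thickness=0.01):
    """Return the data points lying within `thickness` of the
    display's cut plane. The plane itself is the selection, so
    selecting is just moving the display through the data."""
    px, py, pz = plane_point
    nx, ny, nz = plane_normal   # assumed to be unit length
    selected = []
    for x, y, z in points:
        # signed distance from the point to the display plane
        dist = (x - px) * nx + (y - py) * ny + (z - pz) * nz
        if abs(dist) <= thickness / 2:
            selected.append((x, y, z))
    return selected

pts = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.004), (0.0, 0.0, 0.2)]
hits = cutplane_select(pts, plane_point=(0.0, 0.0, 0.0),
                       plane_normal=(0.0, 0.0, 1.0))
```

The dial and single pressure-sensitive button then only have to choose what to do with this implicit selection (slice, remove, stretch, …), which is what keeps the mode count low.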
Instead of a boom? A cross between a giant mouse and an iMac… …or a tracked tablet.
3) Imprecise points of view in typical available autostereo displays. • Eye-tracking autostereo designs typically have disruptive failure modes. • An autostereo coalition has formed around an interchangeable lenticular or parallax-barrier format. There will probably be many more of these displays around soon, so it’s worth figuring out how to make the best use of them. • Combining eye tracking with such displays to tune points of view within constraints can improve the stereo effect, with gentle failure modes.
Perspectives tuned by eye tracking, but constrained by lenticule geometry. Typical approximate viewing geometry of a Synthagram lenticule.
Constrained eye tracking • Idea to test: slow return to center when eye-track data seems poor, or when more than one eye shares a lenticular perspective. • X is tightly constrained (a single lenticular viewing zone); can make use of assumptions about head motion. • Y is hopefully somewhat constrained because of the handle design. • Z can be taken into account.
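One way to read the constrained-eye-tracking idea is as a per-frame update rule, sketched below in Python. The function, its parameters, and the 0.5 confidence threshold are illustrative assumptions, not the actual algorithm: the tracked x position is clamped to a single lenticular viewing zone, and when the tracker's confidence is poor the estimate drifts slowly back toward the zone center instead of jumping, which is what gives the gentle failure mode.

```python
def constrain_eye(track, zone_center, zone_halfwidth,
                  confidence, prev, return_rate=0.05):
    """One update step of a constrained eye-tracking loop.
    Good data: clamp the tracked x to one lenticular zone.
    Poor data: drift slowly back toward the zone center."""
    lo = zone_center - zone_halfwidth
    hi = zone_center + zone_halfwidth
    if confidence < 0.5:
        # slow return to center when eye-track data seems poor
        return prev + return_rate * (zone_center - prev)
    # clamp to the single lenticular viewing zone
    return max(lo, min(hi, track))

# good data: a reading outside the zone is clamped to its edge
x_good = constrain_eye(track=0.30, zone_center=0.0, zone_halfwidth=0.1,
                       confidence=0.9, prev=0.05)

# poor data: the estimate eases back toward the zone center
x_poor = constrain_eye(track=0.30, zone_center=0.0, zone_halfwidth=0.1,
                       confidence=0.1, prev=0.05)
```

Because the perspective can never leave the zone, a tracking dropout degrades to the display's default center view rather than to a scrambled stereo pair.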
An aside…“Invisibility”: Just for marketing, or is there real value? • Magic Window just might win the race to be the first pixelated transparent object. • Will use either a rear-mounted camera array or pre-scanned data of the local environment. • Will at least provide additional visual cues within the frame for local physical orientation. • Might go ahead and make a bunch of them to enclose an invisible car or piano or something.
4) Interpersonal geometric conundrum in tele-immersion. Must choose: view or be viewed. Humans stuck in quadrants. (Photo by Baerbel Schmidt, June 2000.)
In an ideal solution: • Users would see in stereo while not appearing encumbered. • Users would have extensive freedom of position.
To view and be viewed… • Solution one: Synthesize a view of the person with the stereo viewing gear removed. • Solution two: Autostereo. But… available autostereo solutions constrain viewing position, so current autostereo displays must become mobile to meet the needs of tele-immersion.
5) Hard to understand the points of view and agency of other participants in a shared virtual reality system. Another person seen as scanned by forward-facing camera array. This is the cut plane that person is selecting at the moment.