
Real-Time Auralization of Sound in Virtual 3D Environments



Presentation Transcript


  1. Real-Time Auralization of Sound in Virtual 3D Environments by Scott McDermott sdm1718@louisiana.edu

  2. Overview & Objective • Develop an adaptive virtual environment that simulates real-time generation of 3D sound. • Design algorithms to efficiently and effectively compute realistic 3D sounds in this environment. • Apply these techniques to various applications, including simulations, virtual reality, gaming, and modeling.

  3. Outline • Sound Perception • Digital Sound and Computers • 3D Sound Approximations • True 3D Sound • “Surround” Sound (Stereo Expansion) Approach • Head-Related Transfer Function (HRTF) Approach • Beam Tracing Approach • The Graphics Analogy

  4. Sound Perception • When we hear a sound, we automatically obtain certain information about the source: • Direction • Distance • Elevation • Environmental conditions • Status of source

  5. Sound Perception • 8 types of cues for sound spatialization [1]: • Interaural Time Delay (direction) • delay between the time the sound arrives at each ear (0 to 0.63 ms) • Head Shadow (direction and distance) • difference in volume from one ear to the other (up to 9 dB) • Pinna Response (direction and elevation) • the outer ear filters the sound; the brain compares the two ears • Shoulder Response (elevation and direction) • reflections off the upper body (1-3 kHz) • Head Motion • moving the head to re-evaluate these filters • Vision • audio cues are discounted if they conflict with visual cues • Early Echo Response (distance and direction) • echoes from the environment (50 to 100 ms) • Reverberation (distance and direction) • late, dense echoes from the environment (> 100 ms)
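A minimal sketch of how the first of these cues could be approximated numerically. The spherical-head (Woodworth) formula, the head radius, and the function name are illustrative assumptions, not part of the original slides:

```python
import math

def interaural_time_delay(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time delay (seconds) for a source at the given
    azimuth, using the classic spherical-head (Woodworth) approximation."""
    theta = math.radians(azimuth_deg)
    # Extra path length around a rigid sphere: r * (theta + sin(theta))
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source 90 degrees to one side yields a delay on the order of the
# 0.63 ms maximum cited above; a source straight ahead yields zero.
print(interaural_time_delay(90))   # ~0.00066 s
print(interaural_time_delay(0))    # 0.0 s
```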

  6. Sound Perception • An environment with true 3D sound will need to take all of these into account. • It must also be able to perform calculations and apply filters in real-time. • The result must be convincing to the listener and enhance the virtual experience.

  7. Sound in the Digital World • Sound in the physical world exists as waves of pressure changes. • A microphone converts pressure changes to changes in voltage. • An analog-to-digital converter changes these voltage signals to discrete digital signals. • A computer stores, manipulates, and retransmits these abstractions of sound. • The sounds can be stored in various formats and qualities (such as mono or stereo, 8 or 16 bit, 11 or 44 kHz). • The reverse of this process allows the computer to re-generate the sound.
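As a rough illustration of the sampling and quantization choices named above (8 or 16 bit, 11 or 44 kHz), the sketch below digitizes a pure tone. The tone frequency and the use of NumPy are assumptions made only for this example:

```python
import numpy as np

def digitize_tone(freq_hz=440.0, duration_s=0.01, sample_rate=44100, bits=16):
    """Sample a sine wave at the given rate and quantize it to the given bit
    depth, mimicking what an analog-to-digital converter does to a mic signal."""
    t = np.arange(0, duration_s, 1.0 / sample_rate)
    analog = np.sin(2 * np.pi * freq_hz * t)            # idealized "analog" signal
    levels = 2 ** (bits - 1) - 1                          # e.g. 32767 for 16-bit
    return np.round(analog * levels).astype(np.int32)    # discrete digital samples

hi_fi = digitize_tone(sample_rate=44100, bits=16)   # CD-quality mono
lo_fi = digitize_tone(sample_rate=11025, bits=8)    # low-quality mono
```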

  8. 3D Sound, The Basics… • In a virtual 3D environment, sound can originate from an infinite number of locations relative to the observer. • Ideally, when the observer hears the sound it should take into account the environment. • Specifically:

  9. Distance: Causes sound to arrive at different times.

  10. Reflection & Reverberation: Causes “copies” of the sound to arrive at different times.

  11. Diffraction & Refraction: Causes sound to bend around objects or arrive at different times.

  12. Absorption & Attenuation: Causes the sound to be weaker when it arrives.
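A minimal sketch combining the distance and attenuation effects from slides 9-12: the sound is delayed by its travel time and scaled by a simple inverse-distance law, and a reflected "copy" is summed in. The 1/r model, the path lengths, and the NumPy usage are illustrative assumptions:

```python
import numpy as np

def propagate(signal, distance_m, sample_rate=44100, speed_of_sound=343.0):
    """Return the signal as heard at the given distance: delayed by the
    travel time and attenuated with a simple inverse-distance (1/r) law."""
    delay_samples = int(round(distance_m / speed_of_sound * sample_rate))
    gain = 1.0 / max(distance_m, 1.0)   # clamp so very near sources do not blow up
    return gain * np.concatenate([np.zeros(delay_samples), signal])

# A crude two-path example: direct sound travelling 5 m plus one wall
# reflection whose total path is 12 m (a single early "copy" of the sound).
dry = np.random.randn(44100)            # stand-in for a 1 s anechoic signal
direct, echo = propagate(dry, 5.0), propagate(dry, 12.0)
n = max(len(direct), len(echo))
wet = np.pad(direct, (0, n - len(direct))) + np.pad(echo, (0, n - len(echo)))
```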

  13. 3D Sound Approximations: Surround Sound • Surround sound uses various filters to simulate the effects of sound spatialization. • These filters create effects such as reverberation, localization, and attenuation. • Sound paths are not calculated. • Used in most theaters and home entertainment units.

  14. Surround Sound • The listener is surrounded by a set of speakers. • To simulate 3D localization, sound is played louder, out of phase, and/or at slightly different times from each speaker. • Comes in a variety of speaker placement setups [8]: Dolby 5.1, two-speaker stereo, quadraphonic, and headphones.
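A minimal sketch of the two-speaker case described above: the same signal is sent to both speakers, but louder and slightly earlier on the side of the virtual source (constant-power gains plus a small interchannel delay). The pan law and the delay value are assumptions for illustration, not a Dolby or slide-specified algorithm:

```python
import numpy as np

def pan_stereo(signal, pan, sample_rate=44100, max_delay_ms=0.6):
    """Place a mono signal in the stereo field. pan ranges from -1 (full left)
    to +1 (full right); constant-power gains plus a small far-ear delay."""
    angle = (pan + 1.0) * np.pi / 4.0                    # map [-1, 1] -> [0, pi/2]
    left_gain, right_gain = np.cos(angle), np.sin(angle)
    delay = int(abs(pan) * max_delay_ms / 1000.0 * sample_rate)
    left = np.concatenate([np.zeros(delay if pan > 0 else 0), left_gain * signal])
    right = np.concatenate([np.zeros(delay if pan < 0 else 0), right_gain * signal])
    n = max(len(left), len(right))
    return np.pad(left, (0, n - len(left))), np.pad(right, (0, n - len(right)))

left, right = pan_stereo(np.random.randn(44100), pan=0.5)  # source off to the right
```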

  15. 3D Sound Approximations:Head Related Transfer Functions • Used in conjunction with surround sound to create better 3D approximations. • Microphones record sound from within the ear of a person or a model. • Differences between original sound and recordings are used to create filters. • These filters are applied to generated sounds to create the illusion of dimensionality.
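A minimal sketch of how such measured filters are typically applied: the anechoic source is convolved with the left-ear and right-ear impulse responses for the source direction. The `hrir_left` and `hrir_right` arrays below are placeholders standing in for measurements loaded from a real HRTF database:

```python
import numpy as np

def apply_hrtf(mono_signal, hrir_left, hrir_right):
    """Convolve an anechoic mono signal with measured left/right head-related
    impulse responses (HRIRs) to produce a binaural two-channel signal."""
    left = np.convolve(mono_signal, hrir_left)
    right = np.convolve(mono_signal, hrir_right)
    return np.stack([left, right])

# Placeholder impulse responses of equal length; a real system would load HRIRs
# measured for the desired azimuth and elevation from an HRTF database.
hrir_left = np.array([0.0, 0.9, 0.3, 0.1, 0.0])
hrir_right = np.array([0.0, 0.0, 0.5, 0.2, 0.1])
binaural = apply_hrtf(np.random.randn(1000), hrir_left, hrir_right)
```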

  16. Surround Sound & Head Related Transfer Functions • Pros: Relatively cheap. Effective. Makes sense. • Cons: Many different approaches (non-standard). Works only with limited speaker positions. Not entirely generic. Still not pure 3D sound.

  17. True 3D Sound • 3D graphical environments already exist. • Light paths traverse the scene and surface intensities are calculated. • Currently, sound paths are at most superficially computed. • Yet, programmers already have a wealth of environmental data. • Various possible approaches…

  18. True 3D Sound: Beam Tracing (source: Real-Time Acoustic Modeling for Distributed Virtual Environments [4]) • Approach: • Divide the environment into cells or regions. • Precompute and store beam paths from various source locations. • Look up, in real-time, reverberation paths from the avatar to the source. • Use these paths to calculate the delay and attenuation applied to the original, anechoic audio signal for each of the echoes.
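A minimal sketch of the last step above: once the precomputed lookup has produced a set of reverberation paths, each path contributes one delayed, attenuated copy of the anechoic signal. Representing a path as a `(delay_seconds, gain)` pair is an assumption made for this illustration; the cited papers [4, 5] store richer per-path data:

```python
import numpy as np

def render_from_paths(anechoic, paths, sample_rate=44100):
    """Sum one delayed, attenuated copy of the anechoic signal per
    precomputed beam path. Each path is a (delay_seconds, gain) pair."""
    max_delay = max(d for d, _ in paths)
    out = np.zeros(len(anechoic) + int(round(max_delay * sample_rate)))
    for delay_s, gain in paths:
        start = int(round(delay_s * sample_rate))
        out[start:start + len(anechoic)] += gain * anechoic
    return out

# Direct path plus two early reflections looked up for the current avatar position.
paths = [(0.015, 0.20), (0.027, 0.08), (0.041, 0.05)]
wet = render_from_paths(np.random.randn(44100), paths)
```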

  19. Beam Tracing • Pros: Quick and effective (with a good data structure). Intuitive. Scalable for large environments. • Cons: Needs offline computations. Assumes sources are stationary. Assumes source locations are finite.

  20. True 3D Sound • On a basic level, we can determine sound propagation in much the same way we determine how light travels through a 3D environment. • One simple but computationally intensive method would be similar to ray tracing. • Ray tracing algorithms are generally very effective but also extremely slow and prone to sampling errors. • Most real-time graphics algorithms make various simplifying assumptions (see the following slides).
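A minimal sketch of the ray-tracing idea for sound, under heavy assumptions made only for illustration (2D geometry, a rectangular room, a single absorption factor, a finite listener radius, fixed step size): random rays are fired from the source, bounced specularly off the walls, and an arrival time and energy are recorded whenever a ray passes the listener. It also shows why the approach is simple but slow and sampling-prone:

```python
import math, random

def trace_echogram(source, listener, room=(10.0, 8.0), n_rays=500,
                   max_time=0.1, absorption=0.3, listener_radius=0.5,
                   speed_of_sound=343.0, step=0.01):
    """Crude 2D acoustic ray tracing in a rectangular room: fire rays in random
    directions from the source, reflect them specularly off the walls, and
    record an (arrival_time, energy) pair when a ray reaches the listener."""
    arrivals = []
    for _ in range(n_rays):
        angle = random.uniform(0.0, 2.0 * math.pi)
        x, y = source
        dx, dy = math.cos(angle), math.sin(angle)
        energy, travelled = 1.0 / n_rays, 0.0
        while travelled / speed_of_sound < max_time:
            x, y, travelled = x + dx * step, y + dy * step, travelled + step
            if x < 0.0 or x > room[0]:        # specular bounce off a vertical wall
                dx, energy = -dx, energy * (1.0 - absorption)
                x = min(max(x, 0.0), room[0])
            if y < 0.0 or y > room[1]:        # specular bounce off a horizontal wall
                dy, energy = -dy, energy * (1.0 - absorption)
                y = min(max(y, 0.0), room[1])
            if math.hypot(x - listener[0], y - listener[1]) < listener_radius:
                arrivals.append((travelled / speed_of_sound, energy))
                break                          # count each ray at most once
    return arrivals

echogram = trace_echogram(source=(2.0, 3.0), listener=(7.0, 5.0))
```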

  21. True 3D SoundThe Graphics Analogy • The 3D Graphics Pipeline: • Objects are made from geometric primitives composed of points. • These vertices are transformed to be relative to the camera. • Objects outside of the viewing field are clipped. • Rays are sent from the camera, through each point on the projection plane, and into the scene. • Corresponding pixel values in the viewport are calculated from these rays.

  22. True 3D SoundThe Graphics Analogy • Objects are made from geometric primitives (triangles, rectangles) composed of points. • Light intensities are calculated based on surface normals of these points. • These intensities are fed into the graphics pipeline.
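For concreteness, the sketch below shows the shading step named on this slide in its simplest form: a diffuse (Lambertian) intensity computed from a surface normal and a light direction. The specific normal, light direction, and ambient term are assumptions for illustration only:

```python
import numpy as np

def lambert_intensity(normal, light_dir, ambient=0.1):
    """Diffuse (Lambertian) intensity at a surface point: an ambient term plus
    the clamped dot product of the unit surface normal and unit light direction."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return ambient + max(0.0, float(np.dot(n, l)))

# Intensity for a vertex whose normal points up, lit from above and to the side.
print(lambert_intensity(np.array([0.0, 1.0, 0.0]), np.array([1.0, 1.0, 0.0])))
```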

  23. True 3D Sound The Graphics Analogy • Many of these computations are forwarded to optimized 3D graphics cards. • Many of these same techniques could be employed for generating realistic 3D sounds. • We would need to develop and design 3D sound cards and appropriate algorithms.

  24. Conclusion • 3D graphics and many other components of today’s computer systems are by now highly developed. • 3D sound is still in its infancy. • This field has a great deal of research potential.

  25. References
  [1] Burgess, David A. Techniques for Low Cost Spatial Audio. ACM UIST, pages 53-59, 1992.
  [2] Ellis, Sean. Towards More Realistic Sound in VRML. ACM Virtual Reality and Modeling, pages 95-100, 1998.
  [3] Flaherty, Nick. 3D audio: new directions in rendering realistic sound. Electronic Engineering, pages 49, 52, 55, & 56, 1998.
  [4] Funkhouser, Thomas A., Patrick Min, and Ingrid Carlbom. Real-time Acoustic Modeling for Distributed Virtual Environments. SIGGRAPH, pages 365-374, 1999.
  [5] Funkhouser, Thomas A., Ingrid Carlbom, Gary Elko, Gopal Pingali, and Mohan Sondhi. A Beam Tracing Approach to Acoustic Modeling for Interactive Virtual Environments.
  [6] Funkhouser, Thomas A., Ingrid Carlbom, Gary Elko, Gopal Pingali, and Mohan Sondhi. Interactive Acoustic Modeling of Complex Environments. Acoustical Society of America, 1999.
  [7] Min, Patrick, and Thomas A. Funkhouser. Priority-Driven Acoustic Modeling for Virtual Environments. EUROGRAPHICS, 2000.
  [8] Tsingos, Nicolas, Thomas A. Funkhouser, Addy Ngan, and Ingrid Carlbom. Modeling Acoustics in Virtual Environments Using the Uniform Theory of Diffraction.
  [9] Hull, Joseph. Surround Sound Past, Present, and Future. Dolby Laboratories Inc. http://www.dolby.com/tech/.
  [10] Suen, An-Nan, Jhing-Fa Wang, and Jia-Ching Wang. VLSI Implementation of 3-D Sound Generator. IEEE Transactions on Consumer Electronics, pages 679-688, 1997.

  26. Real-Time Auralization of Sound in Virtual 3D Environments by Scott McDermott sdm1718@louisiana.edu
