
Advanced Mapping


Presentation Transcript


  1. Advanced Mapping Computer Graphics

  2. Types of Mapping • Maps affect various values in the rendering process: • Color • Texture mapping • Light mapping • Transparency • Alpha mapping • Specular component • Environment mapping • Gloss Mapping • Surface normal • Bump mapping • Vertex position • Displacement mapping

  3. MultiTexturing • Most of the advanced mapping techniques we will be looking at will be made possible by multitexturing • Multitexturing is simply the ability of the graphics card to apply more than one texture to a surface in a single rendering pass • Specifically, there is a hardware pipeline of texture units, each of which applies a single texture

  4. MultiTexturing • Pipeline: interpolated vertex value → texture unit 1 (applies its texture value) → texture unit 2 (applies its texture value) → texture unit 3 (applies its texture value) → final value

  5. MultiTexturing • Each of the texture units is independent: • Specification of “texture” parameters • The type of information stored in the “texture” • The parameter in the rendering process that is modified by the “texture” values
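The per-unit chaining described above can be sketched in a few lines. This is a minimal illustration, not a real graphics API: the sampler functions and the `modulate` combiner are hypothetical stand-ins for hardware texture units.

```python
# A minimal sketch of a multitexturing pipeline, assuming each "texture
# unit" is a (sampler, combiner) pair: the sampler fetches a texel at
# (u, v) and the combiner merges it with the value from the previous unit.
def modulate(a, b):
    # Component-wise multiply, a common per-unit combine mode
    return tuple(x * y for x, y in zip(a, b))

def apply_texture_units(interpolated, units, u, v):
    value = interpolated
    for sample, combine in units:
        value = combine(value, sample(u, v))
    return value

# Hypothetical samplers: a base texture and a light map (constant here).
base_tex = lambda u, v: (1.0, 0.5, 0.25)
light_map = lambda u, v: (0.5, 0.5, 0.5)

units = [(base_tex, modulate), (light_map, modulate)]
print(apply_texture_units((1.0, 1.0, 1.0), units, 0.5, 0.5))
```

Each unit only sees the value produced by the one before it, which is exactly why the units can be configured independently.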

  6. Multipass Rendering • In theory, all illumination equation factors are evaluated at once and a sample color is generated • In practice, various parts of the light equations can be evaluated in separate passes, each successive pass modifying the previous result • Results are accumulated in the offscreen framebuffer (a.k.a. colorbuffer) • Multipass rendering is an older technique than MultiTexturing
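The accumulate-into-the-framebuffer idea can be sketched as below; the two pass functions are hypothetical stand-ins for real rendering passes, and additive blending is assumed as the combine mode.

```python
# Sketch of multipass accumulation, assuming each pass adds its
# contribution into an offscreen framebuffer (additive blending).
def render_passes(passes, width, height):
    framebuffer = [[(0.0, 0.0, 0.0)] * width for _ in range(height)]
    for shade in passes:                      # shade: (x, y) -> (r, g, b)
        for y in range(height):
            for x in range(width):
                r, g, b = framebuffer[y][x]
                dr, dg, db = shade(x, y)
                framebuffer[y][x] = (r + dr, g + dg, b + db)
    return framebuffer

# Hypothetical passes: a diffuse pass followed by a specular pass.
diffuse = lambda x, y: (0.4, 0.2, 0.1)
specular = lambda x, y: (0.3, 0.3, 0.3)
fb = render_passes([diffuse, specular], 2, 2)
print(fb[0][0])
```

Because each pass only reads and modifies the accumulated result, passes can be dropped or added without changing the others, which is what makes the quality-scaling trick on later slides possible.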

  7. Multipass Rendering • The multipass idea came about as more of the rendering pipeline moved into hardware • When all rendering was done in software, one had control over all the details of the rendering process • Moving rendering to hardware significantly increased performance, at the expense of flexibility • This lack of flexibility means we can’t program arbitrarily complex lighting models in a single pass • Vertex and Pixel Shaders give us back some of the flexibility while still being done in hardware (later)

  8. Multipass Rendering • There are several techniques we will see that can be performed by either Multipass rendering or MultiTexturing • MultiTexturing is newer and not all graphics cards support it • Although this is quickly changing

  9. Multipass Rendering • Multipass also has the advantage that a program can automatically adjust to the capabilities/speed of the graphics card it is being run on • That is, the program can perform the basic passes it needs to produce an acceptable picture. Then if it has time (e.g. the frame rate isn’t too low) it can perform extra passes to improve the quality of the picture for those users who own better cards.

  10. Multipass Rendering • Quake III engine uses 10 passes: • (Passes 1-4: accumulate bump map) • Pass 5: diffuse lighting • Pass 6: base texture • (Pass 7: specular lighting) • (Pass 8: emissive lighting) • (Pass 9: volumetric/atmospheric effects) • (Pass 10: screen flashes) • The passes in ( ) can be skipped for slower cards

  11. Light Mapping • Lightmaps are simply texture maps that contain illumination information (lumels) • How can lighting be done in a texture map? • Recall that the diffuse component of the lighting equation is view independent • Thus, for static light sources on static objects the lighting is always the same no matter where the viewer is located • The light reflected from a surface can be pre-computed and stored in a lightmap
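The precomputation described above can be sketched as baking the view-independent diffuse term into a lumel. The kd value and the vectors below are illustrative, not from any particular engine.

```python
# Sketch of baking a light map: for a static light and a static surface,
# the diffuse term kd * max(0, N.L) is view independent, so it can be
# computed once per lumel and stored, then reused every frame.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def bake_lumel(normal, light_dir, kd=0.8):
    return kd * max(0.0, dot(normal, light_dir))

print(bake_lumel((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))   # facing the light
print(bake_lumel((0.0, 0.0, 1.0), (0.0, 0.0, -1.0)))  # facing away
```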

  12. Light Mapping • What are the benefits? • Speed: the lighting equations can be turned off while rendering the object that contains the lightmap • More realism: we are not constrained by the Phong local reflection model when calculating our lighting • View-independent global models such as radiosity can even be used

  13. Light Mapping • The illumination information can be combined with the texture information, forming a single texture map • But there are benefits to not combining them: • Lightmaps can be reused on different textures • Textures can be reused with different lightmaps • Repeating textures don’t look good with repeating light • Lightmaps are usually stored at a lower resolution so they don’t take up much space anyway • Extensions will allow us to perform dynamic lightmaps

  14. Light Mapping • In order to keep the texture and light maps separate, we need to be able to perform multitexturing – application of multiple textures in a single rendering pass

  15. Light Mapping • How do you create light maps? • When creating a light map that will be used on a non-planar object, things get complex fast: • Need to divide the object into triangles with similar orientations • These similarly oriented triangles can all be mapped with a single light map

  16. Light Mapping • Things are usually much easier for standard games, since the objects being light mapped are usually planar: • Walls • Ceilings • Boxes • Tables • Thus, the entire planar object can be mapped with a single light map

  17. Light Mapping

  18. Light Mapping • Can dynamic lighting be simulated by using a light map? • If the light is moving (perhaps attached to the viewer or a projectile) then the lighting will change on the surface as the light moves • The light map values can be partially updated dynamically as the program runs • Several light maps at different levels of intensity could be pre-computed and selected depending on the light’s distance from the surface

  19. Alpha Mapping • An Alpha Map contains a single value with transparency information • 0 → fully transparent • 1 → fully opaque • Can be used to make sections of objects transparent • Can be used in combination with standard texture maps to produce cutouts • Trees • Torches

  20. Alpha Mapping

  21. Alpha Mapping • In the previous tree example, all the trees are texture mapped onto flat polygons • The illusion breaks down if the viewer sees the tree from the side • Thus, this technique is usually used with another technique called “billboarding” • Simply automatically rotating the polygon so it always faces the viewer • Note that if the alpha map is used to provide transparency for texture map colors, one can often combine the 4 pieces of information (R,G,B,A) into a single texture map

  22. Alpha Mapping • The only issue as far as the rendering pipeline is concerned is that the pixels of the object made transparent by the alpha map cannot change the value in the z-buffer • We saw similar issues when talking about whole objects that were partially transparent → render them last with the z-buffer in read-only mode • However, alpha mapping requires changing z-buffer modes per pixel based on texel information • This implies that we need some simple hardware support to make this happen properly
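The per-texel behaviour discussed in the last two slides can be sketched as an alpha test: a texel whose alpha falls below a cutoff writes neither color nor depth. The buffer layout and the cutoff value here are illustrative.

```python
# Sketch of a per-texel alpha test ("cutout"): a texel whose alpha is
# below the cutoff writes neither color nor depth, so the z-buffer
# stays untouched behind the transparent parts of the polygon.
def shade_fragment(texel_rgba, zbuf, colorbuf, x, y, depth, cutoff=0.5):
    r, g, b, a = texel_rgba
    if a < cutoff:
        return False            # transparent: no color write, no z write
    if depth < zbuf[y][x]:      # ordinary z-buffer test otherwise
        zbuf[y][x] = depth
        colorbuf[y][x] = (r, g, b)
    return True

zbuf = [[1.0]]
colorbuf = [[(0.0, 0.0, 0.0)]]
shade_fragment((0.2, 0.8, 0.2, 0.0), zbuf, colorbuf, 0, 0, 0.5)  # cut out
shade_fragment((0.2, 0.8, 0.2, 1.0), zbuf, colorbuf, 0, 0, 0.5)  # drawn
print(zbuf[0][0], colorbuf[0][0])
```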

  23. Environment Mapping • Environment Mapping is used to approximate mirrored surfaces

  24. Environment Mapping • The standard Phong lighting equation doesn’t take into account reflections • Just specular highlights • Raytracing (a global model) bounces rays off the object in question and into the world to see what they hit

  25. Environment Mapping • Environment Mapping approximates this process by capturing the “environment” in a texture map and using the reflection vector to index into this map

  26. Environment Mapping • The basic steps are as follows: • Generate (or load) a 2D map of the environment • For each pixel that contains a reflective object, compute the normal at the location on the surface of the object • Compute the reflection vector from the view vector (V) and the normal (N) at the surface point • Use the reflection vector to compute an index into the environment map that represents the objects in the reflection direction • Use the texel data from the environment map to color the current pixel
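Step 3 in the list above is the standard reflection formula. A minimal sketch, assuming V is the incident view direction pointing toward the surface and N is unit length:

```python
# Sketch of the reflection step: R = V - 2 (V . N) N, assuming V points
# toward the surface and N is a unit-length surface normal.
def reflect(v, n):
    d = sum(a * b for a, b in zip(v, n))
    return tuple(a - 2.0 * d * b for a, b in zip(v, n))

# Looking straight down the -z axis at a surface facing +z:
print(reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)))
```

The resulting R is then handed to whichever projector function (cubic, spherical, parabolic) converts it to (u, v).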

  27. Environment Mapping • Put into texture mapping terminology: • The projector function converts the reflection vector (x, y, z) to texture parameter coordinates (u, v) • There are several such projector functions in common use today for environment mapping: • Cubic mapping • Spherical mapping • Parabolic mapping

  28. Cubic Environment Mapping • The map is constructed by placing a camera at the center of the object and taking pictures in 6 directions

  29. Cubic Environment Mapping • Or the map can be easily created from actual photographs to place CG objects into real scenes (The Abyss, T2, Star Wars)

  30. Cubic Environment Mapping • When the object being mapped moves, then the maps need to change • Can be done in real-time using multipass • 6 rendering passes to accumulate the environment map • 1 rendering pass to apply the map to the object • Can be done with actual photographs • Take 6 pictures at set locations along the path • Warp the images to create intermediate locations

  31. Cubic Environment Mapping • How to define the projector function: • The reflection vector coordinate with the largest magnitude selects the corresponding face • The remaining two coordinates are divided by the absolute value of the largest coordinate • They now range from [-1..+1] • Then they are remapped to [0..1] and used as our texture parameter space coordinates on the particular face selected
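The projector function just described can be sketched directly; the face-naming convention below is illustrative.

```python
# Sketch of the cube-map projector function: pick the face from the
# largest-magnitude component of R, then remap the remaining two
# coordinates from [-1, 1] to [0, 1].
def cube_map_lookup(r):
    mags = [abs(c) for c in r]
    axis = mags.index(max(mags))              # axis of largest magnitude
    m = mags[axis]
    face = ("+x", "-x", "+y", "-y", "+z", "-z")[axis * 2 + (r[axis] < 0)]
    rest = [c / m for i, c in enumerate(r) if i != axis]
    u, v = (c * 0.5 + 0.5 for c in rest)      # [-1, 1] -> [0, 1]
    return face, u, v

print(cube_map_lookup((0.2, -0.4, 1.0)))
```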

  32. Cubic Environment Mapping • Just like with normal texture mapping, the texture coordinates are computed at the vertices and then interpolated across the triangle • However, this poses a problem when 2 vertices reflect onto different cube faces • The software solution to this is to subdivide the problematic polygon along the EM cube edge • The hardware solution puts reflection interpolation and face selection onto the graphics card • This is what most modern hardware does

  33. Cubic Environment Mapping • The main advantages of cube maps: • Maps are easy to create (even in real-time) • They are view-independent • The main disadvantage of cube maps: • Special hardware is needed to perform the face selection and reflection vector interpolation

  34. Spherical Environment Mapping • The map is obtained by orthographically projecting an image of a mirrored sphere • Map stores colors seen by reflected rays

  35. Spherical Environment Mapping • The map can be obtained from a synthetic scene by: • Raytracing • Warping automatically generated cubic maps • The map can be obtained from the real world by: • Photographing an actual mirrored sphere

  36. Spherical Environment Mapping

  37. Spherical Environment Mapping • Note that the sphere map contains information about both the environment in front of the sphere and behind the sphere

  38. Spherical Environment Mapping • To map the reflection vector (R) to the sphere map, the following equations are used: • m = 2√(Rx² + Ry² + (Rz + 1)²) • u = Rx/m + 1/2 • v = Ry/m + 1/2
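The sphere-map projector can be sketched using the standard OpenGL-style formula m = 2√(Rx² + Ry² + (Rz + 1)²), u = Rx/m + 1/2, v = Ry/m + 1/2:

```python
import math

# Sketch of the sphere-map projector: map the reflection vector R to
# (u, v) using the standard OpenGL-style sphere-map formula.
def sphere_map_lookup(r):
    rx, ry, rz = r
    m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
    return rx / m + 0.5, ry / m + 0.5

print(sphere_map_lookup((0.0, 0.0, 1.0)))  # straight-back reflection -> map center
```

Note how reflections pointing back toward the viewer (Rz near -1) push m toward zero and land near the rim of the circular map, which is where the non-uniform sampling comes from.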

  39. Spherical Environment Mapping • Some disadvantages of Spherical maps: • Maps are hard to create on the fly • Sampling is non-linear: • Sampling is non-uniform • View-point dependent! • Some advantages of Spherical maps: • No interpolation across map seams • Normal texture mapping hardware can be used

  40. Parabolic Environment Mapping • Similar to Spherical maps, but 2 parabolas are used instead of a single sphere • Each parabola forms an environment map • One for the front, one for the back • Image shown is a single parabola

  41. Parabolic Environment Mapping • The maps are still circles in 2D • The following is a comparison of the 2 parabolic maps (left) to a single spherical map (right)

  42. Parabolic Environment Mapping • The main advantages of the parabolic maps: • Sampling is fairly uniform • They are view-independent! • Can be performed on most graphics hardware that supports texturing • Interpolation between vertices, even over the seam between the front and back maps, can be done with a trick • The main disadvantage of parabolic maps: • Creating the map is difficult • Cube maps are easily created from both real and synthetic environments (even on the fly) • Sphere maps are easily created from real-world scenes

  43. General Environment Mapping • Potential problems with Environment Maps: • Object must be small w.r.t. environment • No self-reflections (only convex objects) • Separate map is required for each object in the scene that is to be environment mapped • Maps may need to be changed whenever the viewpoint changes (i.e. may not be viewpoint independent – depends on map type)

  44. Gloss Mapping • Not all objects are uniformly shiny over their surface • Tile floors are worn in places • Metal has corrosion in spots • Partially wet surfaces • Gloss mapping is a way to adjust the amount of specular contribution in the lighting equation

  45. Gloss Mapping • The lighting equations can be computed at the vertices and the resulting values can be interpolated across the surface • Similar to Gouraud shading • But the diffuse and specular contributions must be interpolated across the pixels separately • This is because the gloss map contains a single value that controls the specular contribution on a per pixel basis • Adjusts the Ks value, not the n (shininess) value
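The separate interpolation and per-pixel combination can be sketched as below; the color values and gloss factors are illustrative.

```python
# Sketch of per-pixel gloss mapping: diffuse and specular are
# interpolated separately across the triangle, and the gloss map's
# single value scales only the specular contribution (it adjusts Ks,
# not the shininess exponent n).
def gloss_shade(diffuse, specular, gloss):
    return tuple(d + gloss * s for d, s in zip(diffuse, specular))

# Fully glossy vs. worn spot on the same surface:
print(gloss_shade((0.25, 0.25, 0.25), (0.5, 0.5, 0.5), 1.0))
print(gloss_shade((0.25, 0.25, 0.25), (0.5, 0.5, 0.5), 0.0))
```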

  46. Gloss Mapping

  47. Gloss Mapping • This is more complex than Gouraud shading: • 2 values (diffuse / specular) need to be interpolated across the surface rather than just the final color • They need to be combined per pixel rather than just at the vertices • But simpler than Phong shading: • The normal, lighting, and viewing directions still only need to be computed at the vertices • The cos terms (dot products) only need to be computed at the vertices • Of course, Phong shading produces better specular highlights for surfaces that have large triangles: • Could use full Phong shading • Or tessellate the surface finer to capture the specular highlights with Gouraud shading

  48. Gloss Mapping • What is needed in terms of hardware extensions to the classic rendering pipeline to get gloss mapping to work? • We need to separate the computation of the diffuse and specular components • Or we can simply use a multipass rendering technique to perform gloss mapping on any hardware • 1st pass computes the diffuse component • 2nd pass computes the specular component with the gloss map applied as a lightmap, adding the result to the 1st pass result

  49. Bump Mapping • A technique to make a surface appear bumpy without actually changing the geometry • The bump map changes the surface normal value by some small angular amount • This happens before the normal is used in the lighting equations
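One common formulation of the perturbation is to tilt the normal along the surface's tangent directions by the bump map's derivatives. A minimal sketch, with illustrative tangent-space vectors and derivative values:

```python
import math

# Sketch of bump mapping as a normal perturbation: the bump map's
# partial derivatives (du, dv) tilt the normal along the tangent (t)
# and bitangent (b) directions before lighting is evaluated.
def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def bump_normal(n, t, b, du, dv):
    return normalize(tuple(nc + du * tc + dv * bc
                           for nc, tc, bc in zip(n, t, b)))

n = (0.0, 0.0, 1.0)
t, b = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
print(bump_normal(n, t, b, 0.0, 0.0))   # flat bump map: normal unchanged
print(bump_normal(n, t, b, 0.5, 0.0))   # bump slopes the normal toward +x
```

The geometry itself is untouched; only the normal fed to the lighting equations changes, which is why the silhouette stays smooth.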

  50. 1D Bump Map Example • Surface • Bump map • The “goal” surface • The “actual” surface
