INB382/INN382 Real-time Rendering Techniques Lecture 6: Advanced Shading




  1. INB382/INN382 Real-time Rendering Techniques Lecture 6: Advanced Shading Ross Brown

  2. Lecture Contents • Bump Mapping • Normal Mapping • Parallax Mapping • Displacement Mapping • Motion Blur • Depth of Field

  3. Advanced Shading • “Every pilot needs a co-pilot, and let me tell you, it is awful nice to have someone sitting there beside you, especially when you hit some bumpy air” - Eric Wald

  4. Lecture Context • So far, you have used geometry to create surface appearances • Then you have draped a texture over the surface to create diffuse colour on the surface • Now we use texture to create geometric detail • This means we can modulate lighting more completely via texturing

  5. Bump Mapping • Extension of texture technique to incorporate geometric effects • Replaces geometric surface detail with a perturbation of the surface normal • Bump map is often a single channel image designed to simulate geometric surface properties

  6. Bump Mapping – Two Methods • Offset Mapping using bump_u and bump_v values drawn from the bump map image • bump_u and bump_v are the amounts the normal n is perturbed in the u and v directions to give the perturbed normal n' (Moller and Haines 2002) [Figure: bump map values perturbing the surface normal n to n' over the (u, v) parameterisation]

  7. Bump Mapping – Two Methods • Heightfield mapping, which modifies the surface normals by using the difference in the u and v directions from pixel to pixel • Each bump map value is a height value (Moller and Haines 2002) (see the sketch below) [Figure: heightfield and the perturbed normals it produces; each arrow is the value of a texel]
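
A minimal Cg fragment sketch of the heightfield approach, assuming the application supplies a single-channel height texture, a bump strength and a texel size, and that the interpolated normal, tangent and binormal are available; all of these names are illustrative, not from the lecture code:

// central-difference heightfield bump mapping (a minimal sketch)
// (_HeightMap, _BumpScale and _TexelSize are illustrative application-supplied names)
float hL = tex2D(_HeightMap, uv - float2(_TexelSize.x, 0)).r;
float hR = tex2D(_HeightMap, uv + float2(_TexelSize.x, 0)).r;
float hD = tex2D(_HeightMap, uv - float2(0, _TexelSize.y)).r;
float hU = tex2D(_HeightMap, uv + float2(0, _TexelSize.y)).r;
// differences between neighbouring heights give bump_u and bump_v
float bumpU = (hL - hR) * _BumpScale;
float bumpV = (hD - hU) * _BumpScale;
// perturb the interpolated normal along the tangent and binormal
float3 nPrime = normalize(n + bumpU * tangent + bumpV * binormal);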

  8. Bump Mapping – Emboss Bump Mapping • Borrowed from image processing – images done in GIMP • Render the surface with the heightfield applied as a diffuse mono texture • Shift all vertex (u,v) coords in the direction of the projected light vector (Moller and Haines 2002) [Figures: height field, and the final embossed effect]

  9. Bump Mapping – Emboss Bump Mapping • Render this surface with the shifted heightfield and subtract it from the first-pass result • Gives an embossed effect • Render the surface again with no heightfield • Diffusely illuminated and Gouraud shaded • Add this shaded image to the result • Does not work when the light is overhead [Figure: polygon with normal n, tangent t, binormal b, light vector l and projected light vector l']

  10. Normal Mapping – aka Dot3 and Bump Mapping • Primary method for performing bump mapping in modern hardware systems – being superseded by Parallax mapping if you have bandwidth • Synonymous with Bump Mapping • Instead of perturbations or heights, the actual normals for the surface are stored • The [-1,1] values are mapped to unsigned 8 bits and stored as colours in a texture map – Unity packs them (see demo)

  11. Normal Mapping • Coords of each light direction vector are passed in as a vertex colour • These are then transformed to the surface’s tangent vector space • The tangent space is an orthogonal space made up of: • N – Normal • T – Tangent in any direction • B – Binormal (N × T) [Figure: polygon with tangent frame n, t, b and light vectors l, l']

  12. Normal Mapping • The normal map texture is combined with the light vector at each pixel using a dot product • This calculates the diffuse component of the bump map values • For the specular component, calculate the half angle vector that is part of Blinn’s lighting model • The n.h term is then raised to the 8th power for the specular term
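
A minimal Cg sketch of these two terms, assuming the unpacked tangent-space normal and the tangent-space light and view directions are already available; the variable names are illustrative:

// diffuse term: tangent-space normal dotted with the light direction
float diffuse = max(dot(n, lightDir), 0.0f);
// Blinn half-angle vector between the light and view directions
float3 h = normalize(lightDir + viewDir);
// specular term: n.h raised to the 8th power, as described above
float specular = pow(max(dot(n, h), 0.0f), 8.0f);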

  13. Normal Mapping [Figure: diffuse texture × diffuse light + specular light = final bump-mapped image, with the bump map used]

  14. Bump Mapping – Tangent Basis Visualisation • Normal is Blue • Tangent is Green • Binormal is Red • Part of the Frenet-Serret formulae used in particle kinematics • Can define the path of a particle on a curve using this formulation • In our case it defines the surface frame, no matter the orientation • Start here - http://en.wikipedia.org/wiki/Frenet-Serret_formulas

  15. Tangent Basis • Obtained by adding two contiguous surface vectors together (see the sketch below) • Imagine three points • Take the difference between them to generate two vectors • Add them together • Or take the orthogonal vector of the vertex normal in a direction of choice • Or take the derivative from an analytic representation [Figure: vertices 1, 2 and 3, edge vectors 1 and 2, and the resulting tangent]
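
A minimal Cg/HLSL-style sketch of the first approach above; the function name and parameters are illustrative, and the projection step that keeps the tangent orthogonal to the normal is an extra assumption, not from the slide:

// build a tangent frame from three contiguous vertices and the vertex normal
// (the function name and parameters are illustrative)
void buildTangentBasis(float3 v1, float3 v2, float3 v3, float3 normal,
                       out float3 tangent, out float3 binormal)
{
    // two edge vectors sharing the first vertex
    float3 edge1 = v2 - v1;
    float3 edge2 = v3 - v1;
    // add them together, then remove the normal component so the tangent lies in the surface
    float3 t = edge1 + edge2;
    tangent = normalize(t - normal * dot(t, normal));
    // binormal completes the orthogonal basis
    binormal = cross(normal, tangent);
}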

  16. Tangent Basis • Provides a surface reference coordinate system to perform calculations • Otherwise have to transform from world space every time • Tangents can be generated on meshes using Unity Inspector – but watch out, I have found them suspect at times

  17. Visually Speaking • Basis transform thus rotates the vectors to suit the surface properties • Otherwise the bluish bump maps would do what? • Handy separation of spaces for modeling • Do not need to have object or world space normal maps • NB: Have to map from [0,1] to [-1,1] (see the sketch below) [Figures: object space map vs tangent space map, from http://www.surlybird.com/tutorials/TangentSpace/index.html]
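
A minimal Cg sketch of that remapping, which is what Unity's UnpackNormal helper does for an uncompressed normal map; the texture name is illustrative:

// sample the stored colour, which is in the [0,1] range (_BumpMap is an illustrative name)
float3 packedNormal = tex2D(_BumpMap, uv).rgb;
// remap each channel from [0,1] back to the [-1,1] normal range
float3 normal = normalize(packedNormal * 2.0f - 1.0f);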

  18. Cg Vertex Shader
// WIT transform the normal and tangent
a_Input.nor = normalize(mul(transpose(_World2Object), float4(a_Input.nor, 0.0f))).xyz;
a_Input.tang = normalize(mul(transpose(_World2Object), a_Input.tang));
// calculate binormal
float3 bin = cross(a_Input.tang.xyz, a_Input.nor);
// calculate matrix to tangent space
float3x3 toTangentSpace = transpose(float3x3(a_Input.tang.xyz, bin, a_Input.nor));
// transform the global light direction into tangent space
output.lightDir = mul(toTangentSpace, g_lightDir);

  19. HLSL Pixel Shader
// index into textures
float4 colour = tex2D(_Tex1, a_Input.tex);
float3 normal = UnpackNormal(tex2D(_Tex2, a_Input.tex)).rgb;
// calculate vector to camera
float3 toCamera = normalize(_vecCameraPos.xyz - a_Input.posWorld.xyz);
// normalize light direction
float3 light = normalize(a_Input.lightDir);
// calculate reflection vector
float3 reflectVec = reflect(light, normal);
// calculate diffuse component, max to prevent back lighting
float diffuse = max(dot(normal, light), 0.0f);
// calculate specular variable
float specComp = pow(max(dot(reflectVec, toCamera), 0.0f), _fSpecPower);
colour = colour + diffuse * 0.05 + specComp * 0.5;
colour.a = 1;
// return texture colour modified by diffuse component
return colour;

  20. Cg Demonstration

  21. Creating Normal Maps • These are usually generated from high quality meshes • First, the normal map is generated from the high quality mesh • Then the mesh is decimated down to a lower resolution for use in the game • Low polygon models can therefore be used to minimise the transmission of vertices across the bus onto the GPU

  22. Creating Normal Maps [Figures from a presentation by Dave Gosselin, ATI [3]]

  23. Gears of War

  24. Creating Normal Maps • Sometimes created from a height map

  25. Creating Normal Maps • Using low and high resolution models is now the normal method – the high quality mesh detail is placed into a normal map texture • Render with the low resolution polygons as the mesh basis and the normal map applied

  26. Forming a normal map – the ATI way • Command line interface that generates a bump map from a high and low resolution model (and a height map for fine details) • High resolution model requires vertices and normals • Low resolution model requires vertices, normals, and one set of texture coordinates

  27. Ray Casting • The output normal map is applied to the low resolution model at runtime • For each texel mapped onto the low resolution model, a ray is cast out from the low resolution model and intersected with the high resolution model • Multiple rays are cast if super sampling • The normal at that point is stored in the normal map at that texel location [Figure: rays cast from the low resolution model to the high resolution model]

  28. Ray Casting • If multiple polygons on the high resolution model are intersected, the one closest to the origin of the ray is chosen • Negative intersections are valid • Why?

  29. Combining With A Height Map • A height map can be combined with the ray cast normal map to provide micro detail to the normal map • Scars on skin, imperfections in rock, bumps in bricks, etc. (see the sketch below) [Figures: normal map + height map = final map]
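
A minimal Cg sketch of one common way to fold height-map detail into a base normal map, assuming the height detail has already been converted offline into a tangent-space detail normal map; the texture names and the particular blend are illustrative assumptions, not necessarily the ATI tool's method:

// base normal from the ray-cast normal map, remapped from [0,1] to [-1,1]
// (_BaseNormalMap and _DetailNormalMap are illustrative names)
float3 baseN = tex2D(_BaseNormalMap, uv).rgb * 2.0f - 1.0f;
// detail normal derived offline from the height map, remapped the same way
float3 detailN = tex2D(_DetailNormalMap, uv).rgb * 2.0f - 1.0f;
// one simple blend: add the x/y perturbations, keep the base z, renormalize
float3 finalN = normalize(float3(baseN.xy + detailN.xy, baseN.z));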

  30. Authoring Normals For Low Resolution Model • Artists must make sure that rays cast based on the normals from the low resolution model can accurately represent the high resolution model • Does this look familiar? [Figures: good case, where the low resolution model's vertex normals capture the high resolution model, and bad case, where high res detail is missed]

  31. Spacing Texture Coordinates • Space must be left between surfaces in the texture coordinate layout so there is room for a dilation filter

  32. Normal Map Dilation Filter For Bilinear Fetches • A 4-tap filter is used to grow the normals to neighboring pixels to account for border conditions • This is necessary for generating mip maps and bilinear texture fetches (see the sketch below) • Why? [Figures: the map before and after the dilation filter]
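
A minimal sketch of such a dilation pass, written as a Cg/HLSL-style pixel shader run over the normal map; the texture and texel-size names and the alpha-as-coverage convention are illustrative assumptions, since the lecture does not give the ATI tool's exact tap pattern:

// four neighbouring texels (left, right, down, up) and the centre texel
// (_NormalMap, _TexelSize and alpha-as-coverage are illustrative assumptions)
float4 c = tex2D(_NormalMap, uv);
float4 l = tex2D(_NormalMap, uv - float2(_TexelSize.x, 0));
float4 r = tex2D(_NormalMap, uv + float2(_TexelSize.x, 0));
float4 d = tex2D(_NormalMap, uv - float2(0, _TexelSize.y));
float4 u = tex2D(_NormalMap, uv + float2(0, _TexelSize.y));
// alpha marks texels the ray caster actually wrote
float coverage = l.a + r.a + d.a + u.a;
// keep written texels; leave fully empty neighbourhoods alone
if (c.a > 0.0f || coverage == 0.0f)
    return c;
// otherwise grow the average of the written neighbours into this empty texel
return float4(normalize(l.rgb * l.a + r.rgb * r.a + d.rgb * d.a + u.rgb * u.a), 1.0f);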

  33. Parallax Occlusion Mapping • Bump mapping does not allow for view dependent factors in surface detail • Doesn’t take into account geometric surface depth • Does not exhibit parallax - apparent displacement of the object due to viewpoint change • No self-shadowing of the surface • Coarse silhouettes expose the actual geometry being drawn • Parallax slides from presentation by Tatarchuk [2]

  34. Parallax Occlusion Mapping • Per-pixel ray tracing at its core – more on this later • Correctly handles complicated viewing phenomena and surface details • Displays motion parallax • Renders complex geometric surfaces such as displaced text / sharp objects • Uses occlusion mapping to determine visibility for surface features (resulting in correct self-shadowing)

  35. Parallax Occlusion Mapping • Introduced in [Brawley04] “Self-Shadowing, Perspective-Correct Bump Mapping Using Reverse Height Map Tracing” • Efficiently utilizes the programmable GPU pipeline for interactive rendering rates

  36. Parallax Occlusion Mapping • Tangent-space normal map • Displacement values (the height map) • All computations are done in tangent space, and thus can be applied to arbitrary surfaces [Figures: height map, normal map and the final scene]

  37. Parallax Occlusion Mapping

  38. Implementation: Per-Vertex • Compute the viewing direction and the light direction in tangent space • May compute the parallax offset vector (as an optimization)

  39. Implementation: Per-Pixel • Ray-cast the view ray along the parallax offset vector (see the sketch below) • Ray – height field profile intersection as a texture offset • Yields the correct displaced point visible from the given view angle • Light ray – height profile intersection for occlusion computation (shadows) to determine the visibility coefficient • Shading • Using any attributes • Any lighting model
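
A minimal Cg sketch of the view-ray march, assuming a tangent-space view direction, a height texture and a height scale supplied by the application; the names, the fixed step count and the simple linear search are illustrative simplifications of the full technique:

// parallax offset vector: maximum texture-space shift across the full height range
// (viewTS, _HeightMap and _HeightScale are illustrative names)
float2 offsetDir = viewTS.xy / viewTS.z * _HeightScale;
// march from the top of the height range (1.0) down towards 0 in fixed steps
const int numSteps = 16;
float stepSize = 1.0f / numSteps;
float rayHeight = 1.0f;
float2 currentUV = uv;
for (int i = 0; i < numSteps; i++)
{
    rayHeight -= stepSize;
    currentUV -= offsetDir * stepSize;
    // tex2Dlod avoids gradient instructions inside the loop;
    // stop once the ray drops below the stored height profile
    if (tex2Dlod(_HeightMap, float4(currentUV, 0.0f, 0.0f)).r >= rayHeight)
        break;
}
// currentUV is now the displaced texture offset used for all further shading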

  40. Height Field Profile Tracing

  41. Height Field Profile – Ray Intersection

  42. Self-Occlusion Shadows

  43. Shadow Computation • Simply determining whether the current feature is occluded yields hard shadows [Policarpo05] (see the sketch below) • More effort can be applied – later with PCSS
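
A minimal Cg sketch of that visibility test, continuing from the view-ray sketch above (so currentUV and numSteps are reused) and marching from the displaced point towards the light; the tangent-space light direction, the bias and the binary visibility result are illustrative assumptions:

// start at the displaced point found by the view-ray march, slightly above the surface
float2 shadowUV = currentUV;
float shadowHeight = tex2Dlod(_HeightMap, float4(shadowUV, 0.0f, 0.0f)).r + 0.01f;
// step towards the light; lightTS is the (illustrative) tangent-space direction to the light
float2 lightStep = lightTS.xy / lightTS.z * _HeightScale / numSteps;
float heightStep = (1.0f - shadowHeight) / numSteps;
float visibility = 1.0f;
for (int i = 0; i < numSteps; i++)
{
    shadowUV += lightStep;
    shadowHeight += heightStep;
    // if the height field rises above the shadow ray, the point is occluded
    if (tex2Dlod(_HeightMap, float4(shadowUV, 0.0f, 0.0f)).r > shadowHeight)
    {
        visibility = 0.0f;
        break;
    }
}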

  44. Illuminating the Surface • Apply material attributes sampled at the offset corresponding to the displacement point • Any lighting model is suitable – Phong is used in demos

  45. Parallax Occlusion Mapping – DirectX SDK Demo [Figures: parallax with a high depth value, parallax with a low depth value, and normal mapping for comparison]

  46. Cg Implementation • The full implementation is too long to cover here, and would be another lecture in itself • It is shown to demonstrate how you can do much more than bump mapping • But it gets complex to simulate advanced appearances • I will expect you to understand Normal Mapping in detail • For Parallax Mapping you only need to understand the concepts and its improved functionality

  47. Displacement Mapping • In this approach we take the use of textures to their logical conclusion • We use the texture to actually change the vertex positions and deform the shape – but not to create new vertices, more on this later • First developed by Pixar for use in their Renderman software • Renderman uses the REYES algorithm – Renders Everything You Ever Saw

  48. Displacement Mapping • Renderman subdivides the surface – defined as a Bicubic Patch (refer to INB381) – into smaller projected pixel-sized facets, and then displaces them according to the texture map components along the surface normal • Thus giving a nice antialiased form of geometric modelling that enables a simple surface to be deformed with an offline image, or a procedural deformation (see Lecture Seven)

  49. GPU Displacement Mapping • But as of GeForce 6 cards onwards, we can access texture information within the Vertex Shader • So the texture can contribute to the transformations of the vertex points, but without the step of subdividing to pixel sized facets • Often useful for embossing information into the surface as text or implementing a low cost height field solution for terrain • Think of it as a precomputed transform on the vertices – thus costing less within the vertex processing stage
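
A minimal Cg vertex-shader sketch of this idea, assuming a displacement texture and scale bound by the application and the same a_Input/output struct convention as the earlier vertex shader; all of these names are illustrative, and the vertex-stage fetch must use tex2Dlod since no derivatives exist in the vertex shader:

// sample the displacement height at this vertex's texture coordinate
// (_DispMap, _DispScale and the a_Input field names are illustrative)
float height = tex2Dlod(_DispMap, float4(a_Input.tex, 0.0f, 0.0f)).r;
// push the vertex out along its normal by the scaled height
float4 displacedPos = a_Input.pos + float4(a_Input.nor * height * _DispScale, 0.0f);
// transform the displaced position to clip space with Unity's model-view-projection matrix
output.pos = mul(UNITY_MATRIX_MVP, displacedPos);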

  50. Cg Demonstration
