Computer Graphics - Discrete Techniques - Hanyang University Jong-Il Park
Objectives • Buffers and pixel operations • Mapping methods • Texture mapping • Environmental (reflection) mapping • Variant of texture mapping • Bump mapping • Solves flatness problem of texture mapping • Blending • Anti-aliasing
Buffer • Define a buffer by its spatial resolution (n x m) and its depth (or precision) k, the number of bits/pixel
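As a quick worked example (the numbers are illustrative, not from the slide): a 1280 x 1024 color buffer with k = 32 bits/pixel occupies 1280 x 1024 x 32 bits = 41,943,040 bits, i.e. about 5 MB.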
OpenGL Buffers • Color buffers can be displayed • Front • Back • Auxiliary • Overlay • Depth • Accumulation • High resolution buffer • Stencil • Holds masks
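A minimal sketch of requesting several of these buffers at window creation, assuming the classic GLUT framework that accompanies this fixed-function pipeline (the exact flag choice is an illustration, not part of the slide):

glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB |     /* front and back color buffers */
                    GLUT_DEPTH |                 /* depth buffer                 */
                    GLUT_STENCIL | GLUT_ACCUM);  /* stencil and accumulation     */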
Writing in Buffers • Conceptually, we can consider all of memory as a large two-dimensional array of pixels • We read and write rectangular blocks of pixels • Bit block transfer (bitblt) operations • The frame buffer is part of this memory (figure: a source block in memory is written into the frame buffer, the destination)
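A minimal fixed-function sketch of a bitblt-style block read and write (the 64 x 64 block size and raster position are illustrative):

GLubyte block[64 * 64 * 4];                                    /* RGBA pixel block                     */
glReadPixels(0, 0, 64, 64, GL_RGBA, GL_UNSIGNED_BYTE, block);  /* read a block from the frame buffer   */
glRasterPos2i(100, 100);                                       /* destination of the next write        */
glDrawPixels(64, 64, GL_RGBA, GL_UNSIGNED_BYTE, block);        /* write the block back into the buffer */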
The Limits of Geometric Modeling • Although graphics cards can render over 10 million polygons per second, that number is insufficient for many phenomena • Clouds • Grass • Terrain • Skin
Modeling an Orange • Consider the problem of modeling an orange (the fruit) • Start with an orange-colored sphere • Too simple • Replace sphere with a more complex shape • Does not capture surface characteristics (small dimples) • Takes too many polygons to model all the dimples
Modeling an Orange (2) • Take a picture of a real orange, scan it, and “paste” onto simple geometric model • This process is known as texture mapping • Still might not be sufficient because resulting surface will be smooth • Need to change local shape • Bump mapping
Three Types of Mapping • Texture Mapping • Uses images to fill inside of polygons • Environment (reflection) mapping • Uses a picture of the environment for texture maps • Allows simulation of highly specular surfaces • Bump mapping • Emulates altering normal vectors during the rendering process
Texture Mapping (figure: geometric model vs. texture-mapped result)
Where does mapping take place? • Mapping techniques are implemented at the end of the rendering pipeline • Very efficient because few polygons make it past the clipper
Is it simple? • Although the idea is simple (map an image to a surface), there are 3 or 4 coordinate systems involved (figure: a 2D image mapped onto a 3D surface)
Coordinate Systems • Parametric coordinates • May be used to model curves and surfaces • Texture coordinates • Used to identify points in the image to be mapped • Object or World Coordinates • Conceptually, where the mapping takes place • Window Coordinates • Where the final image is really produced
Texture Mapping (figure: the mapping relates parametric coordinates, texture coordinates, world coordinates, and window coordinates)
Mapping Functions • Basic problem is how to find the maps • Consider mapping from texture coordinates to a point on a surface • Appear to need three functions: x = x(s,t), y = y(s,t), z = z(s,t) • But we really want to go the other way (figure: a point (s,t) in texture space mapped to (x,y,z) on the surface)
Backward Mapping • We really want to go backwards • Given a pixel, we want to know to which point on an object it corresponds • Given a point on an object, we want to know to which point in the texture it corresponds • Need a map of the form s = s(x,y,z) t = t(x,y,z) • Such functions are difficult to find in general
Two-part mapping • One solution to the mapping problem is to first map the texture to a simple intermediate surface • Example: map to cylinder
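A minimal sketch of the first part of such a two-part mapping for a cylindrical intermediate surface (the function and parameter names are illustrative, not from the slides): the texture coordinates (s,t) are simply identified with the cylinder's parametric coordinates (u,v).

#include <math.h>

/* Point on a cylinder of radius r and height h for parameters 0 <= u,v <= 1;
   the texture coordinate (s,t) is taken to be (u,v). */
void cylinder_map(float r, float h, float u, float v, float p[3], float st[2])
{
    float theta = 2.0f * 3.14159265f * u;
    p[0] = r * cosf(theta);   /* x */
    p[1] = r * sinf(theta);   /* y */
    p[2] = v * h;             /* z */
    st[0] = u;                /* s */
    st[1] = v;                /* t */
}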
Box Mapping • Easy to use with simple orthographic projection • Also used in environment maps
Second Mapping • Map from intermediate object to actual object • Normals from intermediate to actual • Normals from actual to intermediate • Vectors from center of intermediate (figure: intermediate and actual objects)
Aliasing • Point sampling of the texture can lead to aliasing errors (figure: point samples in u,v (or x,y,z) space miss the blue stripes in texture space)
Area Averaging • A better but slower option is to use area averaging • Note that the preimage of a pixel in texture space is curved (figure: a pixel and its preimage)
Basic Strategy Three steps to applying a texture • specify the texture • read or generate image • assign to texture • enable texturing • assign texture coordinates to vertices • Proper mapping function is left to application • specify texture parameters • wrapping, filtering
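A minimal fixed-function sketch of these steps (the image pointer, the 256 x 256 size, and the parameter choices are illustrative):

GLuint tex;
glGenTextures(1, &tex);                          /* step 1: specify the texture        */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0,
             GL_RGB, GL_UNSIGNED_BYTE, image);   /* image read or generated by the app */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);     /* step 3: wrapping  */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* step 3: filtering */
glEnable(GL_TEXTURE_2D);                         /* enable texturing                   */
/* step 2: texture coordinates are assigned per vertex with glTexCoord2f (see the typical code slide) */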
Texture Mapping (figure: an image in (s,t) texture space is mapped onto geometry in (x,y,z) space and then to the display)
Mapping a Texture • Based on parametric texture coordinates • glTexCoord*() specified at each vertex (figure: triangle abc in texture space, with (s, t) = (0.2, 0.8), (0.4, 0.2), (0.8, 0.4) at its corners, mapped to triangle ABC in object space)
Typical Code glBegin(GL_POLYGON); glColor3f(r0, g0, b0); //if no shading used glNormal3f(u0, v0, w0); // if shading used glTexCoord2f(s0, t0); glVertex3f(x0, y0, z0); glColor3f(r1, g1, b1); glNormal3f(u1, v1, w1); glTexCoord2f(s1, t1); glVertex3f(x1, y1, z1); . . glEnd();
Magnification and Minification • More than one texel can cover a pixel (minification) or more than one pixel can cover a texel (magnification) • Can use point sampling (nearest texel) or linear filtering (2 x 2 filter) to obtain texture values (figure: texture vs. polygon footprint in the magnification and minification cases)
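A minimal sketch of selecting these filters in the fixed-function API (the mipmapped minification filter is one common choice shown here as an illustration):

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);  /* point sampling, or GL_LINEAR for the 2 x 2 filter */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, 256, 256,
                  GL_RGB, GL_UNSIGNED_BYTE, image);                  /* build the mipmap pyramid */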
Environment mapping • Environmental mapping is a way to create the appearance of highly reflective surfaces without ray tracing, which would require global calculations • Examples: The Abyss, Terminator 2 • Is a form of texture mapping • Supported by OpenGL and Cg
Mapping to a sphere (figure: view vector V, surface normal N, and reflection vector R used to index the sphere map)
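Conceptually the reflection vector is r = 2(n · v)n - v for a unit normal n and unit vector v from the surface point toward the viewer. A minimal sketch of OpenGL's built-in sphere-map coordinate generation, which derives the (s,t) coordinates from this reflection per vertex:

glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_2D);   /* the bound 2D texture holds the environment image */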
Issues • Must assume environment is very far from object (equivalent to the difference between near and distant lights) • Object cannot be concave (no self reflections possible) • No reflections between objects • Need a reflection map for each object • Need a new map if viewer moves
Modeling an Orange • Consider modeling an orange • Texture map a photo of an orange onto a surface • Captures dimples • Will not be correct if we move viewer or light • We have shades of dimples rather than their correct orientation • Ideally we need to perturb normal across surface of object and compute a new color at each interior point
Bump Mapping (Blinn) • Consider a smooth surface (figure: point p on the surface with normal n)
Rougher Version (figure: displaced point p' with perturbed normal n', compared with the original point p)
Displacement Function • p' = p + d(u,v) n • d(u,v) is the bump or displacement function, with |d(u,v)| << 1
Approximating the Normal • n' = p'u × p'v ≈ n + (∂d/∂u) n × pv + (∂d/∂v) n × pu • The vectors n × pv and n × pu lie in the tangent plane • Hence the normal is displaced in the tangent plane • Must precompute the arrays ∂d/∂u and ∂d/∂v • Finally, we perturb the normal during shading
Image Processing • Suppose that we start with a function d(u,v) • We can sample it to form an array D = [dij] • Then ∂d/∂u ≈ dij – di-1,j and ∂d/∂v ≈ dij – di,j-1 • Embossing: multipass approach using accumulation buffer
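A minimal sketch (the array names and the 256 x 256 resolution are illustrative) of precomputing the two difference arrays from a sampled bump function, which are then used to perturb the normal during shading:

#define N 256          /* resolution at which d(u,v) is sampled      */
float D[N][N];         /* samples dij of the bump function           */
float du[N][N];        /* differences approximating dd/du            */
float dv[N][N];        /* differences approximating dd/dv            */

void bump_differences(void)
{
    for (int i = 1; i < N; i++)
        for (int j = 1; j < N; j++) {
            du[i][j] = D[i][j] - D[i-1][j];
            dv[i][j] = D[i][j] - D[i][j-1];
        }
}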
Opacity and Transparency • Opaque surfaces permit no light to pass through • Transparent surfaces permit all light to pass • Translucent surfaces pass some light • translucency = 1 – opacity (α); an opaque surface has α = 1
Physical Models • Dealing with translucency in a physically correct manner is difficult due to • the complexity of the internal interactions of light and matter • the limitations of a pipeline renderer
Writing Model • Use the A component of RGBA (or RGBα) color to store opacity • During rendering we can expand our writing model to use RGBA values (figure: the source component, scaled by a source blending factor, is blended with the destination component in the color buffer, scaled by a destination blending factor)
Blending Equation • We can define source and destination blending factors for each RGBA component: s = [sr, sg, sb, sa], d = [dr, dg, db, da] • Suppose that the source and destination colors are b = [br, bg, bb, ba] and c = [cr, cg, cb, ca] • Blend as c' = [br sr + cr dr, bg sg + cg dg, bb sb + cb db, ba sa + ca da]
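A minimal sketch of selecting these factors in OpenGL; the "over" combination below (source alpha, one minus source alpha) is just one common choice:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   /* s = source alpha, d = 1 - source alpha */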
Fog • We can composite with a fixed color and have the blending factors depend on depth • Simulates a fog effect • Blend source color Cs and fog color Cf by Cs' = f Cs + (1 - f) Cf • f is the fog factor • Exponential • Gaussian • Linear (depth cueing)
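For reference (this is the standard OpenGL convention rather than something stated on the slide), the fog factor is computed per fragment from the eye-space distance z: GL_LINEAR gives f = (end - z)/(end - start), GL_EXP gives f = exp(-density · z), and GL_EXP2 gives f = exp(-(density · z)^2), with f clamped to [0, 1].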
OpenGL Fog Functions
GLfloat fcolor[4] = {……};       /* fog color */
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_EXP);    /* exponential fog */
glFogf(GL_FOG_DENSITY, 0.5);
glFogfv(GL_FOG_COLOR, fcolor);