
Advanced Graphics Lecture Eight



Presentation Transcript


  1. Advanced Graphics Lecture Eight “The Shader knows…” Alex Benton, University of Cambridge – A.Benton@damtp.cam.ac.uk Supported in part by Google UK, Ltd

  2. What is… the shader? Lecture one: Local space → World space → Viewing space → 3D screen space → 2D display space. Closer to the truth (but still a terrible oversimplification): Process vertices (local space → world space → viewing space) → Clipping, projection, backface culling (→ 3D screen space) → Process pixels → 2D display space: plot pixels.

  3. What is… the shader? Examples at the vertex-processing stage: computing diffuse shading color per vertex; transforming vertex position; transforming texture co-ordinates. Examples at the pixel-processing stage: interpolating texture coordinates across the polygon; interpolating the normal for specular lighting; textured normal-mapping. “Wouldn’t it be great if the user could install their own code into the hardware to choose these effects?” (The pipeline diagram from the previous slide, local space through 2D display space, is repeated here.)

  4. What is… the shader? • The next generation: Introduce shaders, programmable logical units on the GPU which can replace the “fixed” functionality of OpenGL with user-generated code. • By installing custom shaders, the user can now completely override the existing implementation of core per-vertex and per-pixel behavior.

  5. Shader gallery I Above: Demo of Microsoft’s XNA game platform Right: Product demos by nvidia (top) and Radeon (bottom)

  6. What are we targeting? • OpenGL shaders give the user control over each vertex and each fragment (each pixel or partial pixel) interpolated between vertices. • After vertices are processed, polygons are rasterized. During rasterization, values like position, color, depth, and others are interpolated across the polygon. The interpolated values are passed to each pixel fragment.

  7. What can you override? Per vertex: • Vertex transformation • Normal transformation and normalization • Texture coordinate generation • Texture coordinate transformation • Lighting • Color material application. Per fragment (pixel): • Operations on interpolated values • Texture access • Texture application • Fog • Color summation. Optionally: • Pixel zoom • Scale and bias • Color table lookup • Convolution

  8. Think parallel • Shaders are compiled from within your code • They used to be written in assembler • Today they’re written in high-level languages • They execute on the GPU • GPUs typically have multiple processing units • That means that multiple shaders execute in parallel! • At last, we’re moving away from the purely-linear flow of early “C” programming models…

  9. What’re we talking here? • There are several popular languages for describing shaders, such as: • HLSL, the High Level Shading Language • Author: Microsoft • DirectX 8+ • Cg • Author: nvidia • GLSL, the OpenGL Shading Language • Author: the Khronos Group, a self-sponsored group of industry affiliates (ATI, 3DLabs, etc) • Least advanced; most portable and supported; the topic of this lecture

  10. OpenGL programmable processors (not to scale) Figure 2.1, p. 39, OpenGL Shading Language, Second Edition, Randi Rost, Addison Wesley, 2006. Digital image scanned by Google Books.

  11. Vertex processor – inputs and outputs. Inputs, per-vertex attributes: color, normal, position, texture coordinates, etc.; custom variables. Inputs, uniform state: modelview matrix, material, lighting, etc.; custom variables; texture data. Outputs: color, position, texture coordinates, custom variables.

  12. Fragment processor – inputs and outputs. Inputs, interpolated per fragment: color, texture coordinates, fragment coordinates, front-facing flag. Inputs, uniform state: modelview matrix, material, lighting, etc.; custom variables; texture data. Outputs: fragment color, fragment depth.

  13. GLSL Data Types • Three basic data types in GLSL: • float, bool, int • float and int behave just like in C, and bool can take on the values true or false. • Vectors with 2, 3, or 4 components, declared as: • vec{2,3,4}: a vector of 2, 3, or 4 floats • bvec{2,3,4}: bool vector • ivec{2,3,4}: vector of integers • Square matrices 2x2, 3x3 and 4x4: • mat2 • mat3 • mat4

  14. GLSL Data Types • A set of special types is available for texture access, called samplers: • sampler1D - for 1D textures • sampler2D - for 2D textures • sampler3D - for 3D textures • samplerCube - for cube map textures • Arrays can be declared using the same syntax as in C, but can't be initialized when declared. Accessing an array's elements is done as in C. • Structures are supported with exactly the same syntax as C:
struct dirlight {
  vec3 direction;
  vec3 color;
};
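To make these types concrete, here is a small illustrative fragment shader (not from the original slides; the uniform and varying names are invented) combining a sampler, an array, and the dirlight struct above:

```glsl
struct dirlight {
    vec3 direction;   // assumed normalized, pointing from the light
    vec3 color;
};

uniform sampler2D baseMap;    // a 2D texture
uniform dirlight lights[2];   // arrays cannot be initialized at declaration

varying vec2 texCoord;
varying vec3 normal;

void main() {
    vec4 base = texture2D(baseMap, texCoord);  // texture access via a sampler
    vec3 lit = vec3(0.0);
    for (int i = 0; i < 2; i++)                // array indexing, as in C
        lit += lights[i].color *
               max(dot(normalize(normal), -lights[i].direction), 0.0);
    gl_FragColor = vec4(base.rgb * lit, base.a);
}
```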

  15. GLSL Variables • Declaring variables in GLSL is mostly the same as in C • Differences: GLSL relies heavily on constructors for initialization and type casting • GLSL is pretty flexible when initializing variables using other variables
float a, b;          // two floats (yes, the comments are like in C)
int c = 2;           // c is initialized with 2
bool d = true;       // d is true
float b = 2;         // incorrect: there is no automatic type casting
float e = (float)2;  // incorrect: type casting requires constructors
int a = 2;
float c = float(a);  // correct: c is 2.0
vec3 f;                        // declaring f as a vec3
vec3 g = vec3(1.0,2.0,3.0);    // declaring and initializing g
vec2 a = vec2(1.0,2.0);
vec2 b = vec2(3.0,4.0);
vec4 c = vec4(a,b);            // c = vec4(1.0,2.0,3.0,4.0)
vec2 g = vec2(1.0,2.0);
float h = 3.0;
vec3 j = vec3(g,h);            // j = vec3(1.0,2.0,3.0)

  16. GLSL Variables • Matrices also follow this pattern • The declaration and initialization of structures is demonstrated below
mat4 m = mat4(1.0);              // initializes the diagonal of the matrix with 1.0
vec2 a = vec2(1.0,2.0);
vec2 b = vec2(3.0,4.0);
mat2 n = mat2(a,b);              // matrices are assigned in column-major order
mat2 k = mat2(1.0,0.0,1.0,0.0);  // all elements are specified
struct dirlight {                // type definition
  vec3 direction;
  vec3 color;
};
dirlight d1;
dirlight d2 = dirlight(vec3(1.0,1.0,0.0), vec3(0.8,0.8,0.4));

  17. GLSL Variables • Accessing a vector can be done using letters as well as standard C selectors. • One can use the letters x,y,z,w for vector components; r,g,b,a for color components; and s,t,p,q for texture coordinates. • As for structures, the names of the elements of the structure can be used as in C
vec4 a = vec4(1.0,2.0,3.0,4.0);
float posX = a.x;
float posY = a[1];
vec2 posXY = a.xy;
float depth = a.w;
d1.direction = vec3(1.0,1.0,1.0);

  18. GLSL Variable Qualifiers • Qualifiers give a special meaning to the variable. In GLSL the following qualifiers are available: • const - the declaration is of a compile-time constant • attribute - (only used in vertex shaders, read-only in the shader) global variables that may change per vertex, passed from the OpenGL application to vertex shaders • uniform - (used in both vertex and fragment shaders, read-only in both) global variables that may change per primitive (may not be set inside a glBegin/glEnd pair) • varying - used for interpolated data between a vertex shader and a fragment shader. Available for writing in the vertex shader, and read-only in a fragment shader.
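As a sketch (all names invented for illustration), the four qualifiers might appear together in one vertex shader like this:

```glsl
const float PI = 3.14159265;  // compile-time constant

attribute float height;       // per vertex, from the application (vertex shaders only)
uniform vec4 surfaceColor;    // per primitive at most, from the application
varying vec4 shadedColor;     // written here, interpolated, read by the fragment shader

void main() {
    // displace each vertex by its per-vertex 'height' attribute
    vec4 displaced = gl_Vertex + vec4(0.0, height, 0.0, 0.0);
    gl_Position = gl_ModelViewProjectionMatrix * displaced;
    shadedColor = surfaceColor;
}
```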

  19. GLSL Functions • As in C, a shader is structured in functions. Each type of shader must at least have a main function, declared with the following syntax: void main() • User-defined functions may also be defined. • As in C, a function may have a return value, and uses the return statement to pass out its result. A function can be void. The return type can be any type except an array. • The parameters of a function can have the following qualifiers: • in - for input parameters • out - for outputs of the function. The return statement is also an option for sending back the result of a function. • inout - for parameters that are both input and output of a function • If no qualifier is specified, it is considered to be in by default.
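A short illustration of the parameter qualifiers (the functions here are invented for this example):

```glsl
// out: results are copied back to the caller when the function returns
void polar(in vec2 p, out float r, out float theta) {
    r = length(p);
    theta = atan(p.y, p.x);
}

// inout: the parameter is copied in at the start and back out at the end
void scaleVec(inout vec3 v, float s) {  // s has no qualifier, so it defaults to 'in'
    v *= s;
}
```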

  20. How do the shaders communicate? There are three types of shader parameter in GLSL: • Uniform parameters • Set throughout execution • Ex: surface color • Attribute parameters • Set per vertex • Ex: local tangent • Varying parameters • Passed from vertex processor to fragment processor • Ex: transformed normal (Data flow: attributes feed the vertex processor; varying parameters pass from the vertex processor to the fragment processor; uniform parameters feed both processors.)

  21. What happens when you install a shader? • All the fixed functionality (see slide six) is overridden. • It’s up to you to replace it! • You’ll have to transform each vertex into viewing coordinates manually. • You’ll have to light each vertex manually. • You’ll have to apply the current interpolated color to each fragment manually. • The installed shader replaces all OpenGL fixed functionality for all renders until you remove it.

  22. GLSL Functions • A few final notes: • A function can be overloaded as long as the list of parameters is different. • Recursion behavior is undefined by the specification. • Finally, let’s look at an example:
vec4 toonify(in float intensity) {
  vec4 color;
  if (intensity > 0.98)
    color = vec4(0.8,0.8,0.8,1.0);
  else if (intensity > 0.5)
    color = vec4(0.4,0.4,0.8,1.0);
  else if (intensity > 0.25)
    color = vec4(0.2,0.2,0.4,1.0);
  else
    color = vec4(0.1,0.1,0.1,1.0);
  return color;
}

  23. Shader gallery II Above: Kevin Boulanger (PhD thesis, “Real-Time Realistic Rendering of Nature Scenes with Dynamic Lighting”, 2005) Above: Ben Cloward (“Car paint shader”)

  24. GLSL Varying Variables • Define varying variables in both vertex and fragment shaders • Varying variables must be written in the vertex shader • Varying variables can only be read in fragment shaders • Example declaration (identical in both shaders):
varying vec3 normal;
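Putting those rules together, a minimal illustrative vertex/fragment pair sharing one varying might look like:

```glsl
// Vertex shader: declares and writes the varying
varying vec3 normal;
void main() {
    normal = gl_NormalMatrix * gl_Normal;  // written per vertex
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

```glsl
// Fragment shader: declares the same varying, but only reads it
varying vec3 normal;
void main() {
    vec3 n = normalize(normal);  // interpolated value, renormalized per fragment
    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);
}
```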

  25. More Setup for GLSL - Uniform Variables • Uniform variables are one way for your C program to communicate with your shaders (e.g. how much time has passed since the bullet was shot?) • A uniform variable can have its value changed per primitive only, i.e., its value can't be changed between a glBegin / glEnd pair. • Uniform variables are suitable for values that remain constant along a primitive, frame, or even the whole scene. • Uniform variables can be read (but not written) in both vertex and fragment shaders.

  26. More Setup for GLSL - Uniform Variables • A sample: assume that a shader with the following variables is being used:
uniform float specIntensity;
uniform vec4 specColor;
uniform float t[2];
uniform vec4 colors[3];
In the OpenGL application, the code for setting the variables could be:
GLint loc1, loc2, loc3, loc4;
float specIntensity = 0.98;
float sc[4] = {0.8,0.8,0.8,1.0};
float threshold[2] = {0.5,0.25};
float colors[12] = {0.4,0.4,0.8,1.0, 0.2,0.2,0.4,1.0, 0.1,0.1,0.1,1.0};
loc1 = glGetUniformLocationARB(p,"specIntensity");
glUniform1fARB(loc1, specIntensity);
loc2 = glGetUniformLocationARB(p,"specColor");
glUniform4fvARB(loc2, 1, sc);
loc3 = glGetUniformLocationARB(p,"t");
glUniform1fvARB(loc3, 2, threshold);
loc4 = glGetUniformLocationARB(p,"colors");
glUniform4fvARB(loc4, 3, colors);

  27. More Setup for GLSL - Attribute Variables • Attribute variables also allow your C program to communicate with shaders • Attribute variables can be updated at any time, but can only be read (not written) in a vertex shader. • Attribute variables pertain to vertex data, and thus are not useful in fragment shaders • To set their values, (just like uniform variables) it is necessary to get the location in memory of the variable. • Note that the program must be linked previously, and some drivers may require the program to be in use. GLint glGetAttribLocationARB(GLhandleARB program, char *name); • Parameters: • program - the handle to the program • name - the name of the variable

  28. More Setup for GLSL - Attribute Variables • A sample snippet, assuming the vertex shader has: attribute float height; In the main OpenGL program, we can do the following:
loc = glGetAttribLocationARB(p,"height");
glBegin(GL_TRIANGLE_STRIP);
glVertexAttrib1fARB(loc, 2.0); glVertex2f(-1, 1);
glVertexAttrib1fARB(loc, 2.0); glVertex2f( 1, 1);
glVertexAttrib1fARB(loc,-2.0); glVertex2f(-1,-1);
glVertexAttrib1fARB(loc,-2.0); glVertex2f( 1,-1);
glEnd();

  29. Shader sample one – ambient lighting
// Vertex Shader
void main() {
  gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
// Fragment Shader
void main() {
  gl_FragColor = vec4(0.2, 0.6, 0.8, 1.0);
}

  30. Shader sample one – ambient lighting

  31. Shader sample one – ambient lighting • Notice the C-style syntax • void main() { … } • The vertex shader uses two standard inputs, gl_Vertex and the model-view-projection matrix; and one standard output, gl_Position. • The line gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; applies the model-view-projection matrix to calculate the correct vertex position in perspective coordinates. • The fragment shader applies basic ambient lighting, setting its one standard output, gl_FragColor, to a fixed value.

  32. Shader sample two – diffuse lighting
// Vertex Shader
varying vec3 Norm;
varying vec3 ToLight;
void main() {
  gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
  Norm = gl_NormalMatrix * gl_Normal;
  ToLight = vec3(gl_LightSource[0].position - (gl_ModelViewMatrix * gl_Vertex));
}
// Fragment Shader
varying vec3 Norm;
varying vec3 ToLight;
void main() {
  const vec3 DiffuseColor = vec3(0.2, 0.6, 0.8);
  float diff = clamp(dot(normalize(Norm), normalize(ToLight)), 0.0, 1.0);
  gl_FragColor = vec4(DiffuseColor * diff, 1.0);
}

  33. Shader sample two – diffuse lighting

  34. Shader sample two – diffuse lighting • This example uses varying parameters to pass info from the vertex shader to the fragment shader. • The varying parameters Norm and ToLight are automatically linearly interpolated between vertices across every polygon. • The interpolated Norm represents the normal at that exact point on the surface. • The exact diffuse illumination is calculated from the local normal. • This is the Phong shading technique (usually seen for specular highlights) applied to diffuse lighting.

  35. Shader sample two – diffuse lighting • Notice the different matrix transforms used in this example: gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; Norm = gl_NormalMatrix * gl_Normal; ToLight = vec3(gl_LightSource[0].position - (gl_ModelViewMatrix * gl_Vertex)); • The gl_ModelViewProjectionMatrix transforms a vertex from local coordinates to perspective coordinates for display, whereas the gl_ModelViewMatrix transforms a point from local coordinates to eye coordinates. We use eye coordinates because lights are (usually) defined in eye coordinates. • The gl_NormalMatrix transforms a normal from local coordinates to eye coordinates; it holds the inverse of the transpose of the upper 3x3 submatrix of the model-view transform.

  36. GLSL – design goals • GLSL was designed with the following in mind: • Work well with OpenGL • Shaders should be optional extras, not required. • Fit into the design model of “set the state first, then render the data in the context of the state” • Support upcoming flexibility • Be hardware-independent • The GLSL folks, as a broad consortium, are far more invested in hardware-independence than, say, nvidia. • That said, they’ve only kinda nailed it: I get different compiler behavior and different crash-handling between my high-end home nVidia chip and my laptop Intel x3100. • Support inherent parallelization • Keep it streamlined, small and simple

  37. GLSL • The language design in GLSL is strongly based on ANSI C, with some C++ added. • There is a preprocessor--#define, etc! • Basic types: int, float, bool • No double-precision float • Vectors and matrices are standard: vec2, mat2 = 2x2; vec3, mat3 = 3x3; vec4, mat4 = 4x4 • Texture samplers: sampler1D, sampler2D, etc are used to sample multidimensional textures • New instances are built with constructors, a la C++ • Functions can be declared before they are defined, and operator overloading is supported.

  38. GLSL • Some differences from C/C++: • No pointers, strings, chars; no unions, enums; no bytes, shorts, longs; no unsigned. No switch() statements. • There is no implicit casting (type promotion): float foo = 1; fails because you can’t implicitly cast int to float. • Explicit type casts are done by constructor: vec3 foo = vec3(1.0, 2.0, 3.0); vec2 bar = vec2(foo); // Drops foo.z • Function parameters are labeled as in (default), out, or inout. • Functions are called by value-return, meaning that values are copied into and out of parameters at the start and end of calls.

  39. The GLSL API To install and use a shader in OpenGL: • Create one or more empty shader objects with glCreateShader. • Load source code, in text, into the shader with glShaderSource. • Compile the shader with glCompileShader. • The compiler cannot detect every program that would cause a crash. (And if you can prove otherwise, see me after class.) • Create an empty program object with glCreateProgram. • Bind your shaders to the program with glAttachShader. • Link the program (ahh, the ghost of C!) with glLinkProgram. • Register your program for use with glUseProgram. (Diagram: vertex shader and fragment shader → compiler → program → linker → OpenGL.)
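The steps above might be sketched in C as follows (a sketch only, not a complete program: it assumes an OpenGL 2.0 context is already current, error checking is omitted, and vertexSrc / fragmentSrc are assumed to already hold the shader source text):

```c
/* vertexSrc and fragmentSrc: null-terminated GLSL source strings (assumed) */
GLuint vs = glCreateShader(GL_VERTEX_SHADER);    /* 1. empty shader objects  */
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(vs, 1, &vertexSrc, NULL);         /* 2. load source text      */
glShaderSource(fs, 1, &fragmentSrc, NULL);
glCompileShader(vs);                             /* 3. compile               */
glCompileShader(fs);
GLuint prog = glCreateProgram();                 /* 4. empty program object  */
glAttachShader(prog, vs);                        /* 5. bind shaders to it    */
glAttachShader(prog, fs);
glLinkProgram(prog);                             /* 6. link                  */
glUseProgram(prog);                              /* 7. install for rendering */
```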

  40. Shader sample three – Gooch shading
// Vertex shader (from the Orange Book)
varying float NdotL;
varying vec3 ReflectVec;
varying vec3 ViewVec;
void main() {
  vec3 ecPos = vec3(gl_ModelViewMatrix * gl_Vertex);
  vec3 tnorm = normalize(gl_NormalMatrix * gl_Normal);
  vec3 lightVec = normalize(gl_LightSource[0].position.xyz - ecPos);
  ReflectVec = normalize(reflect(-lightVec, tnorm));
  ViewVec = normalize(-ecPos);
  NdotL = (dot(lightVec, tnorm) + 1.0) * 0.5;
  gl_Position = ftransform();
  gl_FrontColor = vec4(vec3(0.75), 1.0);
  gl_BackColor = vec4(0.0);
}
// Fragment shader
vec3 SurfaceColor = vec3(0.75, 0.75, 0.75);
vec3 WarmColor = vec3(0.1, 0.4, 0.8);
vec3 CoolColor = vec3(0.6, 0.0, 0.0);
float DiffuseWarm = 0.45;
float DiffuseCool = 0.045;
varying float NdotL;
varying vec3 ReflectVec;
varying vec3 ViewVec;
void main() {
  vec3 kcool = min(CoolColor + DiffuseCool * vec3(gl_Color), 1.0);
  vec3 kwarm = min(WarmColor + DiffuseWarm * vec3(gl_Color), 1.0);
  vec3 kfinal = mix(kcool, kwarm, NdotL) * gl_Color.a;
  vec3 nreflect = normalize(ReflectVec);
  vec3 nview = normalize(ViewVec);
  float spec = max(dot(nreflect, nview), 0.0);
  spec = pow(spec, 32.0);
  gl_FragColor = vec4(min(kfinal + spec, 1.0), 1.0);
}

  41. Shader sample three – Gooch shading

  42. Shader sample three – Gooch shading • Gooch shading is not a shader technique per se. • It was designed by Amy and Bruce Gooch to replace photorealistic lighting with a lighting model that highlights structural and contextual data. • They use the diffuse term of the conventional lighting equation to choose a map between ‘cool’ and ‘warm’ colors. • This is in contrast to conventional illumination, where diffuse lighting simply scales the underlying surface color. • This, combined with edge-highlighting through a second render pass, creates models which look more like engineering schematic diagrams. Image source: “A Non-Photorealistic Lighting Model For Automatic Technical Illustration”, Gooch, Gooch, Shirley and Cohen (1998). Compare the Gooch shader, above, to the Phong shader (right).

  43. Shader sample three – Gooch shading • In the vertex shader source, notice the use of the built-in ability to distinguish front faces from back faces: gl_FrontColor = vec4(vec3(0.75), 1.0); gl_BackColor = vec4(0.0); This supports distinguishing front faces (which should be shaded smoothly) from the edges of back faces (which will be drawn in heavy black). • In the fragment shader source, this is used to choose the weighted diffuse color by scaling with the alpha component: vec3 kfinal = mix(kcool, kwarm, NdotL) * gl_Color.a; Here mix() is a GLSL method which returns the linear interpolation between kcool and kwarm. The weighting factor (‘t’ in the interpolation) is NdotL, the diffuse lighting value.

  44. Ivory – vertex shader
uniform vec4 lightPos;
varying vec3 normal;
varying vec3 lightVec;
varying vec3 viewVec;
void main() {
  gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
  vec4 vert = gl_ModelViewMatrix * gl_Vertex;
  normal = gl_NormalMatrix * gl_Normal;
  lightVec = vec3(lightPos - vert);
  viewVec = -vec3(vert);
}

  45. Ivory – fragment shader
varying vec3 normal;
varying vec3 lightVec;
varying vec3 viewVec;
void main() {
  vec3 norm = normalize(normal);
  vec3 L = normalize(lightVec);
  vec3 V = normalize(viewVec);
  vec3 halfAngle = normalize(L + V);
  float NdotL = dot(L, norm);
  float NdotH = clamp(dot(halfAngle, norm), 0.0, 1.0);
  // "Half-Lambert" technique for a more pleasing diffuse term
  float diffuse = 0.5 * NdotL + 0.5;
  float specular = pow(NdotH, 64.0);
  float result = diffuse + specular;
  gl_FragColor = vec4(result);
}

  46. Antialiasing on the GPU • Hardware antialiasing can dramatically improve image quality. • The naïve approach is simply to supersample the image • This is easier in shaders than it is in standard software • But it really just postpones the problem. • Several GPU-based antialiasing solutions have been found. • Eric Chan published an elegant polygon-based antialiasing approach in 2004 which uses the GPU to prefilter the edges of a model and then blends the filtered edges into the original polygonal surface.

  47. Antialiasing on the GPU • One clever form of antialiasing is adaptive analytic prefiltering. • The precision with which an edge is rendered to the screen is dynamically refined based on the rate at which the function defining the edge changes with respect to the surrounding pixels on the screen. • This is supported in the shader language by the methods dFdx(F) and dFdy(F), which return the derivative of some variable F with respect to screen-space X and Y. • These are commonly used in choosing the filter width for antialiasing procedural textures. [Figure: (A) jagged edges visible in the box function of a procedural stripe texture; (B) fixed-width averaging blends adjacent samples in texture space, but aliasing still occurs at the top, where adjacency in texture space does not align with adjacency in pixel space; (C) adaptive analytic prefiltering smoothly samples both areas. Source: Figure 17.4, p. 440, OpenGL Shading Language, Second Edition, Randi Rost, Addison Wesley, 2006. Original image by Bert Freudenberg, University of Magdeburg, 2002.]
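As an illustration (not from the original slides), a fragment shader can use the built-in fwidth(F), which combines dFdx and dFdy, to pick a pixel-sized filter width for a procedural stripe; the varying name here is invented:

```glsl
varying float v;  // hypothetical stripe coordinate, e.g. one texture coordinate

void main() {
    float f = fract(v * 10.0);   // ten stripes per unit of v
    float w = fwidth(f);         // ~ abs(dFdx(f)) + abs(dFdy(f)): how fast f
                                 // changes between neighboring pixels
    // replace the hard step at 0.5 with a transition roughly one pixel wide
    float t = smoothstep(0.5 - w, 0.5 + w, f);
    gl_FragColor = vec4(vec3(t), 1.0);
}
```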

  48. Particle systems on the GPU • Shaders extend the use of texture memory dramatically. Shaders can write to texture memory, and textures are no longer limited to being a two-dimensional plane of RGB(A). • A particle system can be represented by storing a position and velocity for every particle. • A fragment shader can render a particle system entirely in hardware by using texture memory to store and evolve particle data. Image by Michael Short
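A sketch of the idea (all names invented for illustration): one fragment shader pass reads each particle's state from textures and writes its next position into a same-sized render target:

```glsl
uniform sampler2D positionMap;  // xyz position of each particle, one per texel
uniform sampler2D velocityMap;  // xyz velocity of each particle, one per texel
uniform float dt;               // timestep, supplied by the application

varying vec2 texCoord;          // identifies which particle this fragment is

void main() {
    vec3 p = texture2D(positionMap, texCoord).xyz;
    vec3 v = texture2D(velocityMap, texCoord).xyz;
    gl_FragColor = vec4(p + v * dt, 1.0);  // Euler step, rendered back to texture
}
```

A second pass would update the velocity texture similarly (e.g. applying gravity), with the application ping-ponging the textures between frames.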
