
  1. UBI 516 Advanced Computer Graphics Visible Surface Detection Aydın Öztürk ozturk@ube.ege.edu.tr http://www.ube.ege.edu.tr/~ozturk

  2. Review: Rendering Pipeline • Almost finished with the rendering pipeline: • Modeling transformations • Viewing transformations • Projection transformations • Clipping • Scan conversion • We now know everything about how to draw a polygon on the screen, except visible surface detection.

  3. Invisible Primitives • Why might a polygon be invisible? • Polygon outside the field of view • Polygon is backfacing • Polygon is occluded by object(s) nearer the viewpoint • For efficiency reasons, we want to avoid spending work on polygons outside field of view or backfacing • For efficiency and correctness reasons, we need to know when polygons are occluded

  4. View Frustum Clipping • Remove polygons entirely outside the frustum • Note that this includes polygons “behind” the eye (actually behind the near plane) • Pass through polygons entirely inside the frustum • Modify remaining polygons to pass through the portions intersecting the view frustum
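The “modify remaining polygons” step above can be sketched as one Sutherland–Hodgman pass that clips a polygon against a single frustum plane. This is a minimal sketch, not the slides’ own code; the Vtx struct, the function names, and the convention that Ax + By + Cz + D ≥ 0 counts as “inside” are illustrative assumptions:

```c
typedef struct { double x, y, z; } Vtx;

/* Signed value of the plane equation; >= 0 counts as inside here. */
static double side(Vtx p, double A, double B, double C, double D) {
    return A * p.x + B * p.y + C * p.z + D;
}

/* One Sutherland-Hodgman step: clip polygon `in` (n vertices, in order)
   against one plane, writing the result to `out`; returns the new count. */
int clip_plane(const Vtx *in, int n, Vtx *out,
               double A, double B, double C, double D) {
    int m = 0;
    for (int i = 0; i < n; i++) {
        Vtx p = in[i], q = in[(i + 1) % n];
        double dp = side(p, A, B, C, D), dq = side(q, A, B, C, D);
        if (dp >= 0) out[m++] = p;            /* keep inside vertex */
        if ((dp >= 0) != (dq >= 0)) {         /* edge crosses the plane */
            double t = dp / (dp - dq);        /* intersection parameter */
            out[m++] = (Vtx){ p.x + t * (q.x - p.x),
                              p.y + t * (q.y - p.y),
                              p.z + t * (q.z - p.z) };
        }
    }
    return m;
}
```

Running this once per frustum plane implements the full “pass through / modify” behavior; polygons entirely outside a plane simply come back with zero vertices.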

  5. View Frustum Clipping • Canonical View Volumes • Remember how we defined cameras • Eye point, lookat point, v-up • Orthographic | Perspective • Remember how we define viewport • Width, height (or field of view, aspect ratio) • These two things define rendered volume of space • Standardize the height, length, and width of view volumes

  6. View Frustum Clipping • Canonical View Volumes

  7. Review: Rendering Pipeline • Clipping equations are simplified • Perspective and orthographic (parallel) projections have consistent representations

  8. Perspective Viewing Transformation • Remember the viewing transformation for perspective projection • Translate eye point to origin • Rotate such that projection vector matches –z axis • Rotate such that up vector matches y • Add to this a final step where we scale the volume

  9. Canonical Perspective Volume • Scaling

  10. Clipping • Because both camera types are represented by same viewing volume • Clipping is simplified even further

  11. Visible Surface Detection Many algorithms have been developed for visible surface detection. ● Some methods involve more processing time. ● Some methods require more memory. ● Some apply only to special types of objects.

  12. Classification of Visible-Surface Detection Algorithms They are classified according to whether they deal with object definitions or with their projected images. ● Object-space methods. ● Image-space methods. Most visible-surface algorithms use the image-space method.

  13. Back-Face Detection • Most objects in scene are typically “solid”

  14. Back-Face Detection (cont.) • Polygons whose surface normals point away from the camera are always occluded. • Note: back-face detection alone doesn’t solve the hidden-surface problem!

  15. Back-Face Detection • This test is based on the inside-outside test: a point (x, y, z) is “inside” a polygon surface with plane parameters A, B, C, D if Ax + By + Cz + D &lt; 0. • We can simplify this test by considering the normal vector N to a polygon surface, which has Cartesian components (A, B, C). • If V is a vector in the viewing direction from the eye, then this polygon is a back face if V●N > 0. • If the object descriptions have been converted to projection coordinates and the viewing direction is parallel to the zv axis, then V = (0, 0, Vz) and V●N = VzC, so we only need to consider the sign of C.
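The V●N sign test above can be written directly. A minimal sketch; the Vec3 type and the function names are illustrative, not from the slides:

```c
typedef struct { double x, y, z; } Vec3;

/* Dot product of two vectors. */
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Back-face test from the slide: the polygon with surface normal n is a
   back face for viewing vector v exactly when v . n > 0. */
int is_backface(Vec3 v, Vec3 n) { return dot(v, n) > 0.0; }
```

With the viewer looking along −zv, v = (0, 0, −1), so the test collapses to checking the sign of the normal’s C component, as the slide notes.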

  16. Depth-Buffer (z-Buffer) Method • This method compares surface depths at each pixel position on the projection plane. • Each surface is processed separately, one point at a time across the surface. • Example: of three overlapping surfaces S1, S2, S3 along the zv axis, surface S1 is closest to the view plane, so its surface intensity value at (x, y) is saved.

  17. Steps for Depth-Buffer (z-Buffer) Method • Initialize the depth buffer and refresh buffer such that for all buffer positions (x, y): depth(x, y) = 0, refresh(x, y) = Ibackground

  18. Steps for Depth-Buffer (z-Buffer) Method (Cont.) • For each position on each polygon surface, compare depth values to previously stored values in the depth buffer to determine visibility: ● Calculate the depth z for each (x, y) position on the polygon. ● If z > depth(x, y), then set depth(x, y) = z, refresh(x, y) = Isurf(x, y), where Ibackground is the background intensity value and Isurf(x, y) is the projected intensity value for the surface at (x, y).
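The two steps above (initialize, then compare-and-update per pixel) can be sketched as follows, using the slides’ convention that depth starts at 0 and a larger z means closer to the view plane. The buffer dimensions and the use of integer surface ids in place of intensities are illustrative assumptions:

```c
#define W 4
#define H 4

/* Depth convention from the slides: larger z = closer; buffer starts at 0. */
static double depth[H][W];
static int refresh[H][W];   /* stores a surface id; 0 = background */

/* Step 1: initialize both buffers for every position (x, y). */
void zbuffer_init(void) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            depth[y][x] = 0.0;
            refresh[y][x] = 0;
        }
}

/* Step 2: write one projected surface point only if it is closer
   than whatever is already stored at (x, y). */
void zbuffer_plot(int x, int y, double z, int surf_id) {
    if (z > depth[y][x]) {
        depth[y][x] = z;
        refresh[y][x] = surf_id;
    }
}
```

Surfaces can then be processed in any order; the comparison guarantees the closest one wins at each pixel.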

  19. Depth-Buffer (z-Buffer) Calculations • Depth values for a surface position (x, y) are calculated from the plane equation Ax + By + Cz + D = 0: z = (−Ax − By − D) / C • z-value for the next horizontal position (x + 1, y) along a scan line: z′ = z − A/C • z-value down a left edge of slope m (starting at the top vertex): z′ = z + (A/m + B) / C
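The plane-equation depth calculation above can be checked numerically: stepping x by one changes z by exactly −A/C, which is why the scan-line loop only needs one subtraction per pixel. A minimal sketch with illustrative coefficients:

```c
/* Depth from the polygon's plane equation Ax + By + Cz + D = 0,
   solved for z as on the slide: z = (-A*x - B*y - D) / C. */
double plane_z(double A, double B, double C, double D, double x, double y) {
    return (-A * x - B * y - D) / C;
}
```

In an actual scan-conversion loop one would evaluate plane_z once per span and then apply the increment −A/C, rather than re-evaluating the full expression at every pixel.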

  20. Scan-Line Method • [Figure: scan lines 1, 2 and 3 crossing surfaces S1 and S2, with edge intersection points A–H]

  21. Depth-Sorting Algorithm (Painter’s Algorithm) This method performs the following basic functions: • Surfaces are sorted in order of decreasing depth. • Surfaces are scan converted in order, starting with the surface of greatest depth.

  22. Depth-Sorting Algorithm (Painter’s Algorithm) • Simple approach: render the polygons from back to front, “painting over” previous polygons:
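The back-to-front ordering above can be sketched as a sort on a per-polygon depth key. This is a minimal sketch using a single depth value per polygon, which ignores the overlap tests the full depth-sorting algorithm performs; the Poly struct and names are illustrative:

```c
#include <stdlib.h>

typedef struct { double depth; int id; } Poly;

/* Sort so that greater depth (farther from the viewer) comes first,
   giving a back-to-front painting order. */
static int by_depth_desc(const void *a, const void *b) {
    double da = ((const Poly *)a)->depth, db = ((const Poly *)b)->depth;
    return (da < db) - (da > db);
}

void painter_sort(Poly *polys, int n) {
    qsort(polys, n, sizeof(Poly), by_depth_desc);
}
```

Rendering the sorted list in order then “paints over” farther polygons with nearer ones, exactly as the slide describes.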

  23. Depth-Sorting Algorithm (Painter’s Algorithm)

  24. Depth-Sorting Algorithm (Painter’s Algorithm)

  25. Painter’s Algorithm: Problems • Intersecting polygons present a problem • Even non-intersecting polygons can form a cycle with no valid visibility order:

  26. Analytic Visibility Algorithms • Early visibility algorithms computed the set of visible polygon fragments directly, then rendered the fragments to a display: • Now known as analytic visibility algorithms

  27. Analytic Visibility Algorithms • What is the minimum worst-case cost of computing the fragments for a scene composed of n polygons? • Answer: O(n²)

  28. Analytic Visibility Algorithms • So, for about a decade (late 60s to late 70s) there was intense interest in finding efficient algorithms for hidden surface removal • We’ll talk about two: • Binary Space-Partition (BSP) Trees

  29. Binary Space Partition Trees (1979) • BSP tree: organize all of space (hence partition) into a binary tree • Preprocess: overlay a binary tree on objects in the scene • Runtime: correctly traversing this tree enumerates objects from back to front • Idea: divide space recursively into half-spaces by choosing splitting planes • Splitting planes can be arbitrarily oriented

  30. BSP Trees: Objects

  31. BSP Trees: Objects

  32. BSP Trees: Objects

  33. BSP Trees: Objects

  34. BSP Trees: Objects

  35. Rendering BSP Trees
      renderBSP(BSPtree *T)
          BSPtree *near, *far;
          if (eye on left side of T->plane)
              near = T->left; far = T->right;
          else
              near = T->right; far = T->left;
          renderBSP(far);
          if (T is a leaf node)
              renderObject(T);
          renderBSP(near);

  36. Rendering BSP Trees

  37. Polygons: BSP Tree Construction • Split along the plane containing any polygon • Classify all polygons into positive or negative half-space of the plane • If a polygon intersects plane, split it into two • Recurse down the negative half-space • Recurse down the positive half-space
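The classification step above reduces to the sign of the splitting plane’s equation at each polygon vertex: all positive means the positive half-space, all negative the negative one, and mixed signs mean the polygon must be split. A minimal point-classification sketch; the epsilon tolerance is an illustrative assumption:

```c
/* Classify a point against the splitting plane Ax + By + Cz + D = 0:
   returns +1 for the positive half-space, -1 for the negative one,
   and 0 when the point lies (numerically) on the plane. */
int classify(double A, double B, double C, double D,
             double x, double y, double z) {
    double s = A * x + B * y + C * z + D;
    if (s > 1e-9)  return  1;
    if (s < -1e-9) return -1;
    return 0;
}
```

A BSP builder calls this for every vertex of every polygon; a polygon whose vertices return mixed signs is the one the split step on this slide cuts in two.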

  38. Notes About BSP Trees • No bunnies were harmed in our example. • But what if a splitting plane passes through an object? • Split the object; give half to each node: Ouch

  39. BSP Demo • Nice demo: http://symbolcraft.com/graphics/bsp/

  40. Summary: BSP Trees • Advantages: • Simple, elegant scheme • Only writes to the framebuffer (i.e., painter’s algorithm) • Thus very popular for video games (but getting less so) • Disadvantages: • Computationally intense preprocessing stage restricts the algorithm to static scenes • Worst-case time to construct the tree: O(n³) • Splitting increases the polygon count • Again, O(n³) worst case

  41. UBI 516 Advanced Computer Graphics OpenGL Visibility Detection Functions

  42. OpenGL Backface Culling • glEnable(GL_CULL_FACE); glCullFace(mode); // mode: GL_BACK, GL_FRONT, GL_FRONT_AND_BACK • glDisable(GL_CULL_FACE);

  43. OpenGL Depth Buffer Functions • Set display mode: glutInitDisplayMode( GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH ); • Clear the screen and depth buffer every time in the display function: glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT ); • Enable/disable the depth buffer: glEnable( GL_DEPTH_TEST ); glDisable( GL_DEPTH_TEST );

  44. OpenGL Depth-Cueing Function • We can vary the brigthness of an objectglEnable ( GL_FOG );glFogi ( GL_FOG_MODE, mode);// modes: GL_LINEAR, GL_EXP or GL_EXP2. . .glDisable ( GL_FOG );