
CSCI 6360/4360


Presentation Transcript


  1. CG Algorithms and Implementation: “From Vertices to Fragments”. Angel: Chapter 7. OpenGL Programming and Reference Guides, other sources. ppt from Angel, AW, van Dam, etc. CSCI 6360/4360

  2. Implementation • “From Vertices to Fragments” • Next steps in viewing pipeline: • Clipping • Eliminating objects that lie outside the view volume • and, so, not visible in image • Rasterization • Produces fragments from remaining objects • Hidden surface removal (visible surface determination) • Determines which object fragments are visible • Show objects (surfaces) not blocked by objects closer to camera • Will consider above in some detail in order to give a feel for the computational cost of these elements • “in some detail” = algorithms for implementing • … algorithms that are efficient • Same algorithms for any standard API • Whether implemented by pipeline, raytracing, etc. • Will see different algorithms for same basic tasks

  3. About Implementation Strategies • Angel: At most abstract level … • Start with application program generated vertices • Do stuff like transformation, clipping, … • End up with pixels in a frame buffer • Can consider two basic strategies • Will see again in hidden surface removal • Object-oriented -- An object at a time … • For each object • Render the object • Each object has series of steps • Image-oriented -- A pixel at a time … • For each pixel • Assign a frame buffer value • Such scanline based algorithms exploit fact that in images values from one pixel to another often don’t change much • Coherence • So, can use value of a pixel in calculating value of next pixel • Incremental algorithm

  4. Tasks to Render a Geometric Entity, 1 • Angel introduces more general terms and ideas than just for the OpenGL pipeline … • Recall, chapter title “From Vertices to Fragments” … and even pixels • From definition in user program to (possible) display on output device • Modeling, geometry processing, rasterization, fragment processing • Modeling • Performed by application program, e.g., create sphere polygons (vertices) • Angel example of spheres and creating data structure for OpenGL use • Product is vertices (and their connections) • Application might even reduce “load”, e.g., no back-facing polygons

  5. Tasks to Render a Geometric Entity, 2 • Geometry Processing • Works with vertices • Determine which geometric objects appear on display • 1. Perform clipping to view volume • Changes object coordinates to eye coordinates • Transforms vertices to normalized view volume using projection transformation • 2. Primitive assembly • Clipping an object (and its surfaces) can result in new surfaces (e.g., shorter line, polygon of different shape) • Working with these “new” elements to “re-form” (clipped) objects is primitive assembly • Necessary for, e.g., shading • 3. Assignment of color to vertex • Modeling and geometry processing called “front-end processing” • All involve 3-d calculations and require floating-point arithmetic

  6. Tasks to Render a Geometric Entity, 3 • Rasterization • Only x, y values needed for (2-d) frame buffer • Rasterization, scan conversion, determines which fragments displayed (put in frame buffer) • For polygons, rasterization determines which pixels lie inside 2-d polygon determined by projected vertices • Colors • Most simply, fragments (and their pixels) are determined by interpolation of vertex shades & put in frame buffer • Output of rasterizer is in units of the display (window coordinates)

  7. Tasks to Render a Geometric Entity, 4 • Fragment Processing • Colors • OpenGL can merge color (and lighting) results of rasterization stage with geometric pipeline • E.g., shaded, texture mapped polygon (next chapter) • Lighting/shading values of vertex merged with texture map • For translucence, must allow light to pass through fragment • Blending of colors uses combination of fragment colors, using colors already in frame buffer • e.g., multiple translucent objects • Hidden surface removal performed fragment by fragment using depth information • Anti-aliasing also dealt with

  8. Efficiency and Algorithms • For cg illumination/shading, saw how the role of efficiency drove algorithms • Phong shading is “good enough” to be perceived as “close enough” to real world • Close attention to algorithmic efficiency • Similarly, for often-calculated geometric processing, efficiency is a prime consideration • Will consider efficient algorithms for: • Clipping • Line drawing • Visible surface drawing

  9. Recall, Clipping … • Scene's objects are clipped against clip space bounding box • Eliminates objects (and pieces of objects) not visible in image • Efficient clipping algorithms for homogeneous clip space • Perspective division divides all coordinates by the homogeneous coordinate, w • Clip space becomes Normalized Device Coordinate (NDC) space after perspective division
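The perspective-division step above can be sketched in C; the Vec4/Vec3 types and the function name are illustrative choices, not from the slides:

```c
#include <assert.h>

/* Illustrative types; any 4- and 3-component float structs would do. */
typedef struct { float x, y, z, w; } Vec4;
typedef struct { float x, y, z; } Vec3;

/* Divide a homogeneous clip-space point by its w component
   to obtain normalized device coordinates (NDC). */
Vec3 perspective_divide(Vec4 clip) {
    Vec3 ndc = { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w };
    return ndc;
}
```

Note that clipping is done before this division, in homogeneous clip space, precisely so that the division is only paid for surviving vertices.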

  10. Clipping • Clipping is performed a bazillion times in cg pipeline • Depending on exact algorithms and number of lines and polygons • Different kinds of clipping • 2D against clipping window • 3D against clipping volume • Easy for line segments and polygons • Polygons can be handled in other ways, too • E.g., bounding boxes • Hard for curves and text • Convert to lines and polygons first • First example of cg algorithm • Designed for efficient execution • Efficiency includes: • multiplication vs. addition • use of Boolean vs. arithmetic operations • integer vs. real • Space complexity • Even the “constant” in time complexities

  11. Clipping - 2D Line Segments • Could clip using brute force • Compute intersections with all sides of clipping window • Computing intersections is expensive • To explicitly find intersection, essentially solve y = mx + b • Use line's endpoints to find slope and intercept • See if intersection lies within the window edge • Requires multiplication/division

  12. Cohen-Sutherland Algorithm • Idea is to eliminate as many cases as possible without computing intersections • E.g., both ends of line outside – or inside • Computing intersections is expensive • Start with four lines that determine sides of clipping window • As if extending sides, top, and bottom of window out • Will use xmin, xmax, ymin, ymax in algorithm • (Figure: the four clipping lines y = ymax, y = ymin, x = xmin, x = xmax)

  13. Consider Cases: Where Endpoints Are • Based on relationship of endpoints and clipping region xmin, xmax, ymin, ymax, will define cases • Case 1: • Both endpoints of line segment inside all four lines • Draw (accept) line segment as is • Case 2: • Both endpoints outside all lines and on same side of a line • Discard (reject) the line segment • “trivially reject” • Case 3: • One endpoint inside, one outside • Must do at least one intersection • Case 4: • Both outside • May have part inside • Must do at least one intersection

  14. Defining Outcodes • For each line endpoint define an outcode: • Endpoint includes both x1, y1 and x2, y2 • 4 bits for each endpoint: b0b1b2b3 • b0 = 1 if y > ymax, 0 otherwise • b1 = 1 if y < ymin, 0 otherwise • b2 = 1 if x > xmax, 0 otherwise • b3 = 1 if x < xmin, 0 otherwise • Examples in red with blue ends at right: • Tedious, but automatic • Left line outcodes: 0000, 0000 • Right line outcodes: 0110, 0010 • Outcodes divide space into 9 regions • Computation of outcode requires at most 4 subtractions • E.g., y1 - ymax • Testing of outcodes can be done with bitwise comparisons

  15. Using Outcodes, 1 • Cases: • AB: outcode(A) = outcode(B) = 0 • A = 0000, B = 0000 • Accept line segment • CD: outcode(C) = 0, outcode(D) ≠ 0 • C = 0000, D = anything else • Here, D = 0010 • Do need to compute intersection • Location of 1 in outcode(D) determines which edge to intersect with • So, “shortened” line is what is displayed • C – D’ • Note: • If there were a segment from A to a point in a region with 2 ones in outcode, might have to do two intersections

  16. Using Outcodes, 2 • Cases continued: • EF: outcode(E) && outcode(F) ≠ 0 • && is bitwise logical AND • E = 0010, F = 0010 • Both outcodes have 1 bit in same place • Line segment is outside of corresponding side of clipping window • Reject – typically, most frequent case • GH, IJ (same outcodes), neither zero, but && of endpoints = zero • G (and I) = 0001, H (and J) = 1000 • Test for intersection • If found, shorten line segment by intersecting with one of sides of window • Compute outcode of intersection (new endpoint of shortened line segment) • (Recursively) reexecute algorithm
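The outcode computation and the trivial accept/reject tests above can be sketched in C. The bit assignment here is one common choice and differs from the slide's b0…b3 ordering; all names are illustrative:

```c
#include <assert.h>

/* One common outcode bit assignment (differs from the slide's b0..b3 order). */
enum { INSIDE = 0, LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };

/* Classify an endpoint against the clip window [xmin,xmax] x [ymin,ymax]. */
int outcode(double x, double y,
            double xmin, double xmax, double ymin, double ymax) {
    int code = INSIDE;
    if (x < xmin)      code |= LEFT;
    else if (x > xmax) code |= RIGHT;
    if (y < ymin)      code |= BOTTOM;
    else if (y > ymax) code |= TOP;
    return code;
}

/* Trivial accept: both outcodes are zero (segment entirely inside). */
int trivially_accept(int c1, int c2) { return (c1 | c2) == 0; }

/* Trivial reject: bitwise AND nonzero (both ends outside the same side). */
int trivially_reject(int c1, int c2) { return (c1 & c2) != 0; }
```

Segments that pass neither test are the ones that require an intersection computation and reexecution of the algorithm.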

  17. Efficiency and Extension to 3D • Very efficient in many applications • Clipping window small relative to size of entire database • Most line segments are outside one or more sides of the window and can be eliminated based on their outcodes • Inefficient when code has to be reexecuted for line segments that must be shortened in more than one step • For 3 dimensions • Use 6-bit outcodes • When needed, clip line segment against planes

  18. Rasterization

  19. Rasterization • End of pipeline (processing) – • Putting values in the frame buffer (or raster) • write_pixel (x, y, color) • At this stage, fragments – clipped, colored, etc. at level of vertices, are turned into values to be displayed • (deferring for a moment the question of hidden surfaces and colors) • Essential question is “how to go from vertices to display elements?” • E.g., lines • Algorithmic efficiency is a continuing theme

  20. Drawing Algorithms • As noted, implemented in graphics processor • Used bazillions of times per second • Line, curve, … algorithms • Line is paradigm example • most common 2D primitive - done 100s or 1000s or 10s of 1000s of times each frame • even 3D wireframes are eventually 2D lines • optimized algorithms contain numerous tricks/techniques that help in designing more advanced algorithms • Will develop a series of strategies, towards efficiency

  21. Drawing Lines: Overview • Recall, fundamental “challenge” of computer graphics: • Representing the analog (physical) world on a discrete (digital) device • Consider a very low resolution display: • Sampling a continuous line on a discrete grid introduces sampling errors: the “jaggies” • For horizontal, vertical and diagonal lines all pixels lie on the ideal line: special case • For lines at arbitrary angle, pick pixels closest to the ideal line • Will consider several approaches • But, “fast” will be best

  22. Strategy 1 – Really Basic Algorithm • First, the (really) basic algorithm: • Find equation of line that connects 2 pts, Px,y and Qx,y • y = mx + b • m = Δy / Δx, where Δx = xend – xstart, Δy = yend – ystart • Starting with the leftmost point P, • increment x by 1 and calculate y = mx + b at each x point/value • where m = slope, b = y intercept
    for x = Px to Qx
        y = round (m*x + b)   // compute y
        write-pixel (x, y)
• This works, but uses computationally expensive operations (multiply) at each step • Worked for your homework • Do note that when we turn on a pixel, it in fact only approximates an ideal line

  23. Strategy 2 – Incremental Algorithm, 1 • So, (really) basic algorithm:
    for x = Px to Qx
        y = round (m*x + b)   // compute y
        write-pixel (x, y)
• Can modify basic algorithm to be an incremental algorithm • Use current state of computation in finding next state • i.e., incrementally going toward the solution • Not “recompute” the entire solution, as above – • Not same computation regardless of where we are • Use partial solution, here, last y value, to find next value • Modify (really) Basic Algorithm to just add slope, vs. multiply – next slide
    m = Δy / Δx               // compute slope (to be added)
    y = m * Px + b            // still multiply to get first y value
    for x = Px to Qx
        write-pixel (x, round(y))
        y = y + m             // increment y for next value, just by adding
• Make incremental calcs based on preceding step to find next y value • Works here because going unit/1 to right, incrementing x by 1 and y by (slope) m

  24. Strategy 2 – Incremental Algorithm, 2 • Incremental algorithm:
    m = Δy / Δx               // slope
    y = m * Px + b            // first y value
    for x = Px to Qx
        write-pixel (x, round(y))
        y = y + m             // inc y for next
• Definite improvement over basic algorithm • Still problems • Too slow • Rounding to integers takes time • Variables y and m must be real or fractional binary because slope is a fraction • Ideally, want just integer additions

  25. Strategy 3 – Midpoint Line Algorithm • Midpoint line algorithm (MLA) considers that “ideal” line is in fact approximated on a raster (pixel based) display • Hence, will be “error” in where ideal line should be and how it is represented by turning on pixels • Will use amount of “possible error” to decide which pixel to turn on for successive steps

  26. Strategy 3 – MLA, 1 • Assume that the (ideal) line's slope is shallow and positive (0 < m < 1) • Other slopes can be handled by suitable reflections about principal axes • Note: we are calculating the “ideal line” and turning on pixels as an approximation • Assume that we have just selected the pixel P at (xp, yp) • Next, must choose between: • pixel to right (pixel E), or • pixel one right and one up (pixel NE) • Let Q be intersection point of line being scan-converted with grid line x = xp + 1 • Note that pixel turned on is not exactly on line – so, “error”

  27. Strategy 3 - MLA, 2 • Observe on which side of (ideal) line the midpoint M lies: • E pixel closer to line if midpoint lies above line • NE pixel closer to line if midpoint lies below line • (Ideal) line passes between E and NE • Point closer to point Q must be chosen • Either E or NE • Error: • Vertical distance between chosen pixel and actual line - always <= ½ • Here, algorithm chooses NE as next pixel for line shown • Now, find a way to calculate on which side of line midpoint lies

  28. MLA – Eq. of Line for Selection • How to choose which pixel, based on M and distance of M from ideal line • Line equation as a function, f(x): • y = m * x + b • y = dy/dx * x + b • Line equation as an implicit function: • f(x, y) = a*x + b*y + c = 0 • From above, algebraically (mult. by dx): y * dx = dy * x + b * dx • So, algebraically, a = dy, b = −dx, c = b*dx (with b here the intercept), a > 0 for y0 < y1 • Properties (proof by case analysis): • f(xm, ym) = 0 when any point M is on line • f(xm, ym) < 0 when any point M above line • f(xm, ym) > 0 when any point M below line - here • Decision will be based on value of the function at midpoint • M at (xp+1, yp+1/2) – the ½ makes it the midpoint

  29. MLA, Decision Variable • So, find a way to (efficiently) calculate on which side of line midpoint lies • And that’s what we just saw • Decision variable d: • Only need sign (fast) of f(xp+1, yp+1/2) • to see where the line lies, • and then pick nearest pixel • d = f(xp+1, yp+1/2) • if d > 0 choose pixel NE • if d < 0 choose pixel E • if d = 0 choose either one consistently • How to update d: • On basis of picking E or NE, • figure out the location of M relative to that pixel, and the corresponding value of d for the next grid line

  30. MLA, If E was chosen: • M is incremented by one step in x direction • Subtract dold from dnew to get the incremental difference ΔE • dnew = f(xp + 2, yp + 1/2) = a(xp + 2) + b(yp + 1/2) + c • dold = a(xp + 1) + b(yp + 1/2) + c • Derive value of decision variable at next step incrementally without computing f(M) directly: • dnew = dold + ΔE = dold + dy • ΔE = a = dy • ΔE can be thought of as the correction or update factor to take dold to dnew • (and this is the insight: “carrying along the error, vs. recalculating”) • dnew = dold + a • ΔE is called a “forward difference”

  31. MLA, If NE was chosen: • M is incremented by one step each in both the x and y directions • dnew = f(xp + 2, yp + 3/2) • dnew = a(xp + 2) + b(yp + 3/2) + c • Subtract dold from dnew to get the incremental difference • dnew = dold + a + b • ΔNE = a + b = dy − dx • Thus, incrementally, • dnew = dold + ΔNE = dold + dy − dx

  32. MLA Summary • At each step, algorithm chooses between 2 pixels based on sign of decision variable calculated in previous iteration • Update decision variable by adding either ΔE or ΔNE to old value depending on choice of pixel • First pixel is first endpoint (x0, y0), so can directly calc init val of d for choosing between E and NE • First midpoint is at (x0 + 1, y0 + 1/2) • F(x0 + 1, y0 + 1/2) = a(x0 + 1) + b(y0 + 1/2) + c = ax0 + by0 + c + a + b/2 = F(x0, y0) + a + b/2 • But (x0, y0) is point on line and F(x0, y0) = 0 • Therefore, dstart = a + b/2 = dy − dx/2 • Use dstart to choose the second pixel, etc. • To eliminate fraction in dstart: redefine F by multiplying it by 2; F(x,y) = 2(ax + by + c) • This multiplies each constant and the decision variable by 2, but does not change the sign

  33. FYI - MLA Example Code • In C:
    void MidpointLine(int x0, int y0, int x1, int y1)
    {
        int dx = x1 - x0;
        int dy = y1 - y0;
        int d = 2 * dy - dx;
        int incrE = 2 * dy;
        int incrNE = 2 * (dy - dx);
        int x = x0;
        int y = y0;
        writePixel(x, y);
        while (x < x1) {
            if (d <= 0) {      /* East case */
                d = d + incrE;
                x++;
            } else {           /* Northeast case */
                d = d + incrNE;
                x++;
                y++;
            }
            writePixel(x, y);
        } /* while */
    } /* MidpointLine */

  34. Bresenham’s Line Algorithm • Can generalize MLA algorithm to work for lines beginning at points other than (0,0) by giving x and y the proper initial values • Results in Bresenham's Line Algorithm.

  35. Hidden Surface Removal • Or, Visible Surface Determination (VSD)

  36. Recall, Projection … • Projectors • View plane (or film plane) • Direction of projection • Center of projection • Eye, projection reference point

  37. About Visible Surface Determination, 1 • Have been considering models, and how to create images from models • e.g., when viewpoint/eye/COP changes, transform locations of vertices (polygon edges) of model to form image • In fact, projectors are extended from front and back of all polygons • Though only concerned with “front” polygons Projectors from front (visible) surface only

  38. About Visible Surface Determination, 2 • To form image, must determine which objects in scene obscured by other objects • Why might objects not be visible? • Occlusion, and also clipping • Definition of visible surface determination (VSD): • Given a set of 3-D objects and a view specification (camera), determine which lines or surfaces of the object are visible • Also called Hidden Surface Removal (HSR)

  39. Visible Surface Determination: Historical notes • Problem first posed for wireframe rendering • doesn’t look too “real” (and in fact is ambiguous) • Solution called “hidden-line (or surface) removal” • Lines themselves don’t hide lines • Lines must be edges of opaque surfaces that hide other lines • Some techniques show hidden lines as dotted or dashed lines for more info • Hidden surface removal often appears as one stage in other models

  40. Classes of VSD Algorithms • Will see different VSD algorithms have advantages and disadvantages: 0. “Conservative” visibility testing: • only trivial reject - does not give final answer • e.g., back-face culling, canonical view volume clipping • have to feed results to algorithms mentioned below 1. Image precision • resolve visibility at discrete points in image • Z-buffer, scan-line (both in hardware), ray-tracing 2. Object precision • resolve for all possible view directions from a given eye point

  41. Image Precision • Resolve visibility at discrete points in image • Sample model, then resolve visibility • raytracing, Z-buffer, scan-line • operate on display primitives, e.g., pixels, scan-lines • visibility resolved to the precision of the display • (very) High Level Algorithm:
    for (each pixel in image, i.e., from COP to model) {
        1. determine object closest to viewer pierced by projector thru pixel
        2. draw pixel in appropriate color
    }
• Complexity: • O(n·p), where n = objects, p = pixels, from above for loop • or just: at each pixel, consider all objects and find closest point

  42. Object Precision, 1 • Resolve for all possible view directions from a given eye point • Historically, first • Each polygon is clipped by projections of all other polygons in front of it • Irrespective of view direction or sampling density • Resolve visibility exactly, then sample the results • Invisible surfaces are eliminated and visible sub-polygons are created • e.g., variations on painter's algorithm, poly’s clipping poly’s, 3-D depth sort, BSP: binary-space partitions

  43. Object Precision, 2 • (very) High Level Algorithm:
    for (each object in the world) {
        1. determine parts of object whose view is unobstructed by other parts of it or any other object
        2. draw those parts in appropriate color
    }
• Complexity: • O(n²), where n = number of objects, from above for loop • or just: must consider all objects (visibility) interacting with all others • (but, even when n << p, “steps” are longer, as a constant factor)

  44. Painter’s Algorithm • To start at the beginning … • Way to resolve visibility exactly • Create drawing order, each poly overwriting the previous ones guarantees correct visibility at any pixel resolution • Strategy is to work back to front • find a way to sort polygons by depth (z), then draw them in that order • do a rough sort of polygons by smallest (farthest) z-coordinate in each polygon • draw most distant polygon first, then work forward towards the viewpoint (“painters’ algorithm”)
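The rough back-to-front ordering above can be sketched with qsort; the Poly struct and its single depth field are illustrative assumptions, not the deck's data structure:

```c
#include <stdlib.h>
#include <assert.h>

/* Illustrative polygon record: only the depth key needed for the sort.
   Smaller z is taken to be farther from the viewer, as on the slide. */
typedef struct { double zmin; /* farthest z among the polygon's vertices */ } Poly;

static int by_depth(const void *a, const void *b) {
    double za = ((const Poly *)a)->zmin, zb = ((const Poly *)b)->zmin;
    return (za > zb) - (za < zb);   /* ascending: most distant polygon first */
}

/* Order polygons back to front; drawing them in this order overwrites
   nearer polygons over farther ones. */
void painters_sort(Poly *polys, size_t n) {
    qsort(polys, n, sizeof(Poly), by_depth);
}
```

This rough sort is only a first pass; overlapping or interpenetrating polygons need the tie-breaking steps of the full depth-sort algorithm.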

  45. Back-Face Culling: Overview • Back-face culling directly eliminates polygons not facing the viewer • Makes sense given constraint of convex (no “inward” face) polygons • Computationally, can eliminate back faces by: • Line of sight calculations • Plane half-spaces • In practice, • surface (and vertex) normals often stored with vertex list representations • Normals used both in back face culling and illumination/shading models

  46. Back-Face Culling: Line of Sight Interpretation • Line of Sight Interpretation • Use outward normal (ON) of polygon to test for rejection • LOS = Line of Sight, • The projector from the center of projection (COP) to any point P on the polygon • (note: for parallel projections LOS = DOP = direction of projection) • If normal is facing in same direction as LOS, it’s a back face: • Use dot product • if LOS · ON >= 0, then polygon is invisible—discard • if LOS · ON < 0, then polygon may be visible
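The sign test above can be sketched in C; the Vec3 type and function names are illustrative:

```c
#include <assert.h>

/* Illustrative 3-vector type. */
typedef struct { double x, y, z; } Vec3;

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Nonzero (true) if the polygon faces away from the viewer and
   can be discarded: LOS . ON >= 0 means back face. */
int is_back_face(Vec3 los, Vec3 outward_normal) {
    return dot(los, outward_normal) >= 0.0;
}
```

A polygon that passes this test (dot product negative) may still be hidden by other polygons; back-face culling is only the conservative first stage.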

  47. Back-Face Culling - OpenGL • OpenGL automatically computes an outward normal from the cross product of two consecutive screen-space edges and culls back-facing polygons • just checks the sign of the resulting z component

  48. Z-Buffer Algorithm, 1 • Recall, frame/refresh buffer: • Screen is refreshed one scan line at a time, from pixel information held in a refresh or frame buffer • Additional buffers can be used to store other pixel information • E.g., double buffering for animation • 2nd frame buffer to which to draw an image (which takes a while) • then, when drawn, switch to this 2nd frame/refresh buffer and start drawing again in 1st • Also, a z-buffer in which z-values (depth of points on a polygon) stored for VSD

  49. Z-Buffer Algorithm, 2 • Just draw every polygon • If find a piece (one or more pixels) of a polygon is closer to the front of what is there already, draw over it • Init Z-buffer to background value • furthest plane of view volume, e.g., 255 for 8 bits • Polygons scan-converted in arbitrary order • When pixels overlap, use Z-buffer to decide which polygon “gets” that pixel • If new point has z-value less than previous one (i.e., closer to the eye), its z-value is placed in the z-buffer and its color placed in the frame buffer at the same (x, y) • Otherwise the previous z-value and frame buffer color are unchanged • Below shows numeric z-values and color to represent frame buffer values

  50. Z-Buffer Algorithm, 3 • Polygons scan-converted in arbitrary order • After 1st polygon scan-converted, at depth 127 • After 2nd polygon, at depth 63 – in front of some of 1st polygon
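The per-fragment update described on these slides can be sketched in C; the buffer dimensions, the int "color", and the 255 background depth are illustrative choices (smaller z = closer, as on the slide):

```c
#include <assert.h>

#define W 4
#define H 4

static int zbuf[H][W];   /* depth of the closest fragment seen so far */
static int fbuf[H][W];   /* "color" stored as an int for this sketch */

/* Initialize the Z-buffer to the background (farthest) depth and the
   frame buffer to the background color. */
void clear_buffers(int background_depth, int background_color) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            zbuf[y][x] = background_depth;
            fbuf[y][x] = background_color;
        }
}

/* Write the fragment only if it is closer than what is already there;
   otherwise both buffers are left unchanged. */
void shade_fragment(int x, int y, int z, int color) {
    if (z < zbuf[y][x]) {
        zbuf[y][x] = z;
        fbuf[y][x] = color;
    }
}
```

Because the test is per fragment, polygons can be scan-converted in any order, which is exactly why the algorithm fits a pipeline so well.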
