Viewing and Projections

  1. Viewing and Projections Dr. Amy Zhang

  2. Reading • Hill, Chapters 5 and 7 • Red Book, Chapter 3, “Viewing”

  3. 3D Graphics Pipeline • The big picture…

  4. Outline • Camera Models • Viewing Transformation • Projection Matrix • OpenGL Transformation Pipeline

  5. Cameras • Cameras have an optical system: • Filters • Lenses • Aperture • The projection surface may be flat or curved, oriented at various angles with respect to the incoming light. • Examples: A camera or the eye.

  6. Camera Obscura • The first camera: a dark box with a small hole in it

  7. The Pinhole Camera • An abstract camera model • Models the geometry of perspective projection • Used in most of computer graphics

  8. Pinhole Optics

  9. Perspective

  10. Perspective Derivation • Consider the projection of a point onto the projection plane:

  11. By similar triangles we can compute how much the x- and y-coordinates are scaled • Looking down the y-axis:
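
Concretely, assuming the eye sits at the origin looking down the negative z-axis and the projection plane lies at z = -d (d > 0): a point (x, y, z) and its image (x', y') lie on the same ray through the eye, so the view down the y-axis gives

    \frac{x'}{d} = \frac{x}{-z}

and the view down the x-axis gives the analogous relation for y'.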

  12. We get: • This is clearly a non-linear transformation • BUT: We can split it into a linear part followed by a nonlinear part
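
Under that assumed convention (plane at z = -d), the projected coordinates come out as

    x' = \frac{d\,x}{-z}, \qquad y' = \frac{d\,y}{-z}

The division by z is the nonlinear part; the remaining scaling is linear and can be written as a 4x4 matrix acting on homogeneous coordinates, as the next slides show.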

  13. Homogeneous Coordinates • Remember homogeneous coordinates: • To recover the ordinary 3D point from a homogeneous point we divide all the coordinates by w: • This is called the perspective divide
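
In symbols (a standard formulation): a 3D point (x, y, z) is represented by the homogeneous point (wx, wy, wz, w) for any w ≠ 0, and the 3D point is recovered by the perspective divide

    (x, y, z, w) \;\mapsto\; \left(\tfrac{x}{w},\ \tfrac{y}{w},\ \tfrac{z}{w}\right), \qquad w \neq 0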

  14. Perspective Projections • We can now rewrite the perspective projection as a linear transformation: • After division by the 4th component we get:
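
One way to write this, under the same assumed convention (projection plane at z = -d):

    \begin{pmatrix} x \\ y \\ z \\ -z/d \end{pmatrix}
    =
    \begin{pmatrix}
    1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    0 & 0 & 1 & 0 \\
    0 & 0 & -1/d & 0
    \end{pmatrix}
    \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}

Dividing by the 4th component, -z/d, gives (dx/(-z), dy/(-z), -d, 1), i.e. exactly the perspective formulas derived earlier.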

  15. The Reason for Lenses

  16. The Lens Model • Lens, aperture, and image plane

  17. Focal length f: the distance from lens to image plane • A point in focus: the image of a point is on the image plane

  18. An out-of-focus point • The circle of confusion r

  19. The Gaussian / thin lens formula:
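
In standard form, writing z_o for the object distance, z_i for the image distance and f for the focal length (symbol names assumed here):

    \frac{1}{z_o} + \frac{1}{z_i} = \frac{1}{f}

A point at distance z_o is in focus exactly when the image plane sits at the corresponding z_i; otherwise its image spreads into the circle of confusion from the previous slide.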

  20. A near point that is out of focus

  21. The depth of focus d_focus and the depth of field (DOF) d_field

  22. Decreasing the aperture size reduces the size of the blur for points that are not in the focused plane, so that the blurring becomes imperceptible and those points fall within the depth of field d_field.

  23. Viewing and Projection • In OpenGL we distinguish between: • Viewing: placing the camera • Projection: describing the viewing frustum of the camera (and thereby the projection transformation) • Perspective divide: dividing homogeneous points by their w component to obtain normalized device coordinates

  24. Outline • Camera Models • Viewing Transformation • Projection Matrix • OpenGL Transformation Pipeline

  25. OpenGL Transformations • The viewing transformation V transforms a point from world space to eye space:
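
In symbols:

    p_{eye} = V\, p_{world}

where V is the 4x4 viewing (world-to-eye) matrix constructed over the next slides.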

  26. Placing the Camera • It is most natural to position the camera in world space as if it were a real camera • Identify the eye point where the camera is located • Identify the look-at point that we wish to appear in the center of our view • Identify an up vector that we wish to be oriented upwards in our final image

  27. Look-At Positioning • We specify the view frame using the look-at vector a and the camera up vector up • The vector a points in the negative viewing direction • In 3D, we need a third vector that is perpendicular to both up and a to specify the view frame

  28. Where does it point to? • The result of the cross product is a vector, not a scalar as for the dot product • Depending on the handedness of the basis vectors i, j, and k, the new vector follows the right-hand or left-hand rule • In OpenGL, the cross product a x b yields a right-handed vector perpendicular to a and b

  29. Computing Cross Products • We can compute the cross product using yet another matrix-vector multiplication: • The matrix is sometimes called the skew-symmetric matrix of the vector (in this case a) • Cross products produce vectors for both vector and point inputs
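
Concretely, writing a = (a_x, a_y, a_z), the cross product a x b equals [a]_x b with

    [a]_\times =
    \begin{pmatrix}
    0 & -a_z & a_y \\
    a_z & 0 & -a_x \\
    -a_y & a_x & 0
    \end{pmatrix}

which is skew-symmetric because [a]_\times^T = -[a]_\times.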

  30. Constructing a Frame • The cross product of the up vector and the look-at vector a gives a vector r that points to the right • Finally, using the vector a and the vector r we can synthesize a new vector u in the up direction:
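
A sketch consistent with slide 36, where r, u and a end up normalized:

    r = \frac{up \times a}{\lVert up \times a \rVert}, \qquad u = a \times r

With a pointing backwards (the negative viewing direction), r points to the right and u points up, so (r, u, a) is a right-handed camera frame.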

  31. World and Camera Frames • The relation between the world and the camera is expressed as: • We move the eye (camera) by updating E
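
A sketch, assuming E denotes the camera-to-world (frame) matrix built from the rotation and translation introduced on the next two slides:

    p_{world} = E\, p_{camera}, \qquad E = T(e)\, R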

  32. Rotation • Rotation first:
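
A sketch, assuming R maps camera coordinates to world coordinates, so that its columns are the camera axes r, u and a expressed in world coordinates:

    R =
    \begin{pmatrix}
    r_x & u_x & a_x & 0 \\
    r_y & u_y & a_y & 0 \\
    r_z & u_z & a_z & 0 \\
    0 & 0 & 0 & 1
    \end{pmatrix}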

  33. Translation • Translation to the eye point:
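
In matrix form, the translation to the eye point e = (e_x, e_y, e_z):

    T(e) =
    \begin{pmatrix}
    1 & 0 & 0 & e_x \\
    0 & 1 & 0 & e_y \\
    0 & 0 & 1 & e_z \\
    0 & 0 & 0 & 1
    \end{pmatrix}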

  34. Composing the Result • The final camera transformation is: • Why?
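
One way to see why: the viewing transformation must undo the camera frame E = T(e) R, and inverting a product reverses the order of its factors:

    V = E^{-1} = \bigl(T(e)\,R\bigr)^{-1} = R^{-1}\,T(e)^{-1} = R^{T}\,T(-e)

using the facts that a rotation's inverse is its transpose and a translation's inverse negates its offset.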

  35. The Viewing Transformation • Expressing P in eye coordinates:

  36. The Viewing Transformation • As a single 4x4 matrix: • Where these are normalized vectors:
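
Multiplying out R^T T(-e) with normalized r, u, a and eye point e gives the usual look-at form:

    V =
    \begin{pmatrix}
    r_x & r_y & r_z & -r \cdot e \\
    u_x & u_y & u_z & -u \cdot e \\
    a_x & a_y & a_z & -a \cdot e \\
    0 & 0 & 0 & 1
    \end{pmatrix}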

  37. gluLookAt() • OpenGL provides a very helpful utility function that implements the look-at viewing specification: • These parameters are expressed in world coordinates
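
A minimal C sketch of the standard GLU call, with illustrative eye/center/up values:

    #include <GL/gl.h>
    #include <GL/glu.h>

    /* Place the camera at (4, 3, 10), aim it at the origin, and keep the
       world y-axis pointing up in the image (values are illustrative). */
    static void setupCamera(void)
    {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(4.0, 3.0, 10.0,   /* eyeX,    eyeY,    eyeZ    */
                  0.0, 0.0,  0.0,   /* centerX, centerY, centerZ (look-at point) */
                  0.0, 1.0,  0.0);  /* upX,     upY,     upZ     */
    }

The call is typically issued with GL_MODELVIEW as the current matrix mode, so the look-at matrix composes with the subsequent modeling transformations.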

  38. Outline • Camera Models • Viewing Transformation • Projection Matrix • OpenGL Transformation Pipeline

  39. OpenGL Transformations • The projection transformation P transforms a point from eye space to clip space:
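
In symbols, together with the later perspective divide:

    p_{clip} = P\, p_{eye}, \qquad p_{ndc} = p_{clip} / w_{clip}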

  40. Projection Transformations • Projections fall into two categories: • Parallel projections: The camera is placed at an infinite distance from the viewplane; lines of projection are parallel to each other • Perspective projections: Lines of projection converge at a point

  41. Parallel Projections • The simplest form of parallel projection is simply along lines parallel to the z-axis onto the xy-plane • This form of projection is called orthographic • For other parallel projections see, e.g.: http://www.mtsu.edu/~csjudy/planeview3D/tutorialparallel.html

  42. Orthographic Frustum • The user specifies the orthographic viewing frustum by specifying minimum and maximum x/y coordinates • It is necessary to indicate a range of distances along the z-axis by specifying near and far planes

  43. Orthographic Projections to NDC • Normalized Device Coordinates (NDC) make up a coordinate system that describes positions on a virtual plotting device • Here is the orthographic world-to-clip transformation: • Move the center to the origin: T(-(left+right)/2, -(bottom+top)/2, (near+far)/2) • Scale to have sides of length 2: S(2/(right-left), 2/(top-bottom), 2/(far-near)) • P = ST (multiplied out below)
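
Multiplying out the S and T above gives

    P = S\,T =
    \begin{pmatrix}
    \frac{2}{right-left} & 0 & 0 & -\frac{right+left}{right-left} \\
    0 & \frac{2}{top-bottom} & 0 & -\frac{top+bottom}{top-bottom} \\
    0 & 0 & \frac{2}{far-near} & \frac{far+near}{far-near} \\
    0 & 0 & 0 & 1
    \end{pmatrix}

Note that the z row follows the T and S exactly as written on this slide; OpenGL's own glOrtho matrix additionally negates that row so that the near plane maps to -1.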

  44. Orthographic Projection in OpenGL • This matrix is constructed with the following OpenGL call: • And the 2D version (another GL utility function): • Just a call to glOrtho() with near = -1 and far = +1
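
A minimal C sketch with illustrative bounds:

    #include <GL/gl.h>
    #include <GL/glu.h>

    /* Select an orthographic view volume (bounds are illustrative). */
    static void setupOrtho(void)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(-10.0, 10.0,   /* left,   right */
                -10.0, 10.0,   /* bottom, top   */
                  1.0, 100.0); /* near,   far   */
        /* 2D variant: gluOrtho2D(left, right, bottom, top) is
           equivalent to glOrtho with near = -1 and far = +1.   */
    }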

  45. Properties of Parallel Projections • Not realistic looking • Good for exact measurements • A kind of affine transformation • Parallel lines remain parallel • Ratios are preserved • Angles are (in general) not preserved • Most often used in CAD, architectural drawings, etc., where taking exact measurements is important

  46. Isometric Games • A special kind of parallel projection called isometric projection is often used in games • It’s essentially a shear followed by an orthographic projection • Easier to compute than a full perspective transformation • Examples: Diablo, SimCity, The Sims

  47. Perspective Projections • Artists (Donatello, Brunelleschi, and Da Vinci) during the Renaissance discovered the importance of perspective for making images appear realistic • Under perspective projection, parallel lines appear to converge at a vanishing point

  48. Perspective Viewing Frustum • Just as in the orthographic case, we specify a perspective viewing frustum • Values for left, right, top, and bottom are specified at the near depth.
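
As with the orthographic case, the call itself is not shown here; the classic fixed-function call that takes exactly these parameters (left/right/bottom/top at the near depth, plus near and far) is glFrustum. A minimal C sketch with illustrative values:

    #include <GL/gl.h>

    /* Specify a perspective viewing frustum; the window
       (-1..1) x (-0.75..0.75) is given at the near plane. */
    static void setupPerspective(void)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustum(-1.0,  1.0,    /* left,   right (at near depth) */
                  -0.75, 0.75,   /* bottom, top   (at near depth) */
                   1.0,  100.0); /* near,   far                   */
    }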
