
Introduction to 3D Rendering in MonoGame

Learn how to draw a 3D model using MonoGame, load models through the Content Pipeline, set up matrices, and understand the graphics rendering pipeline.

Presentation Transcript


  1. Week 2 - Friday COMP 4290

  2. Last time
  • What did we talk about last time?
  • Graphics rendering pipeline
  • Geometry Stage

  3. Questions?

  4. Assignment 1

  5. Project 1

  6. 3D Rendering in MonoGame

  7. Drawing a model
  • We're going to start by drawing a 3D model
  • Eventually, we'll go back and create our own primitives
  • Like other MonoGame content, the easiest way to manage a model is to add it to your Content folder and load it through the Content Pipeline
  • MonoGame can load (some) .fbx, .x, and .obj files
  • Note that getting just the right kind of files (with textures or not) is sometimes challenging

  8. Loading a model
  • First, we declare a member variable to hold the model
  • Then we load the model in the LoadContent() method

     Model model;
     model = Content.Load<Model>("Ship");
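
  Put together, here is a minimal sketch of where the declaration and load live in a MonoGame Game class, assuming a model asset named "Ship" was added to the Content project:

     using Microsoft.Xna.Framework;
     using Microsoft.Xna.Framework.Graphics;

     public class Game1 : Game
     {
         GraphicsDeviceManager graphics;
         Model model;  // member variable holding the loaded model

         public Game1()
         {
             graphics = new GraphicsDeviceManager(this);
             Content.RootDirectory = "Content";
         }

         protected override void LoadContent()
         {
             // The asset name is the file's name in the Content project,
             // without the extension
             model = Content.Load<Model>("Ship");
         }
     }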

  9. Setting up the matrices
  • To draw anything in 3D, we need a world matrix, a view matrix, and a projection matrix
  • Since you'll need these repeatedly, you could store them as members

     Matrix world = Matrix.CreateTranslation(new Vector3(0, 0, 0));
     Matrix view = Matrix.CreateLookAt(new Vector3(0, 0, 7),
         new Vector3(0, 0, 0), Vector3.UnitY);
     Matrix projection = Matrix.CreatePerspectiveFieldOfView(0.9f,
         (float)GraphicsDevice.Viewport.Width / GraphicsDevice.Viewport.Height,
         0.1f, 100.0f);

  10. What do those matrices mean?
  • The world matrix controls how the model is translated, scaled, and rotated with respect to the global coordinate system
  • This code makes a matrix that moves the model 0 units in x, 0 units in y, and 0 units in z
  • In other words, it does nothing

     Matrix world = Matrix.CreateTranslation(new Vector3(0, 0, 0));
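
  For a world matrix that actually does something, transforms can be multiplied together. A hypothetical example (the specific scale, angle, and offset are illustrative):

     // Shrink the model to half size, spin it 45 degrees around y,
     // then move it 3 units along x. MonoGame composes transforms
     // left to right: scale, then rotate, then translate.
     Matrix world = Matrix.CreateScale(0.5f)
         * Matrix.CreateRotationY(MathHelper.ToRadians(45f))
         * Matrix.CreateTranslation(new Vector3(3, 0, 0));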

  11. And the view matrix?
  • The view matrix sets up the orientation of the camera
  • The easiest way to do so is to give
    • Camera location
    • What the camera is pointed at
    • Which way is up
  • This camera is at (0, 0, 7), looking at the origin, with positive y as up

     Matrix view = Matrix.CreateLookAt(new Vector3(0, 0, 7),
         new Vector3(0, 0, 0), Vector3.UnitY);

  12. And projection?
  • The projection matrix determines how the scene is projected into 2D
  • It can be specified with
    • Field of view in radians
    • Aspect ratio of the screen (width / height)
    • Near plane location
    • Far plane location

     Matrix projection = Matrix.CreatePerspectiveFieldOfView(0.9f,
         (float)GraphicsDevice.Viewport.Width / GraphicsDevice.Viewport.Height,
         0.1f, 100f);

  13. Drawing
  • Drawing the model is done by drawing all the individual meshes that make it up
  • Each mesh has a series of effects
  • Effects are used for texture mapping, visual appearance, and other things
  • They need to know the world, view, and projection matrices

     foreach (ModelMesh mesh in model.Meshes)
     {
         foreach (BasicEffect effect in mesh.Effects)
         {
             effect.World = world;
             effect.View = view;
             effect.Projection = projection;
         }
         mesh.Draw();
     }
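
  In practice, this loop typically lives in the Draw() method. A minimal sketch (the EnableDefaultLighting() call is optional but gives the model simple shading so it doesn't render flat):

     protected override void Draw(GameTime gameTime)
     {
         GraphicsDevice.Clear(Color.CornflowerBlue);

         foreach (ModelMesh mesh in model.Meshes)
         {
             foreach (BasicEffect effect in mesh.Effects)
             {
                 effect.EnableDefaultLighting();  // simple built-in light rig
                 effect.World = world;
                 effect.View = view;
                 effect.Projection = projection;
             }
             mesh.Draw();
         }

         base.Draw(gameTime);
     }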

  14. Backface culling
  • I did not properly describe an important optimization done in the Geometry Stage: backface culling
  • Backface culling removes all polygons that are not facing toward the screen
  • A simple dot product is all that is needed
  • This step is done in hardware in MonoGame and OpenGL
  • You just have to turn it on (see the sketch below)
  • Beware: if you screw up your normals, polygons could vanish
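
  In MonoGame, culling is controlled through the device's rasterizer state. A minimal sketch (CullCounterClockwise is the default, so this is usually already on):

     // Cull triangles whose vertices appear counter-clockwise on screen
     // (MonoGame's default). Use RasterizerState.CullNone to disable
     // culling entirely, e.g., while debugging vanishing polygons.
     GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;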

  15. Graphics rendering pipeline
  • For reasons of API design, practical top-down problem solving, hardware design, and efficiency, rendering is described as a pipeline
  • This pipeline contains three conceptual stages:
    • Application Stage
    • Geometry Stage
    • Rasterizer Stage

  16. Student Lecture: Rasterizer Stage

  17. Rasterizer Stage

  18. Rasterizer Stage
  • The goal of the Rasterizer Stage is to take all the transformed geometric data and set colors for all the pixels in screen space
  • Doing so is called:
    • Rasterization
    • Scan conversion
  • Note that the word pixel is a portmanteau of "picture element"

  19. More pipelines
  • As you should expect, the Rasterizer Stage is also divided into a pipeline of several functional stages:
    • Triangle Setup
    • Triangle Traversal
    • Pixel Shading
    • Merging

  20. Triangle Setup
  • Data for each triangle is computed
  • This could include normals
  • This is boring anyway because fixed-operation (non-customizable) hardware does all the work

  21. Triangle Traversal
  • Each pixel whose center is overlapped by a triangle must have a fragment generated for the part of the triangle that overlaps the pixel
  • The properties of this fragment are created by interpolating data from the vertices (see the sketch below)
  • Again, boring, fixed-operation hardware does this
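
  To make the interpolation concrete, here is a hypothetical sketch of how a fragment's attribute (say, a color) can be interpolated from a triangle's three vertices using barycentric coordinates. The function names and structure are illustrative, not MonoGame API; only the Vector2/Vector3 types come from Microsoft.Xna.Framework:

     // 2D cross product: twice the signed area of the triangle (origin, u, v)
     static float Cross2D(Vector2 u, Vector2 v) => u.X * v.Y - u.Y * v.X;

     // Interpolate a per-vertex color at point p inside triangle (a, b, c)
     static Vector3 InterpolateColor(Vector2 p, Vector2 a, Vector2 b, Vector2 c,
                                     Vector3 colorA, Vector3 colorB, Vector3 colorC)
     {
         float area = Cross2D(b - a, c - a);       // twice the triangle's signed area
         float wA = Cross2D(b - p, c - p) / area;  // weight of vertex a
         float wB = Cross2D(c - p, a - p) / area;  // weight of vertex b
         float wC = 1f - wA - wB;                  // the weights sum to 1
         return wA * colorA + wB * colorB + wC * colorC;
     }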

  22. Pixel Shading
  • This is where the magic happens
  • Given the data from the other stages, per-pixel shading (coloring) happens here
  • This stage is programmable, allowing many different shading effects to be applied
  • Perhaps the most important effect is texturing or texture mapping

  23. Texturing
  • Texturing is gluing a (usually) 2D image onto a polygon
  • To do so, we map texture coordinates onto polygon coordinates
  • Pixels in a texture are called texels
  • This is fully supported in hardware
  • Multiple textures can be applied in some cases
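
  As a hypothetical illustration of what the hardware does with texture coordinates, here is a nearest-neighbor texel lookup: (u, v) in [0, 1] is scaled into the texture's pixel grid. This is illustrative, not the MonoGame texturing API (Math is System.Math):

     // Nearest-neighbor texel lookup. texels is a width x height array
     // of colors representing the texture image.
     static Color SampleNearest(Color[,] texels, float u, float v)
     {
         int width = texels.GetLength(0);
         int height = texels.GetLength(1);
         int x = Math.Min((int)(u * width), width - 1);   // clamp so u = 1.0
         int y = Math.Min((int)(v * height), height - 1); // maps to the last texel
         return texels[x, y];
     }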

  24. Merging
  • The final screen data containing the colors for each pixel is stored in the color buffer
  • The merging stage is responsible for merging the colors from each of the fragments from the pixel shading stage into a final color for a pixel
  • Deeply linked with merging is visibility: the final color of the pixel should be the one corresponding to a visible polygon (and not one behind it)
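
  In MonoGame, how a fragment's color is combined with what is already in the color buffer is configured through the device's blend state. A minimal sketch:

     // Opaque drawing: the fragment's color simply replaces the value
     // already in the color buffer
     GraphicsDevice.BlendState = BlendState.Opaque;

     // Alpha blending: the fragment's color is mixed with the existing
     // color based on its alpha value (for partially transparent objects)
     GraphicsDevice.BlendState = BlendState.AlphaBlend;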

  25. Z-buffer
  • To deal with the question of visibility, most modern systems use a Z-buffer or depth buffer
  • The Z-buffer keeps track of the z-values for each pixel on the screen
  • As a fragment is rendered, its color is put into the color buffer only if its z-value is closer than the current value in the Z-buffer (which is then updated)
  • This is called a depth test
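
  The depth test itself is a small piece of logic. A hypothetical sketch of what the hardware does per fragment, assuming smaller z means closer to the camera:

     // Per-fragment depth test against the Z-buffer
     static void WriteFragment(float[,] zBuffer, Color[,] colorBuffer,
                               int x, int y, float z, Color color)
     {
         if (z < zBuffer[x, y])          // closer than what is stored?
         {
             zBuffer[x, y] = z;          // update the depth buffer
             colorBuffer[x, y] = color;  // update the color buffer
         }
         // otherwise the fragment is hidden behind something and is discarded
     }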

  26. Pros and cons of the Z-buffer
  • Pros
    • Polygons can usually be rendered in any order
    • Universal hardware support is available
  • Cons
    • Partially transparent objects must be rendered in back-to-front order (painter's algorithm), as sketched below
    • Completely transparent values can mess up the Z-buffer unless they are checked
    • Z-fighting can occur when two polygons have the same (or nearly the same) z-values
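
  A hypothetical sketch of the back-to-front ordering for transparent objects: sort by distance from the camera, farthest first, then draw. The transparentObjects list and its Position and Draw members are illustrative names, not MonoGame API:

     Vector3 cameraPosition = new Vector3(0, 0, 7);

     // Sort farthest-first by squared distance from the camera
     transparentObjects.Sort((a, b) =>
         Vector3.DistanceSquared(cameraPosition, b.Position)
             .CompareTo(Vector3.DistanceSquared(cameraPosition, a.Position)));

     foreach (var obj in transparentObjects)
         obj.Draw(world, view, projection);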

  27. More buffers
  • A stencil buffer can be used to record a rendered polygon
    • This stores the part of the screen covered by the polygon and can be used for special effects
  • Frame buffer is a general term for the set of all buffers
  • Different images can be rendered to an accumulation buffer and then averaged together to achieve special effects like blurring or antialiasing
  • A back buffer allows us to render off screen to avoid popping and tearing

  28. Final words on the pipeline
  • This pipeline is focused on interactive graphics
  • Micropolygon pipelines are usually used for film production
  • Predictive rendering applications usually use ray tracing renderers
  • The old model was the fixed-function pipeline, which gave little control over the application of shading functions
  • The book focuses on programmable GPUs, which allow all kinds of tricks to be done in hardware

  29. Quiz

  30. Upcoming

  31. Next time…
  • GPU architecture
  • Programmable shading

  32. Reminders
  • Read Chapter 3
  • Start on Assignment 1, due next Friday, September 13 by 11:59
  • Keep working on Project 1, due Friday, September 27 by 11:59
  • Amazon Alexa Developer meetup
    • Thursday, September 12 at 6 p.m.
    • Here at The Point
    • Hear about new technology
    • There might be pizza…

  33. Internship opportunities
  • Want a Williams-Sonoma internship?
    • Visit http://wsisupplychain.weebly.com/
  • Interested in coaching 7-18 year old kids in programming?
    • Consider working at theCoderSchool
  • For more information:
    • Visit https://www.thecoderschool.com/locations/westerville/
    • Contact Kevin Choo at kevin@thecoderschool.com
    • Ask me!
