
Advanced Programming for 3D Applications CE00383-3

Game Architecture

Presentation Transcript


  1. Advanced Programming for 3D Applications CE00383-3 Game Architecture

  2. Program Flow • Standard framework:

    main() {
        begin
        acquire resources      // models, textures, sounds, etc.
        setup state            // camera, alpha, lighting, etc.
        update                 // apply changes to your world
        render                 // project your world onto the screen
        release resources      // free resources previously acquired
        fin
        return 0;
    }

  3. Program Flow • The main loop (setup state → update → render) runs once per frame. • Resource acquisition and release form an infrequent outer loop, entered only when something important changes (popup, change of resolution, monitor, etc.): begin → acquire resources → [main loop] → release resources → fin.

  4.

    int main(void)
    {
        CGame Game;

        if (Game.Start())
        {
            printf("\nGame started\n");

            while (Game.GetRunningStatus())
            {
                Game.GameLoop();
            }
        }
        else
            printf("Startup Failed\n");

        printf("\nGame stopping\n\n");
        Game.Stop();
        return 0;
    }

  5. Game Class • There is a single instance of the game class, CGame, containing the game's methods. • CGame::Start() executes all of the required start-up code, while CGame::Stop() performs any necessary clean-up at the end of the game's execution. • CGame::GameLoop() is the game loop code that is executed every frame; this contains the state machine implementation. • CGame::GetRunningStatus() returns a Boolean to indicate whether the game is running (true) or has ended (false).

  6. Basic Framework

    enum GAMESTATE {SPLASH, MENU, LEVEL1, EXIT};

    class CGame
    {
    public:
        CGame(void);
        ~CGame(void);

        bool Start(void);
        void GameLoop(void);
        void GameSplash(void);
        void GameMenu(void);
        void GameLevel1(void);
        void GameExit(void);
        void Stop(void);

        static void SignalHandler(int Signal);

        inline bool GetRunningStatus(void)
            {return m_bRunning;}
        static inline void SetRunningStatus(bool Status)
            {m_bRunning = Status;}

    protected:
        static bool m_bRunning;
        PS2SpriteT Sprite;
        CTexture Game;
        CFont Font;
        CTexture FontTex;
        enum GAMESTATE GameState;
    };

  7. The CGame::Start() method allocates memory, configures the signal handler, configures the control pads, initialises the screen clear colour, loads the font textures and then loads the game textures. • CGame::Start() returns true if all of the initialisation code is successful. • CGame::GameLoop() contains the state machine implementation.

  8.

    void CGame::GameLoop()
    {
        pad_update(PAD_0);

        if ((pad[0].buttons & PAD_START) && (pad[0].buttons & PAD_SELECT))
            SetRunningStatus(false);

        switch (GameState)
        {
            case SPLASH:
                GameSplash();
                break;
            case MENU:
                GameMenu();
                break;
            case LEVEL1:
                GameLevel1();
                break;
            case EXIT:
                GameExit();
                break;
            default:
                printf("Undefined Game State\n");
                SetRunningStatus(false);
                break;
        }
    }

  9. Game Loop • CGame::GameLoop() is used to switch between the various game states based on the enumerated variable GameState. • The organisation of the state machine can be adapted to meet the requirements of the application. • CGame::GameLoop() also contains exit code triggered by pressing Start and Select together on controller pad 0 – this is very useful when debugging, where the developer does not always wish to work through a menu system.

  10. Game Engine Architecture • Game content: control the presentation of game content • Geometry and texture • Script • Sound • Simulation (i.e. physics) • AI • Engine subsystems (from the architecture diagram): Geometry Management, Animation and Simulation, Sound, Network, Script Interpreter, AI, Core Library

  11. 3D Rendering systems • The modeling-rendering paradigm • Graphics pipeline

  12. Geometry Management • Move as many polygons as possible as quickly as possible • Apply Texture (usually multiple textures) • Lighting • Color and Blending • Depth sort (ZBuffer, BSP tree) • Bind shader programs (DX9) • … • Control each rendering pass • Cull out invisible geometry • Meshes are exported from the art tool • Usually 3D Studio MAX or Maya • Some additional processing is done • Level-of-detail simplification • Polygons organized into engine-friendly structures • Vertex arrays • Triangle strips • Scene graph • Texture compression • Colors checked for “illegal” values

  13. Vertex Arrays • Vertex Buffers (DX) / Vertex Arrays (OGL) • An array of per-vertex information • x, y, z: position • r, g, b: color • i, j, k: normal (for lighting and physics) • u, v, w: texture (w optional) • Multitexturing would append other (u, v, w) coordinates • Other stuff (tangent, binormal for bump mapping; other application-specific per-vertex information) • Each array has a primitive type • Points • Lines • Triangles • Triangle Strip or Fan • Quadrilaterals • Quad strip • Polygons
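A minimal sketch of such a per-vertex layout and array, assuming a simple position/colour/normal/texture format; the struct and enum names are illustrative, not tied to any particular API:

    // Illustrative per-vertex layout (names are assumptions, not API types).
    struct Vertex
    {
        float x, y, z;    // position
        float r, g, b;    // colour
        float i, j, k;    // normal (for lighting and physics)
        float u, v;       // texture coordinates (a third w coordinate is optional)
    };

    // A vertex array is a contiguous block of such vertices,
    // paired with a primitive type (points, lines, triangles, ...).
    enum PrimitiveType { POINTS, LINES, TRIANGLES, TRIANGLE_STRIP };

    struct VertexArray
    {
        Vertex*       vertices;
        unsigned int  count;
        PrimitiveType type;
    };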

  14. Vector operations • Vectors can specify location and direction. • Vectors can be modified by being multiplied by matrices, using affine transformations (rotation, translation, scaling and shear). • The steps in our graphics pipeline are matrix multiplications.
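As a sketch of the point above, a 4x4 matrix applied to a homogeneous vector performs an affine transformation. The Vec4/Mat4x4 types and the Transform helper are illustrative, not from the slides:

    // Hypothetical sketch: applying a 4x4 affine transform to a point.
    // The matrix is stored row-major; w = 1 marks a position (not a direction).
    struct Vec4   { float x, y, z, w; };
    struct Mat4x4 { float m[4][4]; };

    Vec4 Transform(const Mat4x4& M, const Vec4& v)
    {
        Vec4 r;
        r.x = M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w;
        r.y = M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w;
        r.z = M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w;
        r.w = M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w;
        return r;
    }
    // Rotation, translation, scaling and shear are all particular choices of M;
    // chaining pipeline stages is chaining matrix multiplications.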

  15. 3D Primitives • Collections of vertices that form a single 3-D entity • Simplest primitive: a collection of points, or point list. • Usually polygons, of which the simplest is a triangle. • Any three vertices of a triangle are guaranteed to be coplanar; rendering nonplanar polygons is inefficient. You can combine triangles to form complex polygons and meshes.

  16. 3D Primitives • 3-dimensional shapes can be composed of 2D (coplanar) primitives in space (usually triangles). • Each triangle (or coplanar polygon) on a 3D shape is called a “face”. • Smooth surfaces can be approximated with enough primitives and correct use of shading.

  17. Viewing • We need to project 3D objects on a 2D surface. • This is done by a series of matrix multiplications (transforms) in our graphics pipeline. • We need to specify what part of the world we want to see. Various camera operations all affect the “Viewing Frustum”
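A sketch of how the viewing frustum is commonly encoded as a perspective projection matrix (OpenGL-style conventions assumed; the Perspective helper is illustrative and reuses the Mat4x4 struct from the transform sketch above):

    // Illustrative sketch: building a perspective projection matrix from a
    // vertical field of view, aspect ratio and near/far planes. These four
    // parameters define the viewing frustum.
    #include <cmath>

    Mat4x4 Perspective(float fovYRadians, float aspect, float zNear, float zFar)
    {
        float f = 1.0f / std::tan(fovYRadians * 0.5f);
        Mat4x4 P = {};                                     // all zeros
        P.m[0][0] = f / aspect;
        P.m[1][1] = f;
        P.m[2][2] = (zFar + zNear) / (zNear - zFar);
        P.m[2][3] = (2.0f * zFar * zNear) / (zNear - zFar);
        P.m[3][2] = -1.0f;                                 // produces the perspective divide
        return P;
    }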

  18. The Viewing Frustum

  19. Normal Vectors • Every face has a perpendicular (normal) vector that defines the front of the face. • The direction depends on the order in which the vertices are traversed (the winding order): counter-clockwise for a right-handed system, and clockwise for a left-handed system (like Direct3D).
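A sketch of how a face normal is typically computed from the triangle's winding order, using an illustrative Vec3 type:

    // The face normal of triangle (v0, v1, v2) is the cross product of two
    // edge vectors; the winding order decides which way the normal points.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 FaceNormal(const Vec3& v0, const Vec3& v1, const Vec3& v2)
    {
        Vec3 a = { v1.x - v0.x, v1.y - v0.y, v1.z - v0.z };
        Vec3 b = { v2.x - v0.x, v2.y - v0.y, v2.z - v0.z };
        Vec3 n = { a.y * b.z - a.z * b.y,
                   a.z * b.x - a.x * b.z,
                   a.x * b.y - a.y * b.x };
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }   // normalise
        return n;
    }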

  20. Normal Vectors • Normal vectors are used for shading, lighting, and texturing. • Direct3D uses the vertex normals (Gouraud shading)

  21. The Phong Light Model • Light in a real-time 3D system is only a rough approximation of reality. • We have a few light sources that interact directly with each surface mathematically. • Lights are ambient lights or direct lights (which come in point lights, directional lights, and spotlights).

  22. Lighting • Global illumination = ambient light + diffuse light + specular light + emissive light. • [Slide images: the object colour rendered with ambient only, ambient + diffuse, and ambient + diffuse + specular lighting.]
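A minimal sketch of the lighting sum above for one directional light, evaluated per colour channel with a Blinn-Phong style highlight. The Colour struct, Dot helper and half-vector H are illustrative assumptions; Vec3 is the struct from the normal-vector sketch above:

    #include <algorithm>
    #include <cmath>

    struct Colour { float r, g, b; };

    float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // N = surface normal, L = direction to the light, H = half vector between
    // the light and view directions; all assumed normalised.
    Colour Shade(const Colour& ambient, const Colour& diffuse,
                 const Colour& specular, const Colour& emissive,
                 const Vec3& N, const Vec3& L, const Vec3& H, float shininess)
    {
        float d = std::max(0.0f, Dot(N, L));                       // diffuse term
        float s = std::pow(std::max(0.0f, Dot(N, H)), shininess);  // specular term
        Colour c;
        c.r = ambient.r + diffuse.r * d + specular.r * s + emissive.r;
        c.g = ambient.g + diffuse.g * d + specular.g * s + emissive.g;
        c.b = ambient.b + diffuse.b * d + specular.b * s + emissive.b;
        return c;                                                  // clamp to [0, 1] before use
    }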

  23. Materials • Materials describe how polygons reflect light or appear to emit light in a 3-D scene. • How they reflect ambient and diffuse light • What their specular highlights look like • Whether the polygons appear to emit light

  24. Texture Mapping • Adds a new level of realism by adding bitmaps to your objects • Each color element in a texture is called a texel, and the last stage of the rendering pipeline maps texels directly to pixels. • Texels rarely correspond to pixels, so texture filtering is used to approximate pixel values.

  25. Texture Filtering • More complex filters give better results but take more time: • Nearest-point: no filtering, the closest texel is copied. • Linear filtering: texels around the current pixel are averaged. • Anisotropic filtering: uses pixel elongation to map deformations better. • Mipmaps: create smaller versions of your texture to be used at different resolutions; can be used in conjunction with the above.
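A sketch of linear (bilinear) filtering as described above: the four texels around the sample point are blended by their fractional distances. The Texture2D layout, Texel accessor and Lerp helper are assumptions; Colour is the struct from the lighting sketch:

    #include <algorithm>
    #include <cmath>

    struct Texture2D
    {
        int     width, height;
        Colour* texels;                                    // row-major texel data
        Colour  Texel(int x, int y) const { return texels[y * width + x]; }
    };

    Colour Lerp(const Colour& a, const Colour& b, float t)
    {
        return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t, a.b + (b.b - a.b) * t };
    }

    Colour SampleBilinear(const Texture2D& tex, float u, float v)   // u, v in [0, 1)
    {
        float x  = u * tex.width  - 0.5f;
        float y  = v * tex.height - 0.5f;
        int   x0 = (int)std::floor(x), y0 = (int)std::floor(y);
        float fx = x - x0,             fy = y - y0;
        auto clampX = [&](int i){ return std::max(0, std::min(tex.width  - 1, i)); };
        auto clampY = [&](int i){ return std::max(0, std::min(tex.height - 1, i)); };
        Colour top    = Lerp(tex.Texel(clampX(x0), clampY(y0)),
                             tex.Texel(clampX(x0 + 1), clampY(y0)), fx);
        Colour bottom = Lerp(tex.Texel(clampX(x0), clampY(y0 + 1)),
                             tex.Texel(clampX(x0 + 1), clampY(y0 + 1)), fx);
        return Lerp(top, bottom, fy);
    }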

  26. Meshes • Meshes are data structures that hold several primitives to create complex shapes • You can associate vertices, faces, materials, textures, and more to a mesh • Meshes can be created with a modeling software (like 3D Studio) and then exported to X file format and loaded
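A hypothetical mesh structure along the lines described above: a shared vertex array plus an index list describing triangle faces, with a material and texture reference. Vertex, Colour and Texture2D are the structs from the earlier sketches:

    #include <vector>

    struct Material
    {
        Colour ambient, diffuse, specular, emissive;   // as in the Materials slide
        float  shininess;
    };

    struct Mesh
    {
        std::vector<Vertex>       vertices;   // shared vertex data
        std::vector<unsigned int> indices;    // every 3 indices form one triangle face
        Material                  material;
        Texture2D*                texture;
    };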

  27. Level of Detail (LOD) • Models use the most polygons in a typical video game • Method 1: store low, medium and high polygon versions of a model, choose model version depending on the distance the model is from the viewer • Used in virtually all modern games • Models can “pop” when they change levels • Method 2: smoothly morph between high polygon and low polygon versions • Progressive Meshes
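A sketch of Method 1: select a model version by distance from the viewer. The three-mesh layout and the distance thresholds are assumptions; Mesh and Vec3 are from the earlier sketches:

    #include <cmath>

    struct LODModel
    {
        Mesh high, medium, low;               // three versions of the same model
    };

    const Mesh& SelectLOD(const LODModel& model, const Vec3& modelPos, const Vec3& eyePos)
    {
        float dx = modelPos.x - eyePos.x;
        float dy = modelPos.y - eyePos.y;
        float dz = modelPos.z - eyePos.z;
        float dist = std::sqrt(dx*dx + dy*dy + dz*dz);
        if (dist < 20.0f) return model.high;    // close: full detail
        if (dist < 60.0f) return model.medium;  // mid range
        return model.low;                       // far away: few polygons
    }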

  28. Spatial Subdivision (1/3) • For action games you need spatial subdivision to quickly figure out what objects are interacting with each other and what objects are visible to the viewer • One of the most important aspects of a video game engine is how fast it can operate with huge numbers of polygons and interacting objects • For strategy games the number of objects and the camera location may be more restricted • Subdivide world into a 2D grid
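A sketch of the 2D grid idea from the last bullet: each cell holds the objects whose positions fall inside it, so interaction and visibility queries only have to look at nearby cells. The cell count, world extent and GameObject type are assumptions:

    #include <vector>

    struct GameObject;                          // whatever the game stores per object

    struct Grid2D
    {
        static const int CELLS = 64;            // 64 x 64 cells over the world (x, z plane)
        float worldMin = -512.0f;               // assumed world extent
        float cellSize =  16.0f;                // assumed cell size

        std::vector<GameObject*> cells[CELLS][CELLS];

        void Insert(GameObject* obj, float x, float z)
        {
            int cx = (int)((x - worldMin) / cellSize);
            int cz = (int)((z - worldMin) / cellSize);
            cx = cx < 0 ? 0 : (cx >= CELLS ? CELLS - 1 : cx);   // clamp to the grid
            cz = cz < 0 ? 0 : (cz >= CELLS ? CELLS - 1 : cz);
            cells[cx][cz].push_back(obj);
        }
    };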

  29. Spatial Subdivision (2/3) • Typical techniques: • Octrees: put a cube around the world, then split that cube into eight pieces. Recursively split sections that have many objects in them • BSP trees: pick a plane. Everything on one side of the plane goes in the left node, everything on the other side goes in the right node; keep splitting nodes that have many objects (a generalisation of the octree) • Portals: divide the level into sectors; sectors are connected by portals (such as a door) • Potentially Visible Set (PVS): • If all of the level geometry is static then we can solve the visibility problem when we create the level • When you compile a level for Quake III it creates a BSP tree for the level and then figures out which BSP nodes can see each other

  30. Spatial Subdivision (3/3) • Occlusion Culling: • On newer graphics cards you can query the depth buffer to ask if an object (or an entire BSP node) is completely occluded • Great for scenes such as a forest, where many trees combine to occlude objects but no single tree occludes a large area • This can be used at run time, so there is no need to compile static geometry • There are hints that this is used in Doom III • General rules of thumb: • Most current engines stop subdividing level geometry after there are about 5000 polygons in one leaf node • Spatial subdivision for dynamic models is facilitated by keeping track of a model's bounding sphere or bounding box • Level geometry is becoming much less of an issue compared to highly tessellated models

  31. Illumination (1/3) • Standard Lighting: • Uses the fixed-function pipeline that the API provides by default; light is calculated at the vertices and interpolated across the polygon • This only looks decent with highly tessellated models • Need to do something else for low-tessellation models

  32. Illumination (2/3) • Light Maps: • For low-tessellation models, lighting is done using texture maps instead of vertex-based lighting • Coarsely tessellated geometry often includes static geometry like walls and floors • Quake II uses a radiosity engine to compute light maps when you compile a level • Light maps are multiplied with the base texture (e.g. a brick texture) to create convincing imagery • Only useful with static geometry

  33. Illumination (3/3) • Per-Pixel Lighting: • If we compute the lighting calculation per pixel then we don't need light maps • We can get pixels that are close to the quality of ray tracing by using pixel shaders • To do the calculations per pixel we need not only a texture map (diffuse colour), but also a specular map (shininess) and a normal map (normals) • To create these maps, Doom III starts from 1,000,000-polygon models and reduces them to 5,000-polygon models with texture, specular and normal maps

  34. Shadows (1/2) • With new hardware, shadows can be calculated in real time • Shadow Maps • Render the scene from the light's point of view, then use the projective texture trick to see if the current pixel is seen by the light • Can suffer from aliasing; used in Halo

  35. Shadows (2/2) • Stencil Shadows • Find silhouette edges for all models and create new geometry that represents the volume shadowed by that model (called shadow volume) • New geometry can require big fill rate • Used in Halo 2, Doom III

  36. Camera Motion • First-person games often have the camera fixed at the eye point • Games like Metroid Prime switch to third-person in certain situations • Third-person sports games often have a fixed camera in one position for playing and then let you move the camera around for instant replays • Third-person action games often have camera follow over the shoulder of your character • Resident Evil fixes the camera for a given room, creating more cinematic camera angles • Games like Devil May Cry mix floating camera movement with more cinematic control by restricting the camera to travel along a dolly in certain areas

  37. Camera Motion • Strategy games often keep camera fairly high above the world looking down at a fixed angle • This allows for many visibility simplifications • Camera movement is critical to keeping the player engrossed in the game • Bad camera movement can make a video game unplayable • To smoothly turn between camera orientations you need to use quaternions • Quaternions are 4 element vectors that are good for representing rotations • Interpolating between rotation matrices leads to strange artifacts, including gimbal lock
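A sketch of spherical linear interpolation (slerp) between two unit quaternions, which is the usual way to turn smoothly between camera orientations; the Quat struct and Slerp function are illustrative:

    #include <cmath>

    struct Quat { float w, x, y, z; };

    Quat Slerp(Quat a, Quat b, float t)
    {
        float cosTheta = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
        if (cosTheta < 0.0f)                     // take the shorter arc
        {
            cosTheta = -cosTheta;
            b.w = -b.w; b.x = -b.x; b.y = -b.y; b.z = -b.z;
        }
        float ka, kb;
        if (cosTheta > 0.9995f)                  // nearly parallel: fall back to lerp
        {
            ka = 1.0f - t; kb = t;
        }
        else
        {
            float theta = std::acos(cosTheta);
            float s     = std::sin(theta);
            ka = std::sin((1.0f - t) * theta) / s;
            kb = std::sin(t * theta) / s;
        }
        Quat r = { ka*a.w + kb*b.w, ka*a.x + kb*b.x, ka*a.y + kb*b.y, ka*a.z + kb*b.z };
        return r;                                // renormalise before use if the lerp branch was taken
    }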

  38. Animation • Keyframed animation • Interpolate between different poses of a model, gives exact control to artists • Very fast, but requires a lot of space to store the entire model for every frame of an animation • Used in Quake III

  39. Animation • Skeletal Animation • We store animations for the skeleton of the model, not the entire skin • Each vertex on the “skin” of the model is connected to one or more bones • Difficult to get to look just right, as some animations will make certain patches of skin (like elbows) pop out at weird angles • Consumes much less memory, especially for highly tessellated models • Easier to do next-generation kinematic effects • Allow models to “balance” • Allow models to react “correctly” to physical impulses • Used in Doom III
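A sketch of linear blend skinning as described above: each skin vertex is transformed by the bones it is attached to and the results are blended by the bone weights. The SkinnedVertex layout is an assumption; Vec4, Mat4x4 and Transform are from the earlier sketches:

    struct SkinnedVertex
    {
        Vec4  position;                         // bind-pose position, w = 1
        int   bone[4];                          // indices of up to 4 influencing bones
        float weight[4];                        // weights summing to 1; unused slots get 0
    };

    Vec4 SkinVertex(const SkinnedVertex& v, const Mat4x4* boneMatrices)
    {
        Vec4 out = { 0.0f, 0.0f, 0.0f, 0.0f };
        for (int i = 0; i < 4; ++i)
        {
            Vec4 p = Transform(boneMatrices[v.bone[i]], v.position);
            out.x += v.weight[i] * p.x;
            out.y += v.weight[i] * p.y;
            out.z += v.weight[i] * p.z;
            out.w += v.weight[i] * p.w;
        }
        return out;
    }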

  40. SFX • Smoke, Fire: • After the world is rendered a flat quad polygon is drawn, centered around the fire or smoke and facing towards the eye point • Polygons that always face the viewer are called billboards • The quad is textured using an appropriate animated texture that has a transparent background • Bullet Holes: • Small billboards drawn on top of static geometry • Be careful that your bullet holes do not stick out past the side of the wall (tricky) • Sparks: • Uses a particle system, useful for many small objects that need to move but don’t need to interact with other objects
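A sketch of how a camera-facing billboard quad can be built around a particle or effect centre using the camera's right and up vectors; the function and parameter names are illustrative, and Vec3 is from the earlier sketches:

    // Build a quad around 'centre' spanned by the camera's right/up axes,
    // so the quad always faces the viewer.
    void BuildBillboard(const Vec3& centre, float halfSize,
                        const Vec3& camRight, const Vec3& camUp,
                        Vec3 corners[4])
    {
        Vec3 r = { camRight.x * halfSize, camRight.y * halfSize, camRight.z * halfSize };
        Vec3 u = { camUp.x * halfSize,    camUp.y * halfSize,    camUp.z * halfSize };
        corners[0] = { centre.x - r.x - u.x, centre.y - r.y - u.y, centre.z - r.z - u.z };
        corners[1] = { centre.x + r.x - u.x, centre.y + r.y - u.y, centre.z + r.z - u.z };
        corners[2] = { centre.x + r.x + u.x, centre.y + r.y + u.y, centre.z + r.z + u.z };
        corners[3] = { centre.x - r.x + u.x, centre.y - r.y + u.y, centre.z - r.z + u.z };
    }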

  41. SFX • Trail of Smoke: • A combination of the two previous effects: for each particle in the particle system render a small smoke billboard • Sky Box: • Gives the impression of an infinitely far away horizon or sky • Surprisingly simple • Draw the sky box before any other world geometry: • Temporarily set the camera at the origin • Temporarily disable writing to the z-buffer • Draw a textured box around the origin (because the z-buffer is disabled it doesn't really matter what size of box you choose) • If you have a box around your head you can't tell how big the box is. Since it doesn't move when the character translates, it appears infinitely large.

  42. SFX • Heads Up Display (HUD): • Often drawn using an orthographic projection, often uses transparency to look keen • Light Halos: • If a light (like a street light) is visible draw a billboard around the light that represents the glow or halo emanating from the light source

  43. AI • Usually very hacky • No neural networks around here • Many tweaks to get characters to act believably according to a host of different variables • Character actions are often a very tenuous balance between scripted events and actual artificial intelligence • The A* path-planning algorithm is used to go from point A to point B quickly while avoiding obstacles • Group behavior (such as flocking) can add realism • Black & White has creatures that learn from you, probably the most sophisticated AI to date

  44. Physics • Physics is tough • Usually very hacky in video games; games often use a bounding cylinder for the player to see if they hit walls or the floor • Two problems: detection of physical events (i.e., “the missile hits Tank-NPC”), and the appropriate response (celebrate destruction of irritating boss) • Exploding buildings or flying barrels are almost always scripted events • Physics Done Right • Grand Theft Auto 3 • Flying cars off of ramps, taking turns at 40 mph, and hitting pedestrians feels just like it does in real life • Physics Done Too Right (aka Wrong) • Trespasser (1998) • The physics engine was so advanced that you would slip and fall while walking across a room, and your weapon would fly out of your hand if you walked next to a door • Good physics doesn't make a good game
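A sketch of the kind of cheap collision query described above: testing a bounding sphere (e.g. the player) against an axis-aligned box (a wall or floor section). The AABB struct and function name are assumptions; Vec3 is from the earlier sketches:

    #include <algorithm>

    struct AABB { Vec3 min, max; };

    bool SphereIntersectsBox(const Vec3& centre, float radius, const AABB& box)
    {
        // Clamp the sphere centre to the box to find the closest point on it.
        float cx = std::max(box.min.x, std::min(centre.x, box.max.x));
        float cy = std::max(box.min.y, std::min(centre.y, box.max.y));
        float cz = std::max(box.min.z, std::min(centre.z, box.max.z));
        float dx = centre.x - cx, dy = centre.y - cy, dz = centre.z - cz;
        return dx*dx + dy*dy + dz*dz <= radius * radius;   // overlap if within the radius
    }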

  45. Physics • Physics Research • Getting rigid body physics right (very complicated) • Timewarp rigid body physics (even more complicated but faster to calculate) • Plausible animation (how things often look more real if they differ slightly from the exact solution)

  46. Examples in Games Metroid Prime • Heads Up Display (including 3D Map in the upper right) • Standard first person view (hence the genre of first person shooters)

  47. Examples in Games Metroid Prime • Metroid Prime transitions to third person view in some scenarios (even has strange 2D point of view for some puzzle areas)

  48. Examples in Games Resident Evil • Cinematic point of view • The light maps (such as the light coming from the window on the floor) will turn off and on depending on lightning

  49. Examples in Games Quake III • This shows the extensive use of light maps; shadows are computed when compiling the level’s static geometry • players that walk over shadows on the floor are not shadowed

  50. Examples in Games Halo • Billboard used for laser beam • Billboard used for gun muzzle flash • Particle system used for laser sparks
