## Why Triangle Meshes?

* Simplest type of polygon
* Always planar (3 vertices always define a plane)
* Remain triangles under most transformations (at worst degenerating into a line segment)
* Most graphics hardware is designed around triangles

## Triangle edges and face normal

* **e**12 = **p**2 - **p**1
* **e**13 = **p**3 - **p**1
* **e**23 = **p**3 - **p**2
* **N** = (**e**12 × **e**13) / |**e**12 × **e**13| (cross product, then divide by the vector length to normalize)
* ![Triangle Plane and Face Normal](tri_face_normal.png)
* Back-facing triangles are culled (not drawn) based on winding order: whether the vertices are defined in clockwise or counter-clockwise order (see the sketch below)
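A minimal sketch of the face-normal formula above, assuming a hypothetical `Vec3` type (with counter-clockwise winding of p1, p2, p3, **N** points out of the front face, which is what back-face culling keys on):

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};

// Cross product of two edge vectors.
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// N = (e12 x e13) / |e12 x e13|
Vec3 faceNormal(const Vec3& p1, const Vec3& p2, const Vec3& p3) {
    Vec3 n = cross(p2 - p1, p3 - p1);
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len == 0.0f) return {0, 0, 0};  // degenerate: triangle collapsed to a line segment
    return {n.x / len, n.y / len, n.z / len};
}
```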
## Coordinate Spaces: Model Space/Local Space/Object Space

* Origin/center point (0,0,0) is the center of the current model/object, or some other point defined relative to that object
* Axes Front, Left, and Up (**F**, **L**, **U**) are defined as basis vectors **i**, **j**, and **k**, e.g. **L** = **i**, **U** = **j**, **F** = **k**

## Coordinate Spaces: World Space

* Origin (0,0,0) is relative to the total/entire drawing scene
* Model to World matrix, a.k.a. the world matrix
* ![Model to World Matrix](model_to_world_mat.png)
* The upper-left 3x3 matrix RS rotates and scales; the bottom row vector **t**m is the translation from model space, expressed in world space
* The Model to World matrix can also be written using the model space unit vectors **i**, **j**, **k** from above, expressed in world space:
* ![World Matrix using model i,j,k](model_to_world_ijk.png)
* Transforming a vertex from model to world space using this world matrix (row vector times matrix):
* **v**W = **v**M **M**W
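A sketch of that row-vector transform; the `Mat4`/`Vec4` types and row-major storage are illustrative assumptions, not from any particular library:

```cpp
#include <array>

// 4x4 matrix, row-major; with row vectors, the rotate/scale part is the
// upper-left 3x3 and the translation t_m sits in the bottom row.
using Mat4 = std::array<std::array<float, 4>, 4>;

struct Vec4 { float x, y, z, w; };

// v_W = v_M * M_W  (row vector multiplied on the left of the matrix)
Vec4 modelToWorld(const Vec4& vM, const Mat4& mW) {
    const float v[4] = {vM.x, vM.y, vM.z, vM.w};
    float o[4] = {0.0f, 0.0f, 0.0f, 0.0f};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            o[col] += v[row] * mW[row][col];
    return {o[0], o[1], o[2], o[3]};
}
// Points use w = 1 (so they pick up the translation row); direction vectors use w = 0.
```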
## Light-Object interactions

Light can be:

* absorbed
* reflected
* transmitted via refraction (bent while passing through an object)
* diffracted (when passing through a small opening)

Sub-surface scattering: light enters an object, bounces around, then exits at a different location and angle. Gives skin, wax, and marble their warm appearance.

## Vertex Attributes (possible data included for each vertex)

* Position vector
* Vertex normal
* Vertex tangent and bitangent
    * When combined with the vertex normal, defines 'tangent space', used for per-pixel calculations like normal and environment mapping
* Diffuse color
* Specular color
* Texture coordinates (u,v)
    * (0,0) is bottom left and (1,1) is top right
* Skinning weights (k,w)
    * for skeletal animation: vertices are attached to joints via index k, with each joint's influence weighted by w
* (a struct sketch of these attributes appears below, after the Phong lighting section)

## Gouraud shading

* Shading computed on a per-vertex basis; the pixel colors in between are interpolated

## Texture Types

* Diffuse map/Albedo map
    * contains the diffuse color at each texel
* Normal map
    * stores unit normal vectors at each texel as RGB values
* Gloss map
    * how shiny the surface should be at each texel
* Environment map
    * picture of the surrounding environment, for reflection rendering

## Mip-mapping

* Making smaller versions of the texture, half the size each time, for use when rendering the textured surface further away
* ![Mip Mapping Example](mip_mapping.png)

## Materials

* Used to store all the visual properties of a mesh
    * i.e. textures, shader programs to use, input params to the shaders, etc.
* does NOT include vertex attributes - the mesh's vertex data is instead paired with a material for the complete picture

## Lighting Models

* Lighting makes a huge difference in realism:
    * Just lighting: ![Last of Us scene with just lighting](lou_just_lighting.png)
    * Without lighting: ![Last of Us scene without lighting](lou_no_lighting.png)
    * With lighting: ![Last of Us scene with lighting](lou_lighting.png)
* Direct lighting
    * light is emitted and bounces off a single object
    * called "local illumination models" - objects do not affect each other's appearance
* Indirect lighting
    * light bounces off multiple objects
    * called "global illumination models" - objects affect each other's appearance as light bounces between them
    * ray tracing and radiosity methods are examples

## Phong lighting

* most common local lighting (local illumination) method
* Uses the following terms:
    * ambient: overall lighting level
    * diffuse: main color of the object when light is reflected off it
    * specular: bright highlights for shiny objects
* ![Phong Diagram](phong_lighting.png)
* Uses the following values:
    * view direction vector **V**
    * ambient intensity A
    * surface normal **N**
    * ambient, diffuse, and specular reflectivities (kA, kD, kS) and the glossiness exponent α
    * light color/intensity Ci and direction vector **L**i from the reflection point to light source i
        * one i for each light source
* ![Phong Diagram](phong_diagram.png)
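As a sketch, the usual Phong sum over lights using the symbols above (the `Vec3` helpers are assumptions of the sketch; a real renderer would evaluate this per vertex or per fragment in a shader):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 mul(const Vec3& v, float s)        { return {v.x * s, v.y * s, v.z * s}; }
Vec3 sub(const Vec3& a, const Vec3& b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 add(const Vec3& a, const Vec3& b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

struct Light { Vec3 C; Vec3 L; };  // color/intensity C_i, unit direction L_i toward light i

// I = kA*A + sum_i C_i * (kD * (N.Li) + kS * max(Ri.V, 0)^alpha),
// where Ri = 2(N.Li)N - Li is Li mirrored about the normal. All vectors unit length.
Vec3 phong(float kA, const Vec3& A, float kD, float kS, float alpha,
           const Vec3& N, const Vec3& V, const std::vector<Light>& lights) {
    Vec3 I = mul(A, kA);                          // ambient term
    for (const Light& lt : lights) {
        float nDotL = dot(N, lt.L);
        if (nDotL <= 0.0f) continue;              // light is behind the surface
        Vec3 R = sub(mul(N, 2.0f * nDotL), lt.L); // reflection vector R_i
        float spec = std::pow(std::max(dot(R, V), 0.0f), alpha);
        I = add(I, mul(lt.C, kD * nDotL + kS * spec));
    }
    return I;
}
```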
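And the vertex layout promised in the Vertex Attributes section above - one plausible arrangement; the field types and the four-joint skinning limit are illustrative assumptions, not a fixed standard:

```cpp
#include <cstdint>

// One possible per-vertex layout collecting the attributes listed earlier.
struct Vertex {
    float   position[3];   // model-space position
    float   normal[3];     // vertex normal
    float   tangent[3];    // with normal + bitangent, spans tangent space
    float   bitangent[3];
    uint8_t diffuse[4];    // RGBA diffuse color
    uint8_t specular[4];   // RGBA specular color
    float   uv[2];         // texture coordinates, (0,0) = bottom left
    uint8_t joints[4];     // skinning: joint indices k...
    float   weights[4];    // ...and each joint's weighted influence w
};
```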
## Bi-directional Reflectance Distribution Function (BRDF)

* The terms used by Phong lighting are an example of one
* gives the ratio of radiance reflected along the view direction (V) to the irradiance arriving from the light direction (L)

## Light types

* Static
    * light is pre-calculated and stored per-pixel in a light map
* Ambient
    * global/independent of position and direction; simply applied to all areas
* Directional
    * like sun rays: infinitely distant, but has a direction
* Point/omnidirectional
    * light emits in all directions from a single point
* Spot light
    * emits rays in a cone shape, like a flashlight; the closer to the center of the cone, the higher the light intensity
* Area lights
    * lights that don't emit from a single point, but from an area
* Emissive objects
    * such as a glowing crystal ball, flames, etc.
    * use an emissive texture map, so the texture at the desired emit locations is always at full intensity, such as a neon sign or car headlights
    * ![Luigi Mansion Example](luigi_mansion.png)

## View Space/Camera Space

* Origin point is the 'camera'
* ![Camera Space Diagram](camera_space.png)
* The World to View space matrix is the inverse of the View to World space matrix
* ![View to World Mat](view_to_world_mat.png)
* Can combine both the model-to-world and world-to-view matrices into a single matrix:
* ![Model to View space](model_to_view.png)

## Perspective and Orthographic projection

![Orthographic Diagram](ortho_diagram.png)
![Orthographic OpenGL Matrix](ortho_matrix.png)
![Perspective Diagram](perspective_diagram.png)
![Perspective OpenGL Matrix](perspective_matrix.png)

(a construction sketch of the perspective matrix appears below, after the Alpha Blending section)

## Depth Buffering/Z Buffering

* each fragment stores its depth in the depth buffer when its color is stored in the frame buffer
* when another fragment from a different triangle tries to draw at the same location, the depths are compared and the closer (smaller) one is kept (see the sketch after the Alpha Blending section below)

## Rendering pipeline stages

* The first two stages - tools (creation of the assets) and the asset conditioning pipeline (ACP: conversion of formats, storage, etc.) - are done before and outside of the actual program
* ![Rendering pipeline diagram](rendering_pipeline.png)

## GPU pipeline

![GPU Pipeline Diagram](gpu_pipeline.png)

## Alpha Blending

* Combines the colors of two fragments based on the alpha of the incoming (source) fragment that is writing over the pixel's existing (destination) color
* **C'**D = AS**C**S + (1 - AS)**C**D
* S = source, D = destination
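A sketch tying together the depth test and the blend equation above, assuming a simple software framebuffer (on real GPUs these are configurable pipeline stages, not user code):

```cpp
struct RGBA { float r, g, b, a; };

// One pixel's stored state: current color and current closest depth.
struct PixelSlot {
    RGBA  color;
    float depth;  // initialized to the far-plane value
};

// C'_D = A_S * C_S + (1 - A_S) * C_D, per channel.
// Keeping the destination alpha is one common choice.
RGBA blend(const RGBA& src, const RGBA& dst) {
    float a = src.a;
    return {a * src.r + (1 - a) * dst.r,
            a * src.g + (1 - a) * dst.g,
            a * src.b + (1 - a) * dst.b,
            dst.a};
}

// Depth test, then alpha blend: the fragment only lands if it is closer.
void writeFragment(PixelSlot& px, const RGBA& srcColor, float srcDepth) {
    if (srcDepth < px.depth) {  // Z test: smaller depth wins
        px.color = blend(srcColor, px.color);
        px.depth = srcDepth;
    }
}
```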
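And, backing up to the projection section above, a sketch of a gluPerspective-style matrix. It follows OpenGL's column-vector convention to match the matrix images; the row-major `m[row][col]` storage is an assumption of the sketch:

```cpp
#include <cmath>

// gluPerspective-style perspective matrix (column-vector convention).
// fovY is in radians; zn and zf are the positive near/far plane distances.
void perspective(float m[4][4], float fovY, float aspect, float zn, float zf) {
    const float f = 1.0f / std::tan(fovY * 0.5f);  // cotangent of half the field of view
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            m[r][c] = 0.0f;
    m[0][0] = f / aspect;                    // scale x by aspect-corrected FOV
    m[1][1] = f;                             // scale y by FOV
    m[2][2] = (zf + zn) / (zn - zf);         // map eye-space z into clip space
    m[2][3] = (2.0f * zf * zn) / (zn - zf);
    m[3][2] = -1.0f;                         // sets w = -z_eye for the perspective divide
}
```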
## Heterogeneous System Architecture

* On the PlayStation 4, the AMD Jaguar System on a Chip (SoC) CPU and GPU share memory via a heterogeneous unified memory architecture (hUMA)
* Shaders are given input via a shader resource table (SRT) that can be accessed by both CPU and GPU

## Non-heterogeneous shader memory (registers)

* GPUs contain registers which are accessed by the shader
* 128-bit SIMD format, allowing for four 32-bit float values (a vec4) per register
* Input registers (incoming vertex data)
* Constant registers (attributes such as the model-view matrix)
* Temporary registers (internal calculations)
* Output registers (filled in by the shader, i.e. the output pixel color)
* Texture maps (read-only memory for the shader), accessed via coordinates as opposed to memory addresses

## Anti-aliasing types

* Full-Screen Antialiasing (FSAA)/Super-sampled antialiasing (SSAA)
    * Scene rendered first to a frame buffer larger than the actual screen, then downsampled to the actual screen size
    * rarely used due to its memory and computation requirements
* Multisampled Antialiasing (MSAA)
    * focuses on the edges of triangles, since that's where the jaggedness occurs
    * uses a number of subsample points per pixel (e.g. 2, 4, 8, or 16)
    * ![MSAA diagram](msaa_diagram.png)
* Coverage Sample Anti-aliasing (CSAA)
    * optimization of MSAA
    * a pixel coverage test is performed for 16 'coverage subsamples' per fragment
    * gives the quality of 16x MSAA at the cost of 4x MSAA
* Morphological Anti-aliasing (MLAA)
    * focuses on the scene areas that suffer from aliasing
    * scene rendered at normal size, then scanned for stair-stepped patterns, which are blurred
* Subpixel Morphological Anti-aliasing (SMAA)
    * combines morphological and multisampled AA; blurs less than MLAA
    * arguably the best solution currently
    * http://www.iryoku.com/smaa/

## Application stage of rendering pipeline

* Visibility determination: don't submit non-visible objects for drawing, in order to save performance
    * frustum culling
    * occlusion and potentially visible sets
    * portals/occlusion volumes (antiportals)
    * ![Portals Diagram](portals_diagram.png)
    * ![Anti-portals diagram](antiportals_diagram.png)
* Submit geometry to the GPU for drawing
    * creates a GPU command list, i.e. "set render state for material 1, submit prim A, submit prim B, ..."
* Control shader parameters and render state

## Scene Graphs

* A way to store the objects in a scene so that it's easier to tell what to submit to the GPU for drawing and what to leave out
* Usually some kind of tree structure
* Quadtrees/Octrees (see the sketch after the next section)
    * ![Quadtree diagram](quadtree_diagram.png)
* Bounding sphere trees
* BSP (binary space partitioning) trees
    * ![BSP diagram](bsp_diagram.png)

## When to use a scene graph

* mostly static environment, like a fighting game: don't need one
* mostly indoor environments: portal or BSP system
* mostly outdoors, flat terrain, viewed from above: quadtree
* outdoors viewed from a person on the ground, with many objects: antiportal system
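A toy sketch of the quadtree culling idea from the Scene Graphs section above (2D regions, integer object IDs; the visibility test is an assumed callback, e.g. a frustum check):

```cpp
#include <functional>
#include <memory>
#include <vector>

struct AABB { float minX, minY, maxX, maxY; };  // 2D region covered by a node

struct QuadNode {
    AABB bounds;
    std::vector<int> objects;            // IDs of objects stored at this node
    std::unique_ptr<QuadNode> child[4];  // four equal quadrants (may be null)
};

// Walk the tree; if a node's region fails the visibility test, its whole
// subtree is skipped without ever touching the objects inside it.
void collectVisible(const QuadNode& node,
                    const std::function<bool(const AABB&)>& isVisible,
                    std::vector<int>& out) {
    if (!isVisible(node.bounds)) return;  // cull the entire subtree at once
    out.insert(out.end(), node.objects.begin(), node.objects.end());
    for (const auto& c : node.child)
        if (c) collectVisible(*c, isVisible, out);
}
```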
## Advanced lighting

* Image-based lighting
    * uses 2D texture maps
    * Normal map: surface normal direction vector at each texel
        * ![Normal Map Example](normal_map_example.png)
    * Height maps (bump, parallax, displacement): height of the surface above or below the base triangle
        * ![Height map example](height_map_example.png)
        * ![Height map rendered example](height_map_render_example.png)
    * Specular/gloss map: stores the kS intensity value for the specular calculation at each texel, allowing different areas of the surface to be differently shiny
        * ![Specular Map Example](specular_map_example.png)
    * Environment map: 360-degree photo of the environment; cubic or spherical
* 3D textures: use (u,v,w) coordinates
* High Dynamic Range (HDR) lighting: compensates for the lack of realistic dynamic range on TV screens; uses tone mapping to shift intensity values, giving effects such as temporary blindness when entering light from a dark area, or bloom (light bleeding around the edges of a darkened surface)

## Global Illumination

* Shadow rendering
    * Penumbra: the blurred edges of a shadow
    * Shadow volumes: each shadow-casting object is viewed from the light source and its silhouette edges identified; uses the stencil buffer to mask shadowed pixels
        * ![Shadow Volumes Example](shadow_volumes.png)
    * Shadow maps: per-fragment depth test from the light's viewpoint; a shadow map texture is generated and saved in the depth buffer, the scene is then rendered, and the shadow map is used to determine whether each fragment is in shadow or not (see the sketch at the end of these notes)
        * ![Shadow Map Example](shadow_map.png)
* Ambient occlusion
    * contact shadows; describes how accessible each point on a surface is to light
    * ![Ambient Occlusion Example](ambient_occlusion.png)
* Reflection: a static texture of the background scene surrounding the surface (i.e. the mirror) is drawn on the surface, then the scene to be reflected is rendered from the camera's reflected point of view and drawn on top of the surface as another texture
    * ![Reflection Example](reflection_example.png)
* Caustic highlights: highlights from highly reflective surfaces like water or metal; when the surface moves, the reflections move with it, like when under water. Done via a (possibly animated) texture with semi-random bright highlights
    * ![Caustics Example](cuastics_example.png)
* Subsurface scattering
    * light enters, bounces, leaves; results in a warm glow (skin, marble, etc.)
    * BSSRDF (bidirectional surface scattering reflectance distribution function)
    * uses a shadow map, but instead of determining shadowed pixels, it measures how far a beam of light travels to pass through the object
    * ![Subsurface Scattering Example](subsurface_scattering_example.png)
* Precomputed Radiance Transfer (PRT)
    * precomputes how a light ray would interact with a surface from every possible direction
    * http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/prtSIG02.pdf

## Deferred Rendering

* The majority of the lighting calculation is done in screen space instead of view space
* Rendering is done without lighting as usual, storing the info needed for lighting in the "G-buffer"
* Much more efficient than view-space lighting
* ![G buffer components example](g_buffer_example.png)
* http://www.slideshare.net/guerrillagames/deferred-rendering-in-killzone-2-9691589

## Physically Based Rendering

* Attempts to approximate how light travels and interacts with surfaces in the real world
* https://www.marmoset.co/toolbag/learn/pbr-theory

## Particle Effects

* Composed of a very large number of pieces of simple geometry, which are:
    * often camera-facing
    * semitransparent or translucent
    * animated
    * spawned and killed continuously
* ![Particle Examples](particle_example.png)

## Decals

* simple geometry overlaid on objects, like bullet holes, footprints, and scratches
* defined by a rectangular area that is projected along a ray into the scene and applied to whatever the ray hits first
* ![Decal Example](decal_example.png)

## Skies

* Sky rendering is usually done after the rest of the scene, so only the non-occluded areas need to be drawn
* Z buffer value set to the maximum, since the sky is the furthest thing away
* Can use a sky dome or sky box
* Clouds can be done via camera-facing cards (billboards), particle effects, or volumetric effects

## Terrain

* Height field terrain: stored in a grayscale texture map
* ![Height Field Map example](height_field_map.png)

## Water

* Can use dynamic motion simulation, dynamic tessellation, or other level of detail (LOD) methods based on closeness
* Can interact with the physics system (e.g. flotation, slippery surfaces, swimming)
* Combination of different systems, such as shaders, scrolling textures, particle effects for mist, etc.

## Overlays

* Rendered after the scene is finished, with the Z-test disabled so they're always on top
* Typically rendered in view space or screen space, so they're not affected by the world-to-view transform, etc.

## Text/Fonts

* Texture map known as a 'glyph atlas'
* Another option is rendering via a library like FreeType, which uses Bézier curves

## Gamma correction

* Adjust color intensity via a gamma power to compensate for the screen's response:
* Vout = pow(Vin, γ)
* ![Gamma Diagram](gamma_diagram.png)

## Full screen post effects

* Applied after the 3D scene is rendered, for special effects:
    * Motion blur, via a convolution kernel
    * Depth-of-field blur: uses the depth buffer to adjust the blur for each pixel
    * Vignette: brightness/saturation decreased at the corners, or a texture is overlaid on the screen, for example for a binocular or scope view effect
    * Colorization, for example making all pixels grayscale except red

## Further Reading

* Overview of the entire 3D process for games/film:
    * The Art of 3-D Computer Animation and Imaging, 2nd ed., Isaac Kerlow
* Modern real-time rendering:
    * Real-Time Rendering, 3rd ed. (2008), Akenine-Möller, Haines, Hoffman
* Definitive reference guide for graphics:
    * Computer Graphics: Principles and Practice in C, 2nd ed., Foley, van Dam, Feiner, Hughes
* Math of rendering:
    * Mathematics for 3D Game Programming and Computer Graphics, 2nd ed., Eric Lengyel
* The Graphics Gems series
* The GPU Gems series
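Finally, the shadow-map test promised in the Global Illumination section: a minimal sketch assuming the fragment has already been projected into the light's view, with a plain float depth array standing in for the shadow map texture (the `bias` constant is an assumed tuning value):

```cpp
#include <vector>

// Depth map rendered from the light's viewpoint (pass 1), then sampled
// during the normal render (pass 2) to decide if a fragment is shadowed.
struct ShadowMap {
    int width, height;
    std::vector<float> depth;  // closest-to-light depth per texel

    // u,v in [0,1] locate the fragment in the light's view; fragDepth is its
    // distance from the light in the same units. The bias avoids the
    // self-shadowing "acne" caused by limited depth precision.
    bool inShadow(float u, float v, float fragDepth, float bias = 0.005f) const {
        int x = static_cast<int>(u * (width - 1));
        int y = static_cast<int>(v * (height - 1));
        // Shadowed if something else was closer to the light at this texel.
        return fragDepth - bias > depth[y * width + x];
    }
};
```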