Notes for "Essential Mathematics for Games and Interactive Applications"

By James M. Van Verth and Lars M. Bishop

Chapter 1 - Representing Real Numbers

Useful C++ for floats:

isinf() -- Is Infinite
isnan() -- Is NaN (Not a Number)
abs(a-b) <= epsilon*(abs(a)+abs(b)+1) -- Relative-error "floats are equal" test
FLT_MAX -- Maximum float value
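
A minimal C++ sketch showing these in use (the FloatsAreEqual helper and its default epsilon are my own, not from the book):

#include <cfloat>   // FLT_MAX, FLT_EPSILON
#include <cmath>    // std::isinf, std::isnan, std::fabs
#include <cstdio>

// Relative-error comparison: the tolerance scales with the magnitudes of the
// inputs (+1 keeps it sensible near zero).
bool FloatsAreEqual(float a, float b, float epsilon = FLT_EPSILON)
{
    return std::fabs(a - b) <= epsilon * (std::fabs(a) + std::fabs(b) + 1.0f);
}

int main()
{
    float zero = 0.0f;
    std::printf("isinf: %d, isnan: %d, equal: %d\n",
                std::isinf(FLT_MAX * 2.0f),          // 1: exceeds FLT_MAX
                std::isnan(zero / zero),             // 1: 0/0 is NaN
                FloatsAreEqual(0.1f + 0.2f, 0.3f));  // 1
    return 0;
}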

Chapter 2 - Vectors and Points

Scalar Multiplication Rules
(ab)\vec{v} = a(b\vec{v}) -- Associative Property
(a+b)\vec{v} = a\vec{v} + b\vec{v} -- Distributive Property
a(\vec{v}+\vec{w}) = a\vec{v} + a\vec{w} -- Distributive Property
1\vec{v} = \vec{v} -- Multiplicative Identity
Dot Product Rules
\vec{v}\bullet\vec{w} = \vec{w}\bullet\vec{v} -- Symmetry
(\vec{u}+\vec{v})\bullet\vec{w} = \vec{u}\bullet\vec{w} + \vec{v}\bullet\vec{w} -- Additivity
a(\vec{v}\bullet\vec{w}) = (a\vec{v})\bullet\vec{w} = \vec{v}\bullet(a\vec{w}) -- Homogeneity
\vec{v}\bullet\vec{v} \geq 0 -- Positivity
\vec{v}\bullet\vec{v} = 0 \iff \vec{v} = \vec{0} -- Definiteness
Cross Product Rules
\vec{v}\times\vec{w} = -(\vec{w}\times\vec{v})
\vec{u}\times(\vec{v}+\vec{w}) = (\vec{u}\times\vec{v})+(\vec{u}\times\vec{w})
(\vec{u}+\vec{v})\times\vec{w} = (\vec{u}\times\vec{w})+(\vec{v}\times\vec{w})
a(\vec{v}\times\vec{w}) = (a\vec{v})\times\vec{w} = \vec{v}\times(a\vec{w})
\vec{v}\times\vec{0} = \vec{0}\times\vec{v} = \vec{0}
\vec{v}\times\vec{v} = \vec{0}
Dot Product Properties
\vec{v}\bullet\vec{w} = 0 -- Perpendicular/Orthogonal
\hat{v}\bullet\hat{w} \cong 1 -- Pointing in similar directions (v,w normalized)
\hat{v}\bullet\hat{w} \cong -1 -- Pointing in opposite directions (v,w normalized)
1-(\hat{v}\bullet\hat{w})^2 = 1-\cos^2\theta = \sin^2\theta -- Squared sine of the angle between them (v,w normalized)
\vec{v}\bullet\vec{w} \gt 0 -- Angle < 90
\vec{v}\bullet\vec{w} \lt 0 -- Angle > 90
\vec{v}^\perp\bullet\vec{w} = \|\vec{v}\|\|\vec{w}\|\sin\theta -- Perpendicular dot product in \Re^2
Triple Product Properties
\vec{w}\times(\vec{v}\times\vec{w}) -- Vector triple product; \vec{w}, (\vec{v}\times\vec{w}), \vec{w}\times(\vec{v}\times\vec{w}) are mutually orthogonal
\vec{u}\bullet(\vec{v}\times\vec{w}) = \|\vec{u}\|\|\vec{v}\times\vec{w}\|\cos\theta -- Scalar triple product; signed volume of the parallelepiped (box) formed by u,v,w
\vec{u}\bullet(\vec{v}\times\vec{w}) \gt 0 -- Shortest rotation from v to w is counterclockwise (for right-handed vectors) about u
\vec{u}\bullet(\vec{v}\times\vec{w}) \lt 0 -- Shortest rotation from v to w is clockwise (for right-handed vectors) about u
\vec{u}\times(\vec{v}\times\vec{w}) = (\vec{u}\bullet\vec{w})\vec{v}-(\vec{u}\bullet\vec{v})\vec{w}
(\vec{u}\times\vec{v})\times\vec{w} = (\vec{u}\bullet\vec{w})\vec{v}-(\vec{v}\bullet\vec{w})\vec{u}
\vec{u}\bullet(\vec{v}\times\vec{w}) = \vec{w}\bullet(\vec{u}\times\vec{v}) = \vec{v}\bullet(\vec{w}\times\vec{u})
Real Vector Space V
\forall \vec{u},\vec{v} \in V \Rightarrow \vec{u}+\vec{v} \in V -- Additive Closure
\forall a \in \Re, \vec{v} \in V \Rightarrow a\vec{v} \in V -- Multiplicative Closure

Example: \Re^2 = \{(x,y) | x,y \in \Re\}

Linear Combination for Vectors:
Given vectors \{\vec{v}_0,...,\vec{v}_{n-1}\} and scalars \{a_0,...,a_{n-1}\},
Linear combination = \vec{v} = a_0\vec{v}_0 + ... + a_{n-1}\vec{v}_{n-1} = \sum_{i=0}^{n-1}a_i\vec{v}_i
Span: T = the set of all possible linear combinations of the vectors in a set S; S is said to span T
Vector Formulas:
\|\vec{v}\| = \sqrt{\sum{v_i^2}} = \sqrt{v_0^2 + v_1^2 + ... + v_{n-1}^2} -- Vector l2 norm = length
\hat{v} = \frac{\vec{v}}{\|\vec{v}\|} -- Normalized \vec{v}
\vec{v}\bullet\vec{w} = \|\vec{v}\|\|\vec{w}\|\cos\theta = \sum{v_iw_i} = v_0w_0+...+v_{n-1}w_{n-1} -- Dot Product/Euclidean Inner Product
proj_\vec{w}\vec{v} = \frac{\vec{v}\bullet\vec{w}}{\|\vec{w}\|^2}\vec{w} = \frac{\vec{v}\bullet\vec{w}}{\vec{w}\bullet\vec{w}}\vec{w} -- Projection of v onto w; the component of v which is parallel to w
perp_\vec{w}\vec{v} = \vec{v} - \frac{\vec{v}\bullet\vec{w}}{\|\vec{w}\|^2}\vec{w} = \vec{v} - \frac{\vec{v}\bullet\vec{w}}{\vec{w}\bullet\vec{w}}\vec{w} -- Perpendicular of v with respect to w; the component of v which is perpendicular to w
proj_{\hat{w}}\vec{v} = (\vec{v}\bullet\hat{w})\hat{w} -- Projection of v onto w if w is normalized
\vec{v}\times\vec{w} = (v_yw_z,v_zw_x,v_xw_y)-(w_yv_z,w_zv_x,w_xv_y) -- Cross/Vector product for \Re^3
\vec{v}^\perp = (-v_y,v_x) -- Perpendicular for \Re^2
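
A minimal C++ sketch of these vector formulas (the Vec3 type and function names are my own shorthand, not the book's library):

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Componentwise sums/differences and scalar multiplication.
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }

// Dot product: sum of componentwise products.
float Dot(Vec3 v, Vec3 w) { return v.x * w.x + v.y * w.y + v.z * w.z; }

// l2 norm (length) and normalization.
float Length(Vec3 v) { return std::sqrt(Dot(v, v)); }
Vec3 Normalize(Vec3 v) { return (1.0f / Length(v)) * v; }

// Cross product for R^3.
Vec3 Cross(Vec3 v, Vec3 w)
{
    return {v.y * w.z - w.y * v.z,
            v.z * w.x - w.z * v.x,
            v.x * w.y - w.x * v.y};
}

// Projection of v onto w, and the perpendicular remainder (v = proj + perp).
Vec3 Project(Vec3 v, Vec3 w) { return (Dot(v, w) / Dot(w, w)) * w; }
Vec3 Perp(Vec3 v, Vec3 w) { return v - Project(v, w); }

int main()
{
    Vec3 v{1.0f, 2.0f, 2.0f}, w{0.0f, 0.0f, 1.0f};
    Vec3 p = Project(v, w);   // the part of v along w: (0, 0, 2)
    std::printf("|v| = %f, v.w = %f, proj = (%f, %f, %f)\n",
                Length(v), Dot(v, w), p.x, p.y, p.z);
    return 0;
}
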
Affine Space:
Consists of a set of points W and vector space V, point O = origin
\forall P,Q \in W \Rightarrow \exists\vec{v} = P-Q
\forall P \in W, \vec{v} \in V \Rightarrow \exists Q = P + \vec{v}
\forall P \in W \Rightarrow \exists\vec{v} \in V : P = O + \vec{v}
Coordinate frame: Point as linear combination of basis vectors of V: P = O + a_0\vec{v}_0+...+a_{n-1}\vec{v}_{n-1}
Cartesian frame: uses origin O = (0,0,...0) and vector basis {(1,0,...,0),...,(0,0,...1)}
Affine combination: point P = a_0P_0 + a_1P_1 + ... + a_kP_k where \sum_{i=0}^k{a_i}=1,
alternatively written as P = P_0 + a_1\vec{u}_1 + ... + a_k\vec{u}_k, where \vec{u}_i = (P_i-P_0)
Affinely independent: points P_0...P_k whose difference vectors \vec{u}_1...\vec{u}_k are linearly independent. Ordered points = simplex, coefficients = barycentric coordinates
Convex set: points where line between any pair remains within the set
Convex hull: smallest convex set that includes all the points.
Convex combination: affine combination with coefficients restricted to between 0 and 1. The set of all convex combinations of a set of points is its convex hull.
Point formulas:
dist(Q,P)=\|\vec{v}\|=\sqrt{\sum(Q_i-P_i)^2}=\sqrt{(Q_0-P_0)^2+...+(Q_{n-1}-P_{n-1})^2} -- Distance between points
dist(P,O)=\|\vec{v}\|=\sqrt{\sum{P_i^2}}=\sqrt{P_0^2+...+P_{n-1}^2} -- Distance of point from origin O
x = r\cos\theta, y = r\sin\theta -- Polar to Cartesian 2d points
r = \sqrt{x^2+y^2}, \theta = arctan2(y,x) -- Cartesian 2d to polar points
x = \rho\sin\phi\cos\theta, y = \rho\sin\phi\sin\theta, z = \rho\cos\phi -- Spherical to Cartesian 3d points
\rho = \sqrt{x^2+y^2+z^2}, \phi = arctan2(\sqrt{x^2+y^2},z), \theta = arctan2(y,x) -- Cartesian 3d to spherical points
s = \frac{b}{c}, t = \frac{a}{c}, a = \frac{1}{2}\|\vec{u}\times\vec{w}\|, b = \frac{1}{2}\|\vec{v}\times\vec{w}\|, c = \frac{1}{2}\|\vec{u}\times\vec{v}\| -- Point P to barycentric coordinates (s,t) calculation, where \vec{u}=P_1-P_0, \vec{v}=P_2-P_0, \vec{w}=P-P_0
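
A small C++ sketch of the barycentric calculation above (the Vec3 helpers and function names are my own; like the unsigned-area formula itself, it assumes P lies on the triangle's plane with nonnegative coordinates and that the triangle is non-degenerate):

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 Cross(Vec3 v, Vec3 w)
{
    return {v.y * w.z - w.y * v.z, v.z * w.x - w.z * v.x, v.x * w.y - w.x * v.y};
}
float Length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Barycentric coordinates (s, t) of P with respect to triangle P0, P1, P2:
// P = P0 + s*(P1 - P0) + t*(P2 - P0); the third coordinate is 1 - s - t.
void Barycentric(Vec3 P, Vec3 P0, Vec3 P1, Vec3 P2, float& s, float& t)
{
    Vec3 u = Sub(P1, P0);
    Vec3 v = Sub(P2, P0);
    Vec3 w = Sub(P, P0);
    float a = 0.5f * Length(Cross(u, w));   // area of sub-triangle giving t
    float b = 0.5f * Length(Cross(v, w));   // area of sub-triangle giving s
    float c = 0.5f * Length(Cross(u, v));   // area of the whole triangle
    s = b / c;
    t = a / c;
}

int main()
{
    float s, t;
    // Centroid of the unit right triangle should give s = t = 1/3.
    Barycentric({1.0f/3.0f, 1.0f/3.0f, 0.0f}, {0,0,0}, {1,0,0}, {0,1,0}, s, t);
    std::printf("s = %f, t = %f\n", s, t);
    return 0;
}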
Parameterized Primitives:
L(t) = P_0 + t\vec{d} -- Parameterized line formula. t in [0,1] gives a line segment; t >= 0 gives a ray.
P(s,t) = P_0 + s(P_1-P_0) + t(P_2-P_0) -- Parameterized plane formula.
ax+by+cz+d = 0, \vec{n}=(a,b,c), P=(x,y,z) -- Generalized plane equation; equivalent to the normal-point form \vec{n}\bullet(P-P_0) = 0 with d = -\vec{n}\bullet P_0
Collinear: 3 or more points that all lie on one line.
Coplanar: 4 or more points that all lie on one plane.

Chapter 3 - Linear Transformations and Matrices

Linear transformation T: a mapping between vector spaces V, W where, for all \vec{x},\vec{y} \in V and all scalars a: T(\vec{x}+\vec{y}) = T(\vec{x}) + T(\vec{y}) and T(a\vec{x}) = aT(\vec{x}). Both can be proven at once by showing T(a\vec{x}+\vec{y}) = aT(\vec{x}) + T(\vec{y})
Can also be represented as a linear combination of the transformed basis vectors of V: T(\vec{x}) = x_0T(\vec{v}_0)+...+x_{n-1}T(\vec{v}_{n-1})
Matrix representation of linear transformation:

Given transform of standard basis \{\vec{e}_0,\vec{e}_1,...\vec{e}_{n-1}\} \in V to \{\vec{a}_0,\vec{a}_1,...\vec{a}_{n-1}\} \in W, store the transformed basis vectors as the columns of matrix \mathbf{A}=[\vec{a}_0 \vec{a}_1 ... \vec{a}_{n-1}]
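
For example, a counterclockwise rotation by \theta in \Re^2 sends the standard basis vectors to T(\vec{e}_0) = (\cos\theta,\sin\theta) and T(\vec{e}_1) = (-\sin\theta,\cos\theta), so storing those images as columns gives

\mathbf{A} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}

which is the familiar 2x2 rotation matrix.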

Matrix Definitions:
Matrix algebraic rules:
\mathbf{A}+\mathbf{B} = \mathbf{B}+\mathbf{A}
\mathbf{A}+(\mathbf{B}+\mathbf{C}) = (\mathbf{A}+\mathbf{B})+\mathbf{C}
\mathbf{A}+\mathbf{0} = \mathbf{A}
\mathbf{A}+(-\mathbf{A}) = \mathbf{0}
a(\mathbf{A}+\mathbf{B}) = a\mathbf{A}+a\mathbf{B}
a(b\mathbf{A}) = (ab)\mathbf{A}
(a+b)\mathbf{A} = a\mathbf{A}+b\mathbf{A}
1\mathbf{A} = \mathbf{A}
\mathbf{A}(\mathbf{B}\mathbf{C}) = (\mathbf{A}\mathbf{B})\mathbf{C}
a(\mathbf{B}\mathbf{C}) = (a\mathbf{B})\mathbf{C}
\mathbf{A}(\mathbf{B}+\mathbf{C}) = \mathbf{A}\mathbf{B}+\mathbf{A}\mathbf{C}
(\mathbf{A}+\mathbf{B})\mathbf{C} = \mathbf{A}\mathbf{C}+\mathbf{B}\mathbf{C}
(\mathbf{A}\mathbf{B})^T = \mathbf{B}^T\mathbf{A}^T
Matrix transpose rules:
(\mathbf{A}^T)^T = \mathbf{A}
(a\mathbf{A})^T = a\mathbf{A}^T
(\mathbf{A}+\mathbf{B})^T = \mathbf{A}^T+\mathbf{B}^T
Outer Product/Tensor Product
\mathbf{T}=\vec{v}\vec{w}^T=\vec{v}\otimes\vec{w} -- Tensor product of vectors v,w resulting in matrix T
(\vec{u}\bullet\vec{w})\vec{v}=(\vec{v}\otimes\vec{w})\vec{u} -- Tensor product representation of a dot product expression
(\vec{u}\bullet\hat{v})\hat{v}=(\hat{v}\otimes\hat{v})\vec{u} -- Tensor product representation of projection onto unit vector \hat{v}
Cross Product with Skew Symmetric Matrix
\begin{bmatrix}0 & -v_z & v_y \\ v_z & 0 & -v_x \\ -v_y & v_x & 0\end{bmatrix}\begin{bmatrix}w_x\\w_y\\w_z\end{bmatrix}=\begin{bmatrix}v_yw_z-w_yv_z\\v_zw_x-w_zv_x\\v_xw_y-w_xv_y\end{bmatrix} -- Matrix calculation of cross product \vec{v}\times\vec{w}
Systems of Linear Equations with Gaussian Elimination

Elementary row operations used:
Swap two rows
Multiply a row by a nonzero scalar
Add a multiple of one row to another row

Given multiple linear equations:
a_{0,0}x_0 + a_{0,1}x_1 + ... + a_{0,n-1}x_{n-1} = b_0
a_{1,0}x_0 + a_{1,1}x_1 + ... + a_{1,n-1}x_{n-1} = b_1
...
a_{m-1,0}x_0 + a_{m-1,1}x_1 + ... + a_{m-1,n-1}x_{n-1} = b_{m-1}

To solve for x_0,x_1,...x_{n-1}, place the a coefficients on the left side and the b constants in the rightmost column of an "augmented" matrix:

\begin{bmatrix} a_{0,0} & a_{0,1} & ... & a_{0,n-1} & | & b_0 \\ a_{1,0} & a_{1,1} & ... & a_{1,n-1} & | & b_1 \\ ... & ... & ... & ... & | & ... \\ a_{m-1,0} & a_{m-1,1} & ... & a_{m-1,n-1} & | & b_{m-1} \end{bmatrix}

Then perform Gaussian elimination; the steps are shown in this pseudocode:

for p = 1 to n do
    // find the element with the largest absolute value in column p, rows p..n
    find the row i in [p, n] that maximizes |A[i][p]|

    // if the max is zero, stop -- the system has no unique solution
    if A[i][p] == 0 then stop

    // if the max element is not in row p, swap rows
    if i != p then swap row i with row p (including b)

    // set the pivot element to 1
    multiply row p by 1/A[p][p]

    // clear lower column entries
    for r = p+1 to n do
        subtract row p times A[r][p] from current row, so element in pivot column becomes 0

// do backwards substitution
for row = n-1 downto 1 do
    for col = row+1 to n do
        // subtract out known quantities
        b[row] = b[row] - A[row][col]*b[col]
// the solution x is now stored in b
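
A C++ sketch of the same algorithm, assuming a row-major std::vector layout; the SolveLinearSystem name and structure are my own, not the book's code:

#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

// Solves A x = b for square A using Gaussian elimination with partial
// pivoting, as in the pseudocode above. b is overwritten with the solution.
// Returns false if no unique solution exists.
bool SolveLinearSystem(std::vector<std::vector<float>> A, std::vector<float>& b)
{
    const int n = static_cast<int>(b.size());
    for (int p = 0; p < n; ++p)
    {
        // Find the row at or below p with the largest value in column p.
        int pivot = p;
        for (int r = p + 1; r < n; ++r)
            if (std::fabs(A[r][p]) > std::fabs(A[pivot][p]))
                pivot = r;

        // If the max is zero, the matrix is singular: stop.
        if (A[pivot][p] == 0.0f)
            return false;

        // Swap the pivot row into place (rows of A and entries of b).
        std::swap(A[p], A[pivot]);
        std::swap(b[p], b[pivot]);

        // Scale row p so the pivot element becomes 1.
        float inv = 1.0f / A[p][p];
        for (int c = p; c < n; ++c) A[p][c] *= inv;
        b[p] *= inv;

        // Clear the entries below the pivot.
        for (int r = p + 1; r < n; ++r)
        {
            float factor = A[r][p];
            for (int c = p; c < n; ++c) A[r][c] -= factor * A[p][c];
            b[r] -= factor * b[p];
        }
    }

    // Back substitution: subtract out the already-known values.
    for (int row = n - 2; row >= 0; --row)
        for (int col = row + 1; col < n; ++col)
            b[row] -= A[row][col] * b[col];

    return true;
}

int main()
{
    // x + y = 3, 2x - y = 0  =>  x = 1, y = 2
    std::vector<std::vector<float>> A = {{1.0f, 1.0f}, {2.0f, -1.0f}};
    std::vector<float> b = {3.0f, 0.0f};
    if (SolveLinearSystem(A, b))
        std::printf("x = %f, y = %f\n", b[0], b[1]);
    return 0;
}
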
Matrix inverse properties
\mathbf{A}^T=\mathbf{A}^{-1} -- Inverse of orthogonal matrices
(\mathbf{AB})^{-1}=\mathbf{B}^{-1}\mathbf{A}^{-1} -- Given A,B are square, invertible matrices
Matrix determinant properties
det(\mathbf{AB})=det(\mathbf{A})det(\mathbf{B})
det(\mathbf{A}^{-1})=\frac{1}{det(\mathbf{A})}
\begin{vmatrix}a & b \\ c & d\end{vmatrix}=ad-bc
\begin{vmatrix}a & b & c \\ d & e & f \\ g & h & i\end{vmatrix}=a(ei-fh) - b(di-fg) + c(dh-eg)
Cramer's Inverse Method
\mathbf{A}^{-1} = \frac{1}{det(\mathbf{A})}\mathbf{A}^{adj} -- where \mathbf{A}^{adj} is the adjoint (transpose of the matrix of cofactors)
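For a 2x2 matrix this gives the familiar closed form:
\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad-bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}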
Eigenvalues/Eigenvectors
\mathbf{A}\vec{x} = \lambda\vec{x} -- \lambda = eigenvalue, \vec{x} = eigenvector
det(\lambda\mathbf{I}-\mathbf{A})=0 -- characteristic equation; a nonzero solution exists IFF this holds
To solve for eigenvalues/eigenvectors, expand the characteristic equation into an n-degree polynomial in \lambda, then solve for the polynomial roots
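A worked 2x2 example (my own, for illustration):
\mathbf{A} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}, \quad det(\lambda\mathbf{I}-\mathbf{A}) = (\lambda-2)^2 - 1 = (\lambda-1)(\lambda-3) = 0 \Rightarrow \lambda = 1, 3
\lambda = 3: (3\mathbf{I}-\mathbf{A})\vec{x} = \vec{0} \Rightarrow \vec{x} = (1,1); \quad \lambda = 1: (\mathbf{I}-\mathbf{A})\vec{x} = \vec{0} \Rightarrow \vec{x} = (1,-1)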

Chapter 4 - Affine Transformations

Matrix definition
\mathbf{A}\vec{x}+\vec{y} = \begin{bmatrix} \mathbf{A} & \vec{y} \\ \vec{0}^T & 1 \end{bmatrix} \begin{bmatrix} \vec{x} \\ 1 \end{bmatrix} = \begin{bmatrix} \mathbf{A}\vec{x} + \vec{y} \\ 1 \end{bmatrix} Affine transformation of point x
\vec{v} = P_0 - P_1= \begin{bmatrix}\vec{x}_0 \\ 1\end{bmatrix}-\begin{bmatrix}\vec{x}_1 \\ 1\end{bmatrix} = \begin{bmatrix}\vec{x}_0-\vec{x}_1 \\ 0\end{bmatrix}, \begin{bmatrix} \mathbf{A} & \vec{y} \\ \vec{0}^T & 1 \end{bmatrix} \begin{bmatrix} \vec{v} \\ 0 \end{bmatrix} = \begin{bmatrix} \mathbf{A}\vec{v} \\ 0 \end{bmatrix} Affine transformation of vector v, where v is difference between 2 points
\begin{bmatrix} \mathbf{A} & \vec{y} \\ \vec{0}^T & 1 \end{bmatrix}^{-1} = \begin{bmatrix} \mathbf{A}^{-1} & -\mathbf{A}^{-1}\vec{y} \\ \vec{0}^T & 1 \end{bmatrix} Affine transformation inverse
\begin{bmatrix} 1 & 0 & 0 & tx \\ 0 & 1 & 0 & ty \\ 0 & 0 & 1 & tz \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \mathbf{I} & \vec{t} \\ \vec{0}^T & 1 \end{bmatrix} Translation affine matrix (only affects points).
\begin{bmatrix} 1 & 0 & 0 & -tx \\ 0 & 1 & 0 & -ty \\ 0 & 0 & 1 & -tz \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \mathbf{I} & -\vec{t} \\ \vec{0}^T & 1 \end{bmatrix} Translation inverse
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix} x-axis rotation
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos{-\theta} & -\sin{-\theta} \\ 0 & \sin{-\theta} & \cos{-\theta} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{bmatrix} x-axis rotation inverse (= transpose)
\begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix} y-axis rotation
\begin{bmatrix} \cos{-\theta} & 0 & \sin{-\theta} \\ 0 & 1 & 0 \\ -\sin{-\theta} & 0 & \cos{-\theta} \end{bmatrix} = \begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix} y-axis rotation inverse (= transpose)
\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} z-axis rotation
\begin{bmatrix} \cos{-\theta} & -\sin{-\theta} & 0 \\ \sin{-\theta} & \cos{-\theta} & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} z-axis rotation inverse (= transpose)
R\vec{v} = R\vec{v}_\| + R\vec{v}_\perp = \cos\theta\vec{v} + [1-\cos\theta](\vec{v}\bullet\hat{r})\hat{r} + \sin\theta(\hat{r}\times\vec{v}) = [\cos\theta\mathbf{I} + (1-\cos\theta)(\hat{r}\otimes\hat{r}) + \sin\theta\tilde{r}] \vec{v} = \begin{bmatrix} tx^2+c & txy-sz & txz+sy \\ txy+sz & ty^2+c & tyz-sx \\ txz-sy & tyz+sx & tz^2+c \end{bmatrix}\vec{v}, c=\cos\theta, s=\sin\theta, t=1-\cos\theta general axis rotation for \vec{v} about normalized axis \hat{r}=(x,y,z) (see the C++ sketch at the end of this chapter)
\begin{bmatrix} a & 0 & 0 & 0 \\ 0 & b & 0 & 0 \\ 0 & 0 & c & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} Scaling matrix to scale x axis by a, y axis by b, z axis by c
\begin{bmatrix} \mathbf{I}-2(\hat{n}\otimes\hat{n}) & \vec{0} \\ \vec{0}^T & 1 \end{bmatrix} Reflection matrix about normal \hat{n}
\begin{bmatrix} \mathbf{I}+\vec{s}\otimes\hat{n} & \vec{0} \\ \vec{0}^T & 1 \end{bmatrix} Shear matrix about plane with normal \hat{n} and shear vector \vec{s}
\vec{n}' = (\mathbf{M}^{-1})^T\vec{n} transformation matrix for normal \vec{n} given point transform \mathbf{M}
\mathbf{M} = \mathbf{TRS} Usual 3d object transform order (scale, rotation, translation), ordered right-to-left
T(P) = \begin{bmatrix} s\mathbf{R}\vec{x} + \vec{t} \\ 1 \end{bmatrix} Alternative transformation on a point P instead of 4x4 matrix, using uniform scale s, 3x3 rot matrix R, and translation t
s' = s_1s_0, \mathbf{R}' = \mathbf{R}_1\mathbf{R}_0, \vec{t}' = \vec{t}_1 + s_1\mathbf{R}_1\vec{t}_0 Transform concatenation using previous representation (again instead of 4x4 matrix)
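
A hedged C++ sketch of the general axis (Rodrigues) rotation formula above; the Mat3/MakeAxisRotation/Transform names are my own, and the axis is assumed to be normalized:

#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };   // row-major 3x3 matrix

// Build the rotation matrix for angle theta (radians) about the normalized
// axis r = (x, y, z), using c = cos(theta), s = sin(theta), t = 1 - c.
Mat3 MakeAxisRotation(Vec3 r, float theta)
{
    float c = std::cos(theta), s = std::sin(theta), t = 1.0f - c;
    return {{
        { t*r.x*r.x + c,      t*r.x*r.y - s*r.z,  t*r.x*r.z + s*r.y },
        { t*r.x*r.y + s*r.z,  t*r.y*r.y + c,      t*r.y*r.z - s*r.x },
        { t*r.x*r.z - s*r.y,  t*r.y*r.z + s*r.x,  t*r.z*r.z + c     },
    }};
}

// Matrix-vector multiply: apply the rotation to v.
Vec3 Transform(const Mat3& M, Vec3 v)
{
    return { M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z,
             M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z,
             M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z };
}

int main()
{
    // Rotating (1, 0, 0) by 90 degrees about the z axis should give ~(0, 1, 0).
    Mat3 R = MakeAxisRotation({0.0f, 0.0f, 1.0f}, 3.14159265f * 0.5f);
    Vec3 p = Transform(R, {1.0f, 0.0f, 0.0f});
    return (std::fabs(p.x) < 1e-5f && std::fabs(p.y - 1.0f) < 1e-5f) ? 0 : 1;
}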

Orientation is to rotation as position is to translation

Chapter 5 - Orientation Representation

4 ways to represent rotations: rotation matrices, fixed/Euler angles, axis-angle, and quaternions

Quaternion formulas:
\vec{q} = (w,\vec{v}) -- Quaternion representation, w = scalar part, v = vector part
\|\vec{q}\| = \sqrt{w^2+x^2+y^2+z^2} -- Quaternion length/magnitude
\hat{q} = \frac{\vec{q}}{\|\vec{q}\|} -- Normalized quaternion
w=\cos(\theta/2), \vec{v}=\sin(\theta/2)\hat{r} -- Quaternion from angle \theta and axis \hat{r}
\mathbf{M}_q = \begin{bmatrix} 1-2y^2-2z^2 & 2xy - 2wz & 2xz + 2wy \\ 2xy + 2wz & 1 - 2x^2 - 2z^2 & 2yz - 2wx \\ 2xz - 2wy & 2yz + 2wx & 1 - 2x^2 - 2y^2 \end{bmatrix} -- Quaternion to 3x3 matrix
\vec{q}_2\vec{q}_1 = (w_1w_2 - \vec{v}_1\bullet\vec{v}_2, w_1\vec{v}_2 + w_2\vec{v}_1 + \vec{v}_2\times\vec{v}_1) -- Quaternion product/concatenation
\vec{q}^{-1} = (w,-\vec{v}) -- Inverse quaternion, assuming q is normalized
R_q(\vec{p}) = \vec{q}\vec{p}\vec{q}^{-1} -- Rotation of vector \vec{p} by quaternion q
R_q(\vec{p}) = (2w^2 - 1)\vec{p} + 2(\vec{v}\bullet\vec{p})\vec{v} + 2w(\vec{v}\times\vec{p}) -- Expanded form of the rotation (q normalized)
\vec{p}' = \vec{r}(s\vec{p})\vec{r}^{-1} + \vec{t} -- Alternate transformation representation (instead of a matrix), using quaternion r instead of a 3x3 rotation matrix
s' = s_1s_0, \vec{r}' = \vec{r}_1\vec{r}_0, \vec{t}' = \vec{t}_1 + \vec{r}_1(s_1\vec{t}_0)\vec{r}_1^{-1} -- Alternate transformation concatenation with quaternion r
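
A hedged C++ sketch of the quaternion operations above (the Quat/FromAxisAngle/Mul/Rotate names are my own; axes are assumed normalized):

#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float w; Vec3 v; };   // scalar part w, vector part v

Vec3 Add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 Scale(float s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 Cross(Vec3 a, Vec3 b)
{
    return {a.y * b.z - b.y * a.z, a.z * b.x - b.z * a.x, a.x * b.y - b.x * a.y};
}

// Quaternion from a rotation angle (radians) about a normalized axis r.
Quat FromAxisAngle(Vec3 r, float theta)
{
    return {std::cos(theta * 0.5f), Scale(std::sin(theta * 0.5f), r)};
}

// Product q2*q1: apply q1 first, then q2 (the concatenation formula above).
Quat Mul(Quat q2, Quat q1)
{
    return {q2.w * q1.w - Dot(q2.v, q1.v),
            Add(Add(Scale(q1.w, q2.v), Scale(q2.w, q1.v)), Cross(q2.v, q1.v))};
}

// Rotate p by the normalized quaternion q using the expanded form:
// (2w^2 - 1)p + 2(v.p)v + 2w(v x p).
Vec3 Rotate(Quat q, Vec3 p)
{
    Vec3 result = Scale(2.0f * q.w * q.w - 1.0f, p);
    result = Add(result, Scale(2.0f * Dot(q.v, p), q.v));
    result = Add(result, Scale(2.0f * q.w, Cross(q.v, p)));
    return result;
}

int main()
{
    // Two 45-degree rotations about z concatenate into a 90-degree rotation:
    // (1, 0, 0) should end up at ~(0, 1, 0).
    Quat q45 = FromAxisAngle({0.0f, 0.0f, 1.0f}, 3.14159265f * 0.25f);
    Vec3 p = Rotate(Mul(q45, q45), {1.0f, 0.0f, 0.0f});
    return (std::fabs(p.x) < 1e-5f && std::fabs(p.y - 1.0f) < 1e-5f) ? 0 : 1;
}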