Written by Luka Kerr on November 3, 2018

Algebra

Subspaces

A subset $S$ of a vector space $V$ over a field $\mathbb{F}$ is a subspace of $V$ if:

  1. $\bold{0} \in S$
  2. $S$ is closed under addition: $\bold{u} + \bold{v} \in S$ for all $\bold{u}, \bold{v} \in S$
  3. $S$ is closed under scalar multiplication: $\lambda \bold{v} \in S$ for all $\lambda \in \mathbb{F}$ and all $\bold{v} \in S$

Linear Combinations and Spans

Given a set of vectors $S = \{ v_1, v_2 \dots{} v_n \}$:

  1. A linear combination of $S$ is any vector of the form $\lambda_1 v_1 + \lambda_2 v_2 + \dots{} + \lambda_n v_n$ where the $\lambda_i$ are scalars
  2. The span of $S$, written $span(S)$, is the set of all linear combinations of $S$

Linear Independence

Linearly Independent

A set of vectors $S$ is said to be linearly independent if the only scalars $\lambda_1, \lambda_2 \dots{} \lambda_n$ satisfying $\lambda_1 v_1 + \lambda_2 v_2 + \dots{} + \lambda_n v_n = 0$ are $\lambda_1 = \lambda_2 = \dots{} = \lambda_n = 0$.

Linearly Dependent

A set of vectors $S$ is said to be linearly dependent if it is not a linearly independent set. In other words, there exist scalars $\lambda_1, \dots{} \lambda_n$, not all $0$, such that $\lambda_1 v_1 + \dots{} + \lambda_n v_n = 0$.

Basis and Dimension

A set of vectors $S$ is a basis for a vector space $V$ if:

  1. $S$ is linearly independent
  2. $S$ spans $V$, that is $span(S) = V$

The dimension of $V$, written $dim(V)$, is the number of vectors in any basis for $V$.

Linear Maps

A function $T : V \to W$ is called a linear transformation if:

  1. $T(\bold{u} + \bold{v}) = T(\bold{u}) + T(\bold{v})$ for all $\bold{u}, \bold{v} \in V$
  2. $T(\lambda \bold{v}) = \lambda T(\bold{v})$ for all scalars $\lambda$ and all $\bold{v} \in V$

Linear Map Subspaces

Kernel

Definition: $ker(A) = \{ \bold{v} \in \mathbb{R}^n : A \bold{v} = 0 \}$

Calculate: row reduce $A$, solve $A \bold{v} = 0$ in terms of parameters $\lambda$ and form a span

Basis: form a set using the smallest number of linearly independent vectors from $ker(A)$
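
For example, for $A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$, row reducing gives $\begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix}$, so $v_1 = -2 v_2$. Setting $v_2 = \lambda$ gives

\[ker(A) = \left\{ \lambda \begin{pmatrix} -2 \\ 1 \end{pmatrix} : \lambda \in \mathbb{R} \right\} = span \left\{ \begin{pmatrix} -2 \\ 1 \end{pmatrix} \right\}\]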

Nullity

Definition: $nullity(A) = dim(ker(A))$

Calculate: find $ker(A)$ and take its dimension OR take the number of columns of $A$ minus $rank(A)$ (rank-nullity theorem)

Image

Definition: $im(A) = \{ \bold{b} \in \mathbb{R}^m : A \bold{x} = \bold{b} \text{ for some } \bold{x} \in \mathbb{R}^n \}$ for an $m \times n$ matrix $A$

Calculate: row reduce $(A | b)$

Basis: row reduce, find the linearly independent columns and take those columns from the original matrix as a span

Rank

Definition: $rank(A) = dim(im(A))$

Calculate: find $im(A)$ and take its dimension OR row reduce $A$ and take the number of leading columns
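
For example, for $A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$ as above, the row reduced form $\begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix}$ has one leading column, so $rank(A) = 1$ and $im(A) = span \left\{ \begin{pmatrix} 1 \\ 2 \end{pmatrix} \right\}$. Note that $rank(A) + nullity(A) = 1 + 1 = 2$, the number of columns of $A$.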

Eigenvalues and Eigenvectors

Eigenvalue

Definition: $\lambda$’s such that $A \bold{v} = \lambda \bold{v}$ for some non-zero vector $\bold{v}$

Calculate: solve $det(A - \lambda I) = 0$ where $I$ is the identity matrix of the same size as $A$

Eigenvector

Definition: non-zero $\bold{v}$ such that $A \bold{v} = \lambda \bold{v}$ for each eigenvalue $\lambda$

Calculate: solve $ker(A - \lambda I)$ for each eigenvalue $\lambda$
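
For example, for $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$:

\[det(A - \lambda I) = (2 - \lambda)^2 - 1 = (\lambda - 1)(\lambda - 3) = 0\]

gives eigenvalues $\lambda = 1$ and $\lambda = 3$, with corresponding eigenvectors $\bold{v_1} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$ and $\bold{v_2} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ found from $ker(A - I)$ and $ker(A - 3I)$.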

Diagonalisation

A square matrix $A$ is said to be diagonalisable if there exists an invertible matrix $M$ and a diagonal matrix $D$ such that $M^{-1} A M = D$.

Given $n$ linearly independent eigenvectors $\bold{v_1}, \dots, \bold{v_n}$ and corresponding eigenvalues $\lambda_1, \dots, \lambda_n$, we let

\[M = (\bold{v_1} | \dots | \bold{v_n}) , \quad D = \begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix}\]

such that $M^{-1} A M = D$ holds.
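
For example, the matrix $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ from the eigenvalue example above is diagonalised by

\[M = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix} , \quad D = \begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}\]

since $M^{-1} A M = D$.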

Systems of Differential Equations

The system

\[\begin{cases} \dfrac{dy_1}{dt} = a_1 y_1 + b_1 y_2 \\ \dfrac{dy_2}{dt} = a_2 y_1 + b_2 y_2 \end{cases}\]

can be written as

\[\dfrac{d}{dt} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}\]

with general solution

\[\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = c_1 \bold{v_1} e^{\lambda_1 t} + c_2 \bold{v_2} e^{\lambda_2 t}\]

given that $\lambda_k, \bold{v_k}$ are eigenvalue-eigenvector pairs.
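
For example, the system $\frac{dy_1}{dt} = 2 y_1 + y_2$, $\frac{dy_2}{dt} = y_1 + 2 y_2$ has coefficient matrix $\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ with eigenpairs $\left(1, \begin{pmatrix} 1 \\ -1 \end{pmatrix}\right)$ and $\left(3, \begin{pmatrix} 1 \\ 1 \end{pmatrix}\right)$, so

\[\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{t} + c_2 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{3t}\]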

Probability

Rules and Conditions

  1. $P(A \cap B) = P(A) + P(B) - P(A \cup B)$
  2. $P(A^c) = 1 - P(A)$
  3. $P(A | B) = \dfrac{P(A \cap B)}{P(B)}$
  4. Mutual exclusion if $A \cap B = \emptyset$
  5. Statistically independent if $P(A \cap B) = P(A) \times P(B)$
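
For example, if $P(A) = 0.5$, $P(B) = 0.4$ and $P(A \cap B) = 0.2$, then $P(A \cup B) = 0.5 + 0.4 - 0.2 = 0.7$, $P(A | B) = \frac{0.2}{0.4} = 0.5$, and $A$ and $B$ are statistically independent since $0.2 = 0.5 \times 0.4$.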

Random Variables

Probability Distribution

To show a sequence $p_k$ is a probability distribution, the following properties must be proven:

  1. $p_k \ge 0$ for all $k$
  2. $\sum_{\text{all } k} p_k = 1$

Expected Value

$E(X) = \sum_{\text{all } k} k p_k$

$E(X^2) = \sum_{\text{all } k} k^2 p_k$

Variance

$Var(X) = E(X^2) - E(X)^2$

Standard Deviation

$SD(X) = \sqrt{Var(X)}$
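
For example, for a fair six-sided die with $p_k = \frac{1}{6}$ for $k = 1, \dots, 6$: $E(X) = \frac{1 + 2 + \dots + 6}{6} = 3.5$, $E(X^2) = \frac{1 + 4 + \dots + 36}{6} = \frac{91}{6}$, $Var(X) = \frac{91}{6} - 3.5^2 = \frac{35}{12}$ and $SD(X) = \sqrt{\frac{35}{12}} \approx 1.71$.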

Special Distributions

Binomial Distribution

$B(n, p, k) = \binom{n}{k} p^k (1 - p)^{n - k}$ where $k = 0, 1, \dots{} n$
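
For example, the probability of exactly $2$ heads in $3$ tosses of a fair coin is $B(3, \frac{1}{2}, 2) = \binom{3}{2} (\frac{1}{2})^2 (\frac{1}{2})^1 = \frac{3}{8}$.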

Geometric Distribution

$G(p, k) = (1 - p)^{k - 1} p$ where $k = 1, 2, \dots{}$

Continuous Random Variables

A random variable $X$ is continuous if its cumulative distribution function $F_X(x)$ is continuous.

Probability Density Function

The probability density function of a continuous random variable $X$ is defined by

\[f(x) = f_X(x) = \dfrac{d}{dx} F(x) \ , \ x \in \mathbb{R}\]

if $F(x)$ is differentiable, and by $f(a) = \lim_{x \to a^-} \dfrac{d}{dx} F(x)$ at any point $x = a$ where $F(x)$ is not differentiable.

Expected Value

$E(X) = \displaystyle \int_{- \infty}^{\infty} x \ f(x) \ dx$

Variance

$Var(X) = E(X^2) - (E(X))^2$

Special Continuous Distributions

Normal Distribution

A continuous random variable $X$ has a normal distribution $N(\mu, \sigma^2)$ if it has a probability density $\phi (x) = \dfrac{1}{\sqrt{2 \pi \sigma^2}} e^{- \frac{1}{2} (\frac{x - \mu}{\sigma})^2}$ where $- \infty < x < \infty$.

Exponential Distribution

A continuous random variable $T$ has an exponential distribution $Exp(\lambda)$ if it has a probability density

\[f(t) = \begin{cases} \lambda e ^{- \lambda t} & \text{if } t \ge 0 \\ 0 & \text{if } t < 0 \end{cases}\]
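
For example, if $T \sim Exp(2)$ then $P(T > 1) = \int_1^{\infty} 2 e^{-2t} \ dt = e^{-2}$ and, in general, $E(T) = \frac{1}{\lambda}$.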

Calculus

Partial Differentiation

To find the partial derivative of a function with two variables $x$ and $y$, we can treat one of the variables as a constant and differentiate with respect to the other.
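
For example, if $F(x, y) = x^2 y + y^3$ then $\dfrac{\partial F}{\partial x} = 2xy$ (treating $y$ as a constant) and $\dfrac{\partial F}{\partial y} = x^2 + 3y^2$ (treating $x$ as a constant).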

Tangent Plane To Surfaces

Suppose $F$ is a function of two variables, and $P$ is a point $(x_0, y_0, z_0)$ that lies on the surface $z = F(x, y)$.

Tangent Plane Of Surface

$z = z_0 + F_x (x_0, y_0) (x - x_0) + F_y (x_0, y_0)(y - y_0)$
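
For example, for the surface $z = F(x, y) = x^2 + y^2$ at the point $(1, 1, 2)$: $F_x(1, 1) = 2$ and $F_y(1, 1) = 2$, so the tangent plane is $z = 2 + 2(x - 1) + 2(y - 1)$.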

Normal Vector To Surface

\[\begin{pmatrix} F_x (x_0, y_0) \\ F_y (x_0, y_0) \\ -1 \end{pmatrix}\]

Total Differential Approximation

$\Delta F \approx F_x (x_0, y_0) (x - x_0) + F_y (x_0, y_0) (y - y_0)$

Chain Rule

For a function $F$ with two variables $x$ and $y$, the chain rule can be defined as

\[\dfrac{dF}{dt} = \dfrac{\partial F}{\partial x} \dfrac{dx}{dt} + \dfrac{\partial F}{\partial y} \dfrac{dy}{dt}\]

Don’t forget to substitute in $x$ and $y$ after finding the derivatives.
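
For example, if $F(x, y) = x^2 y$ with $x = \cos t$ and $y = \sin t$, then

\[\dfrac{dF}{dt} = 2xy \dfrac{dx}{dt} + x^2 \dfrac{dy}{dt} = -2 \cos t \sin^2 t + \cos^3 t\]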

Functions Of Two Or More Variables

Partial Derivatives

For a function $F$ of three variables $x$, $y$ and $z$, the partial derivatives of $F$ can be defined as

\[F_x = \dfrac{\partial F}{\partial x}, \quad F_y = \dfrac{\partial F}{\partial y}, \quad F_z = \dfrac{\partial F}{\partial z}\]

Chain Rule

For a function $F$ of three variables $x$, $y$ and $z$, where $x$, $y$ and $z$ are each functions of both $u$ and $v$, the chain rule for $F$ can be defined as

\[\dfrac{\partial F}{\partial u} = \dfrac{\partial F}{\partial x} \dfrac{\partial x}{\partial u} + \dfrac{\partial F}{\partial y} \dfrac{\partial y}{\partial u} + \dfrac{\partial F}{\partial z} \dfrac{\partial z}{\partial u}\] \[\dfrac{\partial F}{\partial v} = \dfrac{\partial F}{\partial x} \dfrac{\partial x}{\partial v} + \dfrac{\partial F}{\partial y} \dfrac{\partial y}{\partial v} + \dfrac{\partial F}{\partial z} \dfrac{\partial z}{\partial v}\]

Integration Techniques

Trigonometric Integrals

Consider integrals of the form \(\int \cos^m x \ \sin^n x \ dx\)

Cases:

  1. $m$ (the power of $\cos$) is odd: substitute $u = \sin x$, $du = \cos x \ dx$; if $n$ (the power of $\sin$) is odd, substitute $u = \cos x$, $du = - \sin x \ dx$
  2. $m$ and $n$ are both even: use $\cos^2x = \dfrac{1 + \cos 2x}{2}$, $\sin^2x = \dfrac{1 - \cos 2x}{2}$
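
For example, with $u = \sin x$:

\[\int \cos^3 x \ \sin^2 x \ dx = \int (1 - \sin^2 x) \sin^2 x \ \cos x \ dx = \int (u^2 - u^4) \ du = \dfrac{\sin^3 x}{3} - \dfrac{\sin^5 x}{5} + C\]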

Ordinary Differential Equations

Separable ODEs

Separable ODEs are differential equations where two variables are involved (usually $x$ and $y$) that can be separated so that all the $y$’s are on one side, and all the $x$’s are on the other. They tend to be of the form

\[f(y) \frac{dy}{dx} = g(x)\]

To solve:

  1. Move all the $y$’s to one side, and the $x$’s to the other. So $f(y) \dfrac{dy}{dx} = g(x)$ becomes $f(y) \ dy = g(x) \ dx$
  2. Integrate both sides with respect to the respective variable $\int f(y) dy = \int g(x) dx$
  3. Solve for $y$
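
For example, $\dfrac{dy}{dx} = xy$ becomes $\dfrac{1}{y} dy = x \ dx$, so $\ln |y| = \dfrac{x^2}{2} + C$ and $y = A e^{x^2 / 2}$.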

First Order Linear ODEs

First Order Linear ODEs are differential equations that involve functions of a single variable. They can be written in the form \(\dfrac{dy}{dx} + f(x) y = g(x)\)

To solve:

  1. Write the differential equation as above
  2. Find the integrating factor $h(x) = e^{\int f(x) dx}$
  3. Multiply both sides by the integrating factor $h(x)$ to obtain $\dfrac{d}{dx} (h(x) y) = g(x) h(x)$
  4. Integrate both sides with respect to $x$, and solve for $y$
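
For example, $\dfrac{dy}{dx} + y = e^x$ has integrating factor $h(x) = e^{\int 1 \ dx} = e^x$, so $\dfrac{d}{dx} (e^x y) = e^{2x}$, giving $e^x y = \dfrac{1}{2} e^{2x} + C$ and $y = \dfrac{1}{2} e^x + C e^{-x}$.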

Exact ODEs

Exact ODEs are differential equations involving functions with two or more variables. They are typically of the form \(F(x, y) + G(x, y) \frac{dy}{dx} = 0\) and are said to be exact if \(\dfrac{\partial F}{\partial y} = \dfrac{\partial G}{\partial x}\)

To solve:

  1. Show that a differential equation is exact by proving the above property
  2. Look for a function $H(x, y)$ such that \(\def\arraystretch{2} \begin{array}{l} \dfrac{\partial H}{\partial x} = F(x, y) \qquad (1) \\ \dfrac{\partial H}{\partial y} = G(x, y) \qquad (2) \end{array}\)
  3. Integrate (1) with respect to $x$ to find $H(x, y) = f(x, y) + C(y)$
  4. To find $C(y)$, partially differentiate the $H(x, y)$ found in step 3 with respect to $y$ (treating all of the $x$ components as constant) and compare the result with $G(x, y)$ using (2). This gives $C'(y)$ and thus allows us to find $C(y)$
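
For example, $2xy + x^2 \dfrac{dy}{dx} = 0$ is exact since $\dfrac{\partial}{\partial y} (2xy) = 2x = \dfrac{\partial}{\partial x} (x^2)$. Integrating (1) gives $H(x, y) = x^2 y + C(y)$, and comparing $\dfrac{\partial H}{\partial y} = x^2 + C'(y)$ with $G(x, y) = x^2$ gives $C'(y) = 0$, so the solution is $x^2 y = C$.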

Second Order Linear ODEs

A second order linear ODE with constant coefficients is said to be homogeneous if it is of the form \(y'' + ay' + by = 0\) where $a$ and $b$ are real numbers.

Characteristic Equation

The characteristic equation of a second order linear ODE is given by

\[\lambda^2 + a\lambda + b = 0\]
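
For example, $y'' - 3y' + 2y = 0$ has characteristic equation $\lambda^2 - 3\lambda + 2 = (\lambda - 1)(\lambda - 2) = 0$, so $\lambda = 1, 2$ and the general solution is $y = c_1 e^{x} + c_2 e^{2x}$.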

Taylor Polynomial

For a differentiable function $f$, the Taylor polynomial $p_n$ of order $n$ at $x = a$ is

\[p_n (x) = f(a) + f'(a) (x - a) + \frac{f''(a)}{2!} (x - a)^2 + \frac{f^{(3)} (a)}{3!} (x - a)^3 + \dots{} + \frac{f^{(n)} (a)}{n!} (x - a)^n\]
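
For example, for $f(x) = e^x$ at $a = 0$, the order $2$ Taylor polynomial is $p_2 (x) = 1 + x + \dfrac{x^2}{2}$.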

Taylor’s Theorem

\[f(x) = p_n(x) + R_{n + 1} (x)\]

where

\[R_{n + 1} (x) = \frac{f^{(n + 1)} (c)}{(n + 1)!} (x - a)^{n + 1}\]

for some $c$ between $a$ and $x$.

Sequences

When evaluating limits, sequences can be treated like the corresponding functions: if $a_n = f(n)$, then \(\lim_{x \to \infty} f(x) = L \implies \lim_{n \to \infty} a_n = L\)

A sequence diverges when $\lim_{n \to \infty} a_n = \pm \infty$ or $\lim_{n \to \infty} a_n$ does not exist. Otherwise, the sequence converges.

Properties Of Sequences

Given a sequence of real numbers $\{ a_n \}_{n = 0}^{\infty}$, the following properties hold

Infinite Series

The $k$th Term Divergence Test

$\sum_{k = 1}^{\infty} a_k$ diverges if $\lim_{k \to \infty} a_k$ fails to exist or is non-zero.

Integral Test

Suppose that $f$ is a positive, continuous, decreasing function on $[1, \infty)$ and that $a_k = f(k)$. Then $\sum_{k = 1}^{\infty} a_k$ converges if and only if $\int_1^{\infty} f(x) \ dx$ converges.

Comparison Test

Suppose that $\{ a_k \}_{k = 0}^{\infty}$ and $\{ b_k \}_{k = 0}^{\infty}$ are two positive sequences such that $a_k \le b_k$ for every natural number $k$. If $\sum b_k$ converges then $\sum a_k$ converges, and if $\sum a_k$ diverges then $\sum b_k$ diverges.

Usually used with series of the form $\sum_{k = 1}^{\infty} \dfrac{1}{k^p}$, which converge if $p > 1$ and diverge if $p \le 1$.

Ratio Test

Suppose that $\sum a_k$ is an infinite series with positive terms and that $\lim_{k \to \infty} \dfrac{a_{k + 1}}{a_k} = r$. Then $\sum a_k$ converges if $r < 1$, diverges if $r > 1$, and the test is inconclusive if $r = 1$.
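
For example, for $\sum_{k = 1}^{\infty} \dfrac{1}{k!}$, $\lim_{k \to \infty} \dfrac{a_{k + 1}}{a_k} = \lim_{k \to \infty} \dfrac{k!}{(k + 1)!} = \lim_{k \to \infty} \dfrac{1}{k + 1} = 0 < 1$, so the series converges.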

Alternating Series Test

If $a_k > 0$, $a_{k + 1} \le a_k$ for all $k$ and $\lim_{k \to \infty} a_k = 0$, then the alternating series $\sum_{k = 1}^{\infty} (-1)^{k + 1} a_k$ converges.

Taylor Series

$\displaystyle \sum_{k = 0}^{\infty} \frac{f^{(k)} (a)}{k!} (x - a)^k$
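
For example, the Taylor series of $e^x$ about $a = 0$ is $\displaystyle \sum_{k = 0}^{\infty} \frac{x^k}{k!}$.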

Power Series

Given that $\{ a_k \}_{k = 0}^{\infty}$ is a sequence of real numbers and that $a \in \mathbb{R}$, then

\[\sum_{k = 0}^{\infty} a_k x^k\]

is a power series in $x$, and

\[\sum_{k = 0}^{\infty} a_k (x - a)^k\]

is a power series in $x - a$, centred at $a$.