Written by Luka Kerr on June 8, 2018

Introduction To Vectors

Lines

Lines In $\mathbb{R}^3$

Vector Equation Of A Line

$\vec{x} = \vec{a} + λ\vec{v}$ where $\vec{a}$ is the position vector of a point on the line, and $\vec{v}$ is a direction vector of the line. λ is a scalar parameter that scales the direction vector, allowing us to reach any point on the line.

Let: $\vec{a} = \begin{pmatrix}a_1 \\ a_2 \\ a_3\end{pmatrix}$ and $\vec{v} = \begin{pmatrix}v_1 \\ v_2 \\ v_3\end{pmatrix}$ then $\vec{x} = \begin{pmatrix}a_1 \\ a_2 \\ a_3\end{pmatrix} + λ \begin{pmatrix}v_1 \\ v_2 \\ v_3\end{pmatrix}$
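
As a minimal sketch (assuming NumPy is available; the vectors $\vec{a}$ and $\vec{v}$ are illustrative values, not from the notes), evaluating $\vec{x} = \vec{a} + λ\vec{v}$ for a few values of λ gives different points on the same line:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # position vector of a point on the line
v = np.array([4.0, 5.0, 6.0])   # direction vector of the line

for lam in (-1.0, 0.0, 2.0):
    x = a + lam * v             # the vector equation of the line
    print(lam, x)
```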

Parametric Equation Of A Line

From the vector equation of a line, we can find the parametric equations of the line: \(x = a_1 + λv_1, \\ y = a_2 + λv_2, \\ z = a_3 + λv_3\)

Cartesian Form Of A Line

To convert to Cartesian form, we solve for λ (assuming $v_1$, $v_2$ and $v_3$ are all non-zero): \(\dfrac{x - a_1}{v_1} = \dfrac{y - a_2}{v_2} = \dfrac{z - a_3}{v_3} (= λ)\)

Planes

Linear Combinations

To find a linear combination of vectors, substitute the given vectors into the equation and perform the appropriate arithmetic.

A linear combination ($v$) of two vectors $v_1$ and $v_2$ is a sum of scalar multiples of $v_1$ and $v_2$:

\[v = λ_1 v_1 + λ_2 v_2\]

Planes In $\mathbb{R}^3$

Parametric Vector Equation Of A Plane

To form a vector equation of a plane, we need a point ($\vec{a}$) on the plane, and two non-parallel direction vectors ($\vec{v}$ and $\vec{w}$). From this, we are able to let $λ_1$ and $λ_2$ be any value, such that any point on the plane can be reached. \(\vec{x} = \vec{a} + λ_1\vec{v} + λ_2\vec{w}\)

Parametric Form Of A Plane

From the parametric vector equation of a plane, we can find the parametric form of the plane: \(x_1 = a_1 + λ_1v_1 + λ_2w_1, \\ x_2 = a_2 + λ_1v_2 + λ_2w_2, \\ x_3 = a_3 + λ_1v_3 + λ_2w_3\)

Cartesian Form Of A Plane

The Cartesian form of a plane can be represented as $ax_1 + bx_2 + cx_3 = d$, where \(\vec{n} = \begin{pmatrix}a\\b\\c\end{pmatrix}\) is the normal to the plane, and $P = (x_1, x_2, x_3)$ is any point on the plane. Use the point $P$ to solve for $d$.

Example:

Let $\vec{n} = (1, 2, 3)$

Let $P = (-1, 2, 0)$

$x_1 + 2x_2 + 3x_3 = d$

$1(-1) + 2(2) + 3(0) = d$

$-1 + 4 + 0 = d$

$d = 3$

$\therefore$ the Cartesian equation of the plane is $x_1 + 2x_2 + 3x_3 = 3$
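
The same computation in a short Python sketch (assuming NumPy; the values are those of the example above):

```python
import numpy as np

n = np.array([1.0, 2.0, 3.0])    # normal vector to the plane
p = np.array([-1.0, 2.0, 0.0])   # a known point on the plane

d = n.dot(p)                     # 1(-1) + 2(2) + 3(0) = 3
print(f"x1 + 2x2 + 3x3 = {d}")   # the Cartesian equation of the plane
```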

Vector Geometry

Lengths

The length of a vector can be calculated as: \(| \begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} | = \sqrt{(x_1)^2 + (x_2)^2 + (x_3)^2}\)

The Dot Product

Arithmetic Properties Of The Dot Product

Multiply corresponding entries and add the results: \(\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} \cdot \begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix} = (x_1 \times y_1) + (x_2 \times y_2) + (x_3 \times y_3)\)

This can also be written as $\vec{a} \cdot \vec{b} = |\vec{a}||\vec{b}| \cos \theta$ where $\theta \in [0, \pi]$ is the angle between vectors $\vec{a}$ and $\vec{b}$.

To solve for $\cos \theta$: \(\cos{\theta} = \dfrac{\vec{a} \cdot \vec{b}}{|\vec{a}||\vec{b}|}\)
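
A minimal NumPy sketch of this computation (the vectors are illustrative):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])

cos_theta = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(cos_theta)
print(theta)   # pi/4 for these vectors
```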

Geometric Interpretation Of The Dot Product

The dot product can be understood as a measure of how well one vector aligns with another vector. If two vectors are perpendicular (orthogonal) then the dot product evaluates to zero.

Useful for:

  - Finding the angle between two vectors
  - Testing whether two vectors are perpendicular (orthogonal)
  - Computing projections (see Applications below)

Applications

Vector Projection

Having a vector $\vec{AB}$ and a vector $\vec{AC}$, we can find the projection of $\vec{AB}$ onto $\vec{AC}$: \(Proj_{\vec{AC}} (\vec{AB})\) This is the component of $\vec{AB}$ in the direction of $\vec{AC}$; subtracting it from $\vec{AB}$ leaves the component of $\vec{AB}$ perpendicular to $\vec{AC}$, whose length is the shortest distance from the point $B$ to the line through $A$ and $C$.

The general formula for a projection of $\vec{a}$ onto $\vec{b}$ is: \(Proj_{\vec{b}} (\vec{a}) = \left(\dfrac{\vec{a} \cdot \vec{b}}{|\vec{b}|^2}\right) \vec{b}\)
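
A direct translation of this formula into Python (the helper name `proj` and the sample vectors are my own):

```python
import numpy as np

def proj(a, b):
    """Projection of vector a onto vector b: (a.b / |b|^2) b."""
    return (a.dot(b) / b.dot(b)) * b

a = np.array([2.0, 3.0, 1.0])
b = np.array([1.0, 0.0, 0.0])
print(proj(a, b))   # [2. 0. 0.]
```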

Shortest Distance From A Point To A Line In $\mathbb{R}^3$

Given a line $l$ (with direction vector $\vec{v}$, parametrised by λ) and a point $P$ not on the line:

  1. We can find the point $X$ where the perpendicular line from $l$ to $P$ intersects $l$
  2. Using $X$, we can then find $\vec{PX}$, the vector from $P$ to $X$
  3. We know that $\vec{v} \cdot \vec{PX} = 0$, where $\vec{v}$ is the direction vector of $l$, so we can solve for λ
  4. Once we have λ, we can substitute this back into $\vec{PX}$ and find the magnitude, giving the shortest distance
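
The four steps collapse into a few lines of NumPy (a sketch; the line and point are illustrative values):

```python
import numpy as np

# Line: x = a + lam*v; point P. Solve v . (a + lam*v - p) = 0 for lam.
a = np.array([0.0, 0.0, 0.0])
v = np.array([1.0, 1.0, 0.0])
p = np.array([1.0, 0.0, 1.0])

lam = v.dot(p - a) / v.dot(v)   # steps 1-3: solve for lambda
x = a + lam * v                 # X, the foot of the perpendicular
dist = np.linalg.norm(p - x)    # step 4: |PX| is the shortest distance
print(x, dist)
```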

The Cross Product

Arithmetic Properties Of The Cross Product

The cross product gives us a vector that is perpendicular (orthogonal) to both vectors $\vec{AB}$ and $\vec{AC}$.

When using the cross product, think of the ‘right hand rule’.

Each entry of the result comes from the other two rows of the operands, cross-multiplied and subtracted: \(\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} \times \begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix} = \begin{pmatrix}(x_2 \times y_3) - (y_2 \times x_3)\\(x_3 \times y_1) - (y_3 \times x_1)\\(x_1 \times y_2) - (y_1 \times x_2)\end{pmatrix}\)

\[e.g. \begin{pmatrix}2\\2\\-2\end{pmatrix} \times \begin{pmatrix}2\\1\\1\end{pmatrix} = \begin{pmatrix}4\\-6\\-2\end{pmatrix}\]
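
NumPy reproduces this example, and also gives the triangle area formula from the section below:

```python
import numpy as np

ab = np.array([2.0, 2.0, -2.0])
ac = np.array([2.0, 1.0, 1.0])

n = np.cross(ab, ac)             # perpendicular to both AB and AC
print(n)                         # [ 4. -6. -2.], matching the example

area = 0.5 * np.linalg.norm(n)   # half the area of the parallelogram
print(area)
```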

Geometric Interpretation Of The Cross Product

As stated above, the cross product gives us a vector that is perpendicular (orthogonal) to two vectors.

Useful for:

  - Finding a vector normal to a plane
  - Computing areas of triangles and parallelograms (see below)

Area Of $\triangle$ In $\mathbb{R}^2$ With Vectors

The area of a triangle $\triangle ABC$ is equal to half the area of the parallelogram formed by sides $\vec{AB}$ and $\vec{AC}$: \(= \dfrac{1}{2}|\vec{AB} \times \vec{AC}|\) where $|\vec{AB} \times \vec{AC}|$ is the length of the cross product of the vectors $\vec{AB}$ and $\vec{AC}$.

Complex Numbers

Introduction To Complex Numbers

The number $\sqrt{-1}$ can be represented as $i$. Thus, $i \times i$ or $i^2$ $= -1$.

A complex number is said to be in Cartesian form when it is written in the form $a + bi$, where $a$ is the real part, and $b$ is the imaginary part. Some examples are $3 + 4i$ and $\cos \dfrac{\pi}{3} + i \sin \dfrac{\pi}{3}$.

Rules Of Arithmetic For Complex Numbers

Let $z = a + bi$ and $w = c + di$

Addition

$z + w = (a + c) + (b + d)i$

Subtraction

$z - w = (a - c) + (b - d)i$

Multiplication

$(a + bi)(c + di) = ac + bci + adi + (bi)(di) = ac + (bc + ad)i + (bd)i^2$

Since $i^2 = -1$ this can be simplified to $(ac - bd) + (bc + ad)i$

Division

Where $w \ne 0$: \(\dfrac{z}{w} = \dfrac{ac + bd}{c^2 + d^2} + \dfrac{bc - ad}{c^2 + d^2}i\)
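
Python's built-in complex type follows exactly these rules, so it can be used to check hand calculations (the values of $z$ and $w$ are illustrative):

```python
z = 3 + 4j   # a + bi with a = 3, b = 4
w = 1 - 2j   # c + di with c = 1, d = -2

print(z + w)   # (4+2j)
print(z - w)   # (2+6j)
print(z * w)   # (ac - bd) + (bc + ad)i = (11-2j)
print(z / w)   # (ac + bd)/(c^2 + d^2) + ((bc - ad)/(c^2 + d^2))i = (-1+2j)
```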

Propositions

  1. Uniqueness of Zero. There is one and only one zero in $\mathbb{C}$.
  2. Cancellation Property. If $z, v, w \in \mathbb{C}$ satisfy $z + v = z + w$ then $v = w$.
  3. Cancellation Property. If $z, v, w \in \mathbb{C}$ satisfy $zv = zw$ and $z \ne 0$ then $v = w$.
  4. $0z = 0$ for all complex numbers $z$.
  5. $(-1)z = -z$ for all complex numbers $z$.
  6. If $z, w \in \mathbb{C}$ satisfy $zw = 0$ then either $z = 0$ or $w = 0$, or both.

Real Parts, Imaginary Parts & Complex Conjugates

Real Parts

The real part of a complex number $z = a + bi$, where $a, b \in \mathbb{R}$, is written $Re(z)$ and is given by $Re(z) = a$.

Imaginary Parts

The imaginary part of a complex number $z = a + bi$, where $a, b \in \mathbb{R}$, is written $Im(z)$ and is given by $Im(z) = b$.

Complex Conjugates

If $z = a + bi$ where $a, b \in \mathbb{R}$ then the complex conjugate of $z$ is $\bar{z} = a - bi$.

Properties:

  - $\overline{z + w} = \bar{z} + \bar{w}$
  - $\overline{zw} = \bar{z} \bar{w}$
  - $z\bar{z} = |z|^2$
  - $\bar{\bar{z}} = z$

Argand Diagram

The Argand diagram is a geometric way to represent complex numbers of the form $z = a + bi$. The complex number is plotted at the coordinates $(a, b)$, where the $x$ axis represents the real axis and the $y$ axis represents the imaginary axis. For example, $z = 3 - 2i$ would have coordinates $(3, -2)$.

The polar form of a complex number can also be represented on the Argand diagram, where $\theta$ is the angle measured from the positive $x$-axis and $r$ is the distance of a point from the origin.

Polar Form, Modulus & Argument

Polar Form

The polar form of a complex number can be obtained using plane polar coordinates $r$ and $\theta$ instead of $x$ and $y$.

The relationship between the real and imaginary parts of a complex number $z = x + yi$ and the polar coordinates $r$ and $\theta$ are:

$Re(z) = x = r \cos \theta$ and $Im(z) = y = r \sin \theta$

Hence a complex number $z \ne 0$ can be written using the polar coordinates $r$ and $\theta$ as:

$z = r(\cos \theta + i \sin \theta)$

It is important to note that $\theta$ for any complex number $z = x + yi$ is not uniquely defined, since adding or subtracting multiples of $2\pi$ to $\theta$ produces the exact same values for $x$ and $y$, and hence the same complex number $z$.

Modulus

For $z = x + yi$ where $x, y \in \mathbb{R}$, we define the modulus of $z$ to be $|z| = \sqrt{x^2 + y^2}$. This is also known as the magnitude or absolute value of $z$. When written in polar form, the modulus is $r$, the distance of the point $z$ from the origin in the Argand diagram.

Argument

The polar coordinate $\theta$ of a complex number $z$ is called the argument of the complex number, and is written as $arg(z)$. This angle can be increased or decreased by $2\pi$ without changing the corresponding complex number.

To find the principal argument of $z$ we choose a value of $\theta$ such that $-\pi \lt \theta \leq \pi$. This is written as $Arg(z)$.
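
The standard library's cmath module computes both quantities; cmath.polar returns $(r, \theta)$ with $\theta$ the principal argument (the value of $z$ is illustrative):

```python
import cmath

z = -1 + 1j
r, theta = cmath.polar(z)   # r = |z|, theta = Arg(z) in (-pi, pi]
print(r)                    # sqrt(2) ~ 1.414
print(theta)                # 3*pi/4 ~ 2.356
```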

Properties & Application Of The Polar Form

De Moivre’s Theorem

For any real number $\theta$ and integer $n$: \((\cos(\theta) + i \sin(\theta))^n = \cos(n \theta) + i \sin(n \theta)\)

Euler’s Formula

For a real $\theta$ we define \(e^{i\theta} = \cos(\theta) + i \sin(\theta)\)

The Arithmetic Of Polar Forms

Using Euler’s formula, we can rewrite the complex number $z = r(\cos\theta + i \sin \theta)$ in an alternative and more useful form. This is called the polar form of the non-zero complex number: $z = re^{i\theta}$ where $r = |z|$ and $\theta = Arg(z)$.

Four important cases are:

  1. $1 = e^0$
  2. $i = e^{i\pi/2}$
  3. $-1 = e^{i\pi}$
  4. $-i = e^{-i \pi/2}$
Multiplication

$z_1 z_2 = r_1e^{i\theta_1} r_2e^{i\theta_2} = r_1 r_2e^{i(\theta_1 + \theta_2)}$

Division

$\dfrac{z_1}{z_2} = \dfrac{r_1e^{i\theta_1}}{r_2e^{i\theta_2}} = \dfrac{r_1}{r_2}e^{i(\theta_1-\theta_2)}$

Powers Of Complex Numbers

If $z = re^{i\theta}$ then the properties of exponentials give $z^n = r^n e^{in\theta}$.
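
A quick numerical check of this rule (the value of $z$ is illustrative):

```python
import cmath

z = 1 + 1j                                  # r = sqrt(2), theta = pi/4
r, theta = cmath.polar(z)

n = 4
zn = (r ** n) * cmath.exp(1j * n * theta)   # z^n = r^n e^{i n theta}
print(zn)                                   # approximately -4
print(z ** 4)                               # (-4+0j), in agreement
```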

Roots Of Complex Numbers

A complex number $z$ is an nth root of a number $z_0$ if $z_0$ is the nth power of $z$; that is, $z$ is an nth root of $z_0$ if $z^n = z_0$.
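
Writing $z_0 = re^{i\theta}$, the $n$ distinct nth roots are $r^{1/n} e^{i(\theta + 2\pi k)/n}$ for $k = 0, 1, \ldots, n - 1$ (a standard fact, stated here for the sketch below; the choice $z_0 = -8$, $n = 3$ is illustrative):

```python
import cmath, math

z0 = -8 + 0j
n = 3
r, theta = cmath.polar(z0)

roots = [r ** (1 / n) * cmath.exp(1j * (theta + 2 * math.pi * k) / n)
         for k in range(n)]
for z in roots:
    print(z, z ** n)   # each root cubed gives (approximately) -8
```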

Trigonometric Applications Of Complex Numbers

Binomial Theorem

If $a, b \in \mathbb{C}$ and $n \in \mathbb{N}$ then \((a + b)^n = \sum_{k=0}^{n} \begin{pmatrix}n\\k\end{pmatrix} a^{n-k}b^k\) where the numbers \(\begin{pmatrix}n\\k\end{pmatrix} = \dfrac{n!}{k!(n-k)!}\) are the binomial coefficients.

From this, we can express $\cos(n \theta)$ or $\sin(n\theta)$ in terms of powers of $\cos(\theta)$ or $\sin(\theta)$ by using De Moivre’s Theorem. For example: \(\cos(4\theta) + i\sin(4\theta) = (\cos(\theta) + i\sin(\theta))^4\) can then be expanded using the binomial theorem as it is in the form $(a + b)^n$.

Complex Polynomials

Roots & Factors Of Polynomials

  1. A number $\alpha$ is a root (or zero) of a polynomial $p$ if $p(\alpha) = 0$.
  2. Let $p$ be a polynomial. Then, if there exist polynomials $p_1$ and $p_2$ such that $p(z) = p_1(z)p_2(z)$ for all complex $z$, then $p_1$ and $p_2$ are called factors of $p$.
Remainder Theorem

The remainder $r$ which results when $p(z)$ is divided by $z - \alpha$ is given by $r = p(\alpha)$.

Factor Theorem

A number $\alpha$ is a root of $p$ if and only if $z - \alpha$ is a factor of $p(z)$.

The Fundamental Theorem Of Algebra

A polynomial of degree $n \ge 1$ has at least one root in the complex numbers.

Factorisation Theorem

Every polynomial of degree $n \ge 1$ has a factorisation into $n$ linear factors of the form

\[p(z) = a(z - \alpha_1)(z - \alpha_2) \cdots (z - \alpha_n)\]

where the $n$ complex numbers $\alpha_1, \alpha_2, \ldots, \alpha_n$ are roots of $p$ and where $a$ is the coefficient of $z^n$.

Linear Equations & Matrices

Linear Equations & Matrix Notation

We can express the following linear equations as an augmented matrix: \(x_1 + 2x_2 - 3x_3 = 5 \\ 4x_1 - 5x_2 + 4x_3 = 9\) \({ \left( \begin{array}{ccc|c} 1 & 2 & -3 & 5 \\ 4 & -5 & 4 & 9 \\ \end{array} \right) }\)

An augmented matrix contains the coefficients of each unknown, and the right hand side of the linear equation(s). This can be expressed in an equation: $Ax = b$, where $A$ is the coefficient matrix and $x$ is the unknown vector. $Ax = b$ can be visualised as: \(A = { \left( \begin{array}{ccc} 1 & 2 & -3 \\ 4 & -5 & 4 \\ \end{array} \right) } ,\quad x =\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} , \quad b = \begin{pmatrix}5\\9\end{pmatrix}\)
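
NumPy can solve this system directly; a sketch using the matrix from the example above. Since there are more unknowns than equations the system is underdetermined, so np.linalg.lstsq returns one particular solution:

```python
import numpy as np

A = np.array([[1.0, 2.0, -3.0],
              [4.0, -5.0, 4.0]])
b = np.array([5.0, 9.0])

# lstsq picks one particular solution x satisfying Ax = b.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)
print(A @ x)   # recovers b
```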

Solving Systems Of Equations

Row-Echelon Form

A matrix is in row-echelon form if:

  - All rows consisting entirely of zeros are at the bottom
  - The leading (leftmost non-zero) entry of each non-zero row is strictly to the right of the leading entry of the row above

For example, the following matrix is in row-echelon form: \(\begin{pmatrix} 2 & 3 & 4 & 5 \\ 0 & 6 & 7 & 8 \\ 0 & 0 & 9 & 10 \end{pmatrix}\)

Reduced Row-Echelon Form

A matrix is in reduced row-echelon form if:

  - It is in row-echelon form
  - Every leading entry is $1$
  - Each leading $1$ is the only non-zero entry in its column

For example, the following matrix is in reduced row-echelon form: \(\begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 3 \\ \end{pmatrix}\)

Converting To Row-Echelon Form

The process to convert to row-echelon form is known as Gaussian Elimination. Three row operations can be performed on each row to get the matrix into row-echelon form:

  - Swap two rows
  - Multiply a row by a non-zero constant
  - Add a multiple of one row to another row

Converting To Reduced Row-Echelon Form

To convert a matrix from row-echelon form to reduced row-echelon form we can perform two types of elementary row operations:

  - Multiply a row by a non-zero constant
  - Add a multiple of one row to another row

The procedure is as follows:

  1. Start with the lowest row which is not all zeros
  2. Multiply it by a constant to make its leading entry $1$
  3. Add multiples of this row to higher rows to get all zeros in the column above the leading entry of this row
  4. Go to step 1 for the next lowest row
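
A sketch of this reduction in Python; it uses the Gauss-Jordan variant (pivot, scale, then clear the whole column) rather than the exact bottom-up order above, and is not written for numerical robustness:

```python
import numpy as np

def rref(M, tol=1e-12):
    """Reduce a matrix to reduced row-echelon form."""
    A = M.astype(float).copy()
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        if r == rows:
            break
        # Choose the row at or below r with the largest entry in column c.
        pivot = max(range(r, rows), key=lambda i: abs(A[i, c]))
        if abs(A[pivot, c]) < tol:
            continue                    # no pivot in this column
        A[[r, pivot]] = A[[pivot, r]]   # swap rows
        A[r] /= A[r, c]                 # scale so the leading entry is 1
        for i in range(rows):
            if i != r:
                A[i] -= A[i, c] * A[r]  # clear the rest of the column
        r += 1
    return A

print(rref(np.array([[1, 2, -3, 5],
                     [4, -5, 4, 9]])))
```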

Deducing Solubility From Row-Echelon Form

We can determine the number of solutions to the linear system by examining the position of the leading entries in a matrix reduced to row-echelon form.

0 solutions if the right hand column is a leading column: \({ \left( \begin{array}{cc|c} 1 & 2 & 3 \\ 0 & 0 & 4 \\ \end{array} \right) }\)

1 unique solution if every variable is a leading variable: \({ \left( \begin{array}{cc|c} 1 & 0 & 2 \\ 0 & 3 & 4 \\ \end{array} \right) }\)

$\infty$ many solutions if there is at least one non-leading variable: \({ \left( \begin{array}{cc|c} 1 & 2 & 3 \\ 0 & 0 & 0 \\ \end{array} \right) }\)

Solving For An Indeterminate $b$

When dealing with an unknown right hand side, also known as an indeterminate $b$, we perform similar steps as above: reduce the augmented matrix $(A|b)$ to row-echelon form while keeping the entries of $b$ symbolic, then read off any conditions on the entries of $b$ required for the system to be soluble.

Matrices

Matrix Arithmetic And Algebra

An $m \times n$ matrix is an array of $m$ rows by $n$ columns. The notation $[A]_{ij}$ denotes a matrix $A$ and the element at row $i$, column $j$.

Equality, Addition & Multiplication Of A Scalar

Two matrices $A$ and $B$ are equal if:

  - They have the same size (both are $m \times n$)
  - Corresponding entries are equal, that is $[A]_{ij} = [B]_{ij}$ for all $i$ and $j$

If $A$ and $B$ are $m \times n$ matrices, then the sum $C = A + B$ is the $m \times n$ matrix whose entries are: \([C]_{ij} = [A]_{ij} + [B]_{ij}\)

For any matrix $A \in M_{mn}$, the negative of $A$ is the $m \times n$ matrix $-A$ whose entries are $[-A]_{ij} = -[A]_{ij}$.

If $A$ and $B$ are $m \times n$ matrices, then the subtraction $C = A - B$ is the $m \times n$ matrix whose entries are: \([C]_{ij} = [A]_{ij} - [B]_{ij}\)

If $A$ is an $m \times n$ matrix and $\lambda$ is a scalar, then the scalar multiple $B = \lambda A$ of $A$ is the $m \times n$ matrix whose entries are: \([B]_{ij} = \lambda[A]_{ij}\)

Matrix Multiplication

If $A$ is an $m \times n$ matrix and $X$ is an $n \times p$ matrix, then the product $AX$ is the $m \times p$ matrix whose entries are given by the formula: \([AX]_{ij} = \sum_{k = 1}^{n} [A]_{ik} [X]_{kj}\)

An identity matrix ($I$) is a square matrix with 1’s on the diagonal, and 0’s off the diagonal. For example: \({ \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) } \quad or \quad { \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right) }\)

Properties of matrix multiplication:

  - Associative: $(AB)C = A(BC)$
  - Distributive: $A(B + C) = AB + AC$ and $(A + B)C = AC + BC$
  - $AI = IA = A$ for an identity matrix $I$ of the appropriate size
  - Not commutative in general: $AB \ne BA$

Using these laws allows us to simplify expressions in unknown matrices in almost the same way as simplifying expressions in ordinary algebra.

The Transpose Of A Matrix

The transpose of an $m \times n$ matrix $A$ is the $n \times m$ matrix $A^T$ ($A$ transpose) with entries given by: \([A^T]_{ij} = [A]_{ji}\)

Properties:

  - $(A^T)^T = A$
  - $(A + B)^T = A^T + B^T$
  - $(\lambda A)^T = \lambda A^T$
  - $(AB)^T = B^T A^T$

The Inverse Of A Matrix

A matrix $X$ is said to be an inverse of a matrix $A$ if both $AX = I$ and $XA = I$, where $I$ is an identity matrix of the appropriate size.

A matrix $X$ is said to be a right inverse of $A$ if $A$ is $r \times c$, $X$ is $c \times r$ and $AX = I_r$.

A matrix $Y$ is said to be a left inverse of $A$ if $A$ is $r \times c$, $Y$ is $c \times r$ and $YA = I_c$.

Properties:

  - If $A$ is invertible, its inverse $A^{-1}$ is unique
  - $(A^{-1})^{-1} = A$
  - $(AB)^{-1} = B^{-1}A^{-1}$
  - $(A^T)^{-1} = (A^{-1})^T$

Calculating The Inverse Of A Matrix

Given $A$, an invertible $n \times n$ matrix, we can write the columns of $A^{-1}$ as $x_1, x_2, \ldots, x_n$, then $A A^{-1} = I$ can be written as: \(A A^{-1} = A(x_1|x_2|...|x_n) = (e_1|e_2|...|e_n)\) where $\{e_1, e_2, \ldots, e_n\}$ is the standard basis for $\mathbb{R}^n$, and $x_i$ is the unique solution of $Ax = e_i$. If $Ax = e_i$ doesn’t have a solution, then $A$ is not invertible.

Steps to find $A^{-1}$

  1. Form the augmented matrix $(A | I)$ with $n$ rows and $2n$ columns
  2. Use Gaussian elimination to convert $(A | I)$ to row-echelon form $(U | C)$. Then, if all entries in the bottom row of $U$ are zero, stop - in this case $A$ has no inverse
  3. Otherwise, use further row operations to reduce $(U | C)$ to reduced row-echelon form $(I | B)$. The right hand half $B$ is the inverse

For a $2 \times 2$ matrix, the inverse can be found in a simpler way: \({ \left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right)^{-1} } = \dfrac{1}{ad - bc} { \left( \begin{array}{cc} d & -b \\ -c & a \\ \end{array} \right) } , \textrm{ provided } ad - bc \ne 0\)
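
A sketch of the $2 \times 2$ formula, checked against NumPy's general-purpose inverse (the matrix is illustrative):

```python
import numpy as np

def inv2x2(m):
    a, b = m[0]
    c, d = m[1]
    det = a * d - b * c            # ad - bc
    if det == 0:
        raise ValueError("matrix is not invertible")
    return (1 / det) * np.array([[d, -b], [-c, a]])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(inv2x2(A))
print(np.linalg.inv(A))   # same result
```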

Inverses & Solution Of $Ax = b$

If $A$ is an invertible $n \times n$ matrix, then the system $Ax = b$ has exactly one solution, namely $x = A^{-1}b$.

Determinants

The determinant of a matrix $A$ is written as $det(A)$ or $|A|$. A determinant is only defined for a matrix if that matrix is square.

The determinant of an $n \times n$ matrix $A$ is defined as: \(\sum_{k = 1}^{n} (-1)^{1 + k} a_{1k} |A_{1k}|\) where $A_{1k}$ is the $(n-1) \times (n-1)$ matrix obtained by deleting row $1$ and column $k$ of $A$.

Properties:

  - $det(A^T) = det(A)$
  - $det(AB) = det(A) det(B)$
  - Swapping two rows changes the sign of the determinant
  - Multiplying a row by a constant $\lambda$ multiplies the determinant by $\lambda$
  - $A$ is invertible if and only if $det(A) \ne 0$

The Efficient Numerical Evaluation Of Determinants

To evaluate a determinant efficiently, reduce $A$ to an upper triangular matrix $U$ using row operations, keeping track of how each operation changes the determinant (in particular, each row swap reverses its sign). The determinant of the triangular matrix is then just the product of its diagonal entries:

\[det(U) = u_{11} u_{22} ... u_{nn}\]
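
A quick numerical illustration (the upper triangular matrix is illustrative):

```python
import numpy as np

U = np.array([[2.0, 3.0, 4.0],
              [0.0, 6.0, 7.0],
              [0.0, 0.0, 9.0]])

print(np.prod(np.diag(U)))   # 108.0, the product u11 * u22 * u33
print(np.linalg.det(U))      # agrees (up to floating point error)
```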

Determinants & Solutions Of $Ax = b$

Given $A$ is a square $n \times n$ matrix:

  - If $det(A) \ne 0$, then $Ax = b$ has the unique solution $x = A^{-1}b$
  - If $det(A) = 0$, then $Ax = b$ has either no solutions or infinitely many solutions

Proving 3 Points In 3D Space Are Collinear

Given 3 points $A$, $B$ and $C$, to prove that these 3 points are collinear, you must show that $\vec{AB}$ is parallel to $\vec{AC}$. If $\vec{AC}$ is a scalar multiple of $\vec{AB}$, then the two vectors are parallel, and the 3 points are collinear.

Distance Between Two Points

In 2D Space

For points $(x_1, y_1)$ and $(x_2, y_2)$ the distance can be found: \(\sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}\)

In 3D Space

The above formula can be extended for points $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$: \(\sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}\)

Coordinates Where A Line Meets A Plane

Given a line $l$ and a plane $\Pi$, we can find the point $P$ where the line intersects the plane in 3 steps:

  1. Find the parametric equation of the line in terms of λ and $x$, $y$, $z$
  2. Substitute these values into the equation of the plane to find λ
  3. Substitute λ back into $x$, $y$ and $z$

The point $P$ is equal to $(x, y, z)$.
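
The three steps in NumPy form, for an illustrative line and plane (writing the plane as $\vec{n} \cdot \vec{x} = d$):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])   # point on the line
v = np.array([0.0, 1.0, 1.0])   # direction of the line
n = np.array([0.0, 0.0, 1.0])   # normal of the plane
d = 3.0                         # plane: n . x = d

# Substituting x = a + lam*v into n . x = d gives
# lam = (d - n.a) / (n.v), provided n.v != 0 (line not parallel to plane).
lam = (d - n.dot(a)) / n.dot(v)
p = a + lam * v
print(p)   # the intersection point P = (1, 3, 3)
```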

Sets, Inequalities & Functions

Elementary Functions

Elementary functions are functions that can be constructed by combining a finite number of polynomials, exponentials, logarithms, roots and trigonometric functions via function composition, addition, subtraction, multiplication and division.

Examples:

$f(x) = e^{\sin(x)} + x^2$

Implicitly Defined Functions

A function is said to be implicitly defined when it is specified by an equation relating $x$ and $y$ (possibly together with extra conditions), rather than by an explicit rule $y = f(x)$. An example is the two functions $f : [-a, a] \to \mathbb{R}$ and $g : (-a, a) \to \mathbb{R}$ defined by the rules: \(\left\{ \begin{array}{l} y = f(x) \\ (x^2 + y^2 -1)^3 -x^2y^3 = 0 \\ y \ge b \end{array} \right.\) and \(\left\{ \begin{array}{l} y = g(x) \\ (x^2 + y^2 -1)^3 -x^2y^3 = 0 \\ y \lt b \end{array} \right.\)

Limits

Limits Of Functions At $\infty$

The Pinching Theorem

On a graph, given a function $h$ that always lies above a function $f$, and given that $h$ and $f$ have the same limit as $x \to \infty$, any function $g$ that always lies between $h$ and $f$ (such that it is ‘pinched’) has the same limit as $x \to \infty$.

Suppose that $f$, $g$ and $h$ are all defined on the interval $(b, \infty)$, where $b \in \mathbb{R}$. If: \(f(x) \le g(x) \le h(x) \ \ \ \ \ \ \ \forall x \in (b, \infty)\) and \(\lim_{x \to \infty} f(x) = \lim_{x \to \infty} h(x) = L\) then \(\lim_{x \to \infty} g(x) = L\)

Limits Of The Form $f(x)/g(x)$

When calculating limits of the form \(\lim_{x \to \infty} \dfrac{f(x)}{g(x)}\) where both $f(x)$ and $g(x)$ tend to infinity as $x \to \infty$, to find the limit we have to divide both $f$ and $g$ by the fastest growing term appearing in the denominator $g$.

For example, evaluate: \(\lim_{x \to \infty} \dfrac{4x^2 -5}{2x^2 +3x}\) Solution: There are two terms appearing in the denominator - $2x^2$ and $3x$. As $x \to \infty$, the fastest growing term is the one involving $x^2$. So we divide both the numerator and the denominator by $x^2$: \(\dfrac{4x^2 -5}{2x^2 + 3x} = \dfrac{4 - 5/x^2}{2 + 3/x} \\ \to \dfrac{4 - 0}{2 + 0}\) as $x \to \infty$. Therefore: \(\lim_{x \to \infty} \dfrac{4x^2 -5}{2x^2 +3x} = 2\)

Limits Of The Form $\sqrt{f(x)} - \sqrt{g(x)}$

When calculating limits of the form \(\sqrt{f(x)} - \sqrt{g(x)}\) where both $f(x)$ and $g(x)$ tend to infinity as $x \to \infty$, we multiply and divide by $\sqrt{f(x)} + \sqrt{g(x)}$ and expand the numerator as a difference of squares.

For example: \(\sqrt{x + 5} - \sqrt{x + 2} = \dfrac{(\sqrt{x + 5} - \sqrt{x + 2})(\sqrt{x + 5} + \sqrt{x + 2})}{\sqrt{x + 5} + \sqrt{x + 2}} \\ = \dfrac{(x + 5) - (x + 2)}{\sqrt{x + 5} + \sqrt{x + 2}} \\ = \dfrac{3}{\sqrt{x + 5} + \sqrt{x + 2}} \\ \to 0\) Therefore, $\lim_{x \to \infty} \sqrt{x + 5} - \sqrt{x + 2} = 0$

Indeterminate Forms

Given \(\lim_{x \to \infty} \dfrac{f(x)}{g(x)}\) where $f(x) \to \infty$ and $g(x) \to \infty$ as $x \to \infty$, we recognise that a limit of the form $\frac{\infty}{\infty}$ is an indeterminate form.

The Definition Of $\lim\limits_{x \to \infty} f(x)$

Suppose that $L$ is a real number and $f$ is a real-valued function defined on some interval $(b, \infty)$. We say that $\lim_{x \to \infty} f(x) = L$ if for every positive real number $\epsilon$, there is a real number $M$ such that if $x \gt M$, then $|f(x) - L| < \epsilon$.

Limits Of Functions At A Point

Left-hand, Right-hand & Two-sided Limits

If the left-hand limit $\lim_{x \to a^-} f(x)$ and the right-hand limit $\lim_{x \to a^+} f(x)$ both exist and equal the same real number $L$, then we say that the limit of $f(x)$ as $x \to a$ exists and is equal to $L$, and we write \(\lim_{x \to a} f(x) = L\) If any one of these conditions fails then we say that the limit doesn’t exist.

Limits & Continuous Functions

Suppose that $f$ is defined on some open interval containing the point $a$. If $\lim_{x \to a} f(x) = f(a)$ then we say that $f$ is continuous at $a$; otherwise, we say that $f$ is discontinuous at $a$.

If $f : \mathbb{R} \to \mathbb{R}$ is continuous at every point $a$ in $\mathbb{R}$ then we say that $f$ is continuous everywhere.

Rules:

Suppose that $a \in \mathbb{R}$ and that $\lim_{x \to a} f(x)$ and $\lim_{x \to a} g(x)$ exist and are finite real numbers. Then:

  - $\lim_{x \to a} (f(x) + g(x)) = \lim_{x \to a} f(x) + \lim_{x \to a} g(x)$
  - $\lim_{x \to a} f(x)g(x) = \big(\lim_{x \to a} f(x)\big)\big(\lim_{x \to a} g(x)\big)$
  - $\lim_{x \to a} \dfrac{f(x)}{g(x)} = \dfrac{\lim_{x \to a} f(x)}{\lim_{x \to a} g(x)}$, provided $\lim_{x \to a} g(x) \ne 0$

Properties Of Continuous Functions

Combining Continuous Functions

Propositions:

  - If $f$ and $g$ are continuous at $a$, then so are $f + g$, $f - g$ and $fg$
  - If $f$ and $g$ are continuous at $a$ and $g(a) \ne 0$, then $f/g$ is continuous at $a$
  - If $g$ is continuous at $a$ and $f$ is continuous at $g(a)$, then $f \circ g$ is continuous at $a$

Continuity On Intervals

A function $f$ is continuous on an interval if it is continuous at every point of that interval; at a closed endpoint, the appropriate one-sided limit is used.

The Intermediate Value Theorem

Theorem:

Suppose that $f$ is continuous on the closed interval $[a, b]$. If $z$ lies between $f(a)$ and $f(b)$ then there is at least one real number $c$ in $[a, b]$ such that $f(c) = z$.

Assumptions:

  - $f$ is continuous on the whole interval
  - The interval $[a, b]$ is closed

If either assumption is dropped, the conclusion can fail.

The Maximum-Minimum Theorem

Definition:

Given $f$ being defined on a closed interval $[a, b]$:

  - $f$ attains its maximum at a point $d \in [a, b]$ if $f(x) \le f(d)$ for all $x \in [a, b]$
  - $f$ attains its minimum at a point $c \in [a, b]$ if $f(c) \le f(x)$ for all $x \in [a, b]$

Theorem:

If $f$ is continuous on a closed interval $[a, b]$, then $f$ attains its minimum and maximum on $[a, b]$. Essentially, there exist points $c$ and $d$ in $[a, b]$ such that $f(c) \le f(x) \le f(d)$ for all $x$ in $[a, b]$.

Differentiable Functions

Gradients Of Tangents & Derivatives

Suppose that $f$ is defined on some open interval containing the point $x$. We say that $f$ is differentiable at $x$ if \(\lim_{h \to 0} \dfrac{f(x + h) - f(x)}{h}\) exists. If the limit exists, we denote it by $f^\prime(x)$. This is known as the derivative of $f$ at $x$.
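
The limit can be seen numerically by taking a small $h$ in the difference quotient (a sketch; the test functions are illustrative):

```python
import math

def derivative(f, x, h=1e-6):
    """Approximate f'(x) by the difference quotient (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

print(derivative(math.sin, 0.0))          # close to cos(0) = 1
print(derivative(lambda x: x ** 2, 3.0))  # close to 6
```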

Rules For Differentiation

  1. $(f + g)^\prime(x) = f^\prime(x) + g^\prime(x)$
  2. $(C.f)^\prime(x) = C.f^\prime(x)$, where $C$ is a constant
  3. $(fg)^\prime(x) = f^\prime(x) g(x) + f(x) g^\prime(x)$
  4. $\bigg(\dfrac{f}{g}\bigg)^\prime (x) = \dfrac{f^\prime(x)g(x) - f(x)g^\prime(x)}{g(x)^2}$, given that $g(x) \ne 0$

Rules 3 and 4 are called the product and quotient rules respectively, and can be expressed as \(\dfrac{d(uv)}{dx} = v \dfrac{du}{dx} + u \dfrac{dv}{dx}\) and \(\dfrac{d}{dx} \bigg(\dfrac{u}{v}\bigg) = \dfrac{v \dfrac{du}{dx} - u \dfrac{dv}{dx}}{v^2}\) Where $u$ and $v$ are both functions of $x$.

The Chain Rule

If $y = f(u)$ and $u = g(x)$, then \(\dfrac{dy}{dx} = \dfrac{dy}{du} \dfrac{du}{dx}\)

Implicit Differentiation

Given that a number $q$ is rational, then \(\dfrac{d}{dx} x^q = q x^{q - 1}\)

Differentiation, Continuity & Split Functions

If $f$ is differentiable at $a$, then $f$ is continuous at $a$.

If $a$ is a fixed real number and the function $f$ is defined by

\[f(x) = \left\{ \begin{array}{ll} p(x) & \text{if } x \ge a \\ q(x) & \text{if } x < a \end{array} \right.\]

where $p(x)$ and $q(x)$ are continuous and differentiable in some interval containing $a$, then if $f$ is continuous at $a$ and $p^\prime(a) = q^\prime(a)$, then $f$ is differentiable at $x = a$.

Local Maximum, Local Minimum & Stationary Points

Suppose that $f$ is defined on some interval $I$. We say that a point $c$ in $I$ is a local minimum point if there is a positive number $h$ such that $f(c) \le f(x)$ whenever $x \in (c - h, c + h)$ and $x \in I$.

We say that a point $d$ in $I$ is a local maximum point if there is a positive number $h$ such that $f(x) \le f(d)$ whenever $x \in (d - h, d + h)$ and $x \in I$.

Theorem

Suppose $f$ is defined on $(a, b)$ and has a local maximum or minimum point at some $c$ in $(a, b)$. If $f$ is differentiable at $c$ then $f^\prime(c) = 0$.

Stationary Point Definition

If a function $f$ is differentiable at a point $c$ and $f^\prime(c) = 0$, then $c$ is called a stationary point of $f$.

The Mean Value Theorem & Applications

The Mean Value Theorem

Theorem

Suppose that $f$ is continuous on $[a, b]$ and differentiable on $(a, b)$. Then there is at least one real number $c$ in $(a, b)$ such that \(\dfrac{f(b) - f(a)}{b - a} = f'(c)\) Essentially, between the two points $(a, f(a))$ and $(b, f(b))$ there lies a point $c$ where the tangent at $c$ has the same gradient as the chord from $(a, f(a))$ to $(b, f(b))$.

The Sign Of A Derivative

Suppose that a function $f$ is defined on an interval $I$. We say that

  - $f$ is increasing on $I$ if $f(x_1) < f(x_2)$ whenever $x_1 < x_2$ in $I$
  - $f$ is decreasing on $I$ if $f(x_1) > f(x_2)$ whenever $x_1 < x_2$ in $I$

Theorem

Suppose that $f$ is continuous on $[a, b]$ and differentiable on $(a, b)$. Then:

  - If $f^\prime(x) > 0$ for all $x \in (a, b)$, then $f$ is increasing on $[a, b]$
  - If $f^\prime(x) < 0$ for all $x \in (a, b)$, then $f$ is decreasing on $[a, b]$
  - If $f^\prime(x) = 0$ for all $x \in (a, b)$, then $f$ is constant on $[a, b]$

The Second Derivative & Applications

The Second Derivative Test

Suppose that a function $f$ is twice differentiable on $(a, b)$ and that $c \in (a, b)$. Then:

  - If $f^\prime(c) = 0$ and $f^{\prime\prime}(c) > 0$, then $c$ is a local minimum point of $f$
  - If $f^\prime(c) = 0$ and $f^{\prime\prime}(c) < 0$, then $c$ is a local maximum point of $f$

Note: if $f^{\prime\prime}(c) = 0$ the test gives no information; $c$ may be a local maximum point, a local minimum point or neither.

Critical Points, Maxima & Minima

Suppose that $f$ is defined on $[a, b]$. We say that a point $c$ in $[a, b]$ is a critical point for $f$ on $[a, b]$ if $c$ satisfies one of the following properties:

  - $c$ is an endpoint of $[a, b]$, that is $c = a$ or $c = b$
  - $f$ is differentiable at $c$ and $f^\prime(c) = 0$
  - $f$ is not differentiable at $c$

Theorem

If $f$ is continuous on $[a, b]$, then $f$ has a global maximum and a global minimum on $[a, b]$. Also, the global maximum point and the global minimum point are both critical points for $f$ on $[a, b]$.

Antiderivatives

Suppose that $f$ is continuous on an open interval $I$. A function $F$ is said to be an antiderivative of $f$ on $I$ if $F^\prime(x) = f(x)$ for all $x$ in $I$.

Theorem

Suppose that $f$ is a continuous function on an open interval $I$ and that $F$ and $G$ are two antiderivatives of $f$ on $I$. Then there exists a real constant $C$ such that $G(x) = F(x) + C$ for all $x$ in $I$.

Well Known Antiderivatives

| Function | Antiderivative |
| --- | --- |
| $x^r$, where $r$ is rational and $r \ne -1$ | $\dfrac{1}{r + 1} x^{r + 1} + C$ |
| $\sin(x)$ | $-\cos(x) + C$ |
| $\cos(x)$ | $\sin(x) + C$ |
| $e^{ax}$ | $\dfrac{1}{a}e^{ax} + C$ |
| $\dfrac{f^\prime(x)}{f(x)}$ | $\ln \vert f(x) \vert + C$ |

L’Hopital’s Rule

L’Hopital’s Rule

Suppose that $f$ and $g$ are both differentiable functions and $a$ is a real number. Suppose also that either one of the two following conditions holds:

  - $\lim_{x \to a} f(x) = 0$ and $\lim_{x \to a} g(x) = 0$, or
  - $f(x) \to \pm\infty$ and $g(x) \to \pm\infty$ as $x \to a$

If $\lim_{x \to a} \dfrac{f^\prime(x)}{g^\prime(x)}$ exists, then \(\lim_{x \to a} \dfrac{f(x)}{g(x)} = \lim_{x \to a} \dfrac{f'(x)}{g'(x)}\) This theorem also holds for one-sided limits and for limits as $x \to \infty$ or $x \to -\infty$.

Inverse Functions

One To One Functions

A function is one to one if $f(x_1) = f(x_2)$ implies that $x_1 = x_2$ whenever $x_1, x_2 \in Dom(f)$.

Inverse Functions

Suppose that a function $f$ is a one to one function. Then the inverse function of $f$ is the unique function $g$ such that $g(f(x)) = x$ and $f(g(x)) = x$, and $Dom(g) = Range(f), \ Range(g) = Dom(f)$. The inverse function for $f$ is often written as $f^{-1}$.

The Inverse Function Theorem

Suppose that $I$ is an open interval, $f : I \to \mathbb{R}$ is differentiable, and $f^\prime(x) \ne 0$ for all $x \in I$. Then:

  - $f$ is one to one on $I$
  - The inverse function $f^{-1}$ is differentiable on the interval $f(I)$
  - $(f^{-1})^\prime(f(x)) = \dfrac{1}{f^\prime(x)}$ for all $x \in I$

Curve Sketching

Curves Defined By A Cartesian Equation

Checklist For Sketching Curves

  - Find the domain of the function
  - Find the $x$ and $y$ intercepts
  - Check for symmetry (even/odd) and periodicity
  - Find any vertical, horizontal or oblique asymptotes
  - Find the stationary points and classify them
  - Determine where the function is increasing, decreasing, concave up and concave down

Oblique Asymptotes

Suppose that $a$ and $b$ are real numbers and that $a \ne 0$. We say that a straight line given by the equation $y = ax + b$ is an oblique asymptote for a function $f$ if $\lim_{x \to \infty} (f(x) - (ax + b)) = 0$.

Essentially, this is saying that given a line $y = ax + b$ and a function $f$, if $f$ approaches $y$ as $x \to \infty$, the line $y$ is known as an oblique asymptote.

Integration

Areas & The Riemann Integral

The Definition Of Area Under The Graph Of A Function & The Riemann Integral

Suppose that $f$ is a bounded function on $[a, b]$ and that $f(x) \ge 0$ for all $x \in [a, b]$. In this subsection we define what is meant by ‘the area under the graph of $f$ from $a$ to $b$’. This is done by constructing upper and lower Riemann sums with respect to partitions of $[a, b]$.

A finite set $P$ of points in $\mathbb{R}$ is said to be a partition of $[a, b]$ if $P = \{a_0, a_1, a_2, \ldots, a_n\}$ and $a = a_0 < a_1 < a_2 < \cdots < a_n = b$.

Definition of the area under the graph of a function

Suppose that $f$ is bounded on $[a, b]$ and $f(x) \ge 0$ for all $x \in [a, b]$. If there exists a unique real number $A$ such that $\underline{S}_P(f) \le A \le \bar{S}_P(f)$ for every partition $P$ of $[a, b]$, then we say that $A$ is the area under the graph of $f$ from $a$ to $b$.

Definition of the Riemann integral

Suppose that a function $f$ is bounded on $[a, b]$. If there exists a unique real number $I$ such that $\underline{S}_P(f) \le I \le \bar{S}_P(f)$ for every partition $P$ of $[a, b]$, then we say $f$ is Riemann integrable on the interval $[a, b]$. If $f$ is Riemann integrable, then the unique real number $I$ is called the definite integral of $f$ from $a$ to $b$ and we write $I = \int_{a}^{b} f(x) \ dx$.

The function $f$ is called the integrand of the definite integral, while the points $a$ and $b$ are called the limits of the definite integral.
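
As a numerical illustration, here is a sketch of upper and lower sums for $f(x) = x^2$ on $[0, 1]$ with a uniform partition; because $f$ is increasing there, the infimum and supremum on each subinterval occur at the endpoints, and both sums approach $\int_0^1 x^2 \, dx = \frac{1}{3}$ as $n$ grows:

```python
def riemann_sums(f, a, b, n):
    """Lower and upper Riemann sums over a uniform partition, valid
    when f is monotone on each subinterval."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    lower = sum(min(f(xs[i]), f(xs[i + 1])) * (xs[i + 1] - xs[i])
                for i in range(n))
    upper = sum(max(f(xs[i]), f(xs[i + 1])) * (xs[i + 1] - xs[i])
                for i in range(n))
    return lower, upper

print(riemann_sums(lambda x: x * x, 0.0, 1.0, 1000))
# (0.33283..., 0.33383...), both close to 1/3
```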