Written by Luka Kerr on November 23, 2019


Data is known, recorded facts with implicit meaning. A database is a collection of related data that satisfies a set of constraints. A DBMS (database management system) is software for managing databases.

Data Modelling

Data modelling is a design process that converts requirements into a data model.

The aims of data modelling are to describe what information the database contains, to describe the relationships between data items, and to capture the constraints that apply to the data.

There are two kinds of data models: logical models, which describe data at an abstract level for design (e.g. the ER model), and physical models, which describe how data is stored for implementation (e.g. the relational model).

The most important aspects of a data model are correctness, completeness and consistency.

ER Model

When modelling using the ER model we think of the world as a collection of inter-related entities.

The ER model has three main constructs

  1. Attribute: data item describing a property of interest
  2. Entity: collection of attributes describing object of interest
  3. Relationship: association between entities

ER Diagrams

ER diagrams are a graphical tool for data modelling.

An ER diagram consists of entity sets drawn as rectangles, attributes drawn as ovals and relationships drawn as diamonds, connected by lines.


An entity can be viewed as either a set of entities with the same set of attributes (extensional) or an abstract description of a class of entities (intensional).

Entities contain a set of attributes, one or more of which form a key that uniquely identifies each entity in the set.


Relationship Sets

A relationship is an association among several entities. A relationship set is a collection of relationships of the same type.

Relationships contain a degree (the number of entities involved), a cardinality (one-to-one, one-to-many or many-to-many) and a participation level (optional or mandatory).

In some cases, a relationship needs associated attributes.


Weak Entities

Weak entities exist only because of association with strong entities. They have no key of their own; instead they have a discriminator which, combined with the key of the owning strong entity, identifies them.

Subclasses & Inheritance

A subclass of an entity set $A$ is a set of entities with all attributes of $A$, plus (usually) its own attributes, that is involved in all of $A$’s relationships, plus its own.

Subclasses can be overlapping or disjoint, and total or partial.

Relational Model

The relational data model describes the world as a collection of inter-connected relations (or tables).

A relational data model consists of a set of relations (tables), a set of operations on relations, and constraints on relations.

The value NULL belongs to all domains.


Each relation has a name (unique within the database) and a set of attributes.


Each attribute has a name (unique within the relation) and an associated domain (a set of allowed values).


Constraints are logical statements that describe what values are/are not allowed and what combinations of values are/are not allowed.

Integrity Constraints

Integrity constraints can apply to individual attributes (domain constraints), to tuples within a table (key constraints), or across tables (referential integrity).

Referential Integrity

Referential integrity constraints require that every foreign key value in one relation either matches a primary key value in the referenced relation or is NULL.

Relational DBMSs

A relational database management system (RDBMS) is software that stores data as relations and provides SQL for defining, updating and querying that data.


PostgreSQL is a fully featured, client-server DBMS. It is resource intensive and can run distributed and replicated. It follows the SQL standard closely, but not totally, and adds extra datatypes (e.g. JSON).


SQLite is a fully featured, serverless DBMS. It is lightweight and intended to be embedded in applications. It doesn’t have stored procedures, but does have extra datatypes.

Managing Databases

# Create a new, empty database
$ createdb dbname

# Drop and remove all data from a database
$ dropdb dbname

# Dump the contents of a database
$ pg_dump dbname > dumpfile

# Restore the contents of a database dump
$ psql dbname -f dumpfile


SQL includes a Data Definition Language (DDL) that can formalise relational schemas.

CREATE TABLE TableName (
  attrName1 domain1 constraints1,
  attrName2 domain2 constraints2,
  PRIMARY KEY (attri, attrj, ...),
  FOREIGN KEY (attrx, attry, ...)
              REFERENCES OtherTable (attrm, attrn, ...)
);
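For example, a minimal sketch using hypothetical Students and Enrolments tables:

-- illustrative tables; names and domains are assumptions
CREATE TABLE Students (
  id    integer,
  name  varchar(50) NOT NULL,
  PRIMARY KEY (id)
);

CREATE TABLE Enrolments (
  student integer,
  course  varchar(8),
  mark    integer CHECK (mark >= 0 AND mark <= 100),
  PRIMARY KEY (student, course),
  FOREIGN KEY (student) REFERENCES Students (id)
);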


-- Comments are after two dashes

-- Identifiers are alphanumeric and can be enclosed in double quotes
-- Unquoted identifiers are case insensitive; quoted identifiers are case sensitive
"An Identifier", AnIdentifier

-- There are many keywords

-- Strings are inside single quotes
'a string'

-- Numbers are similar to C
1, -5, 3.14159

-- There are multiple types
integer, float, char(n), varchar(n), date

-- There are multiple operators
=, <>, <, <=, >, >=, AND, OR, NOT

Managing Tables

-- Create a table with attributes and constraints
CREATE TABLE table (Attributes + Constraints)

-- Modify a table
ALTER TABLE table TableSchemaChanges

-- Drop a table
DROP TABLE table

-- Truncate a table (remove all rows)
TRUNCATE table

Managing Tuples

-- Insert into a table
INSERT INTO table (Attrs) VALUES Tuple(s)

-- Delete from a table
DELETE FROM table WHERE condition

-- Update a table
UPDATE table SET AttrValueChanges WHERE condition



-- Define own constrained type
CREATE DOMAIN Name AS Type CHECK (Constraint)

-- Define tuple type
CREATE TYPE Name AS (AttrName AttrType, ...)

-- Define enumerated type
CREATE TYPE Name AS ENUM ('Label', ...)
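Hypothetical instances of each kind of definition (all names are illustrative):

CREATE DOMAIN Mark AS integer CHECK (VALUE >= 0 AND VALUE <= 100);
CREATE TYPE Coord AS (x float, y float);
CREATE TYPE Grade AS ENUM ('FL', 'PS', 'CR', 'DN', 'HD');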

String Comparison

PostgreSQL provides regexp-based pattern matching.

-- ~ and !~
Attr ~ 'RegExp'     -- Matches 'RegExp'
Attr !~ 'RegExp'    -- Doesn't match 'RegExp'

-- ~* and !~*
Attr ~* 'RegExp'    -- Matches 'RegExp' case-insensitive
Attr !~* 'RegExp'   -- Doesn't match 'RegExp' case-insensitive
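For example, assuming a hypothetical People(name, ...) table:

SELECT name FROM People WHERE name ~ '^Jo';     -- names starting with 'Jo'
SELECT name FROM People WHERE name ~* 'smith';  -- names containing 'smith', any case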

String Manipulation

-- Concatenate a and b
a || b

-- Lowercase a
lower(a)

-- Extract substring from a
substring(a, start, count)


An SQL query consists of a sequence of clauses:

SELECT   projectionList
FROM     relations/joins
WHERE    condition
GROUP BY groupingAttributes
HAVING   groupCondition

The FROM, WHERE, GROUP BY and HAVING clauses are optional.
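A small sketch of the clauses together, assuming the hypothetical Enrolments(student, course, mark) table from earlier:

-- average mark per course, for courses with more than 10 students
SELECT   course, avg(mark)
FROM     Enrolments
WHERE    mark IS NOT NULL
GROUP BY course
HAVING   count(student) > 10;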


A view associates a name with a query. Each time the view is invoked (in a FROM clause) the query is evaluated, yielding a set of tuples. The set of tuples is used as the value of the view.

A view can be treated as a ‘virtual table’ and is useful for packaging a complex query for use in other queries.

CREATE VIEW viewName [(attributes)] AS Query
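For example, a hypothetical view over the Enrolments table sketched earlier:

CREATE VIEW CourseAverages(course, average)
AS SELECT course, avg(mark) FROM Enrolments GROUP BY course;

-- used like a table
SELECT * FROM CourseAverages WHERE average > 75;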

Programming With SQL

PostgreSQL Stored Procedures

The PostgreSQL syntax for defining stored functions:

CREATE OR REPLACE FUNCTION
  funcName(arg1, arg2, ...) RETURNS retType
AS $$
String containing function definition
$$ LANGUAGE funcDefLanguage;



The PLpgSQL interpreter executes the procedural code, manages its variables and calls the SQL query evaluator to execute any embedded SQL statements.

Function Return Types

A PostgreSQL function can return a value which is atomic (e.g. integer, text), a tuple (e.g. a declared tuple type) or a set of values (setof).

A function returning a set of values is similar to a view.

SQL Functions

PostgreSQL allows functions to be defined in SQL:

CREATE OR REPLACE FUNCTION
  funcName(arg1type, arg2type, ...)
  RETURNS retType
AS $$
  SQL statements
$$ LANGUAGE sql;

Within the function, arguments are accessed as $1, $2, ... corresponding to their position in the function definition.
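A minimal sketch of such a function (the name and body are illustrative):

-- arguments are accessed positionally as $1 and $2
CREATE OR REPLACE FUNCTION add(integer, integer) RETURNS integer
AS $$
  SELECT $1 + $2;
$$ LANGUAGE sql;

-- SELECT add(2, 3);  returns 5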

A parameterless function behaves similarly to a view.


PLpgSQL stands for Procedural Language extensions to PostgreSQL. It is a PostgreSQL-specific language integrating features of procedural programming and SQL programming.

It provides a means for extending DBMS functionality; specifically, it allows control structures to be mixed with SQL, supports the definition of functions, aggregates and triggers, and gives access to query results one tuple at a time.

PLpgSQL functions are created (and inserted into db) via:

CREATE OR REPLACE FUNCTION
  funcName(param1, param2, ...)
  RETURNS retType
AS $$
DECLARE
   variable declarations
BEGIN
   code for function
END;
$$ LANGUAGE plpgsql;

The entire function body is stored as a single SQL string.
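A minimal PLpgSQL sketch (the name and cut-offs are hypothetical):

CREATE OR REPLACE FUNCTION gradeOf(m integer) RETURNS text
AS $$
DECLARE
   g text;  -- local variable holding the result
BEGIN
   IF m >= 85 THEN g := 'HD';
   ELSIF m >= 50 THEN g := 'PS';
   ELSE g := 'FL';
   END IF;
   RETURN g;
END;
$$ LANGUAGE plpgsql;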


Data Types

PLpgSQL constants and variables can be defined using standard SQL types (e.g. integer, date) or user-defined types.

There is also a CURSOR type for interacting with SQL.

Variables can also be defined in terms of the type of a table column (Table.column%TYPE) or the type of a table row (Table%ROWTYPE).


-- Assignment
var := exp

-- Selection
IF cond THEN ... ELSIF cond THEN ... ELSE ... END IF;

-- Iteration
FOR rec_var IN Query LOOP ... END LOOP;
FOR int_var IN lo..hi LOOP ... END LOOP;

Query results can be captured via

SELECT Exp1, Exp2, ..., ExpN
INTO   Var1, Var2, ..., VarN
FROM   TableList
WHERE  Condition ...

where the query is executed as usual and each ExpI in the projection list is assigned to the corresponding VarI.
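A hedged sketch of SELECT ... INTO, assuming the hypothetical Enrolments table from earlier:

CREATE OR REPLACE FUNCTION classSize(c varchar) RETURNS integer
AS $$
DECLARE
   n integer;
BEGIN
   -- capture the aggregate result into the local variable n
   SELECT count(student) INTO n
   FROM   Enrolments
   WHERE  course = c;
   RETURN n;
END;
$$ LANGUAGE plpgsql;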


Aggregates reduce a collection of values into a single result.

The action of an aggregate function can be viewed as

State = initial state
for each item V {
  // update State to include V
  State = updateState(State, V)
}
return makeFinal(State)

PostgreSQL User Defined Aggregates

The SQL standard does not specify user-defined aggregates, but PostgreSQL does provide a mechanism for defining them.

To define a new aggregate, we need to supply a base type (the type of the input values), a state type, an initial state value, a state mapping function and, optionally, a final function and a sort operator.

Aggregates are created using the CREATE AGGREGATE statement:

CREATE AGGREGATE AggName(BaseType) (
  sfunc = UpdateStateFunction,
  stype = StateType,
  initcond = InitialValue,
  finalfunc = MakeFinalFunction,
  sortop = OrderingOperator
);


User Defined count()
CREATE AGGREGATE myCount(anyelement) (
  stype = int,
  initcond = 0,
  sfunc = oneMore
);

CREATE FUNCTION oneMore(sum int, x anyelement) RETURNS int
AS $$
BEGIN
    RETURN sum + 1;
END;
$$ LANGUAGE plpgsql;

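Once defined, the aggregate is used like any built-in aggregate, e.g. against the hypothetical Enrolments table:

SELECT myCount(mark) FROM Enrolments;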

Column and table constraints ensure validity of one table. Referential integrity constraints ensure connections between tables are valid. However, specifying validity of an entire database often requires constraints involving multiple tables.


Assertions are schema-level constraints. They typically involve multiple tables and must hold for every state of the database.

To create an assertion, we use the syntax CREATE ASSERTION name CHECK (condition), where the condition is expressed as “there are no violations in the database”.

For example, to ensure that the number of students in any university course must be < 10000, an assertion can be defined as follows

CREATE ASSERTION courseSizeLimit CHECK (
  NOT EXISTS (
    SELECT c.id FROM Courses c, CourseEnrolments e
    WHERE c.id = e.course
    GROUP BY c.id
    HAVING count(e.student) > 9999
  )
)

On each update, it is expensive to determine which assertions need to be checked and to run the queries which check them. Most RDBMSs do not implement general assertions, but instead provide triggers.


Triggers are procedures stored in the database that are activated in response to database events.

Triggers provide event-condition-action (ECA) programming: an event activates the trigger, on activation the trigger checks a condition, and if the condition holds, a procedure is executed (the action).

To define a trigger, we use the syntax

CREATE TRIGGER TriggerName
{AFTER|BEFORE} Event1 [ OR Event2 ... ]
ON TableName
[ WHEN ( Condition ) ]
FOR EACH {ROW|STATEMENT}
Block of Procedural/SQL Code;

Possible Events are INSERT, DELETE, UPDATE.
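In PostgreSQL the action is packaged as a function returning trigger; a hedged sketch of the enrolment-limit check from the assertion example (table name as above, function name illustrative):

CREATE OR REPLACE FUNCTION checkEnrolment() RETURNS trigger
AS $$
BEGIN
   -- NEW is the tuple about to be inserted
   IF (SELECT count(*) FROM CourseEnrolments
       WHERE course = NEW.course) > 9999 THEN
      RAISE EXCEPTION 'Course % is full', NEW.course;
   END IF;
   RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER EnrolmentLimit
BEFORE INSERT ON CourseEnrolments
FOR EACH ROW EXECUTE PROCEDURE checkEnrolment();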


Triggers can be activated BEFORE or AFTER the event.

If activated BEFORE the event, the trigger can inspect, modify or abort the proposed change; the condition and action see the database state before the event.

If activated AFTER the event, the condition and action see the database state after the event has been applied.


A standard information_schema exists for describing metadata about the database.

PostgreSQL Catalog

PostgreSQL implemented its own catalog long before the standard information_schema was developed.

The PostgreSQL catalog is accessible via pg_XXX tables/views, for example pg_database, pg_tables, pg_class and pg_attribute.

PostgreSQL also makes the standard catalog available via information_schema.

Programming With Databases

A common database access pattern used in programming languages is seen below.

db = connect_to_dbms(DBname, User/Password)

query = build_SQL('SQLStatementTemplate', values)

results = execute_query(db, query)

while more_tuples_in(results):
  tuple = fetch_row_from(results)
  # do something with values in tuple

Python & Psycopg2

Psycopg2 is a Python module that provides access to PostgreSQL from Python programs via a standard DB-API-style interface of connections and cursors.

A standard Psycopg2 program is seen below.

import psycopg2

# connect to the database
conn = psycopg2.connect(DB_connection_string)

# obtain a cursor for executing queries
cur = conn.cursor()

# execute a parameterised query
cur.execute('SQLStatementTemplate', values)

# fetch the results, then clean up
results = cur.fetchall()
conn.close()


Psycopg2 cursors have various useful methods, including execute(), fetchone(), fetchmany() and fetchall(), plus mogrify() for inspecting how values are substituted into a query template.

Relational Design Theory

A good relational database design captures all required information, minimises redundancy and avoids update anomalies.

Relational Design & Redundancy

In database design, redundancy is generally a bad thing as it causes problems maintaining consistency after updates.

Functional Dependencies

A relation instance $r(R)$ satisfies a dependency $X \to Y$ if for any $t, u \in r, t[X] = u[X] \implies t[Y] = u[Y]$.

In other words, if two tuples in $R$ agree in their values for the set of attributes $X$, then they must also agree in their values for the set of attributes $Y$. In this case, we say that $Y$ is functionally dependent on $X$.

Attribute sets $X$ and $Y$ may overlap, and it is trivially true that $X \to X$.
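A functional dependency can be checked directly in SQL; a hedged sketch for a hypothetical relation r(x, y) and the dependency $x \to y$:

-- any rows returned are witnesses that x -> y does NOT hold
SELECT x
FROM   r
GROUP BY x
HAVING count(DISTINCT y) > 1;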

Inference Rules

Armstrong’s rules are general rules of inference on functional dependencies: reflexivity (if $Y \subseteq X$ then $X \to Y$), augmentation (if $X \to Y$ then $XZ \to YZ$) and transitivity (if $X \to Y$ and $Y \to Z$ then $X \to Z$).

Other useful rules exist too, such as union (if $X \to Y$ and $X \to Z$ then $X \to YZ$) and decomposition (if $X \to YZ$ then $X \to Y$ and $X \to Z$).


Given a set $F$ of functional dependencies, how many new functional dependencies can we derive? For a finite set of attributes, there must be a finite set of derivable functional dependencies.

The largest collection of dependencies that can be derived from $F$ is called the closure of $F$ and is denoted $F^+$.


Normalisation aims to reduce redundancy, and so avoid update anomalies, by progressively decomposing a schema into a normal form.

Normal Forms

Normalisation theory defines six normal forms: 1NF, 2NF, 3NF, BCNF, 4NF and 5NF, where 1NF allows the most redundancy and 5NF allows the least redundancy.

Normalisation aims to put a schema into xNF by ensuring that all relations in the schema are in xNF.

Relation Decomposition

The standard transformation technique to remove redundancy is to decompose a relation $R$ into relations $S$ and $T$.

We accomplish decomposition by choosing two overlapping subsets of $R$’s attributes to act as the schemas of $S$ and $T$.

Properties include $R = S \cup T$, $S \cap T \neq \emptyset$ and $r(R) = s(S) \bowtie t(T)$.
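A hedged SQL sketch: a hypothetical EmpDept(emp, dept, deptHead) with dept -> deptHead is decomposed on the shared attribute dept, so the join is lossless:

CREATE TABLE EmpIn  AS SELECT DISTINCT emp, dept FROM EmpDept;
CREATE TABLE DeptOf AS SELECT DISTINCT dept, deptHead FROM EmpDept;

-- the original relation is recovered by a natural join
SELECT * FROM EmpIn NATURAL JOIN DeptOf;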

Boyce-Codd Normal Form (BCNF)

A relation schema $R$ is in BCNF with respect to a set $F$ of functional dependencies iff for all functional dependencies $X \to Y \in F^+$, either $X \to Y$ is trivial (i.e. $Y \subseteq X$) or $X$ is a superkey of $R$.

A database schema is in BCNF if all of its relation schemas are in BCNF.

The following algorithm converts an arbitrary schema to BCNF.

# Inputs: schema R, set F of functional dependencies 
# Output: set Res of BCNF schemas

Res = {R};

while any schema S in Res is not in BCNF:
  choose any fd X -> Y on S that violates BCNF
  Res = (Res - {S}) union {S - Y} union {XY}

Third Normal Form (3NF)

A relation schema $R$ is in 3NF with respect to a set $F$ of functional dependencies iff for all functional dependencies $X \to Y \in F^+$, either $X \to Y$ is trivial, or $X$ is a superkey of $R$, or each attribute in $Y - X$ is part of some candidate key of $R$.

A database schema is in 3NF if all relation schemas are in 3NF.

If we transform a schema into 3NF, we are guaranteed a lossless join decomposition that also preserves all functional dependencies.

The following algorithm converts an arbitrary schema to 3NF.

# Inputs: schema R, set F of fds
# Output: set R_i of 3NF schemas

let F_c be a minimal cover for F

Res = {} 

for each fd X -> Y in F_c:
  if no schema S in Res contains XY:
    Res = Res union {XY}

if no schema S in Res contains a candidate key for R:
  K = any candidate key for R
  Res = Res union {K}

Relational Algebra

Relational algebra can be viewed as a mathematical system for manipulating relations, or as a data manipulation language for the relational model.

It consists of operands (relations) and operators that map relations to relations.

The core relational algebra operations are selection, projection, product, union and difference; other operations such as join, intersection and division can be defined in terms of these.


| Operation  | Standard Notation            | Our Notation                 |
| ---------- | ---------------------------- | ---------------------------- |
| Selection  | $\sigma_{expr}(Rel)$         | $Sel[expr](Rel)$             |
| Projection | $\pi_{A, B, C}(Rel)$         | $Proj[A, B, C](Rel)$         |
| Join       | $Rel_1 \bowtie_{expr} Rel_2$ | $Rel_1 \ Join[expr] \ Rel_2$ |
| Rename     | $\rho_{schema}(Rel)$         | $Rename[schema](Rel)$        |
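Each operation has a direct SQL counterpart; a sketch using placeholder relations r and s and placeholder condition/attribute names:

-- Sel[C](r)
SELECT * FROM r WHERE C;

-- Proj[A,B,C](r)  (DISTINCT because RA results are sets)
SELECT DISTINCT A, B, C FROM r;

-- r Join[expr] s
SELECT * FROM r JOIN s ON (expr);

-- Rename[s2(B1,B2)](r)
SELECT * FROM r AS s2(B1, B2);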

We define the semantics of RA operations using simple pseudocode over sets of tuples.


Selection $Sel[C](r)$ returns the subset of the tuples in a relation $r(R)$ that satisfy a specified condition $C$.

In pseudocode

result = {}

for each tuple t in relation r:
  if C(t):
    result = result + {t}


Projection $Proj[X](r)$ returns a set of tuples containing a subset of the attributes in the original relation, where $X$ specifies a subset of the attributes of $R$.

In pseudocode

result = {}

for each tuple t in relation r:
  result = result + {t[X]}


Product $r \times s$ combines information from two relations pairwise on tuples.

In pseudocode

result = {}

for each tuple t1 in relation r:
  for each tuple t2 in relation s:
    result = result + {(t1:t2)}

Natural Join

Natural join $r \ Join \ s$ is a specialised product. It contains only pairs that match on common attributes with one of each pair of common attributes eliminated.

In pseudocode

result = {}

for each tuple t1 in relation r:
  for each tuple t2 in relation s:
    if matches(t1, t2):
      result = result + {combine(t1, t2)}

Theta Join

Theta join $r \ Join[C] \ s$ is a specialised product containing only pairs that match on a supplied condition $C$.


Rename provides “schema mapping”. If expression $E$ returns a relation $R(A_1, A_2, \dots, A_n)$ then $Rename[S(B_1, B_2, \dots, B_n)](E)$ gives a relation called $S$ containing the same set of tuples as $E$ but with the name of each attribute changed from $A_i$ to $B_i$.


Union $r_1 \cup r_2$ combines two compatible relations into a single relation via set union of sets of tuples.

In pseudocode

result = r1

for each tuple t in relation r2:
  result = result + {t}


Intersection $r_1 \cap r_2$ combines two compatible relations into a single relation via set intersection of sets of tuples.

In pseudocode

result = {}

for each tuple t in relation r1:
  if t in r2:
    result = result + {t}


Difference $r_1 - r_2$ finds the set of tuples that exist in one relation but do not occur in a second compatible relation.

In pseudocode

result = {}

for each tuple t in relation r1:
  if t not in r2:
    result = result + {t}


Division $r \ Div \ s$ considers each subset of tuples in a relation $R$ that match on $t[R - S]$ for a relation $S$. For this subset of tuples, it takes the $t[S]$ values from each, and if this covers all tuples in $S$, then it includes $t[R - S]$ in the result.

We have $r \ Div \ s = \{ t : t \in r[R - S] \land satisfy(t) \}$ where $satisfy(t) = \forall t_s \in s : \exists t_r \in r : t_r[S] = t_s \land t_r[R - S] = t$.
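Division has no SQL keyword, but the standard double-negation encoding works; a hedged sketch for hypothetical relations r(a, b) and s(b):

-- each a in r that is related to every b in s
SELECT DISTINCT a
FROM   r r1
WHERE  NOT EXISTS (
   SELECT b FROM s
   EXCEPT
   SELECT b FROM r r2 WHERE r2.a = r1.a
);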

DBMS Architecture

Query Evaluation

The cost of evaluating an SQL query is determined by the relational algebra operations used, the size of the relations involved and the availability of indexes and buffer space.

The analysis of costs involves estimating the size of intermediate results then, based on this, the cost of secondary storage accesses. Accessing data from disk is the dominant cost in query evaluation.

Relational algebra operations can use different algorithms: for example, selection via sequential scan, index search or hashing, and join via nested loop, sort-merge or hash join.

Query Optimisation

A DBMS query optimiser works as follows:

# Input: relational algebra expression
# Output: execution plan (sequence of RA ops)

bestCost = INF
bestPlan = None 

while more possible plans:
  plan = produce a new evaluation plan 
  cost = estimated_cost(plan)
  if cost < bestCost:
    bestCost = cost
    bestPlan = plan

return bestPlan

Typically, there are very many possible plans; smarter optimisers generate only a promising subset of them.


Indexes provide efficient content-based access to tuples. Indexes can be built on any combination of attributes.

To define an index

CREATE INDEX name ON table (attr1, attr2, ...)

PostgreSQL Query Tuning

PostgreSQL provides the EXPLAIN statement to give a representation of the query execution plan with information that may help to tune query performance.

To explain a query

EXPLAIN [ANALYZE] Query

Without ANALYZE, EXPLAIN shows the plan with estimated costs; with ANALYZE, EXPLAIN also executes the query and prints the real costs.
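A hypothetical use, against the Students table sketched earlier:

EXPLAIN ANALYZE
SELECT * FROM Students WHERE id = 12345;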


A transaction is an atomic “unit of work” in an application which may require multiple database changes. Transactions happen in a multi-user, unreliable environment.

To maintain integrity of data, transactions must be ACID: Atomic (all-or-nothing), Consistent (preserving database consistency), Isolated (unaffected by other concurrent transactions) and Durable (changes persist once committed).

A transaction must always terminate, either successfully (COMMIT) with all changes preserved, or unsuccessfully (ABORT) with the database unchanged.

To describe transaction effects, we consider READ and WRITE operations on data items, together with ABORT and COMMIT,

normally abbreviated to R(X), W(X), A, C.

Transaction Consistency

Transactions typically have intermediate states that are invalid. However, states before and after transactions must be valid.

A schedule defines a specific execution of one or more transactions, typically concurrent, with interleaved operations.

Arbitrary interleaving of operations causes anomalies, so that two consistency-preserving transactions produce a final state which is not consistent.

Serial Schedules

Given two transactions $T_1$ and $T_2$

T1: R(X) W(X) R(Y) W(Y)
T2: R(X) W(X) R(Y) W(Y)

there exist two serial executions

T1: R(X) W(X) R(Y) W(Y)
T2:                     R(X) W(X) R(Y) W(Y)


T1:                     R(X) W(X) R(Y) W(Y)
T2: R(X) W(X) R(Y) W(Y)

Serial execution guarantees a consistent final state if the initial state of the database is consistent and $T_1$ and $T_2$ are consistency-preserving.

Concurrent Schedules

Concurrent schedules interleave transaction operations. Some concurrent schedules ensure consistency, and some cause anomalies.


A serialisable schedule is a concurrent schedule for $T_1 \dots T_n$ whose final state $S$ is also the final state of one of the possible serial schedules for $T_1 \dots T_n$.

There are two common formulations of serialisability: conflict serialisability and view serialisability.

Conflict Serialisability

Two transactions have a potential conflict if they perform operations on the same data item and at least one of the operations is a write operation. In such cases, the order of operations affects the result. If there is no conflict, we can swap order without affecting the result.

If we can transform a schedule by swapping the order of non-conflicting operations such that the result is a serial schedule then we can say that the schedule is conflict serialisable.

To check for conflict serialisability, we build a precedence graph where nodes represent transactions and an edge $T_i \to T_j$ exists whenever an operation of $T_i$ conflicts with, and precedes, an operation of $T_j$. The schedule is conflict serialisable iff this graph contains no cycles.

View Serialisability

View serialisability is an alternative formulation of serialisability that is less strict than conflict serialisability.

Two schedules $S$ and $S^\prime$ on $T_1 \dots T_n$ are view equivalent iff, for each shared data item $X$: the same transaction reads the initial value of $X$ in both schedules, each read of $X$ returns the value written by the same transaction in both schedules, and the same transaction performs the final write of $X$ in both schedules.

To check serialisability of $S$, find a serial schedule that is view equivalent to $S$ from among the $n!$ possible serial schedules.

Concurrency Control

Serialisability tests are useful theoretically, but they don’t provide a practical mechanism for producing serialisable schedules.

Approaches to ensuring ACID transactions include lock-based, version-based (multi-version), validation-based (optimistic) and timestamp-based concurrency control.

Lock Based Concurrency Control

Lock-based concurrency control involves synchronising access to shared data items via the following rules: a transaction must acquire a shared (read) lock on $X$ before reading $X$, and an exclusive (write) lock on $X$ before writing $X$; an exclusive lock blocks all other lock requests on $X$, while a shared lock blocks only writers.

Overall, locking reduces concurrency which leads to lower throughput. Different granularity levels of locking (i.e. field, row, table, whole database) can also impact performance.

Version Based Concurrency Control

Version based concurrency control provides multiple (consistent) versions of the database and gives each transaction access to a version that maintains the serialisability of the transaction.

In version based concurrency control, writing never blocks reading and reading never blocks writing.