This webpage introduces the essential mathematics behind quantum computing: linear algebra.
Linear algebra is the branch of mathematics that studies vectors, matrices, and the linear transformations between them. It provides the tools for describing relationships that are linear, that is, systems in which quantities change proportionally with one another. A qubit, the basic unit of quantum information, obeys the same linear rules as the geometric vectors we learned about in high school.
This webpage is divided into two sections to introduce the building blocks of linear algebra: vectors and matrices.
In mathematics and physics, a vector refers to a quantity that has both magnitude and direction. It can represent quantities that cannot be fully described by a single number, or a scalar. More generally, the word "vector" also refers to elements of vector spaces.
An illustration of an Euclidean vector
(Image credit: https://byjus.com)
To begin, we focus on Euclidean vectors, the type of vector most familiar in geometry and physics. You can imagine it as an arrow in space, which has both length (magnitude) and direction.
Scalar quantities: described by a single number
Examples: time, mass, temperature
Vector quantities: require both magnitude and direction.
Examples: displacement, velocity, force
Vectors are usually written in bold, or with an arrow.
Vectors are often pictured as arrows in space:
Direction → the way the arrow points
Magnitude → the length of the arrow
Source: http://emweb.unl.edu/math/mathweb/vectors/vectors.html
Examples:
A velocity vector might point east with a length corresponding to 10 m/s (magnitude).
A 10 N (magnitude) force pointing upward (direction)
In advanced topics like quantum computing, vectors may:
have many components, sometimes even infinitely many
have complex-number components.
We can’t always draw them as arrows, but the same vector math rules apply.
In an n-dimensional Cartesian coordinate system, a vector can be represented by an ordered list of n numbers.
Each entry is called a component of the vector.
We can think of the components of a vector as the "shadows" projected onto the coordinate axes. The smaller the angle between the vector and the coordinate axis, the larger the component of the vector along that axis.
A vector in a 3D Cartesian coordinate system
(Image credit: Wikipedia)
Example (2D velocity vector): v = (3, 4) m/s
This describes a car moving at 3 m/s along the x-axis and 4 m/s along the y-axis.
Vectors can also be described in another way: in terms of basis vectors.
In 3D, the standard basis vectors are:
Each basis vector:
has unit length,
points along the x-, y-, and z-axis respectively.
Any vector can be built from them:
Think of basis vectors as the building blocks of all other vectors.
This is called a linear combination of basis vectors: a new vector is constructed by multiplying each basis vector by a constant and adding the results.
Example (2D velocity vector in terms of basis vectors):
Referring back to the moving car example, if we let the basis vectors along the x- and y-axes be i and j respectively, then we can rewrite the velocity vector of the car as v = 3i + 4j.
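The same construction can be sketched in plain Python (the helper names `scale` and `add` are illustrative, not from any library; the 3 and 4 come from the car example):

```python
# Build a vector as a linear combination of standard basis vectors.

def scale(k, v):
    """Multiply every component of vector v by scalar k."""
    return [k * x for x in v]

def add(u, v):
    """Add two vectors component by component."""
    return [a + b for a, b in zip(u, v)]

i_hat = [1, 0]  # basis vector along the x-axis
j_hat = [0, 1]  # basis vector along the y-axis

# v = 3*i_hat + 4*j_hat  ->  [3, 4]
velocity = add(scale(3, i_hat), scale(4, j_hat))
print(velocity)  # [3, 4]
```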
Two vectors are considered equal if they have:
The same magnitude, and
The same direction.
Formally, vectors a and b are equal if and only if all their components are equal:
Just like real numbers, vectors can be added, subtracted, scaled, and combined. However, since vectors have direction, these operations carry extra meaning.
Addition & Subtraction
Adding vectors means combining their effects. Imagine you walk 4 m east (vector a) and then 3 m north (vector b); your overall displacement is the diagonal of the path, called the resultant vector.
Addition of Vectors in Component Form (3D):
When adding vectors, we add the corresponding components of the two vectors.
Similarly, for subtraction
Example
The resultant displacement vector in the example is:
This means the total displacement is 4 m east and 3 m north.
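The east/north walk above can be checked numerically; this is a minimal sketch in plain Python (the helper names `add` and `sub` are illustrative):

```python
# Walking 4 m east then 3 m north: add the displacement vectors component-wise.

def add(u, v):
    """Component-wise vector addition."""
    return [a + b for a, b in zip(u, v)]

def sub(u, v):
    """Component-wise vector subtraction."""
    return [a - b for a, b in zip(u, v)]

a = [4, 0]  # 4 m east
b = [0, 3]  # 3 m north
resultant = add(a, b)
print(resultant)  # [4, 3]
```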
Scalar Multiplication
Vectors can be multiplied by real numbers, called scalars.
The scalar multiplication of a vector a by a scalar k is:
The new vector ka remains parallel to a but its length changes:
Positive k → same direction, stretched or shrunk
Negative k → opposite direction
Fractional k → shorter length
Visual Examples:
(Image Credit: Wikipedia)
2a → twice as long
-1a → same length, opposite direction
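A quick illustration in plain Python (the `scale` helper is illustrative, not a library function):

```python
# Scalar multiplication: multiply every component by the scalar k.

def scale(k, v):
    """Return the vector k*v."""
    return [k * x for x in v]

a = [2, 1]
print(scale(2, a))    # [4, 2]     twice as long, same direction
print(scale(-1, a))   # [-2, -1]   same length, opposite direction
print(scale(0.5, a))  # [1.0, 0.5] half as long
```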
The length or norm of a vector is its magnitude, denoted ||a||. It can be found using the Pythagorean theorem:
This measures the size of the vector, regardless of direction.
A unit vector has length 1 and shows direction only.
The unit vector â in the direction of a vector a is found by dividing the vector by its magnitude.
This process is known as vector normalization.
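Normalization can be sketched in a few lines of Python (the helper names `norm` and `normalize` are illustrative):

```python
import math

def norm(v):
    """Euclidean length, via the Pythagorean theorem."""
    return math.sqrt(sum(x * x for x in v))

def normalize(v):
    """Divide the vector by its magnitude to get a unit vector."""
    n = norm(v)
    return [x / n for x in v]

a = [3, 4]
print(norm(a))       # 5.0
print(normalize(a))  # [0.6, 0.8] -- a unit vector in the same direction
```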
The most commonly used unit vectors are those in the direction of coordinate axes, known as the standard basis. For example, the standard basis of a 3D coordinate system can be denoted as
i → along x-axis
j → along y-axis
k → along z-axis
Thus, any vector can be written as:
Example (Quantum computing)
In quantum computing, the computational basis plays an important role. For a single qubit:
Any qubit state is a linear combination of these basis vectors.
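As an illustrative sketch, the computational basis and one superposition can be written in plain Python; the amplitudes 1/√2 below are chosen for an equal superposition and are just one valid choice:

```python
import math

ket0 = [1, 0]  # the basis state |0>
ket1 = [0, 1]  # the basis state |1>

# An equal superposition: (|0> + |1>) / sqrt(2)
alpha = beta = 1 / math.sqrt(2)
state = [alpha * ket0[i] + beta * ket1[i] for i in range(2)]

# A valid qubit state is normalized: |alpha|^2 + |beta|^2 = 1
norm_sq = sum(abs(c) ** 2 for c in state)
print(state)
print(norm_sq)
```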
Since we can multiply a vector by a scalar using scalar multiplication, one might ask: Can we directly multiply two vectors?
The answer is yes, and this can be done using the dot product operation.
The dot product combines two vectors into a scalar.
The dot product of two vectors a and b is defined by
where 𝜃 is the angle between a and b.
Dot product of two vectors
(Image credit: Wikipedia)
The dot product can also be written as the sum of the products of the corresponding components of the two vectors:
If both vectors are the same, the dot product gives a neat identity for the norm of a vector: a · a = ||a||².
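The component formula and the norm identity can be verified in plain Python (the `dot` helper and the sample vectors are illustrative):

```python
import math

def dot(u, v):
    """Sum of products of corresponding components."""
    return sum(a * b for a, b in zip(u, v))

a = [1, 2, 3]
b = [4, 5, 6]
print(dot(a, b))  # 32

# Dot product of a vector with itself gives the squared norm.
print(math.sqrt(dot(a, a)))  # ||a||

# Angle between a and b, from a.b = ||a|| ||b|| cos(theta):
cos_theta = dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))
print(cos_theta)
```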
Example: Work in Mechanics
In mechanics, the work W done by a force F on an object over a displacement d is defined as the dot product of the force and the displacement: W = F · d = Fd cos θ.
If you pull a box with:
Force = 50 N
Distance = 10 m
Angle = 30°
Then, W = (50 N)(10 m) cos 30° ≈ 433 J.
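The arithmetic can be checked in Python:

```python
import math

# Values from the example above: 50 N force, 10 m distance, 30 degree angle.
F, d, theta_deg = 50, 10, 30
W = F * d * math.cos(math.radians(theta_deg))
print(round(W, 1))  # 433.0 (joules)
```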
An illustration of mechanical work done
(Source: https://www.cuemath.com/work-formula/)
Vectors have generalizations that cover a wide variety of physical situations. These include not only ordinary three-dimensional space with its ordinary vectors, such as displacements, forces, and velocities, but also the four-dimensional spacetime of relativity, with its four-vectors, and even the infinite-dimensional spaces used in quantum physics, whose vectors have infinitely many components.
They unify geometry and algebra into a powerful tool for describing the physical world.
A matrix is a rectangular array of numbers arranged in rows and columns.
You can think of a matrix as a way to organize data like a table, or as a mathematical object for representing linear transformations.
An m×n matrix has m rows and n columns of numbers:
a_ij = entry in the i-th row and j-th column
m = number of rows
n = number of columns
Column and row matrices:
A row matrix has only one row:
A column matrix has only one column:
Column matrices are often used to represent vectors.
Square matrix:
A square matrix has the same number of rows and columns (m = n)
For example, a 2x2 square matrix can be written as follows:
Square matrices are central in linear algebra because determinants, inverses, and eigenvalues are defined only for square matrices.
All quantum gates are represented by square matrices.
Diagonal matrix, D:
A diagonal matrix is a square matrix where all off-diagonal elements are zero:
Identity matrix, I:
An identity matrix is a special diagonal matrix with all diagonal entries equal to 1:
It acts as the multiplicative identity; that is, multiplying any square matrix A by the identity matrix of the same dimension gives back the original matrix A.
Zero matrix:
A matrix where every entry is zero:
It acts like the additive identity:
Matrices can be combined and manipulated through a variety of operations. These operations form the foundation of linear algebra and are widely used in physics, computer science, as well as quantum computing.
Addition and Subtraction:
Two matrices can be added or subtracted only if they have the same dimensions (the same number of rows and columns). The operation is performed entry-wise: elements in the same row and column positions are added or subtracted.
where x_ij denotes the element in row i and column j of a matrix.
For example:
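A minimal sketch of entry-wise addition in plain Python (the `mat_add` helper and the sample matrices are illustrative):

```python
# Matrix addition: add corresponding entries of two same-sized matrices.

def mat_add(A, B):
    """Entry-wise sum; A and B must have the same dimensions."""
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))  # [[6, 8], [10, 12]]
```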
Scalar Multiplication:
A matrix can be multiplied by a scalar (a single number). Each element of the matrix is multiplied by the scalar:
Example:
Matrix-matrix Multiplication:
Matrix multiplication is defined only when the number of columns of the first matrix equals the number of rows of the second matrix. If A is an m×n matrix and B is an n×p matrix, their product C=AB is an m×p matrix.
The general formula for the entry (i, j) is:
If matrices A and B are both 2×2 matrices, their product is also a 2×2 matrix.
Example:
Note that matrix multiplication is not commutative in general:
but it is associative and distributive:
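The row-times-column rule and the failure of commutativity can be demonstrated in plain Python (the `mat_mul` helper and the sample matrices are illustrative):

```python
# Matrix multiplication: C[i][j] = sum over k of A[i][k] * B[k][j].
# The number of columns of A must equal the number of rows of B.

def mat_mul(A, B):
    """Product of an m x n matrix A and an n x p matrix B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mul(A, B))  # [[2, 1], [4, 3]]
print(mat_mul(B, A))  # [[3, 4], [1, 2]]  -> AB != BA in general
```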
Matrix-vector Multiplication:
A special case of matrix multiplication is multiplying a matrix by a vector. If A is an m×n matrix and x is an n×1 column vector, the result is an m×1 column vector:
Matrix-vector multiplication represents a linear transformation: the matrix acts as an operator on the vector.
Example:
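A sketch in plain Python (the `mat_vec` helper and the sample values are illustrative):

```python
# Matrix-vector multiplication: each entry of the result is the dot
# product of one row of A with the vector x.

def mat_vec(A, x):
    """Apply the m x n matrix A to the n-dimensional vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2], [3, 4]]
x = [5, 6]
print(mat_vec(A, x))  # [17, 39]
```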
The transpose of a matrix is obtained by flipping it over its main diagonal, turning rows into columns:
Example:
The transpose of a row vector is a column vector. Basically, we are flipping the vector.
Example:
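A sketch in plain Python (the `transpose` helper is illustrative):

```python
# Transpose: flip the matrix over its main diagonal, so rows become columns.

def transpose(A):
    """Return the transpose of matrix A."""
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3],
     [4, 5, 6]]
print(transpose(A))  # [[1, 4], [2, 5], [3, 6]]

# Transposing a row vector (1x3) gives a column vector (3x1).
print(transpose([[1, 2, 3]]))  # [[1], [2], [3]]
```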
If a matrix has complex entries, we can define:
Conjugate matrix: Replace each complex entry with its complex conjugate.
Conjugate transpose (Hermitian adjoint): take the transpose, then the complex conjugate of each entry. It is denoted by a dagger symbol (†).
This is central in quantum mechanics, where Hermitian matrices represent observables and unitary matrices describe quantum gates.
Example:
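A sketch using Python's built-in complex numbers (the `dagger` helper and the sample matrix are illustrative):

```python
# Conjugate transpose (Hermitian adjoint): transpose the matrix,
# then take the complex conjugate of each entry.

def dagger(A):
    """Return the conjugate transpose of matrix A."""
    return [[A[i][j].conjugate() for i in range(len(A))]
            for j in range(len(A[0]))]

A = [[1 + 2j, 3],
     [0, 4 - 1j]]
print(dagger(A))  # [[(1-2j), 0], [3, (4+1j)]]
```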
The tensor product (also called the Kronecker product) is an operation that combines two matrices (or vectors) into a larger one.
If A is m×n and B is p×q, then the tensor product A⊗B is an mp×nq block matrix:
Example (vector tensor product):
For two vectors
Their tensor product is a 4-dimensional vector:
This operation is crucial in quantum computing: combining two qubits (2-dimensional vectors) gives a 4-dimensional vector representing their joint state.
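A sketch of the vector tensor product in plain Python (the `tensor_vec` helper is illustrative):

```python
# Tensor (Kronecker) product of two vectors: every component of u
# multiplies the whole of v, giving a vector of length len(u)*len(v).

def tensor_vec(u, v):
    """Return the tensor product u (x) v as a flat list."""
    return [a * b for a in u for b in v]

ket0 = [1, 0]
ket1 = [0, 1]

# Joint state of two qubits |0> and |1>: the 4-dimensional vector |01>.
print(tensor_vec(ket0, ket1))  # [0, 1, 0, 0]
```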
Matrices provide a compact and powerful way to represent and manipulate linear relationships between quantities. They serve as the mathematical foundation for transformations, systems of equations, and state representations across mathematics, physics, and computer science.