A matrix is a rectangular array of numbers arranged in rows and columns. A lot of this is adapted from Math is fun, so be sure to check it out!
Dimensions: We call a matrix an m*n matrix if it has m rows and n columns.
Notation: Take the element of matrix A that is in row r and column c. We write that element as A_{r,c}. (Memorization: arc) Important takeaway: rows come first.
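For example, if A = [[1, 2], [3, 4]], then A_{2,1} = 3 (row 2, column 1), while A_{1,2} = 2.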
Operations with 2 matrices:
Adding
Subtracting
Multiplying
Dividing (multiply by inverse)
What we can do with 1 matrix:
Multiply it by a scalar
Negate it (multiply it by -1)
Transpose it (the rows <=> columns)
Get its inverse (a unique matrix, written as A^{-1})
Get the determinant (a single number; different matrices can have the same determinant, though)
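Here's a quick Python sketch of a few of those single-matrix operations, treating a matrix as a list of rows just like the examples further down (the function names are my own picks):

def scalar_multiply(A, c):
    # Multiply every element of A by the scalar c.
    return [[c * x for x in row] for row in A]

def negate(A):
    # Negating is just multiplying by -1.
    return scalar_multiply(A, -1)

def transpose(A):
    # Rows <=> columns.
    return [[A[r][c] for r in range(len(A))] for c in range(len(A[0]))]

So transpose([[1, 2], [3, 4], [5, 6]]) gives [[1, 3, 5], [2, 4, 6]].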
Types of matrices
The main diagonal: It starts from the top left corner and then goes diagonally downward towards the right.
Zero/Null: 0's everywhere.
Square: # of rows = # of columns.
Diagonal: 0's everywhere not on the main diagonal.
Symmetric: A is symmetric when A = A^T. (This implies that A also has to be square. The main diagonal remains the same as well.)
Identity: Square and all 1's on the main diagonal. I_n represents the identity matrix with dimensions n*n. Property (it's like 1 in the numerical universe): AI = IA = A
Scalar: CI where C is a scalar.
Triangular:
- Lower: 0's everywhere above the main diagonal.
- Upper: 0's everywhere below the main diagonal.
Hermitian: Like symmetric, except a_{r,c} = a_{c,r}* (* denotes the complex conjugate).
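Two of those are easy to play with in Python; here's a tiny sketch (helper names are my own) that builds I_n and checks whether a square matrix is symmetric:

def identity(n):
    # Square, 1's on the main diagonal, 0's everywhere else.
    return [[1 if r == c else 0 for c in range(n)] for r in range(n)]

def is_symmetric(A):
    # A is symmetric when A = A^T, i.e. A[r][c] == A[c][r] for every r and c.
    n = len(A)
    return all(A[r][c] == A[c][r] for r in range(n) for c in range(n))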
Matrix adding and subtracting
The dimensions have to be the same.
'Just do it normally', that is: if A + B = C, then A_{r,c} + B_{r,c} = C_{r,c}.
The resulting matrix should also have the same dimensions.
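A quick Python sketch of that, with matrices as lists of rows (function names are my own):

def add(A, B):
    # A and B must have the same dimensions; add matching elements.
    return [[A[r][c] + B[r][c] for c in range(len(A[0]))] for r in range(len(A))]

def subtract(A, B):
    # Exactly the same idea, just with a minus sign.
    return [[A[r][c] - B[r][c] for c in range(len(A[0]))] for r in range(len(A))]

For example, add([[1, 2], [3, 4]], [[5, 6], [7, 8]]) gives [[6, 8], [10, 12]].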
Matrix multiplication
Say we want to multiply 2 matrices A and B. Note that AB is not necessarily equal to BA.
How do we determine the dimensions of AB? Let's say A is an m*n matrix, and B is an n*p matrix (the number of columns of A has to match the number of rows of B). Then AB is an m*p matrix.
How do we get each entry? (AB)_{r,c} is the dot product of row r of A with column c of B, that is, (AB)_{r,c} = A_{r,1}B_{1,c} + A_{r,2}B_{2,c} + ... + A_{r,n}B_{n,c}.
So say I have
A = [[a, b],
[c, d],
[e, f]],
and
B = [[g, h, i],
[j, k, l]].
Then
AB = [[ag+bj, ah+bk, ai+bl],
[cg+dj, ch+dk, ci+dl],
[eg+fj, eh+fk, ei+fl]]
and
BA = [[ag+ch+ei, bg+dh+fi],
[aj+ck+el, bj+dk+fl]].
(Notice that AB is 3*3 while BA is 2*2, so they can't possibly be equal here.)
So now you should have a clear idea of how matrix multiplication works.
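Here's a small Python sketch of the same row-times-column rule (matrices as lists of rows, the function name is my own):

def multiply(A, B):
    # A is m*n, B is n*p, so the result AB is m*p.
    m, n, p = len(A), len(B), len(B[0])
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(p)] for r in range(m)]

With the A and B from above, multiply(A, B) gives the 3*3 result and multiply(B, A) gives the 2*2 one.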
Determinant of a matrix
The matrix has to be square.
For 1*1 matrices:
The determinant of [a] is, well just a.
For 2*2 matrices:
The determinant of
[[a, b],
[c, d]]
is ad-bc.
For 3*3 matrices:
The determinant of
[[a, b, c],
[d, e, f],
[g, h, i]]
is a(ei−fh) − b(di−fg) + c(dh−eg).
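Quick worked examples: the determinant of [[1, 2], [3, 4]] is 1*4 − 2*3 = −2, and the determinant of [[2, 0, 1], [1, 3, 0], [0, 1, 4]] is 2(3*4 − 0*1) − 0(1*4 − 0*0) + 1(1*1 − 3*0) = 24 − 0 + 1 = 25.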
For 4*4 and higher matrices (call them all A):
plus A_{1,1} times the determinant of the square matrix left over when you remove row 1 and column 1,
minus A_{1,2} times the determinant of the square matrix left over when you remove row 1 and column 2,
plus A_{1,3} times the determinant of the square matrix left over when you remove row 1 and column 3,
minus A_{1,4} times the determinant of the square matrix left over when you remove row 1 and column 4,
and so on, alternating between plus and minus.
This is called Laplace's expansion. There are other methods to get the determinant, but I'll be sticking to this.
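If you'd rather see Laplace's expansion as code, here's a rough Python sketch (the function name is my own) that expands along row 1 and calls itself on the smaller matrices:

def determinant(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for c in range(n):
        # The smaller square matrix with row 1 and column c+1 removed.
        minor = [row[:c] + row[c+1:] for row in A[1:]]
        # Alternate plus, minus, plus, minus, ...
        sign = 1 if c % 2 == 0 else -1
        total += sign * A[0][c] * determinant(minor)
    return total

For example, determinant([[1, 2], [3, 4]]) gives -2, matching the 2*2 formula.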
Augmented matrices and the row operations
An augmented matrix is just two matrices written side by side with a divider (usually a vertical line) between them.
Ok, so what are the row operations?
1. You can swap two rows.
2. You can multiply a row by a constant.
3. You can subtract (or add) a multiple of one row from another row.
Now how do these help us in any way?
We can use them to
solve systems of linear equations, and even
find the inverse of a matrix!
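Here's what those three row operations could look like in Python, as a rough sketch (helper names are my own; M is a list of rows, augmented or not, and the rows get changed in place):

def swap_rows(M, i, j):
    # Row operation 1: swap rows i and j.
    M[i], M[j] = M[j], M[i]

def scale_row(M, i, k):
    # Row operation 2: multiply row i by the constant k.
    M[i] = [k * x for x in M[i]]

def add_multiple(M, i, j, k):
    # Row operation 3: add k times row j to row i (use a negative k to subtract).
    M[i] = [x + k * y for x, y in zip(M[i], M[j])]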
Inverse of a matrix
The inverse A^{-1} of matrix A is a unique matrix that satisfies the condition AA^{-1} = A^{-1}A = I.
So sometimes A has no inverse (that happens exactly when det(A) = 0)... that's a bit sad but it's ok!
Let's say we want to get the inverse of A (which again has to be square). There are 2 nice ways to do it:
The not-so-fun way: There are 4 steps involved.
Step 1: Find the matrix of minors. (Very tedious even for 3*3 matrices.)
For each element of A:
ignore the values on the current row and column, and then
calculate the determinant of the remaining values, and finally
write that determinant down in the corresponding space.
Step 2: Apply a 'checkerboard', or basically make the matrix of cofactors.
For each element a_{r,c} of A:
- If r and c are of the same parity, then nothing happens!
- If r and c are of different parity, then a_{r,c} *= -1.
Step 3: Transpose A to get the adjugate/adjoint. => A := A^T.
Step 4: Multiply by 1/det(A) (using the determinant of the original A, which is just a scalar), and we're done!
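Here's a rough Python sketch of those 4 steps put together (it reuses the determinant function from the sketch above, the name is my own, and it assumes A is at least 2*2 with det(A) not equal to 0):

def inverse_by_cofactors(A):
    n = len(A)
    det_A = determinant(A)  # must be nonzero, or there is no inverse
    cofactors = []
    for r in range(n):
        row = []
        for c in range(n):
            # Step 1: the minor is the determinant of A with row r+1 and column c+1 removed.
            minor = [A[i][:c] + A[i][c+1:] for i in range(n) if i != r]
            # Step 2: apply the checkerboard of signs.
            sign = 1 if (r + c) % 2 == 0 else -1
            row.append(sign * determinant(minor))
        cofactors.append(row)
    # Step 3 (transpose to get the adjugate) and step 4 (multiply by 1/det(A)).
    return [[cofactors[c][r] / det_A for c in range(n)] for r in range(n)]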
The very fun way (row operations/Gauss-Jordan):
So begin with an augmented matrix with A on the left and I on the right.
Use the row operations to turn the A on the left into I; the I on the right will then have become A^{-1}! Magic, right?
I'll add an example solve here soon - but you can always search for one while I'm on it!
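Until then, here's a rough Python sketch of the idea (the name is my own, there's no real error handling, and it assumes A actually has an inverse):

def inverse_by_row_ops(A):
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [list(map(float, A[r])) + [1.0 if c == r else 0.0 for c in range(n)] for r in range(n)]
    for col in range(n):
        # Swap in a row whose entry in this column isn't 0.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Scale that row so the entry on the main diagonal becomes 1.
        M[col] = [x / M[col][col] for x in M[col]]
        # Subtract multiples of it from every other row to clear the rest of the column.
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # The left half is now I, so the right half is A^{-1}.
    return [row[n:] for row in M]

For example, inverse_by_row_ops([[4, 7], [2, 6]]) comes out as (roughly, thanks to floating point) [[0.6, -0.7], [-0.2, 0.4]].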
Solving systems of equations with matrices
So you can actually do this with augmented matrices and row operations... AND simple matrix division!
I'll be adding example solves soon, but check out Math is fun's page for matrix division for one!
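While those example solves are on the way, here's the matrix-division idea as a rough Python sketch: to solve AX = B, multiply both sides by A^{-1} on the left, so X = A^{-1}B (this reuses the multiply and inverse_by_row_ops sketches from earlier; the little system below is just one I made up):

def solve(A, B):
    # Solve AX = B by computing X = A^{-1} B.
    return multiply(inverse_by_row_ops(A), B)

For example, the system x + y = 3 and 2x + y = 4 becomes A = [[1, 1], [2, 1]] and B = [[3], [4]], and solve(A, B) gives [[1.0], [2.0]], so x = 1 and y = 2.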