
Matrix multiplication

posted Apr 12, 2010, 5:06 PM by karam kassem   [ updated Apr 12, 2010, 5:55 PM ]

Ordinary matrix product

The ordinary matrix product is the most often used and the most important way to multiply matrices. It is defined between two matrices only if the width of the first matrix equals the height of the second matrix. Multiplying an m×n matrix with an n×p matrix results in an m×p matrix. If many matrices are multiplied together, and their dimensions are written in a list in order, e.g. m×n, n×p, p×q, q×r, the size of the result is given by the first and the last numbers (m×r), and the values surrounding each comma must match for the result to be defined. The ordinary matrix product is not commutative.
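As a quick check of this dimension rule, here is a minimal Python/NumPy sketch; the sizes 2, 3, 4, 5, 6 and the matrix names are chosen purely for illustration:

import numpy as np

# Illustrative shapes matching the chain m×n, n×p, p×q, q×r described above.
m, n, p, q, r = 2, 3, 4, 5, 6
A = np.ones((m, n))
B = np.ones((n, p))
C = np.ones((p, q))
D = np.ones((q, r))

product = A @ B @ C @ D
print(product.shape)          # (2, 6): given by the first and last numbers, m×r

# Reordering the factors violates the "values surrounding each comma must match" rule:
try:
    B @ A
except ValueError as err:
    print("shapes do not align:", err)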


[Figure: computation of the element x_{3,4} of a matrix product as the dot product of row 3 of the first matrix and column 4 of the second.]

The first coordinate in matrix notation denotes the row and the second the column; this order is used both in indexing and in giving the dimensions. The element x_{ij} at the intersection of row i and column j of the product matrix is the dot product (or scalar product) of row i of the first matrix and column j of the second matrix. This explains why the width of the first matrix must match the height of the second: otherwise the dot product is not defined.
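For instance, a single entry of a product can be computed directly as such a dot product. The following NumPy sketch uses small illustrative matrices; note that the indices are 0-based in code, whereas the text above uses 1-based indices:

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2 × 3
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])         # 3 × 2

i, j = 1, 0                      # row i of A, column j of B (0-based)
entry = np.dot(A[i, :], B[:, j]) # dot product of row i and column j
print(entry)                     # 4*7 + 5*9 + 6*11 = 139
print((A @ B)[i, j])             # the same value taken from the full product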


The product of two matrices A and B can be pictured as a grid in which each intersection in the product matrix corresponds to a row of A and a column of B. The size of the output matrix is always the largest possible: for each row of A and for each column of B there is a corresponding intersection in the product matrix. The product matrix AB consists of all combinations of dot products of rows of A and columns of B.


Formal definition

Formally, for

\mathbf{A} = (a_{ij}) \in F^{m \times n} \quad \text{and} \quad \mathbf{B} = (b_{ij}) \in F^{n \times p}

for some field F, the product \mathbf{AB} \in F^{m \times p} is the matrix whose elements are given by

(\mathbf{AB})_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}

for each pair i and j with 1 ≤ i ≤ m and 1 ≤ j ≤ p. The algebraic system of "matrix units" summarizes the abstract properties of this kind of multiplication.
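A direct, if naive, Python implementation of this defining sum might look as follows; it is a sketch for matrices stored as nested lists, and the function name matmul is our own:

def matmul(A, B):
    """Ordinary matrix product via the defining sum (AB)_ij = sum_k a_ik * b_kj."""
    m, n = len(A), len(A[0])
    n2, p = len(B), len(B[0])
    if n != n2:
        raise ValueError("width of A must equal height of B")
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

# Example: a 2×3 matrix times a 3×2 matrix gives a 2×2 matrix.
A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8], [9, 10], [11, 12]]
print(matmul(A, B))   # [[58, 64], [139, 154]]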


Relationship with the inner product and the outer product

The Euclidean inner product and outer product are the simplest special cases of the ordinary matrix product. The inner product of two column vectors A and B is A \cdot B = A^T B, where T denotes the matrix transpose. More explicitly,

A \cdot B = A^T B = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} = \sum_{i=1}^{n} a_i b_i ,

while the outer product is the matrix A B^T, whose entry in row i and column j is a_i b_j.
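In NumPy terms, with both vectors stored as n × 1 column arrays, this looks roughly as follows (the vector values are illustrative):

import numpy as np

a = np.array([[1.0], [2.0], [3.0]])   # column vector, 3 × 1
b = np.array([[4.0], [5.0], [6.0]])   # column vector, 3 × 1

inner = a.T @ b        # 1 × 1 matrix: the Euclidean inner product
outer = a @ b.T        # 3 × 3 matrix: the outer product
print(inner)           # [[32.]]  = 1*4 + 2*5 + 3*6
print(outer)           # entry (i, j) is a_i * b_j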



Matrix multiplication can be viewed in terms of these two operations by considering how the matrix product works on block matrices.

Decomposing A into row vectors and B into column vectors:

\mathbf{A} = \begin{bmatrix} A_1 \\ A_2 \\ \vdots \\ A_m \end{bmatrix}, \qquad \mathbf{B} = \begin{bmatrix} B_1 & B_2 & \cdots & B_p \end{bmatrix}

\mathbf{AB} = \begin{bmatrix} A_1 B_1 & A_1 B_2 & \cdots & A_1 B_p \\ A_2 B_1 & A_2 B_2 & \cdots & A_2 B_p \\ \vdots & \vdots & \ddots & \vdots \\ A_m B_1 & A_m B_2 & \cdots & A_m B_p \end{bmatrix}

This is an outer product where the real product inside is replaced with the inner product. In general, block matrix multiplication works exactly like ordinary matrix multiplication, but the real product inside is replaced with the matrix product.

An alternative method results when the decomposition is done the other way around (A decomposed into column vectors and B into row vectors):

\mathbf{A} = \begin{bmatrix} A_1 & A_2 & \cdots & A_n \end{bmatrix}, \qquad \mathbf{B} = \begin{bmatrix} B_1 \\ B_2 \\ \vdots \\ B_n \end{bmatrix}

\mathbf{AB} = A_1 B_1 + A_2 B_2 + \cdots + A_n B_n = \sum_{k=1}^{n} A_k B_k

This method emphasizes the effect of individual column/row pairs on the result, which is a useful point of view with e.g. covariance matrices, where each such pair corresponds to the effect of a single sample point. An example for a small matrix:

\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \end{bmatrix} \begin{bmatrix} a & b \end{bmatrix} + \begin{bmatrix} 2 \\ 4 \end{bmatrix} \begin{bmatrix} c & d \end{bmatrix} = \begin{bmatrix} a + 2c & b + 2d \\ 3a + 4c & 3b + 4d \end{bmatrix}
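The decomposition can also be checked numerically. The sketch below, with arbitrary example matrices, sums the outer products of the columns of A with the rows of B and compares the result to the ordinary product:

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Sum over k of (k-th column of A) times (k-th row of B), i.e. a sum of outer products.
n = A.shape[1]
outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(n))

print(np.allclose(outer_sum, A @ B))   # True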



One more useful decomposition results when B is decomposed into columns and A is left undecomposed. Then A is seen to act separately on each column of B, transforming them in parallel. Conversely, B acts separately on each row of A.

If x is a vector and A is decomposed into columns, then \mathbf{A}x = A_1x_1+A_2x_2+\cdots+A_nx_n. The column vectors of A give directions and units for coordinate axes and the elements of x are coordinates on the corresponding axes. \mathbf{A}x is then the vector which has those coordinates in the coordinate system given by the column vectors of A.
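A short NumPy sketch of this viewpoint, with an illustrative 3 × 2 matrix A and coordinate vector x (again 0-based in code):

import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 2.0],
              [0.0, 3.0]])
x = np.array([2.0, -1.0])

# A @ x equals the linear combination x_1 * (first column) + x_2 * (second column).
combination = x[0] * A[:, 0] + x[1] * A[:, 1]
print(np.allclose(A @ x, combination))   # True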


Algorithms for efficient matrix multiplication


The running time of square matrix multiplication, if carried out naively, is O(n^3). The running time for multiplying rectangular matrices (one m×p matrix by one p×n matrix) is O(mnp). More efficient algorithms do exist, however. Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication", is based on a clever way of multiplying two 2 × 2 matrices which requires only 7 multiplications (instead of the usual 8), at the expense of several additional addition and subtraction operations. Applying this trick recursively gives an algorithm with a cost of O(n^{\log_2 7}) \approx O(n^{2.807}). Strassen's algorithm is awkward to implement compared to the naive algorithm, and it lacks numerical stability. Nevertheless, it is beginning to appear in libraries such as BLAS, where it is computationally interesting for matrices with dimensions n > 100, and it is very useful for large matrices over exact domains such as finite fields, where numerical stability is not an issue.
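For concreteness, here is a sketch of the Strassen recursion in Python/NumPy. It assumes square matrices whose size is a power of two and falls back to ordinary multiplication below an arbitrarily chosen leaf size; the function name and threshold are our own choices, not taken from any library:

import numpy as np

def strassen(A, B, leaf=64):
    """Strassen's recursion; assumes square matrices whose size is a power of two."""
    n = A.shape[0]
    if n <= leaf:                       # ordinary multiplication on small blocks
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # Seven block products instead of the usual eight.
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)

    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
print(np.allclose(strassen(A, B), A @ B))   # True, up to floating-point error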

The O(n^k) algorithm with the lowest currently known exponent k is the Coppersmith–Winograd algorithm. It was presented by Don Coppersmith and Shmuel Winograd in 1990 and has an asymptotic complexity of O(n^{2.376}). It is similar to Strassen's algorithm: a clever way is devised for multiplying two small matrices with fewer multiplications than the naive method, and this technique is applied recursively. It improves on the constant factor in Strassen's algorithm, reducing it to 4.537. However, the constant factor hidden by the big O notation is so large that the Coppersmith–Winograd algorithm is only worthwhile for matrices that are too large to handle on present-day computers.

Since any algorithm for multiplying two n × n matrices has to process all 2n^2 entries, there is an asymptotic lower bound of Ω(n^2) operations. Raz (2002) proves a lower bound of Ω(m^2 log m) for bounded-coefficient arithmetic circuits over the real or complex numbers.

Cohn et al. (2003, 2005) put methods such as the Strassen and Coppersmith–Winograd algorithms in an entirely different, group-theoretic context. They show that if families of wreath products of Abelian groups with symmetric groups satisfying certain conditions exist, then matrix multiplication algorithms with essentially quadratic complexity exist. Most researchers believe that this is indeed the case; a lengthy attempt at proving this was undertaken by the late Jim Eve.

Because of the nature of matrix operations and the layout of matrices in memory, it is typically possible to achieve substantial performance gains through parallelization and vectorization. It should therefore be noted that algorithms with lower time complexity on paper may carry hidden costs on real machines.
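The effect can be seen even without changing the algorithm. The sketch below times the same O(n^3) computation written once as explicit Python loops and once as a single vectorized call that dispatches to a tuned BLAS; the size n = 100 and the timing method are arbitrary choices:

import time
import numpy as np

n = 100
A = np.random.rand(n, n)
B = np.random.rand(n, n)

# The defining triple sum, written as explicit Python loops...
t0 = time.perf_counter()
C_loops = [[sum(A[i, k] * B[k, j] for k in range(n)) for j in range(n)]
           for i in range(n)]
t1 = time.perf_counter()

# ...and as one vectorized call handled by the underlying BLAS.
C_fast = A @ B
t2 = time.perf_counter()

print(f"explicit loops: {t1 - t0:.3f} s, vectorized @: {t2 - t1:.6f} s")
print(np.allclose(C_loops, C_fast))   # True: same result, very different running time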


Relationship to linear transformations


Matrices offer a concise way of representing linear transformations between vector spaces, and (ordinary) matrix multiplication corresponds to the composition of linear transformations. This will be illustrated here by means of an example using three vector spaces of specific dimensions, but the correspondence applies equally to any other choice of dimensions.

Let X, Y, and Z be three vector spaces, with dimensions 4, 2, and 3, respectively, all over the same field, for example the real numbers. The coordinates of a point in X will be denoted as xi, for i = 1 to 4, and analogously for the other two spaces.

Two linear transformations are given: one from Y to X, which can be expressed by the system of linear equations


\begin{align}   x_1 & = a_{1,1}y_1+a_{1,2}y_2 \\   x_2 & = a_{2,1}y_1+a_{2,2}y_2 \\   x_3 & = a_{3,1}y_1+a_{3,2}y_2 \\   x_4 & = a_{4,1}y_1+a_{4,2}y_2 \end{align}

and one from Z to Y, expressed by the system

\begin{align}   y_1 & = b_{1,1}z_1+b_{1,2}z_2+b_{1,3}z_3 \\   y_2 & = b_{2,1}z_1+b_{2,2}z_2+b_{2,3}z_3 \end{align}

These two transformations can be composed to obtain a transformation from Z to X. By substituting, in the first system, the right-hand sides of the equations of the second system for their corresponding left-hand sides, the xi can be expressed in terms of the zk:

\begin{align}   x_1 & = a_{1,1}(b_{1,1}z_1{+}b_{1,2}z_2{+}b_{1,3}z_3)+a_{1,2}(b_{2,1}z_1{+}b_{2,2}z_2{+}b_{2,3}z_3) \\       & = (a_{1,1} b_{1,1}{+}a_{1,2} b_{2,1})z_1+(a_{1,1} b_{1,2}{+}a_{1,2} b_{2,2})z_2+(a_{1,1} b_{1,3}{+}a_{1,2} b_{2,3})z_3 \\   x_2 & = a_{2,1}(b_{1,1}z_1{+}b_{1,2}z_2{+}b_{1,3}z_3)+a_{2,2}(b_{2,1}z_1{+}b_{2,2}z_2{+}b_{2,3}z_3) \\       & = (a_{2,1} b_{1,1}{+}a_{2,2} b_{2,1})z_1+(a_{2,1} b_{1,2}{+}a_{2,2} b_{2,2})z_2+(a_{2,1} b_{1,3}{+}a_{2,2} b_{2,3})z_3 \\   x_3 & = a_{3,1}(b_{1,1}z_1{+}b_{1,2}z_2{+}b_{1,3}z_3)+a_{3,2}(b_{2,1}z_1{+}b_{2,2}z_2{+}b_{2,3}z_3) \\       & = (a_{3,1} b_{1,1}{+}a_{3,2} b_{2,1})z_1+(a_{3,1} b_{1,2}{+}a_{3,2} b_{2,2})z_2+(a_{3,1} b_{1,3}{+}a_{3,2} b_{2,3})z_3 \\   x_4 & = a_{4,1}(b_{1,1}z_1{+}b_{1,2}z_2{+}b_{1,3}z_3)+a_{4,2}(b_{2,1}z_1{+}b_{2,2}z_2{+}b_{2,3}z_3) \\       & = (a_{4,1} b_{1,1}{+}a_{4,2} b_{2,1})z_1+(a_{4,1} b_{1,2}{+}a_{4,2} b_{2,2})z_2+(a_{4,1} b_{1,3}{+}a_{4,2} b_{2,3})z_3 \end{align}

These three systems can be written equivalently in matrix–vector notation – thereby reducing each system to a single equation – as follows:

  \begin{bmatrix}     x_1 \\     x_2 \\     x_3 \\     x_4   \end{bmatrix} =   \begin{bmatrix}     a_{1,1} & a_{1,2} \\     a_{2,1} & a_{2,2} \\     a_{3,1} & a_{3,2} \\     a_{4,1} & a_{4,2}   \end{bmatrix}   \begin{bmatrix}     y_1 \\     y_2   \end{bmatrix}
  \begin{bmatrix}     y_1 \\     y_2   \end{bmatrix} =   \begin{bmatrix}     b_{1,1} & b_{1,2} & b_{1,3} \\     b_{2,1} & b_{2,2} & b_{2,3}   \end{bmatrix}   \begin{bmatrix}     z_1 \\     z_2 \\     z_3   \end{bmatrix}
  \begin{bmatrix}     x_1 \\     x_2 \\     x_3 \\     x_4   \end{bmatrix} =   \begin{bmatrix}     a_{1,1} b_{1,1}{+}a_{1,2} b_{2,1} & a_{1,1} b_{1,2}{+}a_{1,2} b_{2,2} & a_{1,1} b_{1,3}{+}a_{1,2} b_{2,3} \\     a_{2,1} b_{1,1}{+}a_{2,2} b_{2,1} & a_{2,1} b_{1,2}{+}a_{2,2} b_{2,2} & a_{2,1} b_{1,3}{+}a_{2,2} b_{2,3} \\     a_{3,1} b_{1,1}{+}a_{3,2} b_{2,1} & a_{3,1} b_{1,2}{+}a_{3,2} b_{2,2} & a_{3,1} b_{1,3}{+}a_{3,2} b_{2,3} \\     a_{4,1} b_{1,1}{+}a_{4,2} b_{2,1} & a_{4,1} b_{1,2}{+}a_{4,2} b_{2,2} & a_{4,1} b_{1,3}{+}a_{4,2} b_{2,3}   \end{bmatrix}   \begin{bmatrix}     z_1 \\     z_2 \\     z_3   \end{bmatrix}

Representing these three equations symbolically and more concisely as

  \begin{align}     \mathbf{x} = \mathbf{Ay} \\     \mathbf{y} = \mathbf{Bz} \\     \mathbf{x} = \mathbf{Cz} \\   \end{align}

inspection of the entries of matrix C reveals that C = AB.

This can be used to formulate a more abstract definition of matrix multiplication, given the special case of matrix–vector multiplication: the product AB of matrices A and B is the matrix C such that Cz = A(Bz) for all vectors z of the appropriate shape.
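This characterization is easy to verify numerically; the sketch below uses random matrices with the dimensions 4, 2 and 3 from the example above:

import numpy as np

A = np.random.rand(4, 2)    # linear map from Y (dimension 2) to X (dimension 4)
B = np.random.rand(2, 3)    # linear map from Z (dimension 3) to Y (dimension 2)
C = A @ B                   # composed map from Z (dimension 3) to X (dimension 4)

z = np.random.rand(3)
print(np.allclose(C @ z, A @ (B @ z)))   # True: Cz = A(Bz)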



Scalar multiplication


The scalar multiplication of a matrix A = (aij) and a scalar r gives a product r A of the same size as A. The entries of r A are given by

 (r\mathbf{A})_{ij} = r \cdot a_{ij}. \,

For example, if

\mathbf{A}=\begin{bmatrix} a & b \\ c & d \end{bmatrix}

then

 r \cdot \mathbf{A}=\begin{bmatrix} r \cdot a & r \cdot b \\ r \cdot c & r \cdot d \end{bmatrix}

If we are concerned with matrices over a more general ring, then the above multiplication is the left multiplication of the matrix A by the scalar r, while the right multiplication is defined to be

 (\mathbf{A}r)_{ij} = a_{ij} \cdot r. \,

When the underlying ring is commutative, for example, the real or complex number field, the two multiplications are the same. However, if the ring is not commutative, such as the quaternions, they may be different. For example

  i\begin{bmatrix}      i & 0 \\      0 & j \\    \end{bmatrix} = \begin{bmatrix}     -1 & 0 \\      0 & k \\   \end{bmatrix} \ne \begin{bmatrix}     -1 & 0 \\     0 & -k \\   \end{bmatrix} = \begin{bmatrix}     i & 0 \\     0 & j \\   \end{bmatrix}i.
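The following self-contained Python sketch reproduces this computation with a minimal quaternion class of our own (only the Hamilton product is implemented); it is meant purely to illustrate that left and right scalar multiplication differ over a noncommutative ring:

from dataclasses import dataclass

@dataclass
class Q:
    """A quaternion w + x*i + y*j + z*k; only multiplication is needed here."""
    w: float
    x: float
    y: float
    z: float

    def __mul__(self, o):
        # Hamilton product of two quaternions
        return Q(self.w*o.w - self.x*o.x - self.y*o.y - self.z*o.z,
                 self.w*o.x + self.x*o.w + self.y*o.z - self.z*o.y,
                 self.w*o.y - self.x*o.z + self.y*o.w + self.z*o.x,
                 self.w*o.z + self.x*o.y - self.y*o.x + self.z*o.w)

zero = Q(0, 0, 0, 0)
i, j = Q(0, 1, 0, 0), Q(0, 0, 1, 0)
M = [[i, zero],
     [zero, j]]                                        # the matrix diag(i, j) from the display above

left  = [[i * entry for entry in row] for row in M]    # i * M
right = [[entry * i for entry in row] for row in M]    # M * i
print(left[1][1])    # Q(w=0, x=0, y=0, z=1), i.e.  k
print(right[1][1])   # Q(w=0, x=0, y=0, z=-1), i.e. -k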



Powers of matrices


Square matrices can be multiplied by themselves repeatedly in the same way that ordinary numbers can. This repeated multiplication can be described as a power of the matrix. Using the ordinary notion of matrix multiplication, the following identities hold for an n-by-n matrix A, a positive integer k, and a scalar c:

\mathbf{A}^{0} = \mathbf{I}
\mathbf{A}^{k} = \prod_{i=1}^k \mathbf{A}
\ ( c\mathbf{A} )^k = c^k\mathbf{A}^k
\ \text{det} (A^k) = \text{det}(A)^k.

The naive way to compute a matrix power is to multiply the matrix A into the result k times, starting with the identity matrix, just as in the scalar case. This can be improved by using the binary representation of k (repeated squaring), a method commonly used for scalars. An even better method is to use the eigenvalue decomposition of A.
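A sketch of the repeated-squaring method in Python/NumPy follows; the function name matrix_power is our own, and NumPy's np.linalg.matrix_power is used only to check the result:

import numpy as np

def matrix_power(A, k):
    """Compute A^k by repeated squaring, driven by the binary representation of k."""
    result = np.eye(A.shape[0], dtype=A.dtype)   # A^0 = I
    base = A.copy()
    while k > 0:
        if k & 1:             # current binary digit of k is 1
            result = result @ base
        base = base @ base    # square
        k >>= 1
    return result

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(matrix_power(A, 10))                                               # [[1., 10.], [0., 1.]]
print(np.allclose(matrix_power(A, 10), np.linalg.matrix_power(A, 10)))   # True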

Calculating high powers of matrices can be very time-consuming, but the complexity of the calculation can be dramatically decreased by using the Cayley–Hamilton theorem, which exploits an identity obtained from the matrix's characteristic polynomial and yields a much more effective expression for A^k, one in which scalars, rather than the matrix itself, are raised to the required power.

Powers of Diagonal Matrices

The kth power of a diagonal matrix A is the diagonal matrix whose diagonal entries are the kth powers of the corresponding diagonal entries of the original matrix A.

  A^k = \begin{bmatrix}      a_{11} & 0 & 0 & 0 & \cdots \\      0 & a_{22} & 0 & 0 & \cdots \\      0 & 0 & a_{33} & 0 & \cdots \\      \vdots & \vdots & \vdots & \vdots & \vdots \\      0 & 0 & 0 & \cdots & a_{nn}   \end{bmatrix}^k = \begin{bmatrix}      a_{11}^k & 0 & 0 & 0 & \cdots \\      0 & a_{22}^k & 0 & 0 & \cdots \\      0 & 0 & a_{33}^k & 0 & \cdots \\      \vdots & \vdots & \vdots & \vdots & \vdots \\      0 & 0 & 0 & \cdots & a_{nn}^k   \end{bmatrix}

When raising an arbitrary matrix (not necessarily a diagonal matrix) to a power, it is often helpful to diagonalize the matrix first.
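A minimal NumPy sketch of this approach, assuming the matrix is diagonalizable (the example matrix is symmetric, so this is guaranteed):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # symmetric, hence diagonalizable

k = 5
eigvals, V = np.linalg.eig(A)     # A = V diag(eigvals) V^{-1}
A_k = V @ np.diag(eigvals**k) @ np.linalg.inv(V)

print(np.allclose(A_k, np.linalg.matrix_power(A, k)))   # True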


