Matrix-Vector Operations with GSL-BLAS

Introduction

In previous examples, we have already seen how GSL handles vectors, matrices, and basic vector/matrix operations such as addition, subtraction, scaling, and element-wise multiplication. We have not yet seen how standard linear algebra operations like the scalar product of vectors, matrix-vector multiplication, and matrix-matrix multiplication can be performed using library routines. GSL implements these differently from the other routines seen so far: it uses the BLAS standard. Basic Linear Algebra Subprograms (BLAS) is a specification that describes how to call low-level functions for performing standard linear algebra operations. Thus, BLAS is not a piece of software or a library as such, but a set of universally agreed-upon rules from which numerics experts can build libraries of computational linear algebra routines.

The idea behind BLAS is that you code linear algebra operations in your program by calling routines according to the rules of BLAS, then link your program to any one of the many available BLAS libraries in order to execute it. Using a BLAS library frees you from the drudgery of writing your own code for basic linear algebra operations: it reduces the amount of code you have to write, lets you focus on the more important parts of your numerical task, reduces the number of bugs you need to deal with, and leaves the technical details to the numerics experts who design BLAS libraries, which is what makes those libraries faster and more accurate than code written by non-experts.

There are many different software implementations of BLAS, such as ATLAS, OpenBLAS, and Intel MKL. BLAS can be implemented in numerous programming languages; the C-language implementations are called CBLAS. GSL ships with its own CBLAS implementation, called gslcblas. This library needs to be linked to your programs together with the usual GSL library. The lab instructions already show you how to link both libraries while using an IDE like Quincy, so you need not do anything extra if you have followed those instructions. For the purposes of this lab, we will use the gslcblas implementation, though you can readily install and link to any other CBLAS.


Figure: How GSL does linear algebra computations using BLAS. User C code calls functions from the GSL library, a subset of which consists of the GSL-BLAS routines. When called, these routines unwrap the input data into raw data, which is passed as input to the corresponding routine in whichever CBLAS library has been linked (gslcblas, ATLAS, OpenBLAS, ...). The CBLAS routine finally executes on the machine with the raw data as input.

Instead of using a bare-bones CBLAS implementation directly, we will use GSL's built-in interface to CBLAS. This interface is called GSL-BLAS and consists of simple wrapper functions that access the more complex CBLAS routines; they are declared in the header "gsl/gsl_blas.h". We do this because using CBLAS directly is cumbersome: in CBLAS, you have to provide low-level details about the vector and matrix input data (strides, leading dimensions, and so on) that are not necessary with GSL-BLAS. GSL-BLAS takes only GSL structures as arguments, such as the gsl_matrix and gsl_vector types, automatically figures out their inner details, and feeds the relevant information to CBLAS.

We will not discuss CBLAS specifications in too much detail here, but will give an introduction to the most widely used GSL-BLAS routines.

Figure: Illustration of how vectors/matrices are passed to GSL-BLAS (above) vs. how they are passed to CBLAS (below).

Syntax rules for BLAS functions

In CBLAS, each routine has a name which specifies the operation, the type of matrices involved and the data precision. Some of the most common operation names are:
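  • dot — scalar (dot) product of two vectors,
  • nrm2 — Euclidean norm of a vector,
  • axpy — scaled vector addition, |y> = a|x> + |y>,
  • mv — matrix-vector product,
  • sv — solve a triangular system of equations,
  • mm — matrix-matrix product.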

In addition to the name of the operation, the routine must specify what type of matrices, if any, are being used. Some examples of the different types of matrices supported are:
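  • ge — general matrix,
  • sy — symmetric matrix,
  • he — Hermitian matrix,
  • tr — triangular matrix,
  • gb — general banded matrix.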

In addition to the above-mentioned names, the routine name must also specify the precision of the data in the matrices. The options are:
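  • s — single precision real (float),
  • d — double precision real (double),
  • c — single precision complex,
  • z — double precision complex.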

Thus, the name of each CBLAS routine is composed from the names above as cblas_XYYZZZ(details of the input data), where the meanings of the parts labelled X, YY and ZZZ are as follows:
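  • X — a single letter denoting the data precision (s, d, c or z),
  • YY — letters denoting the type of matrix involved, if any (e.g. ge, sy, tr),
  • ZZZ — the name of the operation (e.g. dot, mv, mm).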

In GSL-BLAS, each routine is named in the same way, as gsl_blas_XYYZZZ(gsl datatypes only), where X, YY and ZZZ have exactly the same meanings as in CBLAS; only the prefix changes from cblas_ to gsl_blas_, and the arguments are GSL datatypes rather than raw arrays.

Examples of GSL-BLAS functions:
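  • gsl_blas_sdot — single precision dot product (level 1),
  • gsl_blas_ddot — double precision dot product (level 1),
  • gsl_blas_dgemv — double precision general matrix-vector product (level 2),
  • gsl_blas_ssymv — single precision symmetric matrix-vector product (level 2),
  • gsl_blas_dgemm — double precision general matrix-matrix product (level 3).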

BLAS routines, including GSL-BLAS, are classified into three levels.

  • Level 1 BLAS consists of routines that perform vector operations, such as addition, scaling, scalar products, and norms.
  • Level 2 BLAS consists of routines that perform matrix-vector operations, such as matrix-vector products, triangular matrix-vector solves, and rank-1 and symmetric rank-2 updates.
  • Level 3 BLAS consists of routines that perform matrix-matrix operations, such as matrix-matrix products, triangular matrix solves, and low-rank updates.

You can figure out the name of the GSL-BLAS function that you need based on the rules described in the previous section. So, for example, if you want to perform the scalar (dot) product of two real vectors with the data in single precision float, i.e.
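  r = <x|y> = x_1 y_1 + x_2 y_2 + ... + x_n y_n,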

then the relevant GSL-BLAS routine is in level 1, and is called as follows:
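  gsl_blas_sdot(x, y, &r);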

Here, as per CBLAS rules:

  • The 's' in 'sdot' means 'single precision float',
  • The 'dot' in 'sdot' means 'dot product operation'.

Among the arguments:

  • x and y are gsl_vector_float types (the single precision counterpart of gsl_vector),
  • The resultant dot product is stored in the float variable r.

Note that the result is not returned as a function value; instead, the routine stores it through the third argument, so you must pass a pointer to your result variable there.
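In context, a call might look like the following minimal sketch (the vector contents are assumed to be set elsewhere):

  gsl_vector_float *x = gsl_vector_float_alloc(3);
  gsl_vector_float *y = gsl_vector_float_alloc(3);
  float r;
  /* ... set the elements of x and y ... */
  gsl_blas_sdot(x, y, &r);   /* r now holds <x|y> */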

For the more conventional double precision case, the GSL-BLAS routine can be determined similarly. The routine is:
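  gsl_blas_ddot(x, y, &r);

where x and y are now plain gsl_vector types and r is a double.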

For implementing a matrix-vector product like A|x>, the general BLAS operation, found in level 2 BLAS, is
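  |y> = alpha * op(A)|x> + beta * |y>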

Here, the input arguments are two scalar constants alpha and beta, two vectors |x> and |y>, and a matrix A. The routine applies an operation op() to the matrix A, multiplies the result with the vector |x>, scales that by the constant alpha, and adds it to |y> after scaling |y> by beta. The result overwrites the vector |y>. The operation op() can be one of three choices: doing nothing (the identity operation), the transpose of A, or the conjugate transpose of A. If you simply want to perform |y> = A|x>, then the constants should be set to alpha = 1 and beta = 0, and op() should be set to doing nothing. The relevant GSL-BLAS function is
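  gsl_blas_dgemv(CblasNoTrans, 1.0, A, x, 0.0, y);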

Here, as per CBLAS rules:

  • The 'd' in 'dgemv' means 'double precision',
  • The 'ge' in 'dgemv' means 'general matrix',
  • The 'mv' in 'dgemv' means 'matrix-vector' operation.

In the arguments:

  • The first argument, 'CblasNoTrans', is an enumerated datatype and means 'do nothing to the matrix'. The alternative values are 'CblasTrans', which means 'transpose the matrix', and 'CblasConjTrans', which means 'conjugate-transpose the matrix',

  • The second argument is the constant alpha (set to 1 here),
  • The third argument is the gsl_matrix datatype A.
  • The fourth argument is the gsl_vector datatype vector |x>,
  • The fifth argument is the constant beta (set to 0 here), and
  • The last argument is the gsl_vector datatype vector |y>.

As another example, if you want to do this with single precision float datatypes for a symmetric matrix A, the relevant function call would be
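  gsl_blas_ssymv(CblasUpper, 1.0, A, x, 0.0, y);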

Here, as per CBLAS rules:

  • The 's' in 'ssymv' means 'single precision',
  • The 'sy' in 'ssymv' means 'symmetric matrix',
  • The 'mv' in 'ssymv' means 'matrix-vector' operation.

In the arguments:

  • The first argument, 'CblasUpper', means 'only the upper triangular part of the matrix is stored and referenced' (since the matrix is symmetric, the other half is redundant). The other choice is 'CblasLower',
  • There is no transpose argument here, because a symmetric matrix is equal to its own transpose,
  • The rest of the arguments are the same as in the previous example, only in their single precision variants (gsl_matrix_float, gsl_vector_float, and float constants).

Finally, we have matrix-matrix multiplications. The general BLAS operation, found in level 3, is:
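  C = alpha * op(A) op(B) + beta * C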

Here, A, B, and C are all matrix datatypes, and all other symbols have the same meanings as in the examples discussed above. For simply multiplying two matrices A and B and storing the result in matrix C, the relevant GSL-BLAS function is
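  gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, A, B, 0.0, C);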

Here, as per CBLAS rules:

  • The 'd' in 'dgemm' means 'double precision',
  • The 'ge' in 'dgemm' means 'general matrix',
  • The 'mm' in 'dgemm' means 'matrix-matrix' operation.

In the arguments:

  • The first and second 'CblasNoTrans' mean 'do nothing' (as opposed to 'CblasTrans', which means 'transpose', or 'CblasConjTrans', which means 'conjugate transpose'),
  • The third argument is the constant alpha (set to 1 here),
  • The fourth and fifth arguments are the gsl_matrix datatypes A and B respectively,
  • The sixth argument is the constant beta (set to 0 here), and
  • The last argument is the gsl_matrix datatype C, where the output will be stored.

Example programs with GSL-BLAS

The program below computes the product of two matrices using a GSL-BLAS function. The specific matrices are not important here, so this sketch uses two illustrative 2x2 matrices; any data of compatible dimensions works the same way:
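  #include <stdio.h>
  #include <gsl/gsl_matrix.h>
  #include <gsl/gsl_blas.h>

  int main(void)
  {
      /* Allocate the input matrices A, B and the output matrix C */
      gsl_matrix *A = gsl_matrix_alloc(2, 2);
      gsl_matrix *B = gsl_matrix_alloc(2, 2);
      gsl_matrix *C = gsl_matrix_alloc(2, 2);

      /* Fill A and B with illustrative data */
      gsl_matrix_set(A, 0, 0, 1.0);  gsl_matrix_set(A, 0, 1, 2.0);
      gsl_matrix_set(A, 1, 0, 3.0);  gsl_matrix_set(A, 1, 1, 4.0);
      gsl_matrix_set(B, 0, 0, 5.0);  gsl_matrix_set(B, 0, 1, 6.0);
      gsl_matrix_set(B, 1, 0, 7.0);  gsl_matrix_set(B, 1, 1, 8.0);

      /* C = 1.0 * A B + 0.0 * C, i.e. C = AB */
      gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, A, B, 0.0, C);

      /* Print the result matrix C row by row */
      for (size_t i = 0; i < 2; i++) {
          for (size_t j = 0; j < 2; j++)
              printf("%8.3f ", gsl_matrix_get(C, i, j));
          printf("\n");
      }

      gsl_matrix_free(A);
      gsl_matrix_free(B);
      gsl_matrix_free(C);
      return 0;
  }

Compile and link this with both libraries, as described in the lab instructions (on the command line, the flags are -lgsl -lgslcblas -lm).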

Further Reading

This has been a very brief, rough introduction to the BLAS standard and its implementation in GSL-BLAS. BLAS specifies many more routines, and a complete treatment is too advanced for this lab. A quick reference guide to BLAS can be found here. The complete description of GSL-BLAS can be found in the GSL manual here. Deeper introductions to BLAS can be found here and here.