Objective

Implement functions that solve linear equations, calculate the matrix inverse, and compute the matrix determinant.

Introduction

When working with vectors and matrices you can use GSL's gsl_vector and gsl_matrix containers. Alternatively, you can create your own matrix library the way you like it. At the bare minimum you should implement something like

#include <stdlib.h>
typedef struct {int size1, size2; double *data;} matrix;
matrix* matrix_alloc(int n, int m){ /* allocate an n×m matrix */
	matrix* A = malloc(sizeof(matrix));
	A->size1 = n; A->size2 = m;
	A->data = malloc(n*m*sizeof(double));
	return A;}
void matrix_free(matrix* A){ free(A->data); free(A); }
/* column-major storage: element (i,j) lives at data[i + j*size1] */
void matrix_set(matrix* A, int i, int j, double x){ A->data[i + j*A->size1] = x; }
double matrix_get(matrix* A, int i, int j){ return A->data[i + j*A->size1]; }
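
If you roll your own containers, note that the GS_solve routine in the tasks below also takes vector arguments. A minimal vector container along the same lines might look like the following sketch (the accessor names vector_alloc, vector_set, vector_get are only suggestions; it relies on <stdlib.h> included above):

typedef struct {int size; double *data;} vector;
vector* vector_alloc(int n){
	vector* v = malloc(sizeof(vector));
	v->size = n;
	v->data = malloc(n*sizeof(double));
	return v;}
void vector_free(vector* v){ free(v->data); free(v); }
void vector_set(vector* v, int i, double x){ v->data[i] = x; }
double vector_get(vector* v, int i){ return v->data[i]; }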

The total number of CPU-seconds used by a program my_prog can be determined by the POSIX time utility. For example, the command

\time --format "my_prog used %U CPU-seconds" --append --output times.txt ./my_prog
or, in short
\time -f "my_prog used %U CPU-seconds" -ao times.txt ./my_prog
runs the program and appends the number of consumed CPU-seconds to the file times.txt. Without the --append option (or -a for short) the file times.txt gets overwritten. The backslash is needed to run the actual time utility rather than the bash built-in command of the same name (which offers similar capabilities).
Tasks
  1. (6 points) Solving linear equations using QR-decomposition by modified Gram-Schmidt orthogonalization

    The Gram-Schmidt orthogonalization process, even in its modified form, is less stable and accurate than the Givens rotation algorithm. On the other hand, the Gram-Schmidt process produces the j-th orthogonalized vector after the j-th iteration, while orthogonalization using Givens rotations produces all the vectors only at the end. This makes the Gram-Schmidt process applicable to iterative methods like the Arnoldi iteration. Besides, its ease of implementation makes it a useful exercise for students.

    1. Implement a function, say

      void GS_decomp(matrix* A, matrix* R)
      
      which performs in-place modified Gram-Schmidt orthogonalization of an n×m (n≥m) matrix A: on exit the matrix A is replaced with Q and the m×m matrix R holds the corresponding upper-triangular factor (a sketch is given after the check-list below).

      Check that your function works as intended:

      • generate a random tall (n>m) matrix A (of a modest size);
      • factorize it into QR;
      • check that R is upper triangular;
      • check that QᵀQ=1;
      • check that QR=A;
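
      For reference, one way the modified Gram-Schmidt loop could be arranged with the minimal containers from the Introduction is sketched below (a sketch only, assuming column-major storage; it normalizes column j and then orthogonalizes the remaining columns against the fresh q_j):

      #include <math.h>
      void GS_decomp(matrix* A, matrix* R){
        int n = A->size1, m = A->size2;
        for(int j = 0; j < m; j++){
          double norm = 0;                       /* R_jj = ||a_j|| */
          for(int i = 0; i < n; i++) norm += matrix_get(A,i,j)*matrix_get(A,i,j);
          norm = sqrt(norm);
          matrix_set(R,j,j,norm);
          for(int i = 0; i < n; i++) matrix_set(A,i,j, matrix_get(A,i,j)/norm); /* q_j = a_j/R_jj */
          for(int k = j+1; k < m; k++){          /* orthogonalize remaining columns against q_j */
            double dot = 0;
            for(int i = 0; i < n; i++) dot += matrix_get(A,i,j)*matrix_get(A,i,k);
            matrix_set(R,j,k,dot);
            matrix_set(R,k,j,0);                 /* keep the lower triangle explicitly zero */
            for(int i = 0; i < n; i++) matrix_set(A,i,k, matrix_get(A,i,k) - dot*matrix_get(A,i,j));
          }
        }
      }
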
    2. Implement a function, say

      void GS_solve(matrix* Q, matrix* R, vector* b, vector* x)
      
      that, given the matrices Q and R from GS_decomp, solves the equation QRx=b by applying Qᵀ to the vector b (saving the result in vector x) and then performing in-place back-substitution on x (a sketch is given after the check-list below).

      Check that your function works as intended:

      • generate a random square matrix A (of a modest size);
      • generate a random vector b (of the same size);
      • factorize A into QR;
      • solve QRx=b;
      • check that Ax=b;
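
      A sketch of how the solver could look, assuming the containers and GS_decomp sketched above (Q is n×m, R is m×m upper triangular):

      void GS_solve(matrix* Q, matrix* R, vector* b, vector* x){
        int n = Q->size1, m = Q->size2;
        for(int i = 0; i < m; i++){              /* x = Q^T b */
          double s = 0;
          for(int k = 0; k < n; k++) s += matrix_get(Q,k,i)*vector_get(b,k);
          vector_set(x,i,s);
        }
        for(int i = m-1; i >= 0; i--){           /* in-place back-substitution, bottom row up */
          double s = vector_get(x,i);
          for(int k = i+1; k < m; k++) s -= matrix_get(R,i,k)*vector_get(x,k);
          vector_set(x,i, s/matrix_get(R,i,i));
        }
      }
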
  2. (3 points) Matrix inverse by Gram-Schmidt QR factorization

    Implement a function, say

    void GS_inverse(matrix* Q, matrix* R, matrix* B)
    
    that, given the matrices Q and R from GS_decomp, calculates the inverse of the matrix A and stores it in the matrix B (a sketch follows the check-list below).

    Check that your function works as intended:

    • generate a random square matrix A (of a modest size);
    • factorize A into QR;
    • calculate the inverse B;
    • check that AB=I=BA, where I is the identity matrix;
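
    One possible sketch, reusing GS_solve from above: the i-th column of the inverse solves QRx = e_i, where e_i is the i-th unit vector.

    void GS_inverse(matrix* Q, matrix* R, matrix* B){
      int n = Q->size1;
      vector* e = vector_alloc(n);
      vector* x = vector_alloc(n);
      for(int i = 0; i < n; i++){
        for(int k = 0; k < n; k++) vector_set(e,k, k==i ? 1.0 : 0.0); /* e = i-th unit vector */
        GS_solve(Q,R,e,x);                       /* x = i-th column of A^{-1} */
        for(int k = 0; k < n; k++) matrix_set(B,k,i, vector_get(x,k));
      }
      vector_free(e); vector_free(x);
    }
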
  3. (1 point) Operations count for QR-decomposition and comparison with GSL

    Measure the time it takes to QR-factorize (with your implementation) a random N×N matrix as a function of N. Convince yourself that it grows like O(N³). Compare the speed of your implementation with gsl_linalg_QR_decomp. A sketch of an in-program timing loop is given below.
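
    If you prefer to time from inside the program rather than with the \time utility, a sketch using clock() from <time.h> could look like this (the RND macro and the range of sizes are just placeholders; the containers and GS_decomp from above are assumed to be linked in):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #define RND ((double)rand()/RAND_MAX)
    int main(void){
      for(int N = 100; N <= 500; N += 100){
        matrix* A = matrix_alloc(N,N);
        matrix* R = matrix_alloc(N,N);
        for(int i = 0; i < N; i++) for(int j = 0; j < N; j++) matrix_set(A,i,j,RND);
        clock_t t0 = clock();
        GS_decomp(A,R);
        double seconds = (double)(clock()-t0)/CLOCKS_PER_SEC;
        printf("%d %g\n", N, seconds);           /* N versus CPU-seconds, for checking the O(N^3) scaling */
        matrix_free(A); matrix_free(R);
      }
      return 0;
    }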