Exercise "Linear Equations"

Objective

Implement functions for solving linear equations and for calculating the matrix inverse and determinant.

Introduction

To work with vectors and matrices you can use:

  1. C, C++: gsl_vector and gsl_matrix structures from GSL library. This is the easiest and best documented option, I think.
  2. C++: vector and matrix class from Armadillo library.
  3. Python: python lists, numpy arrays.
  4. You can build your own structures and the associated functions (a good exercise to learn a language). At the bare minimum, you only need to implement four functions:

    1. matrix_alloc,
    2. matrix_set,
    3. matrix_get,
    4. matrix_free.

    For example, C,

    typedef struct { int size1, size2; double* data; } matrix;
    matrix* matrix_alloc(int n, int m){
    	matrix* A = (matrix*)malloc(sizeof(matrix));
    	A->size1 = n; A->size2 = m;
    	A->data = (double*)malloc(n*m*sizeof(double));
    	return A;
    }
    void matrix_free(matrix* A){ free(A->data); free(A); }
    void matrix_set(matrix* A, int i, int j, double x){ A->data[i + j*A->size1] = x; }
    double matrix_get(matrix* A, int i, int j){ return A->data[i + j*A->size1]; }
    
    C++,
    struct mat {
    	int size1, size2; double* data;
    	mat(int n, int m): size1(n), size2(m){ data = new double[n*m]; }
    	~mat(){ delete[] data; } /* delete[] matches new[]; safe on nullptr (-std=c++11) */
    	void set(int i, int j, double x){ data[i+size1*j] = x; }
    	double get(int i, int j){ return data[i+size1*j]; }
    	double& operator()(int i, int j){ return data[i+size1*j]; }
    };
    
    Java,
    public class matrix {
    	int size1, size2; double[] data;
    	matrix(int n, int m){ size1=n; size2=m; data = new double[n*m]; }
    	double get(int i, int j){ return data[i+size1*j]; }
    	void set(int i, int j, double x){ data[i+size1*j]=x; }
    }
    
    Csharp,
    public class matrix {
    	public readonly int size1, size2; double[] data;
    	public matrix(int n, int m){ size1=n; size2=m; data = new double[size1*size2]; }
    	public void set(int i, int j, double x){ data[i+j*size1]=x; }
    	public double get(int i, int j){ return data[i+j*size1]; }
    	public double this[int i, int j]{ get{ return data[i+j*size1]; } set{ data[i+j*size1]=value; } }
    }
    
    Python,
    import array
    class matrix(object):
    	def __init__(self, n:int, m:int, t:str='d'):
    		self.size1 = n; self.size2 = m
    		self.data = array.array(t, [0]*(n*m))
    	def set(self, i:int, j:int, x): self.data[i+self.size1*j] = x
    	def get(self, i:int, j:int): return self.data[i+self.size1*j]
    	def __getitem__(self, ij): i,j = ij; return self.get(i,j)
    	def __setitem__(self, ij, x): i,j = ij; self.set(i,j,x)
    

    Building your own vector/matrix library should help you learn your language better.

Tasks
  1. (6 points) Solving linear equations using QR-decomposition by modified Gram-Schmidt orthogonalization

    The Gram-Schmidt orthogonalization process, even in its modified form, is less stable and accurate than the Givens rotation algorithm. On the other hand, the Gram-Schmidt process produces the j-th orthogonalized vector after the j-th iteration, while orthogonalization using Givens rotations produces all the vectors only at the end. This makes the Gram-Schmidt process applicable to iterative methods like the Arnoldi iteration.

    Its ease of implementation also makes it a useful exercise for students.

    1. Implement a function, say

      void qr_gs_decomp(matrix* A, matrix* R);
      
      that performs in-place modified Gram-Schmidt orthogonalization of an n×m (n≥m) matrix A: A turns into Q and the square m×m matrix R is computed.

      Check that your function works as intended:

      • generate a random tall (n>m) matrix A (of a modest size);
      • factorize it into QR;
      • check that R is upper triangular;
      • check that QᵀQ = 1;
      • check that QR=A;
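
      The decomposition and the checks above can be sketched with numpy arrays standing in for your own matrix type; this is a minimal illustration under that assumption, not the required interface:

```python
import numpy as np

def qr_gs_decomp(A, R):
    """In-place modified Gram-Schmidt: the columns of A (n x m, n >= m)
    are overwritten with the orthonormal columns of Q; R (m x m) is filled."""
    n, m = A.shape
    for i in range(m):
        R[i, i] = np.linalg.norm(A[:, i])
        A[:, i] /= R[i, i]                   # normalize: q_i = a_i/||a_i||
        for j in range(i + 1, m):
            R[i, j] = A[:, i] @ A[:, j]      # component of a_j along q_i
            A[:, j] -= R[i, j] * A[:, i]     # remove it (modified step)

# the checks from the list above, on a random tall matrix
rng = np.random.default_rng(1)
A = rng.random((5, 3))
Q, R = A.copy(), np.zeros((3, 3))
qr_gs_decomp(Q, R)
print(np.allclose(np.tril(R, -1), 0))    # R upper triangular -> True
print(np.allclose(Q.T @ Q, np.eye(3)))   # Q^T Q = 1          -> True
print(np.allclose(Q @ R, A))             # QR = A             -> True
```

      Note that the inner loop subtracts the projection from the already-updated column, which is what distinguishes the modified from the classical Gram-Schmidt process.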
    2. Implement a function, say

      void qr_gs_solve(const matrix* Q, const matrix* R, const vector* b, vector* x);
      
      that—given the matrices Q and R from qr_gs_decomp—solves the equation QRx=b by applying Qᵀ to the vector b (saving the result in x) and then performing in-place back-substitution on x.

      Check that your function works as intended:

      • generate a random square matrix A (of a modest size);
      • generate a random vector b (of the same size);
      • factorize A into QR;
      • solve QRx=b;
      • check that Ax=b;
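
      The solver step can be sketched as follows (a numpy sketch; np.linalg.qr stands in for qr_gs_decomp here):

```python
import numpy as np

def qr_gs_solve(Q, R, b, x):
    """Solve QRx = b: x <- Q^T b, then in-place back-substitution on x."""
    x[:] = Q.T @ b
    for i in range(len(x) - 1, -1, -1):
        # subtract the already-known components, then divide by the pivot
        x[i] = (x[i] - R[i, i+1:] @ x[i+1:]) / R[i, i]

rng = np.random.default_rng(2)
A = rng.random((4, 4))
b = rng.random(4)
Q, R = np.linalg.qr(A)        # stand-in for qr_gs_decomp
x = np.zeros(4)
qr_gs_solve(Q, R, b, x)
print(np.allclose(A @ x, b))  # Ax = b -> True
```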
  2. (3 points) Matrix inverse by Gram-Schmidt QR factorization

    Implement a function, say

    void qr_gs_inverse(const matrix* Q, const matrix* R, matrix* B)
    
    that—given the matrices Q and R from qr_gs_decomp—calculates the inverse of the matrix A into the matrix B.

    Check that your function works as intended:

    • generate a random square matrix A (of a modest size);
    • factorize A into QR;
    • calculate the inverse B;
    • check that AB=I, where I is the identity matrix;
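
    One way to build the inverse is to solve QRb_k = e_k for each unit vector e_k and store the solutions as the columns b_k of B. A numpy sketch, with np.linalg.qr standing in for qr_gs_decomp:

```python
import numpy as np

def qr_gs_inverse(Q, R, B):
    """B <- A^(-1): solve QR b_k = e_k for each column b_k of B."""
    n = Q.shape[0]
    for k in range(n):
        e = np.zeros(n); e[k] = 1.0
        y = Q.T @ e
        for i in range(n - 1, -1, -1):             # back-substitution
            y[i] = (y[i] - R[i, i+1:] @ y[i+1:]) / R[i, i]
        B[:, k] = y

rng = np.random.default_rng(3)
A = rng.random((4, 4))
Q, R = np.linalg.qr(A)         # stand-in for qr_gs_decomp
B = np.zeros((4, 4))
qr_gs_inverse(Q, R, B)
print(np.allclose(A @ B, np.eye(4)))  # AB = I -> True
```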
  3. (1 point) Golub-Kahan-Lanczos bidiagonalization

    Bidiagonalization is a representation of a real matrix A in the form A = UBVᵀ, where U and V are orthogonal matrices and B is a bidiagonal matrix with non-zero elements only on the main diagonal and the first super-diagonal.

    Bidiagonalization can be used on its own to solve linear systems and ordinary least squares problems, and to calculate the determinant and the (pseudo-)inverse of a matrix. But it is mostly used as the first step of SVD. Here is a description of the Golub-Kahan-Lanczos algorithm for bidiagonalization. Implement it together with the corresponding linear solver, determinant, and inverse.
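
    A compact numpy sketch of the Golub-Kahan-Lanczos recurrence (no reorthogonalization and a random starting vector v_1, so only suitable for small, well-conditioned matrices):

```python
import numpy as np

def gkl_bidiag(A):
    """Golub-Kahan-Lanczos bidiagonalization: A = U B V^T, with
    alpha_k on the main diagonal of B and beta_k on the super-diagonal."""
    n, m = A.shape
    U = np.zeros((n, m)); V = np.zeros((m, m))
    alpha = np.zeros(m); beta = np.zeros(m - 1)
    v = np.random.default_rng(0).random(m)
    V[:, 0] = v / np.linalg.norm(v)               # arbitrary unit start vector
    for k in range(m):
        u = A @ V[:, k]
        if k > 0:
            u -= beta[k-1] * U[:, k-1]            # alpha_k u_k = A v_k - beta_{k-1} u_{k-1}
        alpha[k] = np.linalg.norm(u)
        U[:, k] = u / alpha[k]
        if k < m - 1:
            w = A.T @ U[:, k] - alpha[k] * V[:, k]  # beta_k v_{k+1} = A^T u_k - alpha_k v_k
            beta[k] = np.linalg.norm(w)
            V[:, k+1] = w / beta[k]
    B = np.diag(alpha) + np.diag(beta, 1)
    return U, B, V

A = np.random.default_rng(4).random((5, 3))
U, B, V = gkl_bidiag(A)
print(np.allclose(U @ B @ V.T, A))        # A = UBV^T        -> True
print(np.allclose(U.T @ U, np.eye(3)))    # U^T U = 1        -> True
print(np.allclose(V.T @ V, np.eye(3)))    # V orthogonal     -> True
```

    Since U and V are orthogonal and B is triangular, for a square A the determinant satisfies |det A| = product of the alpha_k, and QRx=b-style systems reduce to solving with the bidiagonal B.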