Examination projects (numbering starts with zero)



— # 0 —

Golub-Kahan-Lanczos bidiagonalization

Bidiagonalization is a representation of a real matrix, A, in the form A = UBV^T, where U and V are orthogonal matrices and B is a bidiagonal matrix (a matrix with non-zero elements only on the main diagonal and the first super-diagonal). Bidiagonalization can be used on its own to solve linear systems and ordinary least-squares problems, to calculate the determinant, and to compute the (pseudo-)inverse of a matrix. But it is mostly used as the first step in SVD.

Implement the [Golub-Kahan-Lanczos algorithm] for matrix bidiagonalization.
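
A minimal sketch of the iteration for a square n×n matrix (plain C arrays in row-major order; alpha and beta hold the diagonal and the super-diagonal of B, and the columns of U and V are built one by one; no re-orthogonalization is done and breakdown, a zero α or β, is not handled; all names are illustrative, not prescribed by the project):

#include <math.h>
#include <stdlib.h>

/* Golub-Kahan-Lanczos bidiagonalization: A (n×n, row-major) is factorized
   as A = U B V^T with B upper bidiagonal; alpha[0..n-1] is the diagonal of B
   and beta[0..n-2] its super-diagonal.  Column k of U is U[i*n+k]. */
void gkl_bidiag(int n, const double* A, double* U, double* V,
                double* alpha, double* beta){
    double* u = calloc(n, sizeof(double));
    double* v = calloc(n, sizeof(double));
    v[0] = 1;                                   /* arbitrary unit start vector v_1 */
    for(int k = 0; k < n; k++){
        for(int i = 0; i < n; i++) V[i*n+k] = v[i];
        for(int i = 0; i < n; i++){             /* u = A v_k - beta_{k-1} u_{k-1} */
            double s = 0;
            for(int j = 0; j < n; j++) s += A[i*n+j]*v[j];
            u[i] = s - (k > 0 ? beta[k-1]*U[i*n+k-1] : 0);
        }
        double a = 0; for(int i = 0; i < n; i++) a += u[i]*u[i];
        alpha[k] = sqrt(a);
        for(int i = 0; i < n; i++){ u[i] /= alpha[k]; U[i*n+k] = u[i]; }
        if(k == n-1) break;
        for(int i = 0; i < n; i++){             /* v = A^T u_k - alpha_k v_k */
            double s = 0;
            for(int j = 0; j < n; j++) s += A[j*n+i]*u[j];
            v[i] = s - alpha[k]*V[i*n+k];
        }
        double b = 0; for(int i = 0; i < n; i++) b += v[i]*v[i];
        beta[k] = sqrt(b);
        for(int i = 0; i < n; i++) v[i] /= beta[k];
    }
    free(u); free(v);
}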

For more points implement also the corresponding linear equation solver, the determinant, and the inverse matrix.

— # 1 —

Bi-linear interpolation on a rectilinear grid in two dimensions

Introduction
A rectilinear grid (note that rectilinear is not necessarily Cartesian, nor regular) in two dimensions is a set of n_x×n_y points where each point can be addressed by a double index (i,j), where 1 ≤ i ≤ n_x, 1 ≤ j ≤ n_y, and the coordinates of the point (i,j) are given as (x_i, y_j), where x and y are vectors of sizes n_x and n_y correspondingly. The values of the tabulated function F at the grid points can then be arranged as a matrix {F_{i,j} = F(x_i, y_j)}.
Problem
Build an interpolating routine which takes as input the vectors {x_i} and {y_j} and the matrix {F_{i,j}}, and returns the bi-linearly interpolated value of the function at a given 2D point p = (p_x, p_y).
Hints
See the chapter "Bi-linear interpolation" in the book.

The signature of the interpolating subroutine can be

double bilinear(int nx, int ny, double* x, double* y, double** F, double px, double py)

or

double bilinear(gsl_vector* x, gsl_vector* y, gsl_matrix* F, double px, double py)
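
With the first signature, a sketch could look like the following (the grid coordinates are assumed strictly increasing and the point (px,py) is assumed to lie inside the grid; the linear search for the enclosing cell is only for brevity and should really be a binary search):

/* Bi-linear interpolation on a rectilinear grid: x[0..nx-1] and y[0..ny-1]
   are the (ascending) grid coordinates and F[i][j] = F(x[i], y[j]). */
double bilinear(int nx, int ny, double* x, double* y, double** F,
                double px, double py){
    int i = 0, j = 0;
    while(i < nx-2 && x[i+1] < px) i++;         /* cell such that x[i] <= px <= x[i+1] */
    while(j < ny-2 && y[j+1] < py) j++;         /* cell such that y[j] <= py <= y[j+1] */
    double tx = (px - x[i])/(x[i+1] - x[i]);    /* relative coordinates inside the cell */
    double ty = (py - y[j])/(y[j+1] - y[j]);
    return F[i][j]*(1-tx)*(1-ty) + F[i+1][j]*tx*(1-ty)
         + F[i][j+1]*(1-tx)*ty   + F[i+1][j+1]*tx*ty;
}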

— # 2 —

ODE with complex-valued functions of a complex variable

Generalize the ODE solver of your choice to solve ODEs with complex-valued functions of a complex variable along a straight line between the given start- and end-points.
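
One way to do it is to parameterize the line as z(t) = z_a + t·(z_b - z_a), t∈[0,1], so that dy/dt = f(z(t),y)·(z_b - z_a), and then run an ordinary real-parameter stepper over t. A sketch with a fixed-step RK4, only to illustrate the reduction (the project of course asks for your own stepper and driver; all names are illustrative):

#include <complex.h>

/* Integrate dy/dz = f(z,y) from za to zb along the straight line
   z(t) = za + t*(zb-za) using n fixed RK4 steps in the real parameter t. */
double complex ode_line_rk4(double complex f(double complex z, double complex y),
                            double complex za, double complex zb,
                            double complex y0, int n){
    double complex dz = zb - za, y = y0;
    double h = 1.0/n;
    for(int i = 0; i < n; i++){
        double t = i*h;
        double complex z  = za + t*dz;
        double complex k1 = f(z,            y            )*dz;
        double complex k2 = f(z + 0.5*h*dz, y + 0.5*h*k1 )*dz;
        double complex k3 = f(z + 0.5*h*dz, y + 0.5*h*k2 )*dz;
        double complex k4 = f(z +     h*dz, y +     h*k3 )*dz;
        y += h/6*(k1 + 2*k2 + 2*k3 + k4);
    }
    return y;
}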

— # 3 —

Multidimensional pseudo-random (plain Monte Carlo) vs quasi-random (Halton and/or lattice sequence) integrators

Investigate the convergence rates (of some interesting integrals in different dimensions) as a function of the number of sample points.
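
For the quasi-random part, the n-th Halton point in d dimensions is built from the radical-inverse (van der Corput) sequence in the first d prime bases; a minimal sketch (names are illustrative, and the small prime table limits the dimension here to 12):

/* Radical-inverse (van der Corput) sequence: mirrors the base-b digits of n
   about the radix point, giving a low-discrepancy sequence in [0,1). */
static double corput(int n, int base){
    double q = 0, bk = 1.0/base;
    while(n > 0){
        q += (n % base)*bk;
        n /= base;
        bk /= base;
    }
    return q;
}

/* n-th Halton point (n = 1,2,3,...) in dim<=12 dimensions:
   coordinate k uses the k-th prime as its base. */
static void halton(int n, int dim, double* x){
    static const int primes[] = {2,3,5,7,11,13,17,19,23,29,31,37};
    for(int k = 0; k < dim; k++) x[k] = corput(n, primes[k]);
}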

— # 4 —

ODE: a two-step method

Implement a two-step stepper for solving ODEs (as in the book). Use your own step-size-controlling driver.

— # 5 —

Cubic (sub-)spline for data with derivatives

Introduction
In practice a function is often tabulated together with its derivative (for example, after numerical integration of a second-order ordinary differential equation). The table is then given as {x_i, y_i, y'_i}, i=1,..,n, where y_i is the value of the tabulated function, and y'_i is the value of the derivative of the tabulated function at the point x_i.
Problem
Given a data-set, {x_i, y_i, y'_i}, i=1,..,n, build a cubic sub-spline,

S(x)|_{x∈[x_i,x_{i+1}]} = S_i(x)

where

S_i(x) = y_i + b_i(x-x_i) + c_i(x-x_i)^2 + d_i(x-x_i)^3.

For each interval the three coefficients b_i, c_i, d_i are determined by the three conditions,

S_i(x_{i+1}) = y_{i+1},
S'_i(x_i) = y'_i,
S'_i(x_{i+1}) = y'_{i+1}.

See the subsection "Akima sub-spline interpolation" for the inspiration.
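
The three conditions can be solved in closed form on each interval (this is just the cubic Hermite interpolant): with h_i = x_{i+1}-x_i and Δ_i = (y_{i+1}-y_i)/h_i one finds b_i = y'_i, c_i = (3Δ_i - 2y'_i - y'_{i+1})/h_i and d_i = (y'_i + y'_{i+1} - 2Δ_i)/h_i² (check this). A sketch of an evaluation routine built directly on these formulas (strictly increasing x assumed, linear interval search for brevity, names illustrative):

/* Evaluate the cubic sub-spline through {x_i, y_i, yp_i} at the point z,
   with the interval coefficients computed on the fly from the three
   conditions stated above. */
double subspline_eval(int n, const double* x, const double* y,
                      const double* yp, double z){
    int i = 0;
    while(i < n-2 && x[i+1] < z) i++;           /* interval such that x[i] <= z <= x[i+1] */
    double h = x[i+1] - x[i];
    double D = (y[i+1] - y[i])/h;               /* divided difference */
    double b = yp[i];
    double c = (3*D - 2*yp[i] - yp[i+1])/h;
    double d = (yp[i] + yp[i+1] - 2*D)/(h*h);
    double t = z - x[i];
    return y[i] + t*(b + t*(c + t*d));
}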

Extra
  1. Derivative and integral of the spline.
  2. Make the second derivative of the spline continuous by increasing the order of the spline to four, for example, in the form

    S_i(x) = y_i + b_i(x-x_i) + c_i(x-x_i)^2 + d_i(x-x_i)^3 + e_i(x-x_i)^2(x-x_{i+1})^2 ,

    and choosing the coefficients e_i such that the spline has continuous second derivative.

— # 6 —

Hessenberg factorization of a real square matrix using Jacobi transformations

Why Hessenberg matrix?

If the constraints of a linear algebra problem do not allow a general matrix to be conveniently reduced to a triangular form, reduction to Hessenberg form is often the next best thing. Reduction of any matrix to a Hessenberg form can be achieved in a finite number of steps. Many linear algebra algorithms require significantly less computational effort when applied to Hessenberg matrices.

Definitions

A lower Hessenberg matrix H is a square matrix that has zero elements above the first super-diagonal, {H_{ij}=0}_{j>i+1}.

An upper Hessenberg matrix H is a square matrix that has zero elements below the first sub-diagonal, {H_{ij}=0}_{i>j+1}.

Hessenberg factorization is a representation of a square real matrix A in the form A = QHQ^T, where Q is an orthogonal matrix and H is a Hessenberg matrix.

If A is a symmetric matrix, its Hessenberg factorization produces a tridiagonal matrix H.

Hessenberg factorization by Jacobi transformations

Consider a Jacobi transformation with the matrix J(p,q),

A → J(p,q)^T A J(p,q)

where the rotation angle is chosen not to zero the element A_{p,q} but rather to zero the element A_{p-1,q}. Argue that the following sequence of such Jacobi transformations,

J(2,3), J(2,4), ..., J(2,n); J(3,4), ..., J(3,n); ...; J(n-1,n)

reduces the matrix to the lower Hessenberg form.

Implement this algorithm. Remember to accumulate the total transformation matrix.
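
A sketch of one possible implementation (0-based indices, row-major n×n arrays; the angle θ = atan2(-A_{p-1,q}, A_{p-1,p}) used below is my choice for zeroing the updated element A_{p-1,q}, using that row p-1 is only touched by the column rotation; verifying this is part of the exercise, so check it before relying on it):

#include <math.h>

/* One Jacobi similarity rotation A <- J^T A J with J = J(theta,p,q),
   accumulated into Q <- Q J.  Matrices are n×n, row-major. */
static void rotate(int n, double* A, double* Q, int p, int q, double theta){
    double c = cos(theta), s = sin(theta);
    for(int i = 0; i < n; i++){                 /* A <- A J   (columns p and q) */
        double aip = A[i*n+p], aiq = A[i*n+q];
        A[i*n+p] = c*aip - s*aiq;  A[i*n+q] = s*aip + c*aiq;
    }
    for(int j = 0; j < n; j++){                 /* A <- J^T A (rows p and q) */
        double apj = A[p*n+j], aqj = A[q*n+j];
        A[p*n+j] = c*apj - s*aqj;  A[q*n+j] = s*apj + c*aqj;
    }
    for(int i = 0; i < n; i++){                 /* Q <- Q J */
        double qip = Q[i*n+p], qiq = Q[i*n+q];
        Q[i*n+p] = c*qip - s*qiq;  Q[i*n+q] = s*qip + c*qiq;
    }
}

/* Reduce A to lower Hessenberg form, A = Q H Q^T, by the sweep
   J(2,3),...,J(2,n); J(3,4),...,J(3,n); ...; J(n-1,n) from the text above,
   each rotation zeroing the element A[p-1][q].  Q must be initialized to
   the identity by the caller. */
void hessenberg(int n, double* A, double* Q){
    for(int p = 1; p < n-1; p++)                /* p = 1 is "row 2" in 1-based notation */
        for(int q = p+1; q < n; q++)
            rotate(n, A, Q, p, q, atan2(-A[(p-1)*n+q], A[(p-1)*n+p]));
}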

Extra
What is faster: Hessenberg factorization or QR-factorization?

Calculate the determinant of the Hessenberg matrix.

— # 7 —

Lanczos tridiagonalization algorithm

Lanczos tridiagonalization is a representation of a real symmetric matrix, A, in the form A = VTV^T (where V is an orthogonal matrix and T is a tridiagonal matrix) using power iteration and Gram-Schmidt orthogonalization. Read the book and/or the Wikipedia article. The algorithm is often used as the first step in matrix diagonalization (particularly with lossy compression that preserves the extreme eigenvalues).

Implement the Lanczos tridiagonalization algorithm for real symmetric matrices.
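
A minimal sketch of the three-term recurrence (plain C arrays, row-major symmetric A; no re-orthogonalization is done, so in floating point the Lanczos vectors slowly lose orthogonality, and breakdown, a zero β, is not handled; names are illustrative):

#include <math.h>
#include <stdlib.h>

/* Lanczos tridiagonalization of a symmetric n×n matrix A: A = V T V^T with
   T tridiagonal (diagonal alpha[0..n-1], off-diagonal beta[0..n-2]); the
   Lanczos vectors are stored as the columns of V. */
void lanczos(int n, const double* A, double* V, double* alpha, double* beta){
    double* v = calloc(n, sizeof(double));
    double* w = malloc(n*sizeof(double));
    v[0] = 1;                                   /* arbitrary unit start vector */
    for(int k = 0; k < n; k++){
        for(int i = 0; i < n; i++) V[i*n+k] = v[i];
        for(int i = 0; i < n; i++){             /* w = A v_k */
            double s = 0;
            for(int j = 0; j < n; j++) s += A[i*n+j]*v[j];
            w[i] = s;
        }
        double a = 0; for(int i = 0; i < n; i++) a += v[i]*w[i];
        alpha[k] = a;                           /* alpha_k = v_k^T A v_k */
        if(k == n-1) break;
        for(int i = 0; i < n; i++)              /* w <- w - alpha_k v_k - beta_{k-1} v_{k-1} */
            w[i] -= a*v[i] + (k > 0 ? beta[k-1]*V[i*n+k-1] : 0);
        double b = 0; for(int i = 0; i < n; i++) b += w[i]*w[i];
        beta[k] = sqrt(b);
        for(int i = 0; i < n; i++) v[i] = w[i]/beta[k];
    }
    free(v); free(w);
}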

— # 8 —

Rootfinding: 1D complex vs. 2D real

Implement a (quasi-)Newton method for rootfinding for a complex function f of a complex variable z,

f(z) = 0 .

Hints: do it the ordinary way, just with complex numbers.
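
A minimal sketch of such a step with a finite-difference derivative (C99 complex arithmetic; the fixed derivative step, the iteration cap and the convergence criterion on |f(z)| are crude illustrative choices):

#include <complex.h>

/* Quasi-Newton iteration for f(z) = 0 in the complex plane: the derivative
   is estimated by a finite difference and the step is z <- z - f(z)/f'(z). */
double complex newton_complex(double complex f(double complex z),
                              double complex z, double acc){
    double dz = 1e-7;                           /* finite-difference step (assumed scale) */
    for(int iter = 0; iter < 999; iter++){
        double complex fz = f(z);
        if(cabs(fz) < acc) break;
        double complex df = (f(z + dz) - fz)/dz;
        z -= fz/df;
    }
    return z;
}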

Compare the effectiveness of your complex implementation with your multi-dimensional homework implementation of real rootfinding applied to the equivalent 2D system of two real equations in two real variables x and y,

Re f(x + iy) = 0 ,
Im f(x + iy) = 0 .

— # 9 —

Inverse iteration algorithm for eigenvalues (and eigenvectors)

The inverse iteration algorithm allows calculation of the eigenvalue (and the corresponding eigenvector) that is closest to a given value. Read the book and/or the [inverse iteration] article in Wikipedia.

Implement the inverse iteration algorithm that calculates the eigenvalue closest to a given value (and the corresponding eigenvector). You should use your own linear algebra routines.
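
A sketch of the iteration with a fixed shift s (row-major arrays; the Gaussian-elimination solver below is only a stand-in for your own linear-algebra routines, which the project asks you to use; the number of iterations is fixed instead of being controlled by a convergence test):

#include <math.h>
#include <stdlib.h>

/* Solve M y = b by Gaussian elimination with partial pivoting (M and b are
   destroyed); stand-in for your own linear solver. */
static void solve(int n, double* M, double* b, double* y){
    for(int k = 0; k < n; k++){
        int piv = k;
        for(int i = k+1; i < n; i++) if(fabs(M[i*n+k]) > fabs(M[piv*n+k])) piv = i;
        for(int j = 0; j < n; j++){ double t = M[k*n+j]; M[k*n+j] = M[piv*n+j]; M[piv*n+j] = t; }
        double t = b[k]; b[k] = b[piv]; b[piv] = t;
        for(int i = k+1; i < n; i++){
            double f = M[i*n+k]/M[k*n+k];
            for(int j = k; j < n; j++) M[i*n+j] -= f*M[k*n+j];
            b[i] -= f*b[k];
        }
    }
    for(int i = n-1; i >= 0; i--){
        double s = b[i];
        for(int j = i+1; j < n; j++) s -= M[i*n+j]*y[j];
        y[i] = s/M[i*n+i];
    }
}

/* Inverse iteration with the fixed shift s: x converges to the eigenvector
   whose eigenvalue is closest to s; the Rayleigh quotient x^T A x is
   returned as the eigenvalue estimate. */
double inverse_iteration(int n, const double* A, double s, double* x, int iters){
    double* M = malloc(n*n*sizeof(double));
    double* b = malloc(n*sizeof(double));
    double* y = malloc(n*sizeof(double));
    for(int i = 0; i < n; i++) x[i] = 1.0/sqrt(n);   /* arbitrary start vector */
    for(int it = 0; it < iters; it++){
        for(int i = 0; i < n; i++)
            for(int j = 0; j < n; j++) M[i*n+j] = A[i*n+j] - (i == j ? s : 0);
        for(int i = 0; i < n; i++) b[i] = x[i];
        solve(n, M, b, y);                           /* y = (A - s·1)^{-1} x */
        double norm = 0; for(int i = 0; i < n; i++) norm += y[i]*y[i];
        norm = sqrt(norm);
        for(int i = 0; i < n; i++) x[i] = y[i]/norm;
    }
    double lambda = 0;                               /* Rayleigh quotient x^T A x */
    for(int i = 0; i < n; i++)
        for(int j = 0; j < n; j++) lambda += x[i]*A[i*n+j]*x[j];
    free(M); free(b); free(y);
    return lambda;
}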

— # 10 —

Adaptive 1D integrator with random nodes

Implement an adaptive one-dimensional integrator with random abscissas. Reuse points. Note that you don't need to pass the points to the next level of recursion, only statistics.

— # 11 —

Two-sided Jacobi algorithm for Singular Value Decomposition (SVD)

Introduction
The SVD of a (real square, for simplicity) matrix A is a representation of the matrix in the form

A = U D V^T ,

where matrix D is diagonal with non-negative elements and matrices U and V are orthogonal. The diagonal elements of matrix D can always be chosen non-negative by multiplying the relevant columns of matrix U with (-1).

SVD can be used to solve a number of problems in linear algebra.

Problem
Implement the two-sided Jacobi SVD algorithm.

Algorithm (as described in the "Eigenvalues" chapter of the book)
In the cyclic Jacobi eigenvalue algorithm for symmetric matrices we applied the elementary Jacobi transformation

A → J^T A J

where J ≐ J(θ,p,q) is the Jacobi matrix (where the angle θ is chosen to eliminate the off-diagonal elements A_{pq} = A_{qp}) to all upper off-diagonal matrix elements in cyclic sweeps until the matrix becomes diagonal.

The two-sided Jacobi SVD algorithm for general real square matrices is the same as the Jacobi eigenvalue algorithm, except that the elementary transformation is slightly different,

A → J^T G^T A J ,

where

  • G ≐ G(θ,p,q) is the Givens (Jacobi) matrix where the angle is chosen such that the matrix A' ≐ G^T(θ,p,q) A has identical off-diagonal elements, A'_{pq} = A'_{qp}:

    tan(θ) = (A_{pq} - A_{qp})/(A_{qq} + A_{pp})

    (check that this is correct).
  • J ≐ J(θ,p,q) is the Jacobi matrix where the angle is chosen to eliminate the off-diagonal elements A'_{pq} = A'_{qp}.

The matrices U and V are updated after each transformation as

U → U G J ,
V → V J .

Of course you should not actually multiply the full Jacobi matrices, which would cost O(n³) operations, but rather perform the corresponding updates of the affected rows and columns, which only cost O(n) operations.
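
A sketch of a single elementary step for one pair (p,q), with the first angle taken from the formula above and the second from the standard Jacobi eigenvalue formula tan(2θ) = 2A'_{pq}/(A'_{qq} - A'_{pp}) (0-based indices, row-major n×n arrays; the cyclic sweeps, the convergence test and the final sign fixing that makes the diagonal non-negative are left out):

#include <math.h>

/* One two-sided Jacobi SVD step: A <- J^T G^T A J, U <- U G J, V <- V J. */
static void twosided_step(int n, double* A, double* U, double* V, int p, int q){
    /* G: left rotation that makes A_{pq} = A_{qp} */
    double tg = atan2(A[p*n+q] - A[q*n+p], A[q*n+q] + A[p*n+p]);
    double c = cos(tg), s = sin(tg);
    for(int j = 0; j < n; j++){                 /* A <- G^T A (rows p and q) */
        double apj = A[p*n+j], aqj = A[q*n+j];
        A[p*n+j] = c*apj - s*aqj;  A[q*n+j] = s*apj + c*aqj;
    }
    for(int i = 0; i < n; i++){                 /* U <- U G (columns p and q) */
        double uip = U[i*n+p], uiq = U[i*n+q];
        U[i*n+p] = c*uip - s*uiq;  U[i*n+q] = s*uip + c*uiq;
    }
    /* J: two-sided rotation that zeroes the now symmetric pair A'_{pq} = A'_{qp} */
    double tj = 0.5*atan2(2*A[p*n+q], A[q*n+q] - A[p*n+p]);
    c = cos(tj); s = sin(tj);
    for(int j = 0; j < n; j++){                 /* A <- J^T A (rows p and q) */
        double apj = A[p*n+j], aqj = A[q*n+j];
        A[p*n+j] = c*apj - s*aqj;  A[q*n+j] = s*apj + c*aqj;
    }
    for(int i = 0; i < n; i++){                 /* A <- A J, U <- U J, V <- V J (columns p and q) */
        double aip = A[i*n+p], aiq = A[i*n+q];
        A[i*n+p] = c*aip - s*aiq;  A[i*n+q] = s*aip + c*aiq;
        double uip = U[i*n+p], uiq = U[i*n+q];
        U[i*n+p] = c*uip - s*uiq;  U[i*n+q] = s*uip + c*uiq;
        double vip = V[i*n+p], viq = V[i*n+q];
        V[i*n+p] = c*vip - s*viq;  V[i*n+q] = s*vip + c*viq;
    }
}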

— # 12 —

LU factorization of a Hessenberg matrix

Why?
LU-factorization of a Hessenberg matrix takes only O(n²) operations!

An upper n×n Hessenberg matrix H can be LU-factorized, H = LU, with the L-matrix being bidiagonal: made of a main diagonal (containing ones) and a sub-diagonal with elements v_i, i=1,..,n-1.

Algorithm
The elementary operation for a 2×2 matrix is given as
[  1 0 ] [ h11 h12 ] = [     h11          h12     ]
[ -v 1 ] [ h21 h22 ]   [ -v*h11+h21   -v*h12+h22 ]
If we choose v=h21/h11 then the matrix becomes upper triangular,
[  1 0 ] [ h11 h12 ] = [ h11      h12     ] ≡ U
[ -v 1 ] [ h21 h22 ]   [  0   -v*h12+h22 ]
Now the inverse of the left matrix is easy:
[  1 0 ] [ 1 0 ] = [ 1 0 ]
[ -v 1 ] [ v 1 ]   [ 0 1 ]
Therefore the sought operation, which eliminates one sub-diagonal element of the matrix H, is
[ h11 h12 ] = [  1 0 ] [ u11 u12 ]
[ h21 h22 ]   [  v 1 ] [  0  u22 ]
where v=h21/h11, u11=h11, u12=h12, u22=h22-v*h12.

Now one simply has to apply this operation to all sub-diagonal elements of the upper Hessenberg matrix H, which leads to the following algorithm (see "A note on the stability of the LU factorization of Hessenberg matrices"):

for i=1:n
	u_{1,i} = h_{1,i}
endfor
for i=1:n-1
	v_i = h_{i+1,i}/u_{i,i}
	for j=i+1:n
		u_{i+1,j} = h_{i+1,j} - v_i*u_{i,j}
	endfor
endfor
Problem
Implement this O(n²) algorithm for LU factorization of a Hessenberg matrix.
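
A direct transcription into C might look like this (0-based indices, row-major n×n arrays; U is assumed zero-initialized by the caller, and there is no pivoting, so a zero or tiny u_{i,i} will break it, as discussed in the stability note cited above):

/* O(n²) LU factorization of an upper Hessenberg matrix H: H = L U with L
   unit lower bidiagonal (sub-diagonal v[0..n-2]) and U upper triangular. */
void hessenberg_lu(int n, const double* H, double* v, double* U){
    for(int j = 0; j < n; j++) U[0*n+j] = H[0*n+j];      /* first row of U = first row of H */
    for(int i = 0; i < n-1; i++){
        v[i] = H[(i+1)*n+i]/U[i*n+i];
        for(int j = i+1; j < n; j++)
            U[(i+1)*n+j] = H[(i+1)*n+j] - v[i]*U[i*n+j];
    }
}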

— # 13 —

One-sided Jacobi algorithm for Singular Value Decomposition

Introduction
The singular value decomposition (SVD) of a (real square, for simplicity) matrix A is a representation of the matrix in the form

A = U D V^T ,

where matrix D is diagonal with non-negative elements and matrices U and V are orthogonal. The diagonal elements of matrix D can always be chosen non-negative by multiplying the relevant columns of matrix U with (-1).

SVD can be used to solve a number of problems in linear algebra.

Problem
Implement the one-sided Jacobi SVD algorithm.

Algorithm
In this method the elementary iteration is given as

A → A J(θ,p,q)

where the indices (p,q) are swept cyclically (p=1..n, q=p+1..n) and where the angle θ is chosen such that columns p and q of the matrix AJ(θ,p,q) are orthogonal. One can show that the angle should be taken from the following equation (you should use the atan2 function),

tan(2θ) = 2 a_p^T a_q / (a_q^T a_q - a_p^T a_p)

where a_i is the i-th column of matrix A (check this).

After the iterations converge and the matrix A' = AJ (where J is the accumulation of the individual rotations) has orthogonal columns, the SVD is simply given as

A = UDV^T

where

V = J ,   D_{ii} = ||a'_i|| ,   u_i = a'_i/||a'_i|| ,

where a'_i is the i-th column of matrix A' and u_i is the i-th column of matrix U.
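
A sketch of the sweeps (0-based indices, row-major n×n arrays; V must be initialized to the identity by the caller; A is overwritten by A' = AJ; full column rank is assumed so that no D_{ii} vanishes; the convergence test, skipping pairs whose columns are already orthogonal to roughly machine precision, is just one possible choice):

#include <math.h>

/* One-sided Jacobi SVD: A is overwritten with A' = A·J whose columns are
   mutually orthogonal, the accumulated rotations go into V, and afterwards
   D[i] = ||a'_i|| and column i of U is a'_i/D[i]. */
void jacobi_svd_onesided(int n, double* A, double* V, double* U, double* D){
    int changed;
    do{
        changed = 0;
        for(int p = 0; p < n; p++)
        for(int q = p+1; q < n; q++){
            double apq = 0, app = 0, aqq = 0;
            for(int i = 0; i < n; i++){
                apq += A[i*n+p]*A[i*n+q];
                app += A[i*n+p]*A[i*n+p];
                aqq += A[i*n+q]*A[i*n+q];
            }
            if(fabs(apq) <= 1e-14*sqrt(app*aqq)) continue;   /* columns already orthogonal */
            changed = 1;
            double theta = 0.5*atan2(2*apq, aqq - app);
            double c = cos(theta), s = sin(theta);
            for(int i = 0; i < n; i++){                      /* rotate columns p,q of A and V */
                double aip = A[i*n+p], aiq = A[i*n+q];
                A[i*n+p] = c*aip - s*aiq;  A[i*n+q] = s*aip + c*aiq;
                double vip = V[i*n+p], viq = V[i*n+q];
                V[i*n+p] = c*vip - s*viq;  V[i*n+q] = s*vip + c*viq;
            }
        }
    }while(changed);
    for(int i = 0; i < n; i++){                              /* read off D and U */
        double norm = 0;
        for(int k = 0; k < n; k++) norm += A[k*n+i]*A[k*n+i];
        D[i] = sqrt(norm);
        for(int k = 0; k < n; k++) U[k*n+i] = A[k*n+i]/D[i];
    }
}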

— # 14 —

Yet another cubic sub-spline

Introduction
The ordinary cubic spline sometimes makes unpleasant wiggles, for example, around an outlier or around a step-like feature of the tabulated function (read the Akima sub-spline chapter in the lecture notes). Here is yet another attempt to reduce the wiggles by building a sub-spline.
Problem
Consider a data set {x_i, y_i}, i=1,..,n, which represents a tabulated function.

Implement a sub-spline of this data using the following algorithm:

  1. For each inner point x_i, i=2,..,n-1, build a quadratic interpolating polynomial through the points x_{i-1}, x_i, x_{i+1} and estimate, using this polynomial, the first derivative, p_i, of the function at the point x_i;
  2. For the first and the last points estimate p_1 and p_n from the same polynomials that you used to estimate p_2 and p_{n-1}, respectively (a sketch of these two steps is given after the list).
  3. Now you have a data set {x_i, y_i, p_i}, i=1,..,n, where the function is tabulated together with its derivative: build a cubic sub-spline for it,

    S_i(x) = y_i + b_i(x-x_i) + c_i(x-x_i)^2 + d_i(x-x_i)^3,

    where for each interval the three coefficients b_i, c_i, d_i are determined by the three conditions,

    S_i(x_{i+1}) = y_{i+1},
    S'_i(x_i) = p_i,
    S'_i(x_{i+1}) = p_{i+1}.
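
A sketch of steps 1-2 (the closed-form expressions below are my own reduction of the derivative of the three-point quadratic and should be checked; strictly increasing x and n ≥ 3 are assumed):

/* Estimate the derivatives p_i from the quadratic through three neighbouring
   points: interior points use the quadratic centered at x_i, the endpoints
   reuse the quadratics of their nearest inner points. */
void estimate_derivatives(int n, const double* x, const double* y, double* p){
    for(int i = 1; i < n-1; i++){
        double h0 = x[i] - x[i-1],      h1 = x[i+1] - x[i];
        double d0 = (y[i] - y[i-1])/h0, d1 = (y[i+1] - y[i])/h1;
        p[i] = (d0*h1 + d1*h0)/(h0 + h1);       /* derivative of the quadratic at x_i */
    }
    double h0 = x[1] - x[0],      h1 = x[2] - x[1];
    double d0 = (y[1] - y[0])/h0, d1 = (y[2] - y[1])/h1;
    p[0] = d0 - (d1 - d0)/(h0 + h1)*h0;         /* same quadratic as for p[1], evaluated at x[0] */
    double g0 = x[n-2] - x[n-3],      g1 = x[n-1] - x[n-2];
    double e0 = (y[n-2] - y[n-3])/g0, e1 = (y[n-1] - y[n-2])/g1;
    p[n-1] = e1 + (e1 - e0)/(g0 + g1)*g1;       /* same quadratic as for p[n-2], evaluated at x[n-1] */
}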

Extra
Derivative and integral of the spline.

— # 15 —

Akima sub-spline

Implement the Akima sub-spline. See the section "Akima sub-spline interpolation" in the book for the inspiration.

— # 16 —

Symmetric rank-1 update of a size-n symmetric eigenvalue problem

The matrix A to diagonalize is given in the form

A = D + uu^T,

where D is a diagonal matrix and u is a column-vector.

Given the diagonal elements of the matrix D and the elements of the vector u, find the eigenvalues of the matrix A using only O(n²) operations (see the section "Eigenvalues of updated matrix" in the book).
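
One standard route (presumably the one described in the book section just cited; check there) is via the secular equation: the eigenvalues λ of D + uu^T are the roots of

1 + Σ_k u_k²/(d_k - λ) = 0 ,

and, for distinct d_k and non-zero u_k, there is exactly one root in each interval between consecutive d_k plus one root above the largest d_k, so every eigenvalue can be bracketed and found by bisection (or Newton's method) at O(n) cost per evaluation. A sketch of the secular function (name and signature are only illustrative):

/* Secular function for A = D + u u^T: its roots are the eigenvalues of A.
   One evaluation costs O(n). */
double secular(int n, const double* d, const double* u, double lambda){
    double s = 1;
    for(int k = 0; k < n; k++) s += u[k]*u[k]/(d[k] - lambda);
    return s;
}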

— # 17 —

Adaptive integration of complex-valued functions

Implement an adaptive integrator which calculates the integral of a complex-valued function f(z) of a complex variable z along a straight line between two points in the complex plane.

— # 18 —

Adaptive integrator with subdivision into three sub-intervals

Implement a (one-dimensional) adaptive integrator which at each iteration subdivides the interval not into two, but into three sub-intervals.

— # 19 —

Symmetric row/column update of a size-n symmetric eigenvalue problem

The matrix to diagonalize is given in the form

A = D + e^{(p)} u^T + u (e^{(p)})^T

where D is a diagonal matrix with diagonal elements {d_k, k=1,...,n}, u is a given column-vector, and e^{(p)} is the unit vector in the direction p, 1≤p≤n, with components e^{(p)}_i = δ_{ip}.

Given the diagonal elements of the matrix D, the vector u, and the integer p, calculate the eigenvalues of the matrix A using O(n²) operations (see the section "Eigenvalues of updated matrix" in the book).

— # 20 —

Two-dimensional integrator

Implement a two-dimensional integrator for integrals in the form

∫_a^b dx ∫_{d(x)}^{u(x)} dy f(x,y)

which consecutively applies your favourite adaptive one-dimensional integrator along each of the two dimensions. The signature might be something like

double integ2D(double a, double b, double d(double x), double u(double x), double f(double x, double y), double acc, double eps)
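
A minimal sketch of the nesting is given below. The 1D routine is a simple recursive adaptive integrator on four open points (re-using two of them per subdivision), standing in for your favourite adaptive routine; the inner integral is turned into a function of y alone by stashing the current x in file-scope variables, which is crude but keeps the plain-function signatures above. All names and the halving of the error budget between the two directions are illustrative choices:

#include <math.h>

static double adapt(double f(double), double a, double b,
                    double acc, double eps, double f2, double f3){
    double f1 = f(a + (b-a)/6), f4 = f(a + 5*(b-a)/6);
    double Q = (b-a)/6*(2*f1 + f2 + f3 + 2*f4);          /* higher-order rule */
    double q = (b-a)/4*(f1 + f2 + f3 + f4);              /* lower-order rule  */
    if(fabs(Q - q) < acc + eps*fabs(Q)) return Q;
    return adapt(f, a, (a+b)/2, acc/sqrt(2), eps, f1, f2)
         + adapt(f, (a+b)/2, b, acc/sqrt(2), eps, f3, f4);
}

/* Adaptive 1D integrator on the open points a+(b-a)*{1,2,4,5}/6. */
double int1D(double f(double), double a, double b, double acc, double eps){
    double f2 = f(a + 2*(b-a)/6), f3 = f(a + 4*(b-a)/6);
    return adapt(f, a, b, acc, eps, f2, f3);
}

static double (*F2)(double, double);                     /* integrand f(x,y)            */
static double (*Ylow)(double), (*Yhigh)(double);         /* limits d(x) and u(x)        */
static double X, ACC, EPS;                               /* current x, inner tolerances */

static double inner(double y){ return F2(X, y); }
static double outer(double x){ X = x; return int1D(inner, Ylow(x), Yhigh(x), ACC, EPS); }

/* ∫_a^b dx ∫_{d(x)}^{u(x)} dy f(x,y) by nesting the 1D integrator. */
double integ2D(double a, double b, double d(double x), double u(double x),
               double f(double x, double y), double acc, double eps){
    F2 = f; Ylow = d; Yhigh = u; ACC = acc/2; EPS = eps/2;
    return int1D(outer, a, b, acc/2, eps/2);
}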

— # 21 —

Cholesky decomposition of a real symmetric positive-definite matrix

For a real symmetric positive-definite matrix, A, the Cholesky decomposition, A = LL^T (where L is lower-triangular), is more efficient than the LU-decomposition. Read the book and/or the Wikipedia article on the Cholesky decomposition.

Implement the Cholesky decomposition of a real symmetric positive-definite matrix using the Cholesky-Banachiewicz or the Cholesky-Crout algorithm.
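
A sketch of the Cholesky-Banachiewicz variant (row by row, row-major storage; only the lower triangle of A is read; a matrix that is not positive definite will show up as the square root of a non-positive number):

#include <math.h>

/* Cholesky-Banachiewicz: fills the lower-triangular matrix L (n×n, row-major)
   such that A = L L^T, proceeding row by row. */
void cholesky(int n, const double* A, double* L){
    for(int i = 0; i < n; i++)
        for(int j = 0; j <= i; j++){
            double s = A[i*n+j];
            for(int k = 0; k < j; k++) s -= L[i*n+k]*L[j*n+k];
            L[i*n+j] = (i == j) ? sqrt(s) : s/L[j*n+j];
        }
}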

For more points implement also the corresponding linear equation solver, calculation of the determinant, and calculation of the inverse matrix.