Examination projects (numbering starts with zero)

Consider your examination project as an extra homework exercise. Put the project in a separate folder of your repository with an indicative name (like "exam") which I can easily find. Supply a README.TXT (or similar) file with a short description of what you did and (optionally) a self-evaluation of the project on the scale [0,10].

Remember to check that your repository is publicly browseable and publicly cloneable, and that everything is built.

The index of the project assigned to you (from the indexed list below) is given by the last two digits of your student number modulo the number N of projects (N=26). For example, if the last two digits are 42, your project index is 42 mod 26 = 16.



— 0 —

Lanczos tridiagonalization algorithm

Lanczos tridiagonalization is a representation of a real symmetric matrix A in the form A = V T Vᵀ (where V is an orthogonal matrix and T is a tridiagonal matrix) using power iteration and Gram-Schmidt orthogonalization. Read the book and/or the Wikipedia article. The algorithm is often used as the first step in matrix diagonalization (particularly with lossy compression that preserves extreme eigenvalues).

Implement the Lanczos tridiagonalization algorithm for real symmetric matrices.
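
Here is a minimal sketch of the algorithm in C# (plain double[,] arrays are used instead of the course matrix/vector classes, m ≤ n is the requested number of Lanczos steps, and full Gram-Schmidt re-orthogonalization against all previous vectors is used for numerical stability):

using System;
static class Lanczos {
	// Sketch: Lanczos tridiagonalization of a real symmetric matrix A.
	// Returns the orthonormal basis V (columns q_1..q_m) and the tridiagonal elements:
	// alpha[j] on the diagonal of T, beta[j] on the off-diagonals.
	public static (double[,] V, double[] alpha, double[] beta) Tridiagonalize(double[,] A, int m){
		int n = A.GetLength(0);
		var rnd = new Random(1);
		var V = new double[n,m]; var alpha = new double[m]; var beta = new double[m];
		var q = new double[n];
		for(int i=0;i<n;i++) q[i] = rnd.NextDouble()-0.5;        // random start vector
		double norm = Math.Sqrt(Dot(q,q));
		for(int i=0;i<n;i++) V[i,0] = q[i]/norm;
		for(int j=0;j<m;j++){
			var w = new double[n];                               // w = A q_j (power-iteration step)
			for(int i=0;i<n;i++) for(int k=0;k<n;k++) w[i] += A[i,k]*V[k,j];
			alpha[j] = Dot(w, Column(V,j));
			for(int k=0;k<=j;k++){                               // Gram-Schmidt against all previous q's
				double c = Dot(w, Column(V,k));
				for(int i=0;i<n;i++) w[i] -= c*V[i,k];
			}
			beta[j] = Math.Sqrt(Dot(w,w));
			if(j+1<m){
				if(beta[j]==0) break;                            // invariant subspace found
				for(int i=0;i<n;i++) V[i,j+1] = w[i]/beta[j];
			}
		}
		return (V, alpha, beta);
	}
	static double[] Column(double[,] V,int j){ int n=V.GetLength(0); var c=new double[n]; for(int i=0;i<n;i++) c[i]=V[i,j]; return c; }
	static double Dot(double[] a,double[] b){ double s=0; for(int i=0;i<a.Length;i++) s+=a[i]*b[i]; return s; }
}

A quick consistency check is to verify that VᵀAV is (numerically) tridiagonal with alpha on the diagonal and beta on the off-diagonals.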

— 1 —

One-sided Jacobi algorithm for Singular Value Decomposition

Introduction
The singular value decomposition (SVD) of a (real square, for simplicity) matrix A is a representation of the matrix in the form

A = U D Vᵀ ,

where matrix D is diagonal with non-negative elements and matrices U and V are orthogonal. The diagonal elements of matrix D can always be chosen non-negative by multiplying the relevant columns of matrix U by (-1).

SVD can be used to solve a number of problems in linear algebra.

Problem
Implement the one-sided Jacobi SVD algorithm.

Algorithm
In this method the elementary iteration is given as

A → A J(θ,p,q)

where the indices (p,q) are swept cyclically (p=1..n, q=p+1..n) and where the angle θ is chosen such that columns p and q of the matrix AJ(θ,p,q) become orthogonal. One can show that the angle should be taken from the following equation (you should use the atan2 function),

tan(2θ) = 2apᵀaq / (aqᵀaq - apᵀap)

where ai is the i-th column of matrix A (check this).

After the iterations converge and the matrix A'=AJ (where J is the accumulation of the individual rotations) has orthogonal columns, the SVD is simply given as

A = U D Vᵀ

where

V=J, Dii=||a'i||, ui=a'i/||a'i||,

where a'i is the i-th column of matrix A' and ui is the i-th column of matrix U.
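
A compact sketch of the whole procedure in C# (plain double[,] arrays instead of the course matrix class; the convergence threshold and the sign convention of J are assumptions of this sketch):

using System;
static class OneSidedJacobiSVD {
	// Sketch: one-sided Jacobi SVD, A = U D Vᵀ, for a real square matrix.
	// Convention used for the rotation: J_pp=J_qq=cosθ, J_pq=sinθ, J_qp=-sinθ.
	public static (double[,] U, double[] D, double[,] V) Decompose(double[,] Ain){
		int n = Ain.GetLength(0);
		var A = (double[,])Ain.Clone();
		var V = new double[n,n];
		for(int i=0;i<n;i++) V[i,i] = 1;
		bool changed;
		do{
			changed = false;
			for(int p=0;p<n-1;p++) for(int q=p+1;q<n;q++){
				double app=0, aqq=0, apq=0;
				for(int i=0;i<n;i++){ app+=A[i,p]*A[i,p]; aqq+=A[i,q]*A[i,q]; apq+=A[i,p]*A[i,q]; }
				if(Math.Abs(apq) < 1e-15*Math.Sqrt(app*aqq)) continue;   // columns p,q already orthogonal
				changed = true;
				double theta = 0.5*Math.Atan2(2*apq, aqq-app);           // angle from the formula above
				double c = Math.Cos(theta), s = Math.Sin(theta);
				for(int i=0;i<n;i++){                                    // A ← A·J(θ,p,q)
					double aip=A[i,p], aiq=A[i,q];
					A[i,p] = c*aip - s*aiq;  A[i,q] = s*aip + c*aiq;
					double vip=V[i,p], viq=V[i,q];                       // accumulate V ← V·J(θ,p,q)
					V[i,p] = c*vip - s*viq;  V[i,q] = s*vip + c*viq;
				}
			}
		}while(changed);
		var D = new double[n]; var U = new double[n,n];
		for(int j=0;j<n;j++){                                            // D_jj = ||a'_j||, u_j = a'_j/||a'_j||
			double norm2=0; for(int i=0;i<n;i++) norm2 += A[i,j]*A[i,j];
			D[j] = Math.Sqrt(norm2);
			for(int i=0;i<n;i++) U[i,j] = D[j]>0 ? A[i,j]/D[j] : (i==j?1:0);
		}
		return (U, D, V);
	}
}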

— 2 —

Akima sub-spline

Implement the Akima (sub-)spline interpolation. See the section "Akima sub-spline interpolation" in the book for the inspiration.

— 3 —

Bi-linear interpolation on a rectilinear grid in two dimensions

Introduction
A rectilinear grid (note that a rectilinear grid is not necessarily Cartesian nor regular) in two dimensions is a set of nx×ny points where each point can be addressed by a double index (i,j), where 1 ≤ i ≤ nx, 1 ≤ j ≤ ny, and the coordinates of the point (i,j) are given as (xi,yj), where x and y are vectors with sizes nx and ny correspondingly. The values of the tabulated function F at the grid points can then be arranged as a matrix {Fi,j=F(xi,yj)}.
Problem
Build an interpolating routine which takes as input the vectors {xi} and {yj} and the matrix {Fi,j}, and returns the bi-linearly interpolated value of the function at a given 2D point p=(px,py).
Hints
See the chapter "Bi-linear interpolation" in the book.

The signature of the interpolating subroutine can be

static double bilinear(double[] x, double[] y, matrix F, double px, double py)
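
A minimal sketch of such a routine (here F is passed as a plain double[,] instead of the course matrix class, and the binary search assumes that px and py lie inside the grid):

using System;
static class BilinearInterp {
	public static double bilinear(double[] x, double[] y, double[,] F, double px, double py){
		int i = BinSearch(x, px);                       // x[i] <= px <= x[i+1]
		int j = BinSearch(y, py);                       // y[j] <= py <= y[j+1]
		double tx = (px - x[i])/(x[i+1] - x[i]);        // relative position inside the cell
		double ty = (py - y[j])/(y[j+1] - y[j]);
		return F[i,j]*(1-tx)*(1-ty) + F[i+1,j]*tx*(1-ty)
		     + F[i,j+1]*(1-tx)*ty   + F[i+1,j+1]*tx*ty;
	}
	static int BinSearch(double[] t, double z){         // largest i (at most t.Length-2) with t[i] <= z
		int i=0, j=t.Length-1;
		while(j-i>1){ int m=(i+j)/2; if(z < t[m]) j=m; else i=m; }
		return i;
	}
}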

— 4 —

Yet another stochastic global optimizer

Implement a stochastic global optimizer using the following algorithm,

  1. Use the given number of seconds to search for the global minimum of the given function in the given volume by sampling the function using a low-discrepancy sequence.
  2. From the best point found at the previous step, run your quasi-Newton minimizer.
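
A sketch of how the two phases could be organized (the Halton sequence is used here as an example of a low-discrepancy sequence; the qnewton call in the comment stands for your own quasi-Newton minimizer and is only a placeholder):

using System;
using System.Diagnostics;
static class StochasticGlobal {
	static double Corput(int k, int b){                            // van der Corput radical inverse in base b
		double q=0, bk=1.0/b;
		while(k>0){ q += (k%b)*bk; k/=b; bk/=b; }
		return q;
	}
	static readonly int[] primes = {2,3,5,7,11,13,17,19,23,29};    // bases for the first 10 dimensions
	static double[] Halton(int k, int dim){
		var t = new double[dim];
		for(int d=0; d<dim; d++) t[d] = Corput(k, primes[d]);
		return t;
	}
	public static double[] Search(Func<double[],double> f, double[] a, double[] b, double seconds){
		int dim=a.Length; var best=new double[dim]; double fbest=double.PositiveInfinity;
		var sw = Stopwatch.StartNew();
		for(int k=1; sw.Elapsed.TotalSeconds < seconds; k++){
			double[] t = Halton(k, dim), x = new double[dim];
			for(int d=0; d<dim; d++) x[d] = a[d] + t[d]*(b[d]-a[d]);   // map [0,1]^dim into the given volume
			double fx = f(x);
			if(fx < fbest){ fbest = fx; Array.Copy(x, best, dim); }
		}
		// phase 2: best = qnewton(f, best, acc);                      // refine with your own minimizer
		return best;
	}
}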

— 5 —

LU decomposition of a real square matrix

Implement the LU decomposition of a real square matrix.

Try also to implement the corresponding linear equation solver, determinant, and inverse-matrix calculator.

Is LU faster than QR?
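
A minimal sketch of the decomposition and the solver (Doolittle scheme without pivoting, so nonzero pivots are assumed; plain double[,] arrays):

using System;
static class LUDecomp {
	// A = L·U with L unit lower triangular and U upper triangular.
	public static (double[,] L, double[,] U) Decompose(double[,] A){
		int n = A.GetLength(0);
		var L = new double[n,n]; var U = new double[n,n];
		for(int i=0;i<n;i++) L[i,i] = 1;
		for(int j=0;j<n;j++){
			for(int i=0;i<=j;i++){                        // row i of U
				double s=0; for(int k=0;k<i;k++) s += L[i,k]*U[k,j];
				U[i,j] = A[i,j] - s;
			}
			for(int i=j+1;i<n;i++){                       // column j of L
				double s=0; for(int k=0;k<j;k++) s += L[i,k]*U[k,j];
				L[i,j] = (A[i,j] - s)/U[j,j];             // assumes U[j,j] != 0 (no pivoting)
			}
		}
		return (L, U);
	}
	// Solve A x = b: forward substitution L y = b, then back substitution U x = y.
	public static double[] Solve(double[,] L, double[,] U, double[] b){
		int n=b.Length; var y=new double[n]; var x=new double[n];
		for(int i=0;i<n;i++){ double s=b[i]; for(int k=0;k<i;k++) s -= L[i,k]*y[k]; y[i]=s; }
		for(int i=n-1;i>=0;i--){ double s=y[i]; for(int k=i+1;k<n;k++) s -= U[i,k]*x[k]; x[i]=s/U[i,i]; }
		return x;
	}
	// det(A) = product of the diagonal elements of U (L has unit diagonal).
	public static double Determinant(double[,] U){ double d=1; for(int i=0;i<U.GetLength(0);i++) d *= U[i,i]; return d; }
}

The inverse can be obtained by solving A x = e_i for each unit vector e_i; the timing comparison with QR can then be done exactly as in the linear-equations homework.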

— 6 —

Adaptive 1D integrator with random nodes

Implement our ordinary adaptive one-dimensional division-by-two integration algorithm where the abscissas at each iteration are chosen randomly (basically, the stratified sampling strategy applied to a one-dimensional integral). Reuse points by passing the relevant statistics to the next level of iteration. For the error estimate you can use either the variance of your sample (as in plain Monte Carlo), or simply the difference between the function average sampled with the new points and the function average inherited from the previous iteration.

Find the optimum number N (the number of new points thrown at each iteration) by experimenting with your favourite integrals.

You might need to use the Clenshaw-Curtis transformation for high-precision singular integrals, as random abscissas probably provide a relatively slow convergence rate, if any.

Compare with our predefined-nodes adaptive integrators.

— 7 —

Yet another cubic sub-spline
Introduction

The ordinary cubic spline sometimes makes unpleasant wiggles, for example around an outlier, or around a step-like feature of the tabulated function (read the Akima sub-spline chapter in the lecture notes). Here is yet another attempt to reduce the wiggles by building a sub-spline.

Project

Consider a data set {xi, yi}i=1,..,n which represents a tabulated function.

Implement a sub-spline of this data using the following algorithm:

  1. For each inner point xi, i=2,..,n-1, build a quadratic interpolating polynomial through the points xi-1, xi, xi+1 and estimate, using this polynomial, the first derivative pi of the function at the point xi;
  2. For the first and the last points, estimate p1 and pn from the same polynomials that you used to estimate p2 and pn-1, respectively.
  3. Now you have a data set {xi, yi, pi}i=1,..,n where the function is tabulated together with its derivative: build a cubic sub-spline for it,

    Si(x) = yi + bi(x-xi) + ci(x-xi)² + di(x-xi)³,

    where for each interval the three coefficients bi, ci, di are determined by the three conditions,

    Si(xi+1)=yi+1,
    S'i(xi)=pi,
    S'i(xi+1)=pi+1.
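
The three conditions in step 3 can be solved in closed form on each interval. A small sketch (check the algebra against your own derivation before using it):

using System;
static class SubSplineCoeffs {
	// Coefficients of S_i on [x_i, x_{i+1}], where h = x_{i+1}-x_i.
	public static (double b, double c, double d) Coeffs(double yi, double yi1, double pi, double pi1, double h){
		double dy = yi1 - yi;
		double b = pi;                              // from S'_i(x_i) = p_i
		double c = (3*dy/h - 2*pi - pi1)/h;         // from S_i(x_{i+1}) = y_{i+1} and S'_i(x_{i+1}) = p_{i+1}
		double d = (pi + pi1 - 2*dy/h)/(h*h);
		return (b, c, d);
	}
}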

Extra
Derivative and integral of the spline.

— 8 —

Adaptive two-dimensional integrator

Implement a two-dimensional integrator for integrals in the form

∫ab dx ∫d(x)u(x) dy f(x,y)    (the outer integral runs over a ≤ x ≤ b, the inner over d(x) ≤ y ≤ u(x))

which applies your favourite adaptive one-dimensional integrator along each of the two dimensions. The signature might be something like

static double integ2D(
	Func<double,double,double> f,
	double a, double b,
	Func<double,double> d,
	Func<double,double> u,
	double acc, double eps)
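
A sketch of how the nesting can look (a minimal recursive 1D adaptive integrator with the reusable open 4-point rule is included so that the example is self-contained; in the project you would plug in your own 1D integrator):

using System;
static class TwoDimInteg {
	public static double integ2D(Func<double,double,double> f, double a, double b,
			Func<double,double> d, Func<double,double> u, double acc, double eps)
		=> integ1D(x => integ1D(y => f(x,y), d(x), u(x), acc, eps), a, b, acc, eps);

	// Minimal recursive adaptive 1D integrator: higher/lower order rules on the points {1,2,4,5}/6;
	// the two middle points are reused by the two subintervals.
	public static double integ1D(Func<double,double> f, double a, double b, double acc, double eps,
			double f2 = double.NaN, double f3 = double.NaN){
		double h = b-a;
		if(double.IsNaN(f2)){ f2 = f(a+2*h/6); f3 = f(a+4*h/6); }   // first call: nothing to reuse
		double f1 = f(a+h/6), f4 = f(a+5*h/6);
		double Q = (2*f1+f2+f3+2*f4)/6*h;                           // higher-order (trapezium-like) rule
		double q = (f1+f2+f3+f4)/4*h;                               // lower-order (rectangle) rule
		double err = Math.Abs(Q-q);
		if(err <= acc + eps*Math.Abs(Q)) return Q;
		return integ1D(f, a, (a+b)/2, acc/Math.Sqrt(2), eps, f1, f2)
		     + integ1D(f, (a+b)/2, b, acc/Math.Sqrt(2), eps, f3, f4);
	}
}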

— 9 —

Numeric Hessian matrix in Newton's minimization method

Implement the (modified) Newton's minimization method where the Hessian matrix is computed numerically using a finite-difference approximation (for example, equation (48) in the Minimization Chapter).

Compare with your quasi-Newton minimizer.
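
The distinctive new piece is the numerical Hessian; here is a sketch built from finite differences of the gradient (the gradient can itself be numerical; the step size below is only one reasonable choice, so adjust it to the formula in the Minimization chapter):

using System;
static class NewtonNumericHessian {
	// Forward-difference Hessian: H_ik ≈ [∂φ/∂x_i(x + δx_k e_k) - ∂φ/∂x_i(x)]/δx_k, then symmetrized.
	public static double[,] Hessian(Func<double[],double[]> gradient, double[] x){
		int n = x.Length;
		var H = new double[n,n];
		var g0 = gradient(x);
		for(int k=0;k<n;k++){
			double dx = Math.Abs(x[k])*1e-6 + 1e-9;      // finite-difference step (an assumption)
			x[k] += dx;
			var g = gradient(x);
			for(int i=0;i<n;i++) H[i,k] = (g[i]-g0[i])/dx;
			x[k] -= dx;                                  // restore x
		}
		for(int i=0;i<n;i++) for(int j=i+1;j<n;j++){     // symmetrize: H ← (H+Hᵀ)/2
			double hij = (H[i,j]+H[j,i])/2; H[i,j]=hij; H[j,i]=hij;
		}
		return H;
	}
}

The Newton step is then Δx = -H⁻¹∇φ, which you can obtain with your own QR (or LU) solver rather than an explicit inverse.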

— 10 —

Adaptive recursive integrator with subdivision into three subintervals.

Implement a (one-dimensional) adaptive recursive integrator which at each iteration subdivides the interval not into two, but into three sub-intervals. Reuse points. Compare its effectiveness with the adaptive integrator from your homework.

For example, a rule like this (check that the numbers are correct),

xi={1/6,3/6,5/6} reusable points for division by 3;
wi={3/8,2/8,3/8} trapezium rule;
vi={1/3,1/3,1/3} rectangle rule;
Extra: try quadratic vs trapezium.

— 11 —

Symmetric rank-1 update of a size-n symmetric eigenvalue problem

The matrix A to diagonalize is given in the form

A = D + uuᵀ ,

where D is a diagonal matrix and u is a column-vector.

Given the diagonal elements of the matrix D and the elements of the vector u find the eigenvalues of the matrix A using only O(n²) operations (see section "Eigenvalues of updated matrix" in the book).
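
One possible route (a sketch under the assumptions that the diagonal elements dk are distinct, sorted in ascending order, and that all uk are nonzero): the eigenvalues are the roots of the secular equation 1 + Σk uk²/(dk-λ) = 0, with exactly one root between consecutive poles and one root above the largest dk, so n bisections of O(n) work each give the required O(n²) total.

using System;
static class RankOneUpdate {
	static double Secular(double[] d, double[] u, double lambda){
		double s = 1;
		for(int k=0;k<d.Length;k++) s += u[k]*u[k]/(d[k]-lambda);
		return s;
	}
	static double Bisect(double[] d, double[] u, double lo, double hi, double tol){
		while(hi-lo > tol*(Math.Abs(lo)+Math.Abs(hi)) + tol){
			double mid = (lo+hi)/2;
			if(Secular(d,u,mid) < 0) lo = mid; else hi = mid;   // the secular function increases between poles
		}
		return (lo+hi)/2;
	}
	public static double[] Eigenvalues(double[] d, double[] u, double tol=1e-12){
		int n = d.Length;
		double uu = 0; for(int k=0;k<n;k++) uu += u[k]*u[k];
		var lambda = new double[n];
		for(int k=0;k<n;k++){
			double lo = d[k] + 1e-12*(1+Math.Abs(d[k]));                 // just above the pole at d_k
			double hi = k < n-1 ? d[k+1] - 1e-12*(1+Math.Abs(d[k+1]))    // just below the pole at d_{k+1}
			                    : d[n-1] + uu;                           // the last root lies in (d_n, d_n + uᵀu]
			lambda[k] = Bisect(d, u, lo, hi, tol);
		}
		return lambda;
	}
}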

— 12 —

Bare Bones Particle Swarm Optimization

Implement the bare bones particle swarm optimization algorithm as described in the book. The normal distribution can be approximated by the (shifted) n=12 Irwin-Hall distribution (read the book),

Σi=1..12 ui - 6 ,

where the ui are samples from the uniform distribution on [0,1].
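
A tiny sketch of the corresponding sampler and of the per-coordinate update it is typically used for (the N((p+g)/2, |p-g|) form of the bare-bones update is the common textbook formulation; check it against the book):

using System;
static class BareBones {
	static readonly Random rnd = new Random();
	// Approximate standard normal: sum of 12 uniforms minus 6 (mean 0, variance 1).
	public static double NormalApprox(){
		double s = 0;
		for(int i=0;i<12;i++) s += rnd.NextDouble();
		return s - 6;
	}
	// One coordinate of the bare-bones update: sample around the midpoint of the particle's
	// own best p and the swarm's best g, with spread |p-g|.
	public static double Update(double p, double g)
		=> (p+g)/2 + Math.Abs(p-g)*NormalApprox();
}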

— 13 —

Inverse iteration algorithm for eigenvalues (and eigenvectors)

The inverse iteration algorithm allows calculation of the eigenvalue (and the corresponding eigenvector) that is closest to a given value. Read the book and/or the [inverse iteration] article in Wikipedia.

Implement the inverse iteration algorithm that calculates the eigenvalue (and the corresponding eigenvector) of the given symmetric matrix A that is closest to the given shift-value s. Use your own linear algebra routines.

The interface could be like this (check the formulae!),

(double,vector) inverse_iteration (matrix A,double s) :
	x_old = random_vector
	Q,R = qrdecomp(A-s*1)
	x_new = qrsolve(Q,R,x_old)
	lambda_new = s + (x_new•x_old)/(x_new•x_new)
	do :
		x_old = x_new
		lambda_old = lambda_new
		x_new = qrsolve(Q,R,x_old)
		lambda_new = s+(x_new•x_old)/(x_new•x_new)
		if abs(lambda_new-lambda_old) < acc :
			break
	while true
	return (lambda_new,x_new)
Alternatively, the user can (optionally) provide the starting vector if they already have a good approximation,
(double,vector) inverse_iteration (matrix A,double s,vector x_old=null) :
	if(x_old == null) x_old = random_vector
	...

— 14 —

Cholesky decomposition of a real symmetric positive-definite matrix

For a real symmetric positive-definite matrix, A, the Cholesky decomposition, A = LLᵀ (where L is lower-triangular), is more efficient than the LU-decomposition. Read the book and/or the Wikipedia article on the Cholesky decomposition.

Implement the Cholesky decomposition of a real symmetric positive-definite matrix using the Cholesky-Banachiewicz or the Cholesky-Crout algorithm.

Try also to implement the corresponding linear equation solver, the calculation of the determinant, and the calculation of the inverse matrix.
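
A sketch of the Cholesky-Banachiewicz variant (row by row) together with the solver (plain double[,] arrays):

using System;
static class CholeskyDecomp {
	// A = L·Lᵀ; throws if A is not (numerically) positive definite.
	public static double[,] Decompose(double[,] A){
		int n = A.GetLength(0);
		var L = new double[n,n];
		for(int i=0;i<n;i++)
			for(int j=0;j<=i;j++){
				double s = A[i,j];
				for(int k=0;k<j;k++) s -= L[i,k]*L[j,k];
				if(i==j){
					if(s<=0) throw new ArgumentException("matrix is not positive definite");
					L[i,i] = Math.Sqrt(s);
				}
				else L[i,j] = s/L[j,j];
			}
		return L;
	}
	// Solve A x = b: forward substitution L y = b, then back substitution Lᵀ x = y.
	public static double[] Solve(double[,] L, double[] b){
		int n = b.Length; var y = new double[n]; var x = new double[n];
		for(int i=0;i<n;i++){ double s=b[i]; for(int k=0;k<i;k++) s -= L[i,k]*y[k]; y[i]=s/L[i,i]; }
		for(int i=n-1;i>=0;i--){ double s=y[i]; for(int k=i+1;k<n;k++) s -= L[k,i]*x[k]; x[i]=s/L[i,i]; }
		return x;
	}
	// det(A) = Π L_ii² ; the inverse follows from solving A x = e_i for each unit vector e_i.
	public static double Determinant(double[,] L){ double d=1; for(int i=0;i<L.GetLength(0);i++) d *= L[i,i]*L[i,i]; return d; }
}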

— 15 —

Symmetric row/column update of a size-n symmetric eigenvalue problem

The matrix to diagonalize is given in the form

A = D + e(p)uᵀ + u e(p)ᵀ

where D is a diagonal matrix with diagonal elements {dk, k=1,...,n}, u is a given column-vector, and the vector e(p), with components e(p)i = δip, is the unit vector in the direction p, where 1≤p≤n.

Given the diagonal elements of the matrix D, the vector u, and the integer p, calculate the eigenvalues of the matrix A using O(n²) operations (see section "Eigenvalues of updated matrix" in the book).

— 16 —

Eigenvalues via minimization: Rayleigh quotient variational method

Introduction

In mathematics the Rayleigh quotient for a given real symmetric matrix H and a non-zero vector v is defined as

R(H,v) = (vᵀHv)/(vᵀv) .

It can be shown that the Rayleigh quotient reaches its minimum value λmin—the smallest eigenvalue of H—when v equals vmin (the corresponding eigenvector),

∀v≠0: R(H,v) ≥ λmin , and R(H,vmin) = λmin if Hvmin = λminvmin.
Analogously,
∀v≠0: R(H,v) ≤ λmax , and R(H,vmax) = λmax if Hvmax = λmaxvmax.

In quantum mechanics this is called Variational Method.

Project

Implement a function that calculates the lowest eigenvalue and the corresponding eigenvector of a given symmetric matrix H by minimizing its Rayleigh quotient R(H,v) using the components of the vector v as free parameters. Use your own quasi-Newton minimization routine and customize it to use the analytic gradient and re-normalization (see below).

The gradient ∂R/∂vk is analytic in this case (check the formula before using),

∂R/∂vk = 2[(Hv)k - R vk]/(vᵀv)
which helps enormously in your search for the minimum.

Hint: you need to keep the norm of your v-vector close to unity: if the norm becomes larger than, say, maxnorm≈1.5 (or smaller than 1/maxnorm), you need to re-normalize your v-vector to unity and (possibly) reset your B-matrix to the identity matrix.
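
A sketch of the two ingredients the minimizer needs, the Rayleigh quotient and its analytic gradient (plain double[,]/double[] arrays; verify the gradient formula against your own derivation):

using System;
static class Rayleigh {
	public static double R(double[,] H, double[] v){
		int n = v.Length; double vHv = 0, vv = 0;
		for(int i=0;i<n;i++){
			vv += v[i]*v[i];
			for(int j=0;j<n;j++) vHv += v[i]*H[i,j]*v[j];
		}
		return vHv/vv;
	}
	// ∂R/∂v_k = 2[(Hv)_k - R v_k]/(vᵀv)
	public static double[] GradR(double[,] H, double[] v){
		int n = v.Length; var g = new double[n];
		double vv = 0; for(int i=0;i<n;i++) vv += v[i]*v[i];
		double r = R(H, v);
		for(int k=0;k<n;k++){
			double Hvk = 0; for(int j=0;j<n;j++) Hvk += H[k,j]*v[j];
			g[k] = 2*(Hvk - r*v[k])/vv;
		}
		return g;
	}
}

The re-normalization from the hint then amounts to v ← v/||v|| (and a reset of the quasi-Newton B-matrix) whenever ||v|| drifts outside [1/maxnorm, maxnorm].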

Extra

Investigate the scaling of this method (time to find the eigenvalue as function of the size of the matrix) and find out how far in the size of the matrix you can realistically go (that is, within a few seconds of calculation time).

Calculate also the largest eigenvalue.

Calculate the ground state of the Hydrogen atom.

Actually, the Hessian matrix for this problem is also analytic; you might want to investigate whether this can help accelerate the convergence.

— 17 —

Particle swarm optimization

Implement the particle swarm optimization algorithm.
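
For orientation, here is a sketch of the canonical velocity/position update for a single particle; the inertia and acceleration coefficients (ω, c1, c2) are common textbook values, not values prescribed by the project. The full optimizer keeps a swarm of such particles, evaluates the objective function, and updates each particle's best point pbest and the swarm's best point gbest.

using System;
static class ParticleSwarm {
	const double omega = 0.7, c1 = 1.5, c2 = 1.5;          // illustrative parameter choices
	static readonly Random rnd = new Random();
	public static void Update(double[] x, double[] v, double[] pbest, double[] gbest){
		for(int d=0; d<x.Length; d++){
			v[d] = omega*v[d]
			     + c1*rnd.NextDouble()*(pbest[d]-x[d])      // pull towards the particle's own best
			     + c2*rnd.NextDouble()*(gbest[d]-x[d]);     // pull towards the swarm's best
			x[d] += v[d];
		}
	}
}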

— 18 —

Cubic (sub-)spline for data with derivatives

Introduction
In practice a function is often tabulated together with its derivative (for example, after numerical integration of a second order ordinary differential equation). The table is then given as {xi, yi, y'i}i=1,..,n , where yi is the value of the tabulated function, and y'i is the value of the derivative of the tabulated function at the point xi.
Project
Given a data-set, {xi, yi, y'i}i=1,..,n , build a cubic sub-spline,

S(x) = Si(x) for x∈[xi,xi+1] ,

where

Si(x) = yi + bi(x-xi) + ci(x-xi)² + di(x-xi)³.

For each interval the three coefficients bi, ci, di are determined by the three conditions,

Si(xi+1)=yi+1,
S'i(xi)=y'i,
S'i(xi+1)=y'i+1.

See the subsection "Akima sub-spline interpolation" for the inspiration.

Extra
  1. Derivative and integral of the spline.
  2. Make the second derivative of the spline continuous by increasing the order of the spline to four, for example, in the form

    Si(x) = yi + bi(x-xi) + ci(x-xi)² + di(x-xi)³ + ei(x-xi)²(x-xi+1)² ,

    and choosing the coefficients ei such that the spline has continuous second derivative.

— 19 —

Implicit Heun's stepper for Ordinary Differential Equations

The step is given by the formula,

yx+h = yx + (h/2)·[f(x,yx) + f(x+h,yx+h)] ,
where the vector yx+h has to be found by solving numerically the above equation using your own root-finder.

The starting point for your root-finding can be the Euler step,

yx+h ≈ yx + h·f(x,yx)
and the error estimate could be the difference between the Euler step and the implicit Heun step.
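
A sketch of a single step; here the implicit equation is solved by simple fixed-point iteration started from the Euler step (an assumption made to keep the example self-contained; in the project you would call your own root-finder, e.g. Newton's method, instead):

using System;
static class ImplicitHeun {
	// Returns the implicit Heun step y_{x+h} and the (Heun - Euler) error estimate.
	public static (double[] yh, double[] err) Step(
			Func<double,double[],double[]> f, double x, double[] y, double h){
		int n = y.Length;
		double[] fx = f(x, y);
		var yEuler = new double[n];
		for(int i=0;i<n;i++) yEuler[i] = y[i] + h*fx[i];          // Euler step = starting point
		var yh = (double[])yEuler.Clone();
		for(int iter=0; iter<999; iter++){                        // solve y_h = y + h/2·[f(x,y)+f(x+h,y_h)]
			double[] fh = f(x+h, yh);
			var ynew = new double[n];
			double diff = 0;
			for(int i=0;i<n;i++){ ynew[i] = y[i] + h/2*(fx[i]+fh[i]); diff += Math.Abs(ynew[i]-yh[i]); }
			yh = ynew;
			if(diff < 1e-12) break;                               // fixed point reached
		}
		var err = new double[n];
		for(int i=0;i<n;i++) err[i] = yh[i] - yEuler[i];
		return (yh, err);
	}
}

Fixed-point iteration only converges for sufficiently small steps; a Newton-based root-finder removes that restriction.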

— 20 —

ODE: a two-step method

Implement a two-step stepper for solving ODE (as in the book). Use your own stepsize controlling driver.

— 21 —

Hessenberg factorization of a real square matrix using Jacobi transformations

Why Hessenberg matrix?

If the constraints of a linear algebra problem do not allow a general matrix to be conveniently reduced to a triangular form, reduction to Hessenberg form is often the next best thing. Reduction of any matrix to a Hessenberg form can be achieved in a finite number of steps. Many linear algebra algorithms require significantly less computational effort when applied to Hessenberg matrices.

Definitions

A lower Hessenberg matrix H is a square matrix that has zero elements above the first superdiagonal, {Hij=0}j>i+1.

An upper Hessenberg matrix H is a square matrix that has zero elements below the first sub-diagonal, {Hij=0}i>j+1.

Hessenberg factorization is a representation of a square real matrix A in the form A=QHQT where Q is an orthogonal matrix and H is a Hessenberg matrix.

If A is a symmetric matrix, its Hessenberg factorization produces a tridiagonal matrix H.

Hessenberg factorization by Jacobi transformations

Consider a Jacobi transformation with the matrix J(p,q),

A→J(p,q)TAJ(p,q)

where the rotation angle is chosen not to zero the element Ap,q but rather to zero the element Ap-1,q. Argue that the following sequence of such Jacobi transformations,

J(2,3), J(2,4), ..., J(2,n); J(3,4), ..., J(3,n); ...; J(n-1,n)

reduces the matrix to the lower Hessenberg form.

Implement this algorithm. Remember to accumulate the total transformation matrix.

When multiplying with the J-matrices you should use your Jtimes and timesJ routines from Jacobi-eigenvalue homework that take O(n) operations.

Extra
What is faster: Hessenberg factorization or QR-factorization?

Calculate the determinant of the Hessenberg matrix.

— 22 —

Eigenvalues via rootfinding: Variational method with Lagrange multipliers

Introduction
One can find an eigenvalue and the corresponding eigenvector of a real symmetric matrix A by searching for the stationary points of the following constrained form
E(v) = vᵀAv , where vᵀv = 1 .

This can be done using the method of Lagrange multipliers: the original problem is reformulated into unconstrained search for the stationary point (the point where all partial derivatives are zero) of the Lagrangian function defined as

L(v,λ) = vᵀAv - λ(vᵀv-1)
in the space {v,λ}. At the stationary point: λ is the eigenvalue and v is the corresponding eigenvector normalized to unity. Indeed, at the stationary point
∂L(v,λ)/∂v = 2(Av - λv) = 0 ,
∂L(v,λ)/∂λ = vTv-1 = 0 .
Of course this is equivalent to finding the root, in the (n+1)-dimensional space x={v,λ}, of the (n+1)-dimensional vector-function
f(x) = {Av-λv, vᵀv-1}
where n is the size of v.

Note that the Jacobian of this function is analytic,

∂fi(x)/∂vj = Aij-λδij , where i,j≤n ,
∂fi(x)/∂λ = -vi , where i≤n ,
∂fn+1(x)/∂vj = 2vj , where j≤n 
∂fn+1(x)/∂λ = 0 .
The analytic Jacobian greatly facilitates the search for the root.

Project

Implement a subroutine that calculates the eigenvalue (close to a given value λstart) and the corresponding eigenvector of a given symmetric matrix A by searching for the root of the function

f(v,λ) = {Av-λv, vᵀv-1}
using the components of the vector v and the (Lagrange multiplier) λ as free parameters. Use your own quasi-Newton root-finding routine and customize it to use the analytic Jacobian.
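
A sketch of the two pieces the root-finder needs, with the unknowns packed as x = (v1,...,vn, λ) (plain arrays; compare the Jacobian with the formulas above):

using System;
static class EigenByRootfinding {
	// f(v,λ) = {Av - λv, vᵀv - 1}
	public static double[] F(double[,] A, double[] x){
		int n = x.Length-1; double lambda = x[n];
		var f = new double[n+1]; double vv = 0;
		for(int i=0;i<n;i++){
			double Av = 0; for(int j=0;j<n;j++) Av += A[i,j]*x[j];
			f[i] = Av - lambda*x[i];
			vv += x[i]*x[i];
		}
		f[n] = vv - 1;
		return f;
	}
	// Analytic Jacobian of f with respect to (v,λ).
	public static double[,] Jacobian(double[,] A, double[] x){
		int n = x.Length-1; double lambda = x[n];
		var J = new double[n+1,n+1];
		for(int i=0;i<n;i++){
			for(int j=0;j<n;j++) J[i,j] = A[i,j] - (i==j ? lambda : 0);   // ∂f_i/∂v_j = A_ij - λδ_ij
			J[i,n] = -x[i];                                               // ∂f_i/∂λ = -v_i
			J[n,i] = 2*x[i];                                              // ∂f_{n+1}/∂v_i = 2v_i
		}
		J[n,n] = 0;                                                       // ∂f_{n+1}/∂λ = 0
		return J;
	}
}

A reasonable starting point is a normalized random v together with λ = λstart.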

Extra

Investigate the scaling of this method (time to find the eigenvalue as function of the size of the matrix) and find out how far in the size of the matrix you can realistically go (that is, within a few seconds of calculation time).

Calculate the ground state of the Hydrogen atom.

It seems that the line-search here can be done analytically; you might want to investigate this possibility.

— 23 —

Two-sided Jacobi algorithm for Singular Value Decomposition (SVD)

Introduction
The SVD of a (real square, for simplicity) matrix A is a representation of the matrix in the form

A = U D Vᵀ ,

where matrix D is diagonal with non-negative elements and matrices U and V are orthogonal. The diagonal elements of matrix D can always be chosen non-negative by multiplying the relevant columns of matrix U by (-1).

SVD can be used to solve a number of problems in linear algebra.

Problem
Implement the two-sided Jacobi SVD algorithm.

Algorithm (as described in the "Eigenvalues" chapter of the book)
In the cyclic Jacobi eigenvalue algorithm for symmetric matrices we applied the elementary Jacobi transformation

A → Jᵀ A J

where J ≐ J(θ,p,q) is the Jacobi matrix (where the angle θ is chosen to eliminate the off-diagonal elements Apq=Aqp) to all upper off-diagonal matrix elements in cyclic sweeps until the matrix becomes diagonal.

The two-sided Jacobi SVD algorithm for general real square matrices is the same as the Jacobi eigenvalue algorithm, except that the elementary transformation is slightly different,

A → Jᵀ Gᵀ A J ,

where

  • G ≐ G(θ,p,q) is the Givens (Jacobi) matrix where the angle is chosen such that the matrix A' ≐ Gᵀ(θ,p,q) A has identical off-diagonal elements, A'pq=A'qp:

    tan(θ) = (Apq - Aqp)/(Aqq + App)

    (check that this is correct).
  • J ≐ J(θ,p,q) is the Jacobi matrix where the angle is chosen to eliminate the off-diagonal elements A'pq=A'qp.

The matrices U and V are updated after each transformation as

U → U G J ,
V → V J .

Of course you should not actually multiply full Jacobi matrices (which costs O(n³) operations per multiplication) but rather perform the corresponding row and column updates, which only cost O(n) operations each.

— 24 —

Berrut interpolation

Implement the Berrut B1 rational function interpolation algorithm.
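
A minimal sketch of the interpolant, with the weights wi = (-1)^i, which is the usual choice for the B1 interpolant (check it against your reference):

using System;
static class Berrut {
	// Barycentric form: b(z) = Σ_i w_i y_i/(z-x_i) / Σ_i w_i/(z-x_i), with w_i = (-1)^i.
	public static double Interpolate(double[] x, double[] y, double z){
		double num = 0, den = 0;
		for(int i=0;i<x.Length;i++){
			if(z == x[i]) return y[i];                   // exactly at a node
			double w = (i%2==0 ? 1.0 : -1.0)/(z - x[i]);
			num += w*y[i];
			den += w;
		}
		return num/den;
	}
}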

— 25 —

Adaptive integration of complex-valued functions

Generalize your favourite adaptive integrator such that it can calculate the integral of a complex-valued function f(z) of a complex variable z along a straight line between two points a and b in the complex plane.
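
A sketch of the straight-line generalization: the path is parametrized as z(t) = a + t(b-a), t∈[0,1], so that the integral becomes (b-a)∫01 f(z(t)) dt, and the adaptive rule is run directly in complex arithmetic with the error measured by the complex magnitude (the open 4-point rule below is only one example of a "favourite" adaptive integrator):

using System;
using System.Numerics;
static class ComplexInteg {
	public static Complex Line(Func<Complex,Complex> f, Complex a, Complex b, double acc=1e-6, double eps=1e-6)
		=> (b-a)*Adapt(t => f(a + t*(b-a)), 0, 1, acc, eps);

	static Complex Adapt(Func<double,Complex> f, double a, double b, double acc, double eps,
			Complex? f2=null, Complex? f3=null){
		double h = b-a;
		Complex c2 = f2 ?? f(a+2*h/6), c3 = f3 ?? f(a+4*h/6);   // reuse inherited points if present
		Complex c1 = f(a+h/6), c4 = f(a+5*h/6);
		Complex Q = (2*c1+c2+c3+2*c4)/6*h;                      // higher-order rule
		Complex q = (c1+c2+c3+c4)/4*h;                          // lower-order rule
		double err = Complex.Abs(Q-q);
		if(err <= acc + eps*Complex.Abs(Q)) return Q;
		return Adapt(f, a, (a+b)/2, acc/Math.Sqrt(2), eps, c1, c2)
		     + Adapt(f, (a+b)/2, b, acc/Math.Sqrt(2), eps, c3, c4);
	}
}

A closed contour around the origin (for the Bessel-function illustration below) can then be assembled from several straight segments, or the integrator can be generalized once more to an arbitrary parametrized curve z(t).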

Illustrate with some interesting contour integrals in the complex plane. For example, implement the Bessel function Jn(x) of integer order using the following contour integral representation,

Jn(x) = 1/(2πi) ∮ dz z^(-n-1) e^(x(z-1/z)/2)

where the contour goes around the origin.