Consider your examination project as an extra homework. Make the project in your repository in a separate folder with an indicative name (like "exam") that I can easily find. Supply a README.TXT or similar file with a short description and (optionally) a self-evaluation of the project on the scale [0,10].
The index of the project assigned to you (from the indexed list below) is given by the last two digits of your student number modulo the number of projects (22). For example, if the last two digits are 45, your project index is 45 mod 22 = 1.
— 0 —
Akima sub-spline
Implement the Akima sub-spline. See the section "Akima sub-spline interpolation" in the book for inspiration.
— 1 —
ODE: a two-step method
Implement a two-step stepper for solving ODEs (as in the book). Use your own step-size-controlling driver.
— 2 —
Lanczos tridiagonalization algorithm
Lanczos tridiagonalization is a representation of a real symmetric matrix, A, in the form A = V T Vᵀ (where V is an orthogonal matrix and T is a tridiagonal matrix) using power iteration and Gram-Schmidt orthogonalization. Read the book and/or the Wikipedia article. The algorithm is often used as the first step in matrix diagonalization (particularly with lossy compression that preserves extreme eigenvalues).
Implement the Lanczos tridiagonalization algorithm for real symmetric matrices.
— 3 —
Bi-linear interpolation on a rectilinear grid in two dimensions
The signature of the interpolating subroutine can be
double bilinear(int nx, int ny, double* x, double* y, double** F, double px, double py)
or
double bilinear(gsl_vector* x, gsl_vector* y, gsl_matrix* F, double px, double py)
— 4 —
Multidimensional pseudo-random (plain Monte Carlo) vs quasi-random (Halton and/or lattice sequence) integrators
Investigate the convergence rates (of some interesting integrals in different dimensions) as a function of the number of sample points.
— 5 —
Adaptive 1D integrator with random nodes
Implement an adaptive one-dimensional integrator with random abscissas. Reuse points. Note that you don't need to pass the points to the next level of recursion, only their statistics.
— 6 —
Adaptive integration of complex-valued functions
Implement an adaptive integrator which calculates the integral of a complex-valued function f(z) of a complex variable z along a straight line between two points in the complex plane.
— 7 —
Cholesky decomposition of a real symmetric positive-definite matrix
For a real symmetric positive-definite matrix, A, the Cholesky decomposition, A = LLᵀ (where L is lower-triangular), is more efficient than the LU-decomposition. Read the book and/or the Wikipedia article on the Cholesky decomposition.
Implement the Cholesky decomposition of a real symmetric positive-definite matrix using the Cholesky-Banachiewicz or the Cholesky-Crout algorithm.
For more points implement also the corresponding linear equation solver, calculation of determinant, and calculation of the inverse matrix.
— 8 —
One-sided Jacobi algorithm for Singular Value Decomposition
A = U D Vᵀ ,
where the matrix D is diagonal with non-negative elements and the matrices U and V are orthogonal. The diagonal elements of D can always be chosen non-negative by multiplying the relevant columns of matrix U with (−1). SVD can be used to solve a number of problems in linear algebra.
The elementary operation of the one-sided Jacobi algorithm is the rotation
A → A J(θ,p,q) ,
where the indices (p,q) are swept cyclically (p=1..n, q=p+1..n) and where the angle θ is chosen such that the columns number p and q of the matrix A J(θ,p,q) are orthogonal. One can show that the angle should be taken from the following equation (you should use the atan2 function),
tan(2θ) = 2 a_pᵀ a_q / (a_qᵀ a_q − a_pᵀ a_p) ,
where a_i is the i-th column of matrix A (check this). After the iterations converge and the matrix A' = A J (where J is the accumulation of the individual rotations) has orthogonal columns, the SVD is simply given as
A = U D Vᵀ ,
where V = J, D_ii = ||a'_i||, u_i = a'_i/||a'_i|| ,
where a'_i is the i-th column of matrix A' and u_i is the i-th column of matrix U.
— 9 —
Rootfinding: 1D complex vs. 2D real
Implement a (quasi-)Newton method for rootfinding for a complex function f of a complex variable z.
Hints: do it the ordinary way, just with complex numbers.
Compare the effectiveness of your complex implementation with your homework multi-dimensional implementation of real rootfinding applied to the equivalent 2D system of two real equations with two real variables x and y.
— 10 —
Yet another cubic sub-spline
Implement a sub-spline of the given data using the following algorithm:
S_i(x) = y_i + b_i(x−x_i) + c_i(x−x_i)² + d_i(x−x_i)³ ,
where for each interval the three coefficients b_i, c_i, d_i are determined by the three conditions,
S_i(x_{i+1}) = y_{i+1} ,
S'_i(x_i) = p_i ,
S'_i(x_{i+1}) = p_{i+1} .
— 11 —
Symmetric row/column update of a size-n symmetric eigenvalue problem
The matrix to diagonalize is given in the form
A = D + e^(p) uᵀ + u e^(p)ᵀ ,
where D is a diagonal matrix with diagonal elements {d_k, k=1,...,n}, u is a given column-vector, and the vector e^(p), with components e^(p)_i = δ_ip, is the unit vector in the direction p, where 1 ≤ p ≤ n.
Given the diagonal elements of the matrix D, the vector u, and the integer p, calculate the eigenvalues of the matrix A using O(n²) operations (see the section "Eigenvalues of updated matrix" in the book).
— 12 —
LU factorization of a Hessenberg matrix
An upper Hessenberg matrix H is a square matrix that has zero elements below the first sub-diagonal, H_ij = 0 for i > j+1.
LU-factorization of a Hessenberg matrix takes only O(n²) operations!
An upper n×n Hessenberg matrix H can be LU-factorized, H = LU, with the L-matrix being bi-diagonal: made of a main diagonal (containing ones) and a sub-diagonal with elements v_i, i=1,...,n−1.
Consider first the 2×2 case. Multiplying from the left with a unit lower-triangular matrix gives

[  1  0 ] [ h11 h12 ]   [     h11          h12     ]
[ -v  1 ] [ h21 h22 ] = [ -v*h11+h21  -v*h12+h22 ]

If we choose v = h21/h11 then the matrix becomes upper triangular,

[  1  0 ] [ h11 h12 ]   [ h11     h12     ]
[ -v  1 ] [ h21 h22 ] = [  0   -v*h12+h22 ] ≡ U

Now the inverse of the left matrix is easy:

[  1  0 ] [ 1  0 ]   [ 1 0 ]
[ -v  1 ] [ v  1 ] = [ 0 1 ]

Therefore the sought operation, which eliminates one sub-diagonal element of the matrix H, is

[ h11 h12 ]   [ 1  0 ] [ u11 u12 ]
[ h21 h22 ] = [ v  1 ] [  0  u22 ]

where v = h21/h11, u11 = h11, u12 = h12, u22 = h22 − v*h12.
Now one simply has to apply this operation to all sub-diagonal elements of the upper Hessenberg matrix H, which leads to the following algorithm (see "A note on the stability of the LU factorization of Hessenberg matrices"):
for i=1:n
    u_{1,i} = h_{1,i}
endfor
for i=1:n-1
    v_i = h_{i+1,i} / u_{i,i}
    for j=i+1:n
        u_{i+1,j} = h_{i+1,j} - v_i * u_{i,j}
    endfor
endfor
— 13 —
Two-sided Jacobi algorithm for Singular Value Decomposition (SVD)
A = U D Vᵀ ,
where the matrix D is diagonal with non-negative elements and the matrices U and V are orthogonal. The diagonal elements of D can always be chosen non-negative by multiplying the relevant columns of matrix U with (−1). SVD can be used to solve a number of problems in linear algebra.
The Jacobi eigenvalue algorithm applies the transformation
A → Jᵀ A J ,
where J ≐ J(θ,p,q) is the Jacobi rotation matrix (with the angle θ chosen to eliminate the off-diagonal elements A_pq = A_qp), to all upper off-diagonal matrix elements in cyclic sweeps until the matrix becomes diagonal.
The two-sided Jacobi SVD algorithm for general real square matrices is the same as the Jacobi eigenvalue algorithm, except that the elementary transformation is slightly different,
A → Jᵀ Gᵀ A J ,
where the angle of the extra rotation G ≐ G(θ,p,q) is given by
tan(θ) = (A_pq − A_qp) / (A_qq + A_pp)
(check that this is correct). The matrices U and V must be updated accordingly after each transformation.
Of course you should not actually multiply Jacobi matrices—which costs O(n³) operations—but rather perform updates which only cost O(n) operations.
— 14 —
Cubic (sub-)spline for data with derivatives
S(x) = S_i(x) for x ∈ [x_i, x_{i+1}] ,
where S_i(x) = y_i + b_i(x−x_i) + c_i(x−x_i)² + d_i(x−x_i)³ .
For each interval the three coefficients bi, ci, di are determined by the three conditions,
S_i(x_{i+1}) = y_{i+1} ,
S'_i(x_i) = y'_i ,
S'_i(x_{i+1}) = y'_{i+1} .
See the subsection "Akima sub-spline interpolation" for the inspiration.
For more points, build instead the modified spline
S_i(x) = y_i + b_i(x−x_i) + c_i(x−x_i)² + d_i(x−x_i)³ + e_i(x−x_i)²(x−x_{i+1})² ,
choosing the coefficients e_i such that the spline has a continuous second derivative.
— 15 —
Symmetric rank-1 update of a size-n symmetric eigenvalue problem
The matrix A to diagonalize is given in the form
A = D + u uᵀ ,
where D is a diagonal matrix and u is a column-vector.
Given the diagonal elements of the matrix D and the elements of the vector u, find the eigenvalues of the matrix A using only O(n²) operations (see the section "Eigenvalues of updated matrix" in the book).
— 16 —
Adaptive recursive integrator with subdivision into three subintervals
Implement a (one-dimensional) adaptive recursive integrator (open or closed quadrature, at your choice) which at each iteration subdivides the interval not into two, but into three sub-intervals. Reuse points. Compare with the adaptive integrator from your homework.
— 17 —
Hessenberg factorization of a real square matrix using Jacobi transformations
If the constraints of a linear algebra problem do not allow a general matrix to be conveniently reduced to a triangular form, reduction to Hessenberg form is often the next best thing. Reduction of any matrix to a Hessenberg form can be achieved in a finite number of steps. Many linear algebra algorithms require significantly less computational effort when applied to Hessenberg matrices.
A lower Hessenberg matrix H is a square matrix that has zero elements above the first super-diagonal, H_ij = 0 for j > i+1.
An upper Hessenberg matrix H is a square matrix that has zero elements below the first sub-diagonal, H_ij = 0 for i > j+1.
Hessenberg factorization is a representation of a square real matrix A in the form A = Q H Qᵀ, where Q is an orthogonal matrix and H is a Hessenberg matrix.
If A is a symmetric matrix, its Hessenberg factorization produces a tridiagonal matrix H.
Consider a Jacobi transformation with the matrix J(p,q),
A → J(p,q)ᵀ A J(p,q) ,
where the rotation angle is chosen not to zero the element A_{p,q} but rather to zero the element A_{p−1,q}. Argue that the following sequence of such Jacobi transformations,
J(2,3), J(2,4), ..., J(2,n); J(3,4), ..., J(3,n); ...; J(n-1,n)
reduces the matrix to the lower Hessenberg form.
Implement this algorithm. Remember to accumulate the total transformation matrix.
Calculate the determinant of the Hessenberg matrix.
— 18 —
Two-dimensional integrator
Implement a two-dimensional integrator for integrals in the form
∫_a^b dx ∫_{d(x)}^{u(x)} dy f(x,y) ,
which consecutively applies your favourite adaptive one-dimensional integrator along each of the two dimensions. The signature might be something like
double integ2D(double a, double b, double d(double x), double u(double x),
double f(double x, double y), double acc, double eps)
— 19 —
Golub-Kahan-Lanczos bidiagonalization
Bidiagonalization is a representation of a real matrix, A, in the form A = U B Vᵀ, where U and V are orthogonal matrices and B is a bidiagonal matrix (a matrix with non-zero elements only on the main diagonal and the first super-diagonal). Bidiagonalization can be used on its own to solve linear systems and ordinary least-squares problems, and to calculate the determinant and the (pseudo-)inverse of a matrix. But it is mostly used as the first step in SVD.
Implement the Golub-Kahan-Lanczos algorithm for matrix bidiagonalization.
For more points implement also the corresponding linear equation solver, the determinant, and the inverse matrix.
— 20 —
ODEs with complex-valued functions of a complex variable
Generalize the ODE solver of your choice to solve ODEs with complex-valued functions of a complex variable along a straight line between the given start- and end-points.
— 21 —
Inverse iteration algorithm for eigenvalues (and eigenvectors)
The inverse iteration algorithm allows calculation of the eigenvalue (and the corresponding eigenvector) that is closest to a given value. Read the book and/or the inverse iteration article in Wikipedia.
Implement the inverse iteration algorithm that calculates the eigenvalue closest to a given value (and the corresponding eigenvector). You should use your own linear algebra routines.