Examination projects (numbering starts with zero)


  1. Cubic sub-spline for data with derivatives

    Introduction
    In practice a function is often tabulated together with its derivative (for example, after numerical integration of a second-order ordinary differential equation). The table is then given as {xi, yi, y'i}i=1,..,n , where yi is the value of the tabulated function and y'i is the value of its derivative at the point xi.
    Problem
    Given a data-set, {xi, yi, y'i}i=1,..,n , build a cubic sub-spline,

    S(x) = Si(x) for x∈[xi,xi+1] ,

    where

    Si(x) = yi + bi(x-xi) + ci(x-xi)² + di(x-xi)³.

    For each interval the three coefficients bi, ci, di are determined by the three conditions,

    Si(xi+1)=yi+1,
    S'i(xi)=y'i,
    S'i(xi+1)=y'i+1.

    See the subsection "Akima sub-spline interpolation" for the inspiration.
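
    The three conditions above can be solved in closed form on each interval. A minimal C sketch (array and function names are illustrative; x, y, yp hold xi, yi, y'i with 0-based indices):

    /* Sketch: coefficients of the cubic sub-spline with prescribed derivatives. */
    void subspline_coefs(int n, double* x, double* y, double* yp,
                         double* b, double* c, double* d){
        for(int i=0; i<n-1; i++){
            double h  = x[i+1]-x[i];                /* interval length */
            double dy = (y[i+1]-y[i])/h;            /* divided difference */
            b[i] = yp[i];                           /* from S'_i(x_i) = y'_i */
            c[i] = (3*dy - 2*yp[i] - yp[i+1])/h;    /* from S_i(x_{i+1}) = y_{i+1} */
            d[i] = (yp[i] + yp[i+1] - 2*dy)/(h*h);  /* and S'_i(x_{i+1}) = y'_{i+1} */
        }
    }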

    Extra
    1. Derivative and the integral of the spline.
    2. Make the second derivative of the spline continuous by increasing the order of the spline to four, for example, in the form

      Si(x) = yi + bi(x-xi) + ci(x-xi)² + di(x-xi)³ + ei(x-xi)²(x-xi+1)² ,

      and choosing the coefficients ei such that the spline has continuous second derivative.
  2. Yet another cubic sub-spline

    Introduction
    The ordinary cubic spline sometimes makes unpleasant wiggles, for example, around an outlier, or around a step-like feature of the tabulated function (read the Akima sub-spline chapter in the lecture notes). Here is yet another attempt to reduce the wiggles by building a sub-spline.
    Problem
    Consider a data set {xi, yi}i=1,..,n which represents a tabulated function.

    Implement a sub-spline of this data using the following algorithm:

    1. For each inner point xi, i=2,..,n-1, build a quadratic interpolating polynomial through the points xi-1, xi, xi+1 and estimate, using this polynomial, the first derivative, pi, of the function at the point xi;
    2. For the first and the last points estimate p1 and pn from the same polynomials that you used to estimate p2 and pn-1, respectively.
    3. Now you have a data set {xi, yi, pi}i=1,..,n where the function is tabulated together with its derivative: build a cubic sub-spline for it,

      Si(x) = yi + bi(x-xi) + ci(x-xi)² + di(x-xi)³,

      where for each interval the three coefficients bi, ci, di are determined by the three conditions,

      Si(xi+1)=yi+1,
      S'i(xi)=pi,
      S'i(xi+1)=pi+1.

    Extra
    Derivative and integral of the spline.
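
    For steps 1 and 2 of the algorithm above, the derivative of the quadratic through (xi-1,yi-1), (xi,yi), (xi+1,yi+1) can be written via divided differences. A minimal C sketch (names are illustrative, 0-based arrays):

    /* Sketch: estimate the derivatives p[i] from the local interpolating quadratics. */
    void estimate_derivatives(int n, double* x, double* y, double* p){
        for(int i=1; i<n-1; i++){
            double d1 = (y[i]-y[i-1])/(x[i]-x[i-1]);
            double d2 = (y[i+1]-y[i])/(x[i+1]-x[i]);
            double c  = (d2-d1)/(x[i+1]-x[i-1]);          /* second divided difference */
            p[i] = d1 + c*(x[i]-x[i-1]);                  /* slope of the quadratic at x[i] */
            if(i==1)   p[0]   = d1 + c*(x[0]-x[1]);       /* same quadratic at the first point */
            if(i==n-2) p[n-1] = d2 + c*(x[n-1]-x[n-2]);   /* same quadratic at the last point */
        }
    }
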
  3. Bi-linear interpolation on a rectilinear grid in two dimensions

    Introduction
    A rectilinear grid (note that a rectilinear grid is not necessarily Cartesian or regular) in two dimensions is a set of nx×ny points where each point can be addressed by a double index (i,j), where 1 ≤ i ≤ nx, 1 ≤ j ≤ ny, and the coordinates of the point (i,j) are given as (xi,yj), where x and y are vectors with sizes nx and ny respectively. The values of the tabulated function F at the grid points can then be arranged as a matrix {Fi,j=F(xi,yj)}.
    Problem
    Build an interpolating routine which takes as the input the vectors {xi} and {yj}, and the matrix {Fi,j} and returns the bi-linear interpolated value of the function at a given 2D-point p=(px,py).
    Hints
    See the chapter "Bi-linear interpolation" in the book.

    The signature of the interpolating subroutine can be

    double bilinear(int nx, int ny, double* x, double* y, double** F, double px, double py)

    or

    double bilinear(gsl_vector* x, gsl_vector* y, gsl_matrix* F, double px, double py)
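
    A minimal sketch of the second signature, using a simple linear search for the cell that contains (px,py) (a binary search is the obvious refinement); it assumes the point lies inside the grid:

    #include <gsl/gsl_vector.h>
    #include <gsl/gsl_matrix.h>
    /* Sketch: bi-linear interpolation on a rectilinear grid. */
    double bilinear(gsl_vector* x, gsl_vector* y, gsl_matrix* F, double px, double py){
        int i=0, j=0;
        while(i < (int)x->size-2 && gsl_vector_get(x,i+1) < px) i++;  /* x[i] <= px <= x[i+1] */
        while(j < (int)y->size-2 && gsl_vector_get(y,j+1) < py) j++;  /* y[j] <= py <= y[j+1] */
        double x0=gsl_vector_get(x,i), x1=gsl_vector_get(x,i+1);
        double y0=gsl_vector_get(y,j), y1=gsl_vector_get(y,j+1);
        double t=(px-x0)/(x1-x0), u=(py-y0)/(y1-y0);                  /* relative cell coordinates */
        return (1-t)*(1-u)*gsl_matrix_get(F,i,j)   + t*(1-u)*gsl_matrix_get(F,i+1,j)
             + (1-t)*u    *gsl_matrix_get(F,i,j+1) + t*u    *gsl_matrix_get(F,i+1,j+1);
    }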

  4. Generalized eigenvalue problem

    Introduction
    A generalized eigenvalue problem is the problem of finding all generalized eigenvectors x and the corresponding generalized eigenvalues λ which obey the generalized eigenvalue equation

    Ax=λNx,

    where A is a real symmetric matrix and N is a real symmetric positive-definite matrix (all eigenvalues of a positive-definite matrix are positive).
    Problem
    Implement a routine for solving the generalized eigenvalue problem using the following algorithm:
    1. Using your Jacobi-eigenvalue routine find the eigenvalue decomposition of the matrix N,

      N = VDV^T,

      where D is the diagonal matrix with (positive) eigenvalues of the matrix N on the diagonal and V is the matrix with the corresponding eigenvectors as its columns. Now the original generalized eigenvalue problem can be represented as an ordinary eigenvalue problem (prove it analytically),

      By=λy,

      where the real symmetric matrix B = (√D)^-1 V^T A V (√D)^-1, and y = √D V^T x.
    2. Using your Jacobi-eigenvalue routine find the eigenvalues and eigenvectors of the matrix B.
    3. Restore the original eigenvectors x.
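
    A plain-C sketch of the three steps (jacobi_cyclic and matrix_alloc are hypothetical homework routines: jacobi_cyclic(n,M,e,V) fills e with the eigenvalues and V with the eigenvectors, as columns, of the symmetric matrix M):

    #include <math.h>
    #include <stdlib.h>
    /* Sketch: generalized eigenvalue problem A x = lambda N x. */
    void gen_eig(int n, double** A, double** N, double* lambda, double** X){
        double **V=matrix_alloc(n,n), **B=matrix_alloc(n,n), **Y=matrix_alloc(n,n);
        double *d=malloc(n*sizeof(double));
        jacobi_cyclic(n, N, d, V);                    /* step 1: N = V D V^T */
        for(int i=0;i<n;i++)for(int j=0;j<n;j++){     /* B = (√D)^-1 V^T A V (√D)^-1 */
            double s=0;                               /* (naive O(n^4); use two O(n^3) products in practice) */
            for(int k=0;k<n;k++)for(int l=0;l<n;l++) s += V[k][i]*A[k][l]*V[l][j];
            B[i][j] = s/sqrt(d[i]*d[j]);
        }
        jacobi_cyclic(n, B, lambda, Y);               /* step 2: B y = lambda y */
        for(int i=0;i<n;i++)for(int j=0;j<n;j++){     /* step 3: columns of X are x = V (√D)^-1 y */
            double s=0;
            for(int k=0;k<n;k++) s += V[i][k]/sqrt(d[k])*Y[k][j];
            X[i][j] = s;
        }
    }
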
  5. Jacobi eigenvalue algorithm: several smallest (largest) eigenvalues.

    Introduction
    In practice, for example in quantum mechanics, one often needs to calculate only a few of the lowest (or largest) eigenvalues and eigenvectors of a given (real symmetric) matrix. In this case the full diagonalization of the matrix might be inefficient.
    Problem
    Implement the variant of the Jacobi eigenvalue algorithm which calculates the given number n of the smallest (largest) eigenvalues and corresponding eigenvectors by consecutively zeroing the off-diagonal elements only in the first (last) n rows of a real symmetric matrix.
    Extra
    Check how much faster it is to calculate only the lowest eigenvalue as compared to full diagonalization.

    Find out for how large n the full diagonalization becomes more efficient.

  6. Hessenberg factorization of a real square matrix using Jacobi transformations

    Why Hessenberg matrix?

    If the constraints of a linear algebra problem do not allow a general matrix to be conveniently reduced to a triangular form, reduction to Hessenberg form is often the next best thing. Reduction of any matrix to a Hessenberg form can be achieved in a finite number of steps. Many linear algebra algorithms require significantly less computational effort when applied to Hessenberg matrices.

    Definitions

    A lower Hessenberg matrix H is a square matrix that has zero elements above the first super-diagonal, Hij = 0 for j > i+1.

    An upper Hessenberg matrix H is a square matrix that has zero elements below the first sub-diagonal, Hij = 0 for i > j+1.

    Hessenberg factorization is a representation of a square real matrix A in the form A = QHQ^T where Q is an orthogonal matrix and H is a Hessenberg matrix.

    If A is a symmetric matrix, its Hessenberg factorization produces a tridiagonal matrix H.

    Hessenberg factorization by Jacobi transformations

    Consider a Jacobi transformation with the matrix J(p,q),

    A → J(p,q)^T A J(p,q)

    where the rotation angle is chosen not to zero the element Ap,q but rather to zero the element Ap-1,q. Argue that the following sequence of such Jacobi transformations,

    J(2,3), J(2,4), ..., J(2,n); J(3,4), ..., J(3,n); ...; J(n-1,n)

    reduces the matrix to the lower Hessenberg form.

    Implement this algorithm. Remember to accumulate the total transformation matrix.
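
    A possible way to fix the rotation angle (the sign convention should match your own Jacobi routine): with Jpq = sinθ and Jqq = cosθ, the right multiplication A J(p,q) only mixes columns p and q, so

    (A J)p-1,q = Ap-1,p sinθ + Ap-1,q cosθ = 0  ⇒  tanθ = -Ap-1,q/Ap-1,p ,

    and the subsequent left multiplication by J(p,q)^T only mixes rows p and q, leaving row p-1, and hence the newly created zero, untouched.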

    Extra
    What is faster: Hessenberg factorization or QR-factorization?

    Calculate the determinant of the Hessenberg matrix.

  7. LU factorization of a Hessenberg matrix
    Why?
    LU-factorization of a Hessenberg matrix takes only O(n²) operations!

    An upper n×n Hessenberg matrix H can be LU-factorized, H=LU, with the L-matrix being bidiagonal: made of a main diagonal (containing ones) and a subdiagonal with elements vi, i=1,..,n-1.

    Algorithm
    The elementary operation for a 2×2 matrix is given as
    [  1 0 ] [ h11 h12 ] = [     h11           h12   ]
    [ -v 1 ] [ h21 h22 ]   [ -v*h11+h21   -v*h12+h22 ]
    
    If we choose v=h21/h11 then the matrix becomes upper triangular,
    [  1 0 ] [ h11 h12 ] = [  h11       h12   ] ≡ U
    [ -v 1 ] [ h21 h22 ]   [   0   -v*h12+h22 ]
    
    Now the inverse of the left matrix is easy:
    [  1 0 ] [ 1 0 ] = [ 1 0 ]
    [ -v 1 ] [ v 1 ]   [ 0 1 ]
    
    Therefore the sought operation, which eliminates one sub-diagonal element of matrix H, is
    [ h11 h12 ] = [  1 0 ] [ u11 u12 ]
    [ h21 h22 ]   [  v 1 ] [  0  u22 ]
    
    where v=h21/h11, u11=h11, u12=h12, u22=h22-v*h12.

    Now one simply has to apply this operation to all sub-diagonal elements of the upper Hessenberg matrix H, which leads to the following algorithm (see "A note on the stability of the LU factorization of Hessenberg matrices"):

    for i=1:n
    	u1,i=h1,i
    endfor
    for i=1:n-1
    	vi=hi+1,i/ui,i
    	for j=i+1:n
    		ui+1,j=hi+1,j-vi*ui,j
    	endfor
    endfor
    
    Problem
    Implement this O(n²) algorithm for LU factorization of a Hessenberg matrix.
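
    A direct C transcription of the pseudocode might look like this (0-based indices; U is stored as a full matrix for simplicity, v holds the sub-diagonal of L):

    /* Sketch: O(n²) LU factorization of an upper Hessenberg matrix, H = L U. */
    void hessenberg_lu(int n, double** H, double* v, double** U){
        for(int j=0; j<n; j++) U[0][j] = H[0][j];     /* first row of U equals first row of H */
        for(int i=0; i<n-1; i++){
            v[i] = H[i+1][i]/U[i][i];                 /* sub-diagonal element of L */
            for(int j=i+1; j<n; j++)
                U[i+1][j] = H[i+1][j] - v[i]*U[i][j]; /* update row i+1 of U */
        }
    }
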
  8. Two-sided Jacobi algorithm for Singular Value Decomposition (SVD)

    Introduction
    The SVD of a (real square, for simplicity) matrix A is a representation of the matrix in the form

    A = U D V^T ,

    where matrix D is diagonal with non-negative elements and matrices U and V are orthogonal. The diagonal elements of matrix D can always be chosen non-negative by multiplying the relevant columns of matrix U by (-1).

    SVD can be used to solve a number of problems in linear algebra.

    Problem
    Implement the two-sided Jacobi SVD algorithm.

    Algorithm (as described in the "Eigenvalues" chapter of the book)
    In the cyclic Jacobi eigenvalue algorithm for symmetric matrices we applied the elementary Jacobi transformation

    A → J^T A J

    where J ≐ J(θ,p,q) is the Jacobi matrix (the angle θ chosen to eliminate the off-diagonal elements Apq=Aqp), to all upper off-diagonal matrix elements in cyclic sweeps until the matrix becomes diagonal.

    The two-sided Jacobi SVD algorithm for general real square matrices is the same as the Jacobi eigenvalue algorithm, except that the elementary transformation is slightly different,

    A → J^T G^T A J ,

    where

    • G ≐ G(θ,p,q) is the Givens (Jacobi) matrix where the angle is chosen such that the matrix A' ≐ G^T(θ,p,q) A has identical off-diagonal elements, A'pq=A'qp:

      tan(θ) = (Apq - Aqp)/(Aqq + App)

      (check that this is correct).
    • J ≐ J(θ,p,q) is the Jacobi matrix where the angle is chosen to eliminate the off-diagonal elements A'pq=A'qp.

    The matrices U and V are updated after each transformation as

    U → U G J ,
    V → V J .

    Of course you should not actually multiply Jacobi matrices—which costs O(n³) operations—but rather perform updates which only cost O(n) operations.
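
    For each pair (p,q) the two angles can be computed directly from the four elements App, Apq, Aqp, Aqq. A small sketch (one common sign convention, Gpp=Gqq=cosθ, Gpq=sinθ, Gqp=-sinθ, and the same for J; your own routines may differ):

    #include <math.h>
    /* Sketch: the two rotation angles of one elementary two-sided transformation. */
    void twosided_angles(double app, double apq, double aqp, double aqq,
                         double* theta_g, double* theta_j){
        *theta_g = atan2(apq-aqp, app+aqq);            /* makes G^T A symmetric in (p,q) */
        double c=cos(*theta_g), s=sin(*theta_g);
        double bpp=c*app-s*aqp, bpq=c*apq-s*aqq, bqq=s*apq+c*aqq;  /* the symmetrized 2x2 block */
        *theta_j = 0.5*atan2(2*bpq, bqq-bpp);          /* standard Jacobi angle that zeroes bpq */
    }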

  9. One-sided Jacobi algorithm for Singular Value Decomposition (SVD)

    Introduction
    See the previous exercise.

    Problem
    Implement the one-sided Jacobi SVD algorithm.

    Algorithm
    In this method the elementary iteration is given as

    A → A J(θ,p,q)

    where the indices (p,q) are swept cyclically (p=1..n, q=p+1..n) and where the angle θ is chosen such that columns p and q of the matrix AJ(θ,p,q) are orthogonal. One can show that the angle should be taken from the following equation,

    tan(2θ) = 2 ap^T aq / (aq^T aq - ap^T ap)

    where ai is the i-th column of matrix A.

    After the iterations converge and the matrix A'=AJ (where J is the accumulation of the individual rotations) has orthogonal columns, the SVD is simply given as

    A = UDV^T

    where

    V=J, Dii=||a'i||, ui=a'i/||a'i||,

    where a'i is the i-th column of matrix A' and ui is the i-th column of matrix U.
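
    A sketch of one elementary rotation: compute the angle from the two columns, then rotate columns p and q of both A and V (the sign convention should match your Jacobi matrix):

    #include <math.h>
    /* Sketch: one elementary one-sided Jacobi rotation applied to columns p,q of A (and V). */
    void onesided_rotation(int n, double** A, double** V, int p, int q){
        double apq=0, app=0, aqq=0;
        for(int i=0;i<n;i++){                        /* the scalar products ap^T aq, ap^T ap, aq^T aq */
            apq += A[i][p]*A[i][q];
            app += A[i][p]*A[i][p];
            aqq += A[i][q]*A[i][q];
        }
        double theta = 0.5*atan2(2*apq, aqq-app);    /* makes the rotated columns orthogonal */
        double c=cos(theta), s=sin(theta);
        for(int i=0;i<n;i++){                        /* A -> A J and V -> V J, columns p,q only */
            double aip=A[i][p], aiq=A[i][q];
            A[i][p]=c*aip-s*aiq;  A[i][q]=s*aip+c*aiq;
            double vip=V[i][p], viq=V[i][q];
            V[i][p]=c*vip-s*viq;  V[i][q]=s*vip+c*viq;
        }
    }
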
  10. Golub-Kahan-Lanczos bidiagonalization

    Introduction
    Bidiagonalization is a representation of a real matrix A in the form A = UBV^T where U and V are orthogonal matrices and B is a bidiagonal matrix with non-zero elements only on the main diagonal and the first super-diagonal.

    Bidiagonalization can be used on its own to solve linear systems and ordinary least squares problems, and to calculate the determinant and the (pseudo-)inverse of a matrix. But mostly it is used as the first step in SVD.

    Problem
    Here is a description of the Golub-Kahan-Lanczos algorithm for bidiagonalization. Implement it.
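
    For reference, one standard formulation of the recurrence (stated here under the convention that B is upper bidiagonal with diagonal elements αk and super-diagonal elements βk; check it against the description you follow):

    choose a unit vector v1
    α1 u1 = A v1
    for k = 1,2,...:
        βk vk+1 = A^T uk - αk vk
        αk+1 uk+1 = A vk+1 - βk uk

    where at each step αk+1 and βk are chosen such that the new uk+1 and vk+1 are unit vectors; the uk and vk become the columns of U and V.
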
  11. Inverse iteration algorithm for eigenvalues (and eigenvectors)

    Introduction
    See the chapter "Power iteration methods and Krylov subspaces" in the book.

    Problem
    Implement the variant of the inverse iteration method that calculates the eigenvalue closest to a given number s (and the corresponding eigenvector).
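
    A minimal sketch (solve, matrix_alloc and vector_alloc are hypothetical homework routines; ideally one factorizes A - s·1 once and only back-substitutes inside the loop):

    #include <math.h>
    /* Sketch: inverse iteration with shift s; returns the eigenvalue closest to s,
       the corresponding (normalized) eigenvector is left in x. */
    double inverse_iteration(int n, double** A, double s, double* x, int iterations){
        double** M = matrix_alloc(n,n);
        double*  w = vector_alloc(n);
        for(int i=0;i<n;i++)for(int j=0;j<n;j++) M[i][j] = A[i][j] - (i==j ? s : 0);
        for(int it=0; it<iterations; it++){
            solve(n, M, x, w);                        /* w = (A - s*1)^-1 x */
            double norm=0; for(int i=0;i<n;i++) norm += w[i]*w[i];
            norm = sqrt(norm);
            for(int i=0;i<n;i++) x[i] = w[i]/norm;    /* normalized next iterate */
        }
        double lambda=0;                              /* Rayleigh quotient x^T A x */
        for(int i=0;i<n;i++)for(int j=0;j<n;j++) lambda += x[i]*A[i][j]*x[j];
        return lambda;
    }
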
  12. Lanczos tridiagonalization algorithm

    Implement the Lanczos algorithm (Lanczos iteration) for real symmetric matrices.
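
    The basic three-term recurrence (one common formulation; in practice re-orthogonalization of the q-vectors is usually needed):

    choose a unit vector q1; set β0 = 0, q0 = 0
    for k = 1,2,...:
        w  = A qk - βk-1 qk-1
        αk = qk^T w
        w  = w - αk qk
        βk = ||w||
        qk+1 = w/βk

    The αk form the diagonal and the βk the off-diagonal of the tridiagonal matrix T = Q^T A Q, where the qk are the columns of Q.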

  13. Symmetric rank-1 update of a size-n symmetric eigenvalue problem

    The matrix A to diagonalize is given in the form

    A = D + uu^T,

    where D is a diagonal matrix and u is a column-vector.

    Given the diagonal elements of the matrix D and the elements of the vector u find the eigenvalues of the matrix A using only O(n²) operations (see section "Eigenvalues of updated matrix" in the book).
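
    The standard route (check it against the book's section): the eigenvalues are the roots of the secular equation

    1 + Σk uk²/(dk - λ) = 0 ,

    which (for distinct dk and non-zero uk) has exactly one root between consecutive dk and one root above the largest dk; each root can be bracketed and found by bisection or Newton's method at O(n) cost per root. A tiny C helper for the secular function:

    /* Sketch: secular function whose roots are the eigenvalues of D + u u^T. */
    double secular(int n, double* d, double* u, double lambda){
        double s = 1;
        for(int k=0;k<n;k++) s += u[k]*u[k]/(d[k]-lambda);
        return s;
    }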

  14. Symmetric row/column update of a size-n symmetric eigenvalue problem

    The matrix to diagonalize is given in the form

    A = D + e(p) u^T + u e(p)^T

    where D is a diagonal matrix with diagonal elements {dk, k=1,...,n}, u is a given column-vector, and the vector e(p) with components e(p)i = δip is a unit vector in the direction p where 1≤p≤n.

    Given the diagonal elements of the matrix D, the vector u, and the integer p, calculate the eigenvalues of the matrix A using O(n²) operations (see section "Eigenvalues of updated matrix" in the book).

  15. ODE: a two-step method

    Implement a two-step stepper for solving ODE (as in the book). Use your own stepsize controlling driver.
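
    One possible two-step scheme (a sketch, the book's construction may differ in details): approximate the solution locally as y(x) ≈ yi + fi·(x-xi) + c·(x-xi)² and fix c by requiring the parabola to pass through the previous point, c = (yi-1 - yi + fi·h)/h² with h = xi - xi-1; the difference between the full step and the plain Euler step yi + fi·(x-xi) then serves as the error estimate for the step-size driver.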

  16. ODE with complex-valued functions of complex variable

    Generalize the ODE solver of your choice to solve ODEs with complex-valued functions of a complex variable along a straight line between the given start- and end-point.
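
    A possible reduction to a real-parameter problem: parametrize the line as z(t) = za + t·(zb-za) with t∈[0,1]; by the chain rule u(t) ≡ y(z(t)) then satisfies du/dt = (zb-za)·f(z(t),u), which is an ODE in the real variable t with a complex-valued (that is, two coupled real) right-hand side that an ordinary driver can handle.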

  17. Adaptive 1D integrator with random nodes

    Implement an adaptive one-dimensional integrator with random abscissas. Reuse points. Note that you don't need to pass the points to the next level of recursion, only statistics.
  18. Adaptive integration of complex-valued functions

    Implement an adaptive integrator which calculates the integral of a complex-valued function f(z) of a complex variable z along a straight line between two points in the complex plane.

  19. Adaptive integrator with subdivision into three subintervals.

    Implement a (one-dimensional) adaptive integrator which at each iteration subdivides the interval not into two, but into three sub-intervals.

  20. Two-dimensional integrator

    Implement a two-dimensional integrator for integrals in the form

    ∫_a^b dx ∫_d(x)^u(x) dy f(x,y)

    which consecutively applies your favourite adaptive one-dimensional integrator along each of the two dimensions. The signature might be something like

    double integ2D(double a, double b, double d(double x), double u(double x), double f(double x, double y), double acc, double eps)
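
    A minimal sketch of the nested construction, assuming a hypothetical one-dimensional adaptive integrator adapt(f,a,b,acc,eps) from the homework; file-scope variables are used to emulate closures in plain C (not re-entrant), and the accuracy goals are passed on naively:

    double adapt(double f(double), double a, double b, double acc, double eps);  /* assumed 1D integrator */

    static double (*F2)(double,double);              /* the integrand f(x,y) */
    static double (*DLIM)(double), (*ULIM)(double);  /* the y-limits d(x), u(x) */
    static double XCUR, ACC, EPS;                    /* current x and accuracy goals */

    static double inner(double y){ return F2(XCUR,y); }
    static double outer(double x){ XCUR=x; return adapt(inner, DLIM(x), ULIM(x), ACC, EPS); }

    double integ2D(double a, double b, double d(double x), double u(double x),
                   double f(double x, double y), double acc, double eps){
        F2=f; DLIM=d; ULIM=u; ACC=acc; EPS=eps;
        return adapt(outer, a, b, acc, eps);         /* outer integral over x */
    }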

  21. Multidimensional pseudo-random (plain Monte Carlo) vs quasi-random (Halton and/or lattice sequence) integrators

    Investigate the convergence rates (of some interesting integrals in different dimensions) as a function of the number of sample points.
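
    For the quasi-random part, a van der Corput/Halton generator is only a few lines; a sketch (corput returns the radical inverse of n in the given base, halton uses the first dim primes as bases):

    /* Sketch: point number n of the Halton sequence in dim dimensions (dim <= 8 here). */
    static double corput(int n, int base){
        double q=0, bk=1.0/base;
        while(n>0){ q += (n%base)*bk; n/=base; bk/=base; }
        return q;
    }
    void halton(int n, int dim, double* x){
        int base[] = {2,3,5,7,11,13,17,19};
        for(int i=0;i<dim;i++) x[i] = corput(n, base[i]);
    }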

  22. Rootfinding: 1D complex vs. 2D real

    Implement a (quasi) Newton method for rootfinding for a complex function f of a complex variable z,

    f(z) = 0 .

    Hints: do it the ordinary way, just with complex numbers.

    Compare the effectiveness of your complex implementation with your homework multi-dimensional implementation of real rootfinding applied to the equivalent 2D system of two real equations with two real variables x and y,

    Re f(x + iy) = 0 ,
    Im f(x + iy) = 0 .
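
    A sketch of the complex Newton step with a numerical derivative (C99 complex.h; the step dz and the stopping criterion are illustrative, and no step control or iteration limit is included):

    #include <complex.h>
    /* Sketch: (quasi) Newton iteration for f(z)=0 with complex z. */
    double complex newton_complex(double complex f(double complex),
                                  double complex z, double acc){
        double complex dz = 1e-7;                     /* finite-difference step for f' */
        while(cabs(f(z)) > acc){
            double complex df = (f(z+dz)-f(z))/dz;    /* numerical derivative */
            z = z - f(z)/df;                          /* Newton step */
        }
        return z;
    }
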
  23. Artificial neural network (ANN) for solving ODE

    At the lectures we have constructed an ANN that can be trained to approximate a given tabulated function. Here you should build a topologically similar network that can be trained to approximate a solution to a one-dimensional first-order ordinary differential equation,

    y'=f(x,y) ,

    on a given interval x∈[a,b] with a given initial condition y(x0)=y0. After training, the response of the network to the input parameter x should approximate the sought solution y(x).
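
    One possible training strategy (a suggestion; the lecture notes may phrase it differently): let F(x) denote the network response and minimize a cost function that penalizes the ODE residual at a set of sample points xk∈[a,b] together with the initial condition, for example

    C = Σk [F'(xk) - f(xk,F(xk))]² + α·[F(x0) - y0]² ,

    where F'(x) is obtained analytically by differentiating the network's activation functions and α weights the initial-condition term.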

    Solve the following equations:

    y'=y*(1-y), x∈[-5,5], y(0)=0.5 (logistic function)

    y'=-x*y, x∈[-5,5], y(0)=1 (Gaussian function)

    Hints: