The CPU time consumed by a program, for example my_prog, can be determined by the POSIX time utility.
The command

	\time --format "%U" --append --output time.txt mono main.exe

or, in short,

	\time -f "%U" -ao time.txt mono main.exe

runs the program (leaving its standard output alone) and appends the number of consumed CPU-seconds to the file time.txt.
Without the --append option (or -a, in short) the file time.txt gets overwritten. The backslash is needed to run the actual utility rather than the built-in bash command time (which has similar capabilities).
An example of the usage of time
can be found here (see the
makefile).
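For illustration, a makefile rule using time in this way might look like the following sketch (the target names, main.exe, and out.txt are hypothetical; recipe lines must start with a TAB):

```makefile
# hypothetical rule: run the program under GNU time and
# append the consumed user CPU-seconds to time.txt
time.txt : main.exe
	\time -f "%U" -ao $@ mono $< > out.txt
```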
Implement the Jacobi eigenvalue algorithm for real symmetric matrices using cyclic sweeps: eliminate the off-diagonal elements in a predefined sequence that spans all off-diagonal elements, for example row after row, and repeat the sweeps until convergence. A possible convergence criterion is that the eigenvalues do not change after a whole sweep.
Prove that your implementation works as intended: generate a random symmetric matrix \(A\), apply your routine to perform the eigenvalue decomposition, \(A=VDV^T\) (where \(V\) is the orthogonal matrix of eigenvectors and \(D\) is the diagonal matrix with the corresponding eigenvalues), and check that \(V^TAV=D\).
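For illustration, here is a minimal self-contained sketch of both tasks in Python (plain Python lists rather than the course's matrix/vector library; the rotation angle \(\theta=\frac{1}{2}\,\mathrm{atan2}(2A_{pq},\,A_{qq}-A_{pp})\) is one common choice that zeroes \(A_{pq}\)):

```python
# Sketch of cyclic-sweep Jacobi diagonalization (illustrative names,
# not the course library).
import math, random

def jacobi_cyclic(A):
    """Diagonalize the symmetric matrix A in place by cyclic Jacobi sweeps.
    Returns (eigenvalues, V) where the columns of V are the eigenvectors."""
    n = len(A)
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    changed = True
    while changed:                      # sweep until nothing changes
        changed = False
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-14:
                    continue
                changed = True
                theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for i in range(n):      # rows p,q:     A <- J^T A
                    Api, Aqi = A[p][i], A[q][i]
                    A[p][i] = c * Api - s * Aqi
                    A[q][i] = s * Api + c * Aqi
                for i in range(n):      # columns p,q:  A <- A J
                    Aip, Aiq = A[i][p], A[i][q]
                    A[i][p] = c * Aip - s * Aiq
                    A[i][q] = s * Aip + c * Aiq
                for i in range(n):      # accumulate eigenvectors: V <- V J
                    Vip, Viq = V[i][p], V[i][q]
                    V[i][p] = c * Vip - s * Viq
                    V[i][q] = s * Vip + c * Viq
    return [A[i][i] for i in range(n)], V

# proof of concept: random symmetric A, check that V^T A V = D
random.seed(1)
n = 5
A0 = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i, n):
        A0[i][j] = A0[j][i] = random.random()
A = [row[:] for row in A0]
e, V = jacobi_cyclic(A)
dev = max(abs(sum(V[k][i] * A0[k][l] * V[l][j]
                  for k in range(n) for l in range(n))
              - (e[i] if i == j else 0.0))
          for i in range(n) for j in range(n))
print("max |(V^T A V - D)_ij| =", dev)
```

The check prints the largest deviation of \(V^TAV\) from \(D\), which should be at the level of machine precision.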
Hints:
Consider a quantum particle in a box (or a standing wave in a string). You might also want to read the articles "Quantum particle in a box" and/or "Standing wave in a string".
The wave-function \(\psi(x)\) of a particle in a box of length \(L\) satisfies the Schrödinger equation \[ \hat{\cal H}\psi = E\psi \;, \] where \(E\) is the energy of the particle and where the Hamiltonian \(\hat H\) is given as \[ \hat{\cal H}=-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} \;, \] where \(m\) is the mass of the particle. The boundary condition for an infinitely deep box is that the wave-function vanishes at the borders, \[ \psi(0)=\psi(L)=0 \;. \]
Let us introduce dimensionless variables \(\xi\doteq \frac{x}{L}\), \(\epsilon \doteq \frac{2mL^2}{\hbar^2}E\), and the new function \(u(\xi)\doteq\psi(\xi L)\). The Schrödinger equation then reduces to \[ \hat H u = \epsilon u \;,\] where \(\hat H = -\frac{\partial^2}{\partial\xi^2}\), with the boundary condition \[ u(0)=u(1)=0 \;. \] Numerically one can represent this problem on an equidistant grid of \(N+1\) points with step \(s\), \[ \xi_i = is \,,\; s=\frac{1}{N} \,,\; i=0\dots N \;,\] where the wave-function is represented by a vector \(u_i = u(\xi_i)\), and where the second-derivative operation in the Hamiltonian can then be approximated using the three-point finite-difference formula, \[ \frac{\partial^2u_i}{\partial\xi^2} \approx \frac{u_{i-1}-2u_i+u_{i+1}}{s^2} \;. \] Now the Schrödinger equation becomes a matrix eigenvalue equation, \[ \sum_{j=1}^{N-1} H_{ij} u_j = \epsilon u_i \,,\; i=1\dots N-1 \;, \] or, in matrix notation, \[ H u=\epsilon u \;, \] where the matrix \(H\) is given as \[ H = -\frac{1}{s^2}\left( \begin{array}{cccccc} -2 & 1 & 0 & 0 &\dots & 0 \\ 1 & -2 & 1 & 0 &\dots & 0 \\ 0 & 1 & -2 & 1 &\dots & 0 \\ \vdots &\vdots &\vdots &\vdots &\vdots & \vdots \\ 0 &\dots & 0 & 1 & -2 & 1 \\ 0 &\dots & 0 & 0 & 1 & -2 \\ \end{array} \right) \;. \] Notice that, since the wave-function at the end-points is zero, the summation goes over \(1\dots(N-1)\) and the vanishing components \(u_0=0\) and \(u_N=0\) have been omitted; the boundary condition is thus built into the formulation of the matrix problem.
Solving the matrix eigenvalue equation amounts to matrix diagonalization. If you give the matrix H to your diagonalization routine it will return a vector e with the eigenvalues and a matrix V with the corresponding eigenvectors as columns, such that \[ \sum_j H_{ij}V_{jk} = e_k V_{ik} \;. \]
Now calculate numerically the eigenvalues and eigenfunctions of a quantum particle in a box.
Build the Hamiltonian matrix \(H\),
int n=20;
double s=1.0/(n+1);
matrix H = new matrix(n,n);
for(int i=0;i<n-1;i++){
	matrix.set(H,i,i,-2);
	matrix.set(H,i,i+1,1);
	matrix.set(H,i+1,i,1);
	}
matrix.set(H,n-1,n-1,-2);
matrix.scale(H,-1/s/s);
Diagonalize the matrix using your Jacobi routine,
matrix V = new matrix(n,n);
vector eigenvals = jacobi.cyclic(H,V);
Check that your energies are correct,
for(int k=0; k < n/3; k++){
	double exact = pi*pi*(k+1)*(k+1);
	double calculated = vector.get(eigenvals,k);
	WriteLine($"{k} {calculated} {exact}");
	}
Plot several of the lowest eigenfunctions (and compare with the analytical result),
for(int k=0;k<3;k++){
	WriteLine($"{0} {0}"); // left boundary point, where u=0
	for(int i=0;i<n;i++) WriteLine($"{(i+1.0)/(n+1)} {matrix.get(V,i,k)}");
	WriteLine($"{1} {0}"); // right boundary point, where u=0
	}
(3 points) Jacobi diagonalization eigenvalue-by-eigenvalue
Eliminate the off-diagonal elements of the first row only (by running the sweeps not over the whole matrix but only over the first row, until all off-diagonal elements of the first row are zeroed). Argue that the corresponding diagonal element is the lowest eigenvalue.
If needed, eliminate the off-diagonal elements of the second row. Argue that the already eliminated elements of the first row are not affected and can therefore be omitted from the elimination loop. Argue that the corresponding diagonal element is the second lowest eigenvalue.
If needed, continue the elimination of subsequent rows (omitting operations on the already eliminated rows!) until the given number of the lowest eigenvalues is calculated.
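The monotonic decrease can be made concrete with the angle choice \(\theta=\frac{1}{2}\,\mathrm{atan2}(2A_{pq},\,A_{qq}-A_{pp})\): one finds \(A'_{pp}=A_{pp}-A_{pq}\tan\theta\), and since \(\theta\) has the same sign as \(A_{pq}\), every such rotation lowers \(A_{pp}\). A minimal Python sketch of the scheme (illustrative names, not the course library):

```python
# Sketch of eigenvalue-by-eigenvalue Jacobi (plain Python lists).
import math

def rotate(A, p, q):
    """One Jacobi rotation zeroing A[p][q]; this angle choice lowers A[p][p]."""
    n = len(A)
    theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
    c, s = math.cos(theta), math.sin(theta)
    for i in range(n):                  # rows:    A <- J^T A
        Api, Aqi = A[p][i], A[q][i]
        A[p][i] = c * Api - s * Aqi
        A[q][i] = s * Api + c * Aqi
    for i in range(n):                  # columns: A <- A J
        Aip, Aiq = A[i][p], A[i][q]
        A[i][p] = c * Aip - s * Aiq
        A[i][q] = s * Aip + c * Aiq

def lowest_eigenvalues(A, m, tol=1e-12):
    """Return the m lowest eigenvalues by zeroing rows 0..m-1 one by one.
    A rotation (p,q) with p,q beyond an already zeroed row r leaves that
    row zero, so earlier rows need not be revisited."""
    n = len(A)
    for p in range(m):
        changed = True
        while changed:
            changed = False
            for q in range(p + 1, n):   # sweep only the current row
                if abs(A[p][q]) > tol:
                    rotate(A, p, q)
                    changed = True
    return [A[p][p] for p in range(m)]

# demo: particle-in-a-box Hamiltonian on n interior points;
# the discrete spectrum is (4/s^2)*sin^2(k*pi*s/2), k=1..n
n = 8; s = 1.0 / (n + 1)
H = [[0.0] * n for _ in range(n)]
for i in range(n):
    H[i][i] = 2.0 / s ** 2
    if i + 1 < n:
        H[i][i + 1] = H[i + 1][i] = -1.0 / s ** 2
e = lowest_eigenvalues(H, 2)
exact = [4.0 / s ** 2 * math.sin(k * math.pi * s / 2) ** 2 for k in (1, 2)]
print(e, exact)
```

The demo compares the two lowest computed eigenvalues with the exact spectrum of the discretized Hamiltonian.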
(1 point) Classic Jacobi eigenvalue algorithm
In the classic Jacobi eigenvalue algorithm one searches for the largest off-diagonal element and eliminates it. Unfortunately, the search for the largest element is an O(n²) operation, which makes a direct implementation inefficient. Instead, one initially creates an array with the indices of the largest element in each row and then, after each rotation, updates only the indices of the affected rows.
A possible algorithm might look like this,
create the array of indices of the largest elements in the rows;
do :
	for each row :
		eliminate the largest element of the row by a Jacobi transformation;
		update the array of indices for the affected rows;
until converged.
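The index-array bookkeeping above might be sketched in Python as follows (illustrative names, not the course library; a rotation in the plane (p,q) changes rows p and q entirely, but in every other row only the entries in columns p and q):

```python
# Sketch of the row-wise 'largest element' variant with an index array.
import math

def jacobi_classic(A, tol=1e-12):
    """Jacobi diagonalization where each row eliminates its largest
    off-diagonal element; an index array avoids repeated O(n^2) searches."""
    n = len(A)
    def rowmax(p):                      # column of the largest |A[p][q]|, q>p
        return max(range(p + 1, n), key=lambda q: abs(A[p][q]))
    idx = [rowmax(p) for p in range(n - 1)]
    changed = True
    while changed:
        changed = False
        for p in range(n - 1):
            q = idx[p]
            if abs(A[p][q]) < tol:
                continue
            changed = True
            theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
            c, s = math.cos(theta), math.sin(theta)
            for i in range(n):          # rows:    A <- J^T A
                Api, Aqi = A[p][i], A[q][i]
                A[p][i] = c * Api - s * Aqi
                A[q][i] = s * Api + c * Aqi
            for i in range(n):          # columns: A <- A J
                Aip, Aiq = A[i][p], A[i][q]
                A[i][p] = c * Aip - s * Aiq
                A[i][q] = s * Aip + c * Aiq
            for r in range(n - 1):      # update only the affected rows
                if r == p or r == q or idx[r] in (p, q):
                    idx[r] = rowmax(r)  # the row (or its stored maximum) changed
                else:                   # only columns p,q of row r changed
                    for col in (p, q):
                        if col > r and abs(A[r][col]) > abs(A[r][idx[r]]):
                            idx[r] = col
    return [A[i][i] for i in range(n)]

# demo: 3x3 tridiagonal matrix with eigenvalues 2-sqrt(2), 2, 2+sqrt(2)
M = [[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]]
e = sorted(jacobi_classic(M))
print(e)  # approximately [2-sqrt(2), 2, 2+sqrt(2)]
```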
Find out whether this gives any improvement in (i) the number of rotations and (ii) the run-time.