Numerical Methods. Note 5

Linear Least-Squares Problem

Least-Squares Solution. Let A be an n×m matrix, c an m-component vector and b an n-component vector. For n > m the equation Ac = b generally has no solution. However, one can find the "best possible" (least-squares) solution which minimizes the Euclidean norm of the difference between Ac and b.

The problem can be solved by QR decomposition. The matrix A factorizes as A = QR, where Q is an n×m matrix with orthonormal columns and R is an m×m upper triangular matrix. We can then minimize

|Ac - b|² = |QRc - b|² = |Rc - Q^T b|² + |(1 - QQ^T)b|²
by solving the m×m triangular system Rc = Q^T b by simple back-substitution.
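
A minimal sketch of this procedure in plain Python (the thin QR is done here by modified Gram-Schmidt; the function names are my own):

```python
def qr_decomp(A):
    """Thin QR by modified Gram-Schmidt: A (n x m, n >= m) -> Q, R
    with Q having orthonormal columns and R upper triangular."""
    n, m = len(A), len(A[0])
    Q = [row[:] for row in A]          # start from a copy of A
    R = [[0.0] * m for _ in range(m)]
    for j in range(m):
        R[j][j] = sum(Q[i][j] ** 2 for i in range(n)) ** 0.5
        for i in range(n):             # normalize column j
            Q[i][j] /= R[j][j]
        for k in range(j + 1, m):      # orthogonalize the later columns
            R[j][k] = sum(Q[i][j] * Q[i][k] for i in range(n))
            for i in range(n):
                Q[i][k] -= R[j][k] * Q[i][j]
    return Q, R

def qr_solve(Q, R, b):
    """Least-squares solution: back-substitute R c = Q^T b."""
    n, m = len(Q), len(Q[0])
    c = [sum(Q[i][j] * b[i] for i in range(n)) for j in range(m)]  # Q^T b
    for j in range(m - 1, -1, -1):
        c[j] = (c[j] - sum(R[j][k] * c[k] for k in range(j + 1, m))) / R[j][j]
    return c
```

For example, with A = [[1,1],[1,2],[1,3],[1,4]] and b = [1,2,3,4] (points lying exactly on y = x) the solution is c = [0, 1].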

Linear Least-Squares Fit is the problem of fitting n data points yi (with errors σi) by a linear combination of m functions, F(x) = ∑k ck fk(x). The best fit minimizes the squared deviation χ² = ∑i ((yi - F(xi))/σi)². One recognises here the above problem of minimizing |Ac - b|² with Aik = fk(xi)/σi, bi = yi/σi, and the formal solution c = R^-1 Q^T b (in practice you should back-substitute Rc = Q^T b).
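
A sketch of such a fit in pure Python (the basis functions and data below are illustrative; qr_decomp and qr_solve implement the QR step described above, included here for completeness):

```python
def qr_decomp(A):
    """Thin QR by modified Gram-Schmidt: A (n x m, n >= m) -> Q, R."""
    n, m = len(A), len(A[0])
    Q = [row[:] for row in A]
    R = [[0.0] * m for _ in range(m)]
    for j in range(m):
        R[j][j] = sum(Q[i][j] ** 2 for i in range(n)) ** 0.5
        for i in range(n):
            Q[i][j] /= R[j][j]
        for k in range(j + 1, m):
            R[j][k] = sum(Q[i][j] * Q[i][k] for i in range(n))
            for i in range(n):
                Q[i][k] -= R[j][k] * Q[i][j]
    return Q, R

def qr_solve(Q, R, b):
    """Back-substitute R c = Q^T b."""
    n, m = len(Q), len(Q[0])
    c = [sum(Q[i][j] * b[i] for i in range(n)) for j in range(m)]
    for j in range(m - 1, -1, -1):
        c[j] = (c[j] - sum(R[j][k] * c[k] for k in range(j + 1, m))) / R[j][j]
    return c

def lsfit(fs, x, y, dy):
    """Weighted least-squares fit F(x) = sum_k c_k f_k(x):
    build A_ik = f_k(x_i)/dy_i, b_i = y_i/dy_i and solve R c = Q^T b."""
    A = [[f(xi) / dyi for f in fs] for xi, dyi in zip(x, dy)]
    b = [yi / dyi for yi, dyi in zip(y, dy)]
    Q, R = qr_decomp(A)
    return qr_solve(Q, R, b)

# illustrative data lying exactly on the line y = 2 + 3x
x, y, dy = [0.0, 1.0, 2.0, 3.0], [2.0, 5.0, 8.0, 11.0], [0.1] * 4
c = lsfit([lambda t: 1.0, lambda t: t], x, y, dy)   # c = [2, 3]
```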

The errors Δci of the fit coefficients follow from the errors σk of the data by error propagation: (Δci)² = ∑k (∂ci/∂yk)² σk² = ∑k (∂ci/∂bk)². The error matrix (also called the covariance matrix) Eij = ∑k (∂ci/∂bk)(∂cj/∂bk) is then simply

E = (∂c/∂b)(∂c/∂b)^T = R^-1 Q^T Q (R^-1)^T = R^-1 (R^-1)^T = (R^T R)^-1 = (A^T A)^-1
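
Given the triangular factor R from the fit, the covariance matrix E = R^-1 (R^-1)^T can be sketched as follows, inverting R column by column with back-substitution (a minimal illustration; function names are my own):

```python
def upper_tri_inverse(R):
    """Invert an upper triangular matrix column by column:
    back-substitute R x = e_j; x is the j-th column of R^-1."""
    m = len(R)
    Rinv = [[0.0] * m for _ in range(m)]
    for j in range(m):
        for i in range(m - 1, -1, -1):
            s = (1.0 if i == j else 0.0)
            s -= sum(R[i][k] * Rinv[k][j] for k in range(i + 1, m))
            Rinv[i][j] = s / R[i][i]
    return Rinv

def covariance(R):
    """Covariance matrix of the fit: E = R^-1 (R^-1)^T = (R^T R)^-1."""
    Rinv = upper_tri_inverse(R)
    m = len(R)
    return [[sum(Rinv[i][k] * Rinv[j][k] for k in range(m))
             for j in range(m)] for i in range(m)]
```

For example, with R = [[2,1],[0,3]] one has R^T R = [[4,2],[2,10]] and E = (R^T R)^-1 = (1/36)·[[10,-2],[-2,4]].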

The inverse A^-1 of a square m×m matrix A can be calculated by solving the m linear systems ∑k Aik zk^(j) = δij, j = 1..m, one for each unit right-hand side. The inverse matrix is then given column by column as (A^-1)ij = zi^(j).
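
This column-by-column scheme can be sketched as follows (pure Python; QR by modified Gram-Schmidt, then one back-substitution per unit vector; function names are my own):

```python
def qr_decomp(A):
    """QR by modified Gram-Schmidt: A -> Q (orthonormal columns), R (upper)."""
    n, m = len(A), len(A[0])
    Q = [row[:] for row in A]
    R = [[0.0] * m for _ in range(m)]
    for j in range(m):
        R[j][j] = sum(Q[i][j] ** 2 for i in range(n)) ** 0.5
        for i in range(n):
            Q[i][j] /= R[j][j]
        for k in range(j + 1, m):
            R[j][k] = sum(Q[i][j] * Q[i][k] for i in range(n))
            for i in range(n):
                Q[i][k] -= R[j][k] * Q[i][j]
    return Q, R

def qr_solve(Q, R, b):
    """Back-substitute R c = Q^T b."""
    n, m = len(Q), len(Q[0])
    c = [sum(Q[i][j] * b[i] for i in range(n)) for j in range(m)]
    for j in range(m - 1, -1, -1):
        c[j] = (c[j] - sum(R[j][k] * c[k] for k in range(j + 1, m))) / R[j][j]
    return c

def qr_inverse(A):
    """Inverse of a square matrix A: solve A z^(j) = e^(j) for each
    unit vector e^(j); z^(j) is then the j-th column of A^-1."""
    m = len(A)
    Q, R = qr_decomp(A)
    Ainv = [[0.0] * m for _ in range(m)]
    for j in range(m):
        e = [1.0 if i == j else 0.0 for i in range(m)]
        z = qr_solve(Q, R, e)
        for i in range(m):
            Ainv[i][j] = z[i]
    return Ainv
```

For example, qr_inverse([[1,2],[3,4]]) gives [[-2,1],[1.5,-0.5]].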

Problems

  1. Make a subroutine that calculates the inverse of a matrix A (with the help of QR decomposition).
  2. Make a linear fit to the following data
    xi = i + sin(i)/2, yi = i + cos(i²), σi = 0.1, i = 0..10
    Calculate the covariance matrix.

"Copyleft" © 2004 D.V.Fedorov (fedorov at phys dot au dot dk)