Integration of Ordinary Differential Equations | Numerical Methods. Note « 8 »
Runge-Kutta methods. These are one-step methods where the solution #{y} is advanced by one step #{h} as #{y_1 = y_0 + h k}, where #{k} is a cleverly chosen constant (an estimate of the derivative over the step). The first-order Runge-Kutta method is simply Euler's method, #{k = f(x_0, y_0)}. Second-order Runge-Kutta methods advance the solution by an auxiliary evaluation of the derivative, for example:
#{k = k_{1/2}} | : | #{k_0 = f(x_0, y_0)}, #{y_{1/2} = y_0 + (h/2) k_0}, #{k_{1/2} = f(x_{1/2}, y_{1/2})}, #{y_1 = y_0 + h k_{1/2}} (the midpoint method); |
#{k = (k_0 + k_1)/2} | : | #{k_1 = f(x_1, y_0 + h k_0)}, #{y_1 = y_0 + h (k_0 + k_1)/2} (Heun's method). |
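The two second-order steps above can be sketched in a few lines of Python; this is a minimal illustration (function names are chosen for this note, not from any library):

```python
import math

def rk2_midpoint_step(f, x0, y0, h):
    """One midpoint step: advance with the derivative k_{1/2} at the mid-point."""
    k0 = f(x0, y0)                     # derivative at the start of the step
    y_half = y0 + 0.5 * h * k0         # auxiliary half-step
    k_half = f(x0 + 0.5 * h, y_half)   # derivative at the mid-point
    return y0 + h * k_half

def rk2_heun_step(f, x0, y0, h):
    """One Heun step: advance with the average of the end-point derivatives."""
    k0 = f(x0, y0)                     # derivative at x0
    k1 = f(x0 + h, y0 + h * k0)        # derivative at x1 after a trial Euler step
    return y0 + 0.5 * h * (k0 + k1)

# example: y' = y with y(0) = 1, exact solution exp(x)
f = lambda x, y: y
x, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk2_midpoint_step(f, x, y, h)
    x += h
# y now approximates exp(1); the global error is O(h^2)
```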
Bulirsch-Stoer methods. The idea is to evaluate #{y_1} with several (ever smaller) step-sizes and then extrapolate to step-size zero.
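The extrapolation idea can be illustrated with the simplest possible ingredients; the toy sketch below uses Euler steps (error of order #{h}) and a linear extrapolation of two step-sizes to #{h = 0}, whereas a real Bulirsch-Stoer method uses the modified-midpoint rule and polynomial extrapolation over several step-sizes:

```python
def euler_integrate(f, x0, y0, X, n):
    """Integrate y' = f(x,y) from x0 to X with n Euler steps of size h = (X-x0)/n."""
    h = (X - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

def extrapolated_step(f, x0, y0, X, n=8):
    """Evaluate y(X) with step-sizes h and h/2, then extrapolate to h -> 0.
    Euler's error is linear in h, so the h -> 0 limit of the two estimates
    is 2*y(h/2) - y(h)."""
    y_h = euler_integrate(f, x0, y0, X, n)       # coarse estimate
    y_h2 = euler_integrate(f, x0, y0, X, 2 * n)  # half the step-size
    return 2.0 * y_h2 - y_h                      # linear extrapolation to zero
```

For y' = y on [0,1] the extrapolated estimate is markedly closer to e than either of the two Euler runs it is built from.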
Multi-step and predictor-corrector methods. Multi-step methods use the information about the function gathered at the previous steps. For example, the sought function #{y} can be approximated as #{y(x) ≈ y_1 + y'_1 (x-x_1) + c (x-x_1)^2}, where the coefficient #{c} is found from the condition #{y(x_0) = y_0}, which gives #{c = [y_0 - y_1 - y'_1 (x_0-x_1)] / (x_0-x_1)^2}. We can now make a prediction for #{y_2} as #{y(x_2)} and calculate #{f_2 = f(x_2, y(x_2))}. Having the new information, #{f_2}, we can improve the approximation, namely #{y(x) ≈ y_1 + y'_1 (x-x_1) + c (x-x_1)^2 + d (x-x_1)^2 (x-x_0)}. The coefficient #{d} is found from the condition #{y'(x_2) = f_2}, which gives #{d = ( f_2 - y'_1 - 2c(x_2-x_1) ) / ( 2(x_2-x_1)(x_2-x_0) + (x_2-x_1)^2 )}, from which we get a corrected estimate #{y_2 = y(x_2)}. The size of the correction can serve as an estimate of the error.
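The predictor-corrector step above translates directly into code; a minimal sketch, assuming the two previous points #{(x_0,y_0)} and #{(x_1,y_1)} are already known (the function name is illustrative):

```python
def two_step_pc(f, x0, y0, x1, y1, h):
    """One step of the two-step predictor-corrector from the notes:
    predict y2 at x2 = x1 + h from a quadratic through the previous
    points, then correct with the new derivative f2."""
    x2 = x1 + h
    yp1 = f(x1, y1)  # y'_1
    # predictor: y(x) ~ y1 + y'_1 (x-x1) + c (x-x1)^2, with y(x0) = y0
    c = (y0 - y1 - yp1 * (x0 - x1)) / (x0 - x1) ** 2
    y2_pred = y1 + yp1 * (x2 - x1) + c * (x2 - x1) ** 2
    f2 = f(x2, y2_pred)  # new information at the predicted point
    # corrector: add d (x-x1)^2 (x-x0), with y'(x2) = f2
    d = (f2 - yp1 - 2 * c * (x2 - x1)) / (
        2 * (x2 - x1) * (x2 - x0) + (x2 - x1) ** 2)
    y2 = y2_pred + d * (x2 - x1) ** 2 * (x2 - x0)
    err = abs(y2 - y2_pred)  # the correction estimates the error
    return x2, y2, err
```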
Step-size control.
Tolerance \tt{tol} is the maximal accepted error, consistent with the required absolute (\tt{acc}) and relative (\tt{eps}) accuracies,
#{tol = (eps·|y| + acc)·√(h/(b-a))} .
The error \tt{err} is estimated, for example, by comparing the solution after one full step with the solution after two half-steps (the Runge principle),
#{err = |y(h) - y(2×(h/2))| / (2^n - 1)} ,
where #{n} is the order of the algorithm used. If #{err < tol} the step is accepted; the next step #{h_next} is estimated according to the prescription
#{h_next = h_previous·(tol/err)^power·Safety} , where #{power}∼0.25, #{Safety}∼0.95 .
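The tolerance, the Runge-principle error estimate, and the step-size prescription combine into a short adaptive driver. A minimal sketch, assuming a second-order midpoint stepper (so #{n = 2} and #{2^n - 1 = 3}); the function names and the growth cap are choices made for this illustration:

```python
import math

def rk2_step(f, x, y, h):
    """Midpoint Runge-Kutta step, order n = 2."""
    k0 = f(x, y)
    k_half = f(x + 0.5 * h, y + 0.5 * h * k0)
    return y + h * k_half

def driver(f, a, b, y0, acc=1e-6, eps=1e-6, h=0.01):
    """Adaptive integration of y' = f(x,y) from a to b using step-doubling."""
    x, y = a, y0
    while x < b:
        if x + h > b:              # do not step past the end-point
            h = b - x
        y_full = rk2_step(f, x, y, h)                    # one full step
        y_half = rk2_step(f, x, y, 0.5 * h)              # two half-steps
        y_two = rk2_step(f, x + 0.5 * h, y_half, 0.5 * h)
        err = abs(y_full - y_two) / (2 ** 2 - 1)         # Runge principle, n = 2
        tol = (eps * abs(y) + acc) * math.sqrt(h / (b - a))
        if err < tol:              # accept the step, keep the better estimate
            x += h
            y = y_two
        if err > 0:                # prescription for the next step-size
            h *= min((tol / err) ** 0.25 * 0.95, 2.0)    # cap the growth
        else:
            h *= 2.0
    return y
```

A rejected step is simply retried with the smaller #{h} produced by the same prescription, since #{tol/err < 1} shrinks the step.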
Problems