Monte Carlo Integration (Numerical Methods, Note 8)
Random sampling. Plain Monte Carlo. We sample N points randomly from the integration region and evaluate the average of the integrand. The integral is then estimated as the volume of the integration region times the average of the integrand, ∫ₐᵇ f(x) dx ≈ V⟨f⟩ = V (1/N) ∑ᵢ f(xᵢ), where V is the integration volume, V = b−a in one dimension. For a d-dimensional rectangular region {x[] : a[i] <= x[i] <= b[i], i=1:d} the volume is V = ∏(i=1:d) (b[i]−a[i]). According to the central limit theorem the error estimate is close to Vσ/√N, where σ is the standard deviation of the integrand, σ² = ⟨f²⟩ − ⟨f⟩². The variance σ² can be reduced by cleverer sampling.
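As a minimal illustration (a sketch, not part of the original note; the example and variable names are arbitrary), here is plain Monte Carlo in Python estimating π as the area of the unit disk, with V = 4 the volume of the enclosing square:

    import random, math

    # plain MC: I ≈ V<f>, err ≈ V*sigma/sqrt(N)
    # f(x,y) = 1 inside the unit disk, 0 outside; region [-1,1]x[-1,1], V = 4
    N = 100_000
    s = s2 = 0.0
    for _ in range(N):
        x = random.uniform(-1.0, 1.0)
        y = random.uniform(-1.0, 1.0)
        f = 1.0 if x*x + y*y < 1.0 else 0.0
        s += f; s2 += f*f
    avg = s/N
    sigma = math.sqrt(s2/N - avg*avg)      # sigma^2 = <f^2> - <f>^2
    V = 4.0
    I, err = V*avg, V*sigma/math.sqrt(N)
    print(I, "+/-", err)                   # ≈ 3.14 with err of order 0.005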
Importance sampling. Metropolis algorithm. Distribute the points with larger density in the regions which contribute most to the integral (that is, where the function is largest). Suppose the points are distributed with density ρ, that is, the number of points Δn in the interval Δx equals Δn = ρ(x)Δx; each point then occupies the interval Δxᵢ = 1/ρ(xᵢ). The estimate of the integral I becomes I ≈ ∑ᵢ fᵢ Δxᵢ = ∑ᵢ fᵢ/ρᵢ. The corresponding variance is now σ² = ⟨(f/ρ)²⟩ − ⟨f/ρ⟩². Evidently, if the ratio f/ρ is close to a constant, the variance is reduced. It would be best to take ρ ∝ |f| and sample directly from the integrand; in practice, however, the integrand is typically expensive to evaluate, so one instead builds an approximate density in a factorized form, ρ(x,y,...,z) = ρx(x)ρy(y)...ρz(z). A popular routine of this sort is called VEGAS.
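A minimal importance-sampling sketch in Python (illustrative only; this is not the VEGAS algorithm, and the integrand is chosen by us). The integral ∫₀¹ exp(−x)/√x dx has an integrable singularity at x=0 that plain sampling handles poorly; sampling from the normalized density p(x) = 1/(2√x), obtained via x = u² with u uniform, makes f/p = 2 exp(−x) nearly constant:

    import random, math

    def f(x): return math.exp(-x)/math.sqrt(x)
    def p(x): return 1.0/(2.0*math.sqrt(x))   # normalized density on (0,1]

    N = 100_000
    s = s2 = 0.0
    for _ in range(N):
        u = 1.0 - random.random()   # uniform in (0,1], avoids x = 0
        x = u*u                     # x is then distributed with density p
        w = f(x)/p(x)               # the weight f/rho; here rho = N*p, so I = (1/N) sum f/p
        s += w; s2 += w*w
    avg = s/N
    err = math.sqrt((s2/N - avg*avg)/N)
    print(avg, "+/-", err)          # exact value: sqrt(pi)*erf(1) ≈ 1.4936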
Stratified sampling. Distribute more points in the regions where the variance is largest.
function strata(f, a[], b[], n, acc) :
    throw n points uniformly; estimate the average and the error
    if err > acc then :
        for each dimension :
            subdivide the volume in two along this dimension
            estimate the sub-variances in the two sub-volumes
        pick the dimension with the largest sub-variance
        subdivide the volume in two along this dimension
        dispatch two recursive calls, one to each of the sub-volumes
        estimate the grand average and the grand variance
The standard MISER routine employs a somewhat different algorithm.
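A possible Python transcription of this recursion (a sketch following the pseudocode above; the actual MISER routine differs, and the tolerance-splitting rule acc/√2, which makes the two sub-errors add up in quadrature to acc, is one choice among several):

    import random, math

    def variance(fs):
        if len(fs) < 2: return 0.0
        m = sum(fs)/len(fs)
        return sum((fi - m)**2 for fi in fs)/len(fs)

    def strata(f, a, b, n=64, acc=1e-2):
        d = len(a)
        V = math.prod(b[i] - a[i] for i in range(d))
        xs = [[a[i] + random.random()*(b[i]-a[i]) for i in range(d)] for _ in range(n)]
        fs = [f(x) for x in xs]
        avg = sum(fs)/n
        I, err = V*avg, V*math.sqrt(variance(fs)/n)
        if err < acc:
            return I, err
        # for each dimension: split the volume in two and estimate the sub-variances
        kbest, vbest = 0, -1.0
        for k in range(d):
            mid = (a[k] + b[k])/2
            left  = [fi for x, fi in zip(xs, fs) if x[k] <  mid]
            right = [fi for x, fi in zip(xs, fs) if x[k] >= mid]
            v = variance(left) + variance(right)
            if v > vbest: kbest, vbest = k, v
        # subdivide along the chosen dimension and dispatch two recursive calls
        mid = (a[kbest] + b[kbest])/2
        b1, a2 = list(b), list(a)
        b1[kbest] = a2[kbest] = mid
        I1, e1 = strata(f, a, b1, n, acc/math.sqrt(2))
        I2, e2 = strata(f, a2, b, n, acc/math.sqrt(2))
        return I1 + I2, math.sqrt(e1*e1 + e2*e2)

    # example: ∫∫ exp(-x²-y²) over [0,1]², exact ≈ 0.5577
    print(strata(lambda x: math.exp(-sum(t*t for t in x)), [0.0, 0.0], [1.0, 1.0]))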
Quasi-random (low-discrepancy) sampling. Lattices. Distribute the points "more uniformly" than in random sampling, thus (hopefully) reducing the error. One of the quasi-random methods is lattices, which can be considered as a grid with non-orthogonal axes. Let αk, k=1:d, be a set of (wisely chosen) irrational numbers, e.g. square roots of prime numbers. Then the i-th point is generated as xᵢ = {frac(i·α1), ..., frac(i·αd)}, where frac(z) denotes the fractional part of z. The problem with this method is that high-precision arithmetic is needed to generate a large number of quasi-random points: for large i the fractional part of i·αk loses significant digits in fixed-precision arithmetic.
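A minimal lattice sketch in Python (the choice of the first d prime square-roots is just one possibility; double precision limits how far the sequence stays reliable, per the caveat above):

    import math

    def lattice_points(N, d):
        primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:d]   # assumes d <= 10
        alphas = [math.sqrt(p) for p in primes]
        for i in range(1, N + 1):
            yield [math.fmod(i*a, 1.0) for a in alphas]     # frac(i*alpha_k)

    def qmc(f, d, N):
        # quasi-random estimate over the unit cube (V = 1): I ≈ <f>
        return sum(f(x) for x in lattice_points(N, d))/N

    print(qmc(lambda x: x[0]*x[1], 2, 10_000))   # exact value: 1/4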
Problems
function plainmc(d, a[], b[], f, N) returns [I, err] :
    sum = 0 ; sum2 = 0
    for k = 1..N :
        for i = 1..d : x[i] = a[i] + random()*(b[i]-a[i])
        fx = f(x[]) ; sum += fx ; sum2 += fx^2
    average = sum/N ; sigma = sqrt( sum2/N - average^2 )
    vol = product( b[i]-a[i], i=1..d )
    I = vol*average ; err = vol*sigma/sqrt(N)
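A runnable Python version of this pseudocode might look as follows (a sketch; the test integral is chosen by us):

    import random, math

    def plainmc(d, a, b, f, N):
        vol = math.prod(b[i] - a[i] for i in range(d))
        s = s2 = 0.0
        for _ in range(N):
            x = [a[i] + random.random()*(b[i] - a[i]) for i in range(d)]
            fx = f(x)                # evaluate the integrand only once per point
            s += fx; s2 += fx*fx
        avg = s/N
        sigma = math.sqrt(s2/N - avg*avg)         # sigma^2 = <f^2> - <f>^2
        return vol*avg, vol*sigma/math.sqrt(N)    # I = V<f>, err = V*sigma/sqrt(N)

    # example: ∫∫ sin(x)sin(y) over [0,pi]², exact value 4
    I, err = plainmc(2, [0.0, 0.0], [math.pi, math.pi],
                     lambda x: math.sin(x[0])*math.sin(x[1]), 100_000)
    print(I, "+/-", err)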