Richardson Extrapolation and Midpoint Rule

The Richardson extrapolation is a simple yet extremely useful idea. Suppose we compute some quantity {Q} (such as an integral) where the error of computation depends on a positive parameter {h} (such as step size). Let’s write {Q=A(h)+E(h)} where {A(h)} is the result of computation and {E(h)} is the error. Often, one can observe either numerically or theoretically that {E(h)} is proportional to some power of {h}, say {h^k}.

Since {E(h/2)\approx E(h)/2^k}, the quantity {A(h)-2^k A(h/2)} is an approximation to {(1-2^k)Q} in which the main source of error is cancelled out. Thus, writing

\displaystyle Q\approx \frac{2^kA(h/2)-A(h)}{2^k-1}

should give us a much better approximation than either {A(h)} or {A(h/2)} alone.

It is not quite intuitive that one can improve the accuracy of {A(h/2)} by mixing it with a less accurate approximation {A(h)}.
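For instance, here is a minimal Python sketch of the recipe (the names richardson and forward_diff are mine, not from the post): it cancels the leading error term of any computation {A(h)}, illustrated on a forward-difference derivative, where {k=1}.

```python
# A minimal sketch of Richardson extrapolation (names are my own).
# A(h) is any computation whose error behaves like C*h**k for small h.
import math

def richardson(A, h, k):
    """Combine A(h) and A(h/2) so that the leading O(h**k) error term cancels."""
    return (2**k * A(h / 2) - A(h)) / (2**k - 1)

# Example: derivative of exp at 0 by a forward difference, whose error is O(h), so k = 1.
def forward_diff(h):
    return (math.exp(h) - math.exp(0)) / h

print(forward_diff(0.1))                 # ~1.0517, error ~5e-2
print(richardson(forward_diff, 0.1, 1))  # ~0.9991, error ~9e-4
```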


For a concrete example, let’s apply the trapezoidal rule to the integral {\int_{-1}^1 e^x\,dx=2\sinh 1} with step size {h=2}. The rule gives

\displaystyle A(2) = \frac{2}{2}(f(-1)+f(1)) = 2\cosh 1

which is quite wrong: the error is {2e^{-1}\approx 0.74}. Little surprise, considering the geometry of this approximation:

[Figure: Trapezoidal approximation to exp(x)]

With the smaller step we get

\displaystyle A(1) = \frac{1}{2}(f(-1)+2f(0)+f(1)) = 1+\cosh 1

with the error {\approx 0.19}. The rule being of second order, this error is about {4} times smaller.

[Figure: Half-step trapezoidal approximation]

The Richardson extrapolation gives

\displaystyle \frac{4A(1)-A(2)}{4-1} = \frac{f(-1)+4f(0)+f(1)}{3} = \frac{4+2\cosh 1}{3}

reducing the error by a factor of {16}: it is now only {0.012}. And this did not require any additional evaluations of the integrand.

The formula {\frac{f(-1)+4f(0)+f(1)}{3}} may look familiar: it’s nothing but Simpson’s rule. But the above derivation is much easier than what is typically shown in a calculus textbook: fitting parabolas to the graph of {f} and integrating those.
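For readers who want to verify the numbers, here is a small Python check of the trapezoidal example (the variable names are mine):

```python
# Check the trapezoidal example: target is int_{-1}^{1} e^x dx = 2 sinh 1.
import math

exact = 2 * math.sinh(1)
f = math.exp

A2 = f(-1) + f(1)                        # trapezoidal rule, h = 2 : 2 cosh 1
A1 = 0.5 * (f(-1) + 2 * f(0) + f(1))     # trapezoidal rule, h = 1 : 1 + cosh 1
simpson = (4 * A1 - A2) / 3              # Richardson extrapolation = Simpson's rule

print(abs(A2 - exact))       # ~0.736
print(abs(A1 - exact))       # ~0.193
print(abs(simpson - exact))  # ~0.0117
```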


All this is nice. But what if we started with the Midpoint rule? Let’s find out, following the example above. With {h=2},

\displaystyle A(2) = 2f(0) = 2

which is off by {0.35} (the Midpoint rule is generally twice as accurate as the Trapezoidal rule). Then

\displaystyle A(1) = f(-1/2)+f(1/2) = 2\cosh(1/2)

with error {\approx 0.1}. Extrapolate:

\displaystyle \frac{4A(1)-A(2)}{4-1} = \frac{4f(-1/2)-2f(0)+4f(1/2)}{3} = \frac{8\cosh(1/2) - 2}{3}

and the error drops to {0.010}, slightly less than for Simpson’s rule.
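The same check for the midpoint version, again with my own variable names:

```python
# Check the midpoint example and its extrapolation.
import math

exact = 2 * math.sinh(1)
f = math.exp

M2 = 2 * f(0)              # midpoint rule, h = 2
M1 = f(-0.5) + f(0.5)      # midpoint rule, h = 1 : 2 cosh(1/2)
em = (4 * M1 - M2) / 3     # extrapolated midpoint rule

print(abs(M2 - exact))   # ~0.350
print(abs(M1 - exact))   # ~0.095
print(abs(em - exact))   # ~0.010
```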

So, the extrapolated-midpoint (EM) rule appears to be slightly more accurate than Simpson’s, with an equal number of evaluations. Why isn’t it in textbooks alongside Simpson’s rule, then?


A few factors make the EM rule impractical.

1. The coefficient of {f(0)} is negative. This means that the rule can give a negative result when integrating a positive function! For example, {\int_{-1}^1 \exp(-9x^2)\,dx} evaluates to {-0.39} with the EM rule (see the numerical check after this list). Simpson’s rule, like pretty much any practical quadrature rule, preserves positivity.

2. The sample points for EM are less convenient than for Simpson’s rule.

3. The gain in accuracy isn’t that great. The reason that Midpoint outperforms Trapezoidal by a factor of {2} has to do with how these rules approximate {\int_{-1}^1 x^2\,dx=\frac23}. Midpoint gives {0}, Trapezoidal gives {2}; the former is twice as accurate. To compare Simpson’s and EM rules, we should consider {\int_{-1}^1 x^4\,dx=\frac25} since both are of the {4}th order of accuracy: they evaluate cubic polynomials exactly. The results are {1/6} for EM and {2/3} for Simpson’s: these are nearly equidistant from the correct value {2/5}. The difference in accuracy is less than {15\%}.
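A short numerical check of points 1 and 3 (the helper names em and simpson are mine):

```python
# Verify the negativity example and the x^4 comparison on [-1, 1].
import math

def em(f):
    """Extrapolated-midpoint (Milne) rule on [-1, 1]."""
    return (4 * f(-0.5) - 2 * f(0) + 4 * f(0.5)) / 3

def simpson(f):
    """Simpson's rule on [-1, 1]."""
    return (f(-1) + 4 * f(0) + f(1)) / 3

# Point 1: a positive integrand can produce a negative EM value.
print(em(lambda x: math.exp(-9 * x * x)))   # ~ -0.39

# Point 3: both rules miss int_{-1}^{1} x^4 dx = 2/5 by a similar amount.
print(em(lambda x: x**4) - 0.4)             # 1/6 - 2/5 = -7/30 ~ -0.233
print(simpson(lambda x: x**4) - 0.4)        # 2/3 - 2/5 =  4/15 ~  0.267
```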


Despite being impractical, the EM rule has an insight to offer. By moving two evaluation points from {\pm 1} to {\pm 1/2}, we made the coefficient of {f(0)} go from positive {4/3} to negative {-2/3}. (The coefficients are determined by the requirement to evaluate {\int_{-1}^1 x^2\,dx} exactly.) So, there must be an intermediate position at which the coefficient of {f(0)} is zero, and we don’t need to compute {f(0)} at all. This idea leads to the two-point Gaussian quadrature

\displaystyle \int_{-1}^1 f(x)\,dx \approx f(-1/\sqrt{3})+f(1/\sqrt{3})

which on the example {\int_{-1}^1 e^x\,dx} beats both EM and Simpson’s rules in accuracy, while using fewer function evaluations.
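A quick check of this claim (variable names are mine):

```python
# Two-point Gauss-Legendre quadrature on [-1, 1] applied to the running example.
import math

exact = 2 * math.sinh(1)
f = math.exp

gauss2 = f(-1 / math.sqrt(3)) + f(1 / math.sqrt(3))  # nodes +-1/sqrt(3), weights 1
print(abs(gauss2 - exact))   # ~0.0077, smaller than Simpson's ~0.0117 or EM's ~0.010,
                             # with only two evaluations of the integrand
```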


After writing the above, I found that the Extrapolated Midpoint rule has a name: it is Milne’s rule, a member of the family of open Newton-Cotes formulas.
