Adaptive algorithms

If f(x) does not have many derivatives at all points, or if the derivatives become large, then Gaussian quadrature is often insufficient, and an adaptive algorithm, which subdivides the integration interval more finely where the integrand is badly behaved, performs better. Several details of such an algorithm require careful thought. In many cases it is not obvious how to estimate the quadrature error of a function f(x) over an interval. One popular solution is to apply two different quadrature rules to the interval and use their difference as an estimate of the quadrature error.
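
As a minimal sketch of this idea (a hypothetical helper; the trapezoidal and Simpson rules stand in for any two rules of different order), the difference between the two results serves as the error estimate:

    def estimate_error(f, a, b):
        # Apply two quadrature rules of different order to [a, b] and
        # use their difference as an estimate of the quadrature error.
        h = b - a
        m = 0.5 * (a + b)
        trapezoid = 0.5 * h * (f(a) + f(b))             # 2-point rule
        simpson = h / 6.0 * (f(a) + 4.0 * f(m) + f(b))  # 3-point rule
        return simpson, abs(simpson - trapezoid)

For f(x) = exp(x) on [0, 1], this returns the value 1.71886... with error estimate about 0.140, which closely tracks the true trapezoidal error (about 0.141) while conservatively bounding the much smaller Simpson error.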

The other problem is deciding what "too large" and "very small" mean, that is, when an interval's error forces a subdivision and when an interval is too small to be worth subdividing. A local criterion for "too large" is that the quadrature error over an interval of width h should not be larger than t · h, where t, a real number, is the tolerance we wish to set for the global error.

On the other hand, if h is already tiny, it may not be worthwhile to make it even smaller, even if the quadrature error is apparently large. A global criterion is that the sum of the errors on all the intervals should be less than t. This type of error analysis is usually called "a posteriori", since we compute the error after having computed the approximation.
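
A sketch combining these criteria, assuming the estimate_error helper above (the h_min guard implements the "already tiny" cutoff, and the accumulated error sum is what the global criterion compares against t):

    def adaptive_quad(f, a, b, t, h_min=1e-10):
        # Accept the interval when the estimated error satisfies the
        # local criterion (error <= t * h), or when the interval is
        # already too small to be worth splitting further.
        h = b - a
        value, err = estimate_error(f, a, b)
        if err <= t * h or h <= h_min:
            return value, err
        m = 0.5 * (a + b)
        left_value, left_err = adaptive_quad(f, a, m, t, h_min)
        right_value, right_err = adaptive_quad(f, m, b, t, h_min)
        # The summed per-interval estimates form the a posteriori global
        # error estimate, to be compared against the tolerance t.
        return left_value + right_value, left_err + right_err

Since each accepted interval contributes at most t · h, the accumulated estimate is at most t · (b − a), so the local criterion implies the global one up to the length of the original interval.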

Extrapolation methods

The accuracy of a quadrature rule of the Newton-Cotes type is generally a function of the number of evaluation points: the result usually becomes more accurate as the number of evaluation points increases, or, equivalently, as the step size between the points decreases. It is natural to ask what the result would be if the step size were allowed to approach zero.

This can be answered by extrapolating the result from two or more nonzero step sizes, using series acceleration methods such as Richardson extrapolation. The extrapolation function may be a polynomial or rational function. Extrapolation methods are described in more detail by Stoer and Bulirsch (Section 3.4) and are implemented in many of the routines in the QUADPACK library.
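
As an illustrative sketch (the classic Romberg scheme, which applies Richardson extrapolation with a polynomial in h^2 to the composite trapezoidal rule; the QUADPACK routines use more elaborate variants):

    def romberg(f, a, b, levels=5):
        # R[i][0] is the composite trapezoidal rule with 2**i subintervals;
        # each further column extrapolates the step size toward zero by
        # cancelling one more h**2-power term of the error expansion.
        R = [[0.0] * levels for _ in range(levels)]
        h = b - a
        R[0][0] = 0.5 * h * (f(a) + f(b))
        for i in range(1, levels):
            h *= 0.5
            # Halve the step: reuse the previous sum, add the new midpoints.
            mids = sum(f(a + (2 * k + 1) * h) for k in range(2 ** (i - 1)))
            R[i][0] = 0.5 * R[i - 1][0] + h * mids
            for j in range(1, i + 1):
                # Richardson step: cancel the O(h**(2*j)) error term.
                R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
        return R[levels - 1][levels - 1]

For f(x) = exp(x) on [0, 1], five levels (17 function evaluations) reproduce e − 1 essentially to machine precision, whereas the trapezoidal rule alone with the same points is accurate to only three or four digits.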