mmfutils.math.integrate

Integration Utilities.

mmfutils.math.integrate.Richardson(f, ps=None, l=2, n0=1)[source]

Compute the Richardson extrapolation of $f$ given the function

\[f(N) = f + \sum_{n=0}^{\infty} \frac{\alpha_n}{N^{p_n}}\]

The extrapolants are stored in the array S[n, s], where S[n, 0] = f(n0*l**n) and S[n, s] is the s'th extrapolant.
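
A minimal sketch (not the library's implementation) of how such a table can be filled, assuming the default powers are $p_n = n + 1$ (consistent with the first example below); each successive column cancels one more error term:

def richardson_table(f, l=2, n0=1, rows=6):
    """Triangular table S with S[n][s] the s'th extrapolant (sketch only)."""
    S = []
    for n in range(rows):
        row = [f(n0 * l**n)]                  # S[n][0] = f(n0*l**n)
        for s in range(1, n + 1):
            # Cancel the alpha_{s-1}/N**s error term (powers 1, 2, 3, ...).
            w = float(l)**s
            row.append((w * row[s - 1] - S[n - 1][s - 1]) / (w - 1.0))
        S.append(row)
    return S

With f(N) = sum_{k=1}^{N} k**(-2), the diagonal entry richardson_table(f, l=3, n0=2, rows=9)[-1][-1] should reproduce the pi**2/6 result of the first doctest below.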

Note

It is crucial for performance that the powers $p_n$ be properly characterized. If you do not know the form of the errors, then consider using a non-linear acceleration technique such as levin_sum().

Parameters

ps (iterable) – Iterable returning the powers $p_n$. To generate the sequence $p_0 + m d_p$, for example, use itertools.count(p0, dp).
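
For instance, for a hypothetical problem whose errors fall off as $N^{-2}, N^{-4}, N^{-6}, \dots$ one would pass

import itertools

r = Richardson(f, ps=itertools.count(2, 2))  # generates p_n = 2, 4, 6, ...

where f is the function being extrapolated.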

Examples

Here we consider

\[f(N) = \sum_{n=1}^{N} \frac{1}{n^2} = \frac{\pi^2}{6} + \order(N^{-1})\]
>>> def f(N): return sum(np.arange(1, N+1, dtype=float)**(-2))
>>> r = Richardson(f, l=3, n0=2)
>>> for n in range(9):
...     x = next(r)
>>> err = abs(x - np.pi**2/6.0)
>>> assert err < 1e-14, 'err'

Now some other examples with different powers $p_n$:

\[f(N) = \sum_{n=1}^{N} \frac{1}{n^4} = \frac{\pi^4}{90} + \order(N^{-3})\]
>>> def f(N): return sum(np.arange(1, N+1, dtype=float)**(-4))
>>> r = Richardson(f, ps=itertools.count(3,1))
>>> for n in range(8):
...     x = next(r)
>>> err = abs(x - np.pi**4/90.0)
>>> assert err < 1e-14, 'err'
\[f(N) = \sum_{n=1}^{N} \frac{1}{n^6} = \frac{\pi^6}{945} + \order(N^{-5})\]
>>> def f(N): return sum(np.arange(1, N+1, dtype=float)**(-6))
>>> r = Richardson(f, ps=itertools.count(5))
>>> for n in range(7):
...     x = next(r)
>>> err = abs(x - np.pi**6/945.0)
>>> assert err < 1e-14, 'err'

Richardson works with array-valued functions:

>>> def f(N): return np.array([sum(np.arange(1, N+1, dtype=float)**(-2)),
...                            sum(np.arange(1, N+1, dtype=float)**(-4))])
>>> r = Richardson(f, l=3, n0=2)
>>> for n in range(7):
...     x = next(r)
>>> err = abs(x - np.array([np.pi**2/6.0, np.pi**4/90.0])).max()
>>> assert err < 1e-13, 'err'

It also works for complex-valued functions:

>>> def f(N): return (sum(np.arange(1, N+1, dtype=float)**(-2)) +
...                       1j*sum(np.arange(1, N+1, dtype=float)**(-4)))
>>> r = Richardson(f, l=3, n0=2)
>>> for n in range(7):
...     x = next(r)
>>> err = abs(x - (np.pi**2/6.0 + 1j*np.pi**4/90.0))
>>> assert err < 1e-13, 'err'
mmfutils.math.integrate.mquad(f, a, b, abs_tol=1e-12, verbosity=0, fa=None, fb=None, save_fx=False, res_dict=None, max_fcnt=10000, min_step_size=None, norm=<function <lambda>>, points=None)[source]

Return (res, err) where res is the numerically evaluated integral using adaptive Simpson quadrature.

mquad tries to approximate the integral of the function f from a to b to within an error of abs_tol using recursive adaptive Simpson quadrature. mquad allows the function y = f(x) to be array-valued. In the matrix-valued case, the infinity norm of the matrix is used as its “absolute value”.
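
For orientation, here is a minimal sketch of scalar recursive adaptive Simpson quadrature in the spirit of the Gander–Gautschi reference cited in the Notes; it is not mquad itself, which adds array-valued support (via norm), NaN checking, function-count limits, and special points:

def _simpson(f, a, b):
    c = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

def adaptive_simpson(f, a, b, abs_tol=1e-12):
    c = 0.5 * (a + b)
    whole = _simpson(f, a, b)
    left, right = _simpson(f, a, c), _simpson(f, c, b)
    # The coarse/refined difference is roughly 15 times the error of the
    # Richardson-corrected estimate returned below.
    if abs(left + right - whole) < 15.0 * abs_tol:
        return left + right + (left + right - whole) / 15.0
    # Otherwise bisect, splitting the error budget between the halves.
    return (adaptive_simpson(f, a, c, 0.5 * abs_tol)
            + adaptive_simpson(f, c, b, 0.5 * abs_tol))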

Parameters
  • f (function) – Possibly array-valued function to integrate. If it returns a NaN, an AssertionError is raised; the check is an assert so that the user can optimize it away (it lives in the core of the loops).

  • a (float) – Lower limit of the integration range (a, b).

  • b (float) – Upper limit of the integration range (a, b).

  • fa (float) – f(a), if already computed.

  • fb (float) – f(b), if already computed.

  • abs_tol (float) – Approximate absolute tolerance on integral

  • verbosity (int) – Display info if greater than zero. Shows the values of [fcnt a b-a Q] during the iteration.

  • save_fx (bool) – If True, then save the abscissa and function values in res_dict.

  • res_dict (dict) –

    Details are stored here. Pass a dictionary to access these. The dictionary will be modified.

    fcnt : Number of function evaluations.
    xy : List of pairs (x, f(x)) if save_fx is True.

  • max_fcnt (int) – Maximum number of function evaluations.

  • min_step_size (float) – Minimum step size to limit recursion.

  • norm (function) – Norm to use to determine convergence. The absolute error is determined as norm(f(x) - F).

  • points ([float]) – List of special points to be included in the abscissa (see the sketch after this list).
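
A hypothetical illustration of the norm and points hooks (the integrand g and the interval are invented for this sketch): integrate an array-valued function, measure convergence with the infinity norm described above, and force the kink of |x| at x = 0 into the abscissa:

import numpy as np
from mmfutils.math.integrate import mquad

def g(x):
    return np.array([abs(x), x**2])

res = mquad(g, -1.0, 2.0,
            norm=lambda dx: np.abs(dx).max(),  # infinity norm over components
            points=[0.0])                      # include the kink of |x|
# Exact values on [-1, 2]: the integral of |x| is 2.5 and of x**2 is 3.0.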

Notes

Based on “adaptsim” by Walter Gander. Ref: W. Gander and W. Gautschi, “Adaptive Quadrature Revisited”, 1998. http://www.inf.ethz.ch/personal/gander

Examples

Orthogonality of plane waves on $[0, 2\pi]$:

>>> def f(x):
...     v = np.exp(1j*np.array([[1.0, 2.0, 3.0]])*x)
...     return v.T.conj()*v/2.0/np.pi
>>> ans = mquad(f, 0, 2*np.pi)
>>> abs(ans - np.eye(ans.shape[0])).max() < _ABS_TOL
True
>>> res_dict = {}
>>> def f(x): return x**2
>>> ans = mquad(f, -2, 1, res_dict=res_dict, save_fx=True)
>>> abs(ans - 3.0) < _ABS_TOL
True
>>> x = np.array([xy[0] for xy in res_dict['xy']])
>>> y = np.array([xy[1] for xy in res_dict['xy']])
>>> abs(y - f(x)).max()
0.0

This works, but triggers a warning because of the singular endpoints.

>>> logger = logging.getLogger()
>>> logger.disabled = True
>>> def f(x): return 1.0/np.sqrt(x) + 1.0/np.sqrt(1.0-x)
>>> abs(mquad(f, 0, 1, abs_tol=1e-8) - 4.0) < 1e-8
True
>>> logger.disabled = False

>>> def f(x):
...     if x < 0:
...         return 0.0
...     else:
...         return 1.0
>>> abs(mquad(f, -2.0, 1.0) - 1.0) < 1e-10
True
>>> def f(x): return 1./x
>>> mquad(f, 1, np.inf)
Traceback (most recent call last):
    ...
ValueError: Infinite endpoints not supported.
mmfutils.math.integrate.quad(f, a, b, epsabs=1e-12, epsrel=1e-08, limit=1000, points=None, **kwargs)[source]

An improved version of scipy.integrate.quad that does some argument checking and deals with points properly.

Return (ans, err).
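
As a plausible sketch only (not necessarily how this function is implemented), one way a wrapper can combine points with an infinite upper limit, which plain scipy.integrate.quad rejects, is to split the range at the largest special point and pass the break points only to the finite piece; quad_with_points is a hypothetical name:

import numpy as np
from scipy import integrate

def quad_with_points(f, a, b, points=None, **kw):
    points = [] if points is None else list(points)
    if points and np.isinf(b):
        split = max(points)
        inner = [p for p in points if a < p < split]  # interior break points
        ans1, err1 = integrate.quad(f, a, split, points=inner or None, **kw)
        ans2, err2 = integrate.quad(f, split, b, **kw)
        return ans1 + ans2, err1 + err2
    return integrate.quad(f, a, b, points=points or None, **kw)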

Examples

>>> def f(x): return 1./x**2
>>> (ans, err) = quad(f, 1, np.inf, points=[])
>>> abs(ans - 1.0) < err
True
>>> (ans, err) = quad(f, 1, np.inf, points=[3.0, 2.0])
>>> abs(ans - 1.0) < err
True
mmfutils.math.integrate.rsum(f, N0=0, ps=None, l=2, abs_tol=1e-12, rel_tol=1e-08, verbosity=0)[source]

Sum f using Richardson extrapolation.
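
Roughly, this automates the pattern of feeding the partial sums to Richardson(); a hand-rolled sketch of that pattern (not rsum's implementation, and without its tolerance-based stopping and error estimate) looks like:

import numpy as np
from mmfutils.math.integrate import Richardson

def f(n):
    return 1.0 / (n + 1)**2

def partial_sum(N):
    # F(N) = sum_{n=0}^{N-1} f(n); the truncation error falls off like 1/N.
    return f(np.arange(N)).sum()

r = Richardson(partial_sum)
for _ in range(10):
    x = next(r)
# x should now be close to pi**2/6, far closer than the raw partial sums.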

Examples

>>> def f(n):
...     return 1./(n+1)**2
>>> res, err = rsum(f)
>>> res
1.6449340668...
>>> abs(res - np.pi**2/6.0) < err
True