Gauss–Hermite quadrature points and weights: numerical accuracy and stability

integration, numerical-methods, quadrature

I'm trying to implement numerical integration using Gauss–Hermite quadrature, parametrized by the number of points used.

The recurrence relation makes it easy to find the polynomial coefficients, and the Aberth method should give me the required roots without too much of a headache. However, Wikipedia offers an expression for the weights that makes use of factorials and exponentially scaling terms:

$w_i = \frac {2^{n-1} n! \sqrt{\pi}} {n^2[H_{n-1}(x_i)]^2}$

Granted, the operations are multiplicative, so the loss of accuracy should be low, but I'm still concerned that intermediate values might hit IEEE positive infinity, and I'm still in doubt about the numerical accuracy of the formula.

I would be grateful for

  1. an estimate of the largest $n$ for which intermediate values of the
    formula don't overflow to positive infinity in the 64-bit IEEE
    floating-point format
  2. a suggestion of formulas suitable for larger $n$
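(For a rough feel of where the numerator alone overflows: $N(n) = 2^{n-1} n! \sqrt{\pi}$ satisfies $N(n) = 2n\,N(n-1)$, so it can be probed directly, as in the sketch below. This checks only the numerator; the denominator $[H_{n-1}(x_i)]^2$ grows too, so the practical limit depends on the order of evaluation.)

```python
import math

def largest_finite_n():
    """Largest n for which the numerator N(n) = 2**(n-1) * n! * sqrt(pi)
    stays finite in float64, via the recurrence N(n) = 2*n * N(n-1)."""
    t = math.sqrt(math.pi)  # N(1) = 2**0 * 1! * sqrt(pi)
    n = 1
    while True:
        nxt = t * 2.0 * (n + 1)  # N(n+1) = 2*(n+1) * N(n)
        if not math.isfinite(nxt):
            return n
        t = nxt
        n += 1
```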

There is also a question about the quality of the quadrature points themselves, since at larger $n$ I'll get polynomials where I subtract terms with huge absolute values, so finding accurate roots might be a problem as well.

I would be grateful for

  1. an estimate of the highest $n$ at which common methods for finding
    Hermite polynomial rules become numerically unreliable
  2. a suggestion of a better way of finding quadrature points for very
    high $n$.

I'm aiming at a number of quadrature points around a few thousand, preferably with points and weights calculated from first principles, without asymptotic formulas. The integrands are ratios of two polynomials weighted by a Gaussian, i.e.

$$f(x) = \frac{P(x)}{Q(x)} e^{-x^2},$$

The degree of $P$ is expected to be at most one hundred (zero included) and that of $Q$ at most ten (zero included). Also, $Q(x) > 1$ for real $x$.

Best Answer

Question 1 can be settled by extending the representational range of floating point arithmetic through explicit manipulation of the exponent and the significand. Specifically, if the normalized floating point representations of $x$ and $y$ are $$x = f\times2^m \quad\text{and}\quad y=g\times2^n,$$ then $$xy=(fg)\times2^{m+n},$$ and if $m\leq n$, then $$x+y = (f \times 2^{m-n} + g) \times 2^n.$$ You may have to normalize the new significand and adjust the new exponent after each operation, i.e. if $2 \leq fg < 4$, then you write $$xy = [(fg)\times2^{-1}]\times2^{m+n+1},$$ and similarly for addition. This shows that you can perform any finite sequence of elementary arithmetic operations without fear of overflow, which also takes care of Question 2. Programming languages such as MATLAB ([f,m] = log2(x)) and C (frexp in math.h) allow you to extract the significand and the exponent directly.
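Python's math.frexp and math.ldexp expose the same decomposition. Here is a minimal sketch of such an extended-range representation, storing a pair $(f, e)$ with value $f \times 2^e$ and $f \in [0.5, 1)$ (the function names are mine, purely for illustration):

```python
import math

# Extended-range number: value = f * 2**e, with f kept in [0.5, 1)
# by math.frexp, so e may exceed the float64 exponent range.
def xr(x):
    """Pack an ordinary float into (significand, exponent)."""
    return math.frexp(x)

def xr_mul(a, b):
    f, e = math.frexp(a[0] * b[0])   # renormalize the significand
    return f, e + a[1] + b[1]

def xr_add(a, b):
    if a[1] < b[1]:
        a, b = b, a                  # make a the larger-exponent operand
    # Shift the smaller operand; a large exponent gap just underflows to 0.
    f, e = math.frexp(a[0] + math.ldexp(b[0], b[1] - a[1]))
    return f, e + a[1]

def xr_value(a):
    """Unpack to an ordinary float (may overflow if e is huge)."""
    return math.ldexp(a[0], a[1])
```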

The only difficulty in applying the formula $$w_i = \frac {2^{n-1} n! \sqrt{\pi}} {n^2[H_{n-1}(x_i)]^2}$$ is the conditioning of the polynomial $H_{n-1}$. The relative condition number of the map $x \rightarrow f(x)$ is $\kappa_f(x) = \left|\frac{xf'(x)}{f(x)}\right|$. You cannot expect to compute $f(x)$ with a relative error smaller than $\kappa_f(x)$ times the relative error on $x$. I cannot give you an upper bound on the largest $n$ for which $w_i$ can be computed accurately. However, by computing the condition number of $H_{n-1}$ at the point $x_i$ as well as tracking the relative error of $x_i$, you can estimate the relative error on $w_i$ very accurately.
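A minimal sketch of this check (my own illustration, not part of the answer), using the physicists' recurrence $H_{k+1}(x) = 2xH_k(x) - 2kH_{k-1}(x)$ and $H_n'(x) = 2nH_{n-1}(x)$ in plain float64, so it is only meaningful for modest $n$:

```python
def hermite_and_deriv(n, x):
    """H_n(x) and H_n'(x) via H_{k+1} = 2x H_k - 2k H_{k-1}
    and H_n' = 2n H_{n-1} (plain float64: modest n only)."""
    if n == 0:
        return 1.0, 0.0
    h_prev, h = 1.0, 2.0 * x  # H_0, H_1
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h, 2.0 * n * h_prev

def relative_condition(n, x):
    """kappa(x) = |x H_n'(x) / H_n(x)|."""
    h, dh = hermite_and_deriv(n, x)
    return abs(x * dh / h)
```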

Finding the quadrature points amounts to finding the roots of Hermite polynomials. This can be done reliably using bisection, provided you compute a running error bound so that you can decide whether the computed sign can be trusted. This is a standard technique which is discussed in this answer to a related question.
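A bare-bones version of this idea (a sketch only: it evaluates the sign in plain float64 and omits the running error bound, which is the part that makes the method trustworthy at large $n$):

```python
def hermite_sign(n, x):
    """Sign of H_n(x) via the three-term recurrence (no error bound)."""
    if n == 0:
        return 1
    h_prev, h = 1.0, 2.0 * x
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return (h > 0) - (h < 0)

def bisect_root(n, lo, hi, tol=1e-14):
    """Root of H_n in [lo, hi]; assumes H_n changes sign on the bracket."""
    s_lo = hermite_sign(n, lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if hermite_sign(n, mid) == s_lo:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```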

The results produced by this website suggest that for each $n$, many weights $w_i$ will be negligible compared with the largest weights.


I cannot emphasize this enough: high order does not imply high accuracy. In practice you will be much better off using a low-order method and adaptive quadrature. You will get an accurate result and a reliable error estimate in less time than with a high-order method that does not necessarily suit your integrand.
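For instance, a plain adaptive Simpson sketch (names and tolerances are illustrative; a production version would add a recursion-depth limit):

```python
import math

def adaptive_simpson(f, a, b, eps=1e-10):
    """Recursive adaptive Simpson quadrature with Richardson correction."""
    def recurse(a, b, fa, fm, fb, whole, eps):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        h = b - a
        left = h * (fa + 4.0 * flm + fm) / 12.0   # Simpson on [a, m]
        right = h * (fm + 4.0 * frm + fb) / 12.0  # Simpson on [m, b]
        if abs(left + right - whole) < 15.0 * eps:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, 0.5 * eps) +
                recurse(m, b, fm, frm, fb, right, 0.5 * eps))
    fa, fm, fb = f(a), f(0.5 * (a + b)), f(b)
    whole = (b - a) * (fa + 4.0 * fm + fb) / 6.0
    return recurse(a, b, fa, fm, fb, whole, eps)
```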