Bivariate Normal Distribution – Estimating the Cumulative Probability

Tags: bivariate, cumulative distribution function, distributions, numerical integration, pearson-r

I have a quick question about computing probabilities for a bivariate normal distribution. To my knowledge, there is no closed form for the cumulative distribution function of the bivariate normal distribution (Botev, 2016), so instead we must numerically integrate its probability density function (I think?).

I am referring to this thing in particular: https://mathworld.wolfram.com/BivariateNormalDistribution.html

So how is this done? With the trapezoid rule or Simpson's rule? Or would that be the wrong approach, and a more accurate method is called for?
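For what it's worth, library routines usually don't use a naive trapezoid/Simpson grid over the density; for example, SciPy exposes a bivariate (more generally, multivariate) normal CDF directly through `scipy.stats.multivariate_normal`, which uses a specialized quadrature routine internally. A minimal sketch (the parameter values here are just an illustration):

```python
import math
from scipy.stats import multivariate_normal

# Standard bivariate normal with correlation rho = 0.5.
rho = 0.5
mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

# P(X < 0, Y < 0). For standard margins this has the known closed value
# 1/4 + arcsin(rho) / (2*pi), which equals exactly 1/3 when rho = 0.5,
# so it makes a convenient sanity check.
p = mvn.cdf([0.0, 0.0])
print(p, 0.25 + math.asin(rho) / (2 * math.pi))  # both ≈ 0.3333
```

(The orthant probability formula $\mathbb P(X<0,Y<0)=\tfrac14+\tfrac{\arcsin\rho}{2\pi}$ holds only for zero means; it is used here purely as a check on the numerical result.)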

EDIT: a user pointed out a paper by Drezner & Wesolowsky (2010) which discusses numerical methods that reduce the bivariate integral to a single univariate integral. This user also pointed out to me that the c.d.f. of this kind of distribution can be written as

$$ \mathbb P(X<x,Y<y)=\int_{-\infty}^x\mathbb P(Y<y\mid X=t)\,\varphi(t;\mu_X,\sigma_X)\,\mathrm dt $$
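This one-dimensional formula can be implemented directly with adaptive quadrature, using the standard fact that $Y\mid X=t$ is normal with mean $\mu_Y+\rho\frac{\sigma_Y}{\sigma_X}(t-\mu_X)$ and standard deviation $\sigma_Y\sqrt{1-\rho^2}$. A sketch (the function name `bvn_cdf` is mine, not from the cited paper):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def bvn_cdf(x, y, mu_x=0.0, mu_y=0.0, sd_x=1.0, sd_y=1.0, rho=0.0):
    """P(X < x, Y < y) via the conditional-distribution integral above."""
    def integrand(t):
        # Y | X = t is normal with this conditional mean and sd.
        cond_mean = mu_y + rho * (sd_y / sd_x) * (t - mu_x)
        cond_sd = sd_y * np.sqrt(1.0 - rho ** 2)
        # P(Y < y | X = t) * phi(t; mu_X, sigma_X)
        return norm.cdf(y, loc=cond_mean, scale=cond_sd) * \
               norm.pdf(t, loc=mu_x, scale=sd_x)
    val, _abserr = quad(integrand, -np.inf, x)
    return val

print(bvn_cdf(0.0, 0.0, rho=0.5))  # ≈ 1/3 for standard margins, rho = 0.5
```

With $\rho=0$ the integrand factorizes and the result reduces to $\Phi(x)\Phi(y)$, which is another easy sanity check on the implementation.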

Best Answer

The following is a screenshot from Mark Schervish's (1984) paper on approximating the multivariate normal probability function by (recursive) quadrature formulae, where $$G_L(x_{L+1},\ldots,x_N)=\int_{A(L)}^{B(L)}\cdots\int_{A(1)}^{B(1)} f(x_1,\ldots,x_N)\,\mathrm dx_1\cdots\mathrm dx_L$$ is the $L$-th inner integral (the first $L$ coordinates integrated out).

[screenshot of Schervish's (1984) algorithm omitted]

Schervish (1984) also provides a rule for choosing the per-level error tolerances $\delta_L$ $(L=1,\ldots,N-2)$ so that the overall numerical error is at most $\epsilon$.
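Schervish's actual allocation rule is not reproduced here, but the idea of an error budget can be illustrated with the simplest possible split, a uniform one (this is an assumption for illustration, not his formula):

```python
# Toy illustration (NOT Schervish's rule): if each of the N-2 inner
# quadrature levels is allowed absolute error delta_L, and the level
# errors accumulate at worst additively, then a uniform split of the
# budget keeps the total error at most eps.
def uniform_error_split(eps, n_dims):
    levels = n_dims - 2          # one delta_L per L = 1, ..., N-2
    return [eps / levels] * levels

deltas = uniform_error_split(1e-6, 5)
print(deltas)  # three equal tolerances whose sum is (about) eps
```

Schervish's point is that a smarter, non-uniform allocation of the $\delta_L$ can achieve the same overall bound $\epsilon$ with less total work.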
