The Wikipedia pages for the control variates method and the antithetic variates method are a pretty good place to start learning about them.
The antithetic method seems the easiest to implement. Sample random numbers from your distribution (here the Weibull distribution), and for each of them generate a second number by a procedure that makes the new numbers realizations of a random variable negatively correlated with the original Weibull variable used for the first sequence. This way you reduce the variance of your estimate of the expected value.
Practically, I think the following should do the trick (I'm using R code):
n <- 10000
x <- rweibull(n, 1, 5)                      # Generate n random numbers
                                            # distributed according to Weib(1, 5)
y <- qweibull(1 - pweibull(x, 1, 5), 1, 5)  # Generate n numbers from the previous
                                            # ones, "in the opposite quantile"
z <- c(x, y)
sum(z^(1/3)) / (2 * n)                      # Compute the mean of z^(1/3)
Maybe it's more efficient to generate uniformly distributed random numbers $u_i$, take their complements $1-u_i$, and generate the Weibull-distributed variables from both sets using qweibull. Also, this pairing might not be the best choice for minimizing the variance.
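That uniform-complement variant is easy to write in any language, because a Weibull with shape $1$ and scale $5$ is just an exponential, so qweibull(p, 1, 5) reduces to $-5\log(1-p)$. Here is a minimal sketch in javascript; note that this closed-form quantile is an assumption that holds only for shape $=1$:

```javascript
// Antithetic estimate of E[X^(1/3)] for X ~ Weibull(shape = 1, scale = 5).
// For shape 1 the quantile function is qweibull(p, 1, 5) = -5 * log(1 - p),
// so the uniform pair (u, 1 - u) maps to an antithetic pair (x, y).
const n = 10000;
let total = 0;
for (let i = 0; i < n; i++) {
  const u = Math.random();
  const x = -5 * Math.log(1 - u);   // qweibull(u, 1, 5)
  const y = -5 * Math.log(u);       // qweibull(1 - u, 1, 5), the antithetic draw
  total += Math.cbrt(x) + Math.cbrt(y);
}
const mean = total / (2 * n);
console.log(mean);                  // estimate of E[X^(1/3)] = 5^(1/3) * Gamma(4/3) ≈ 1.527
```

Because each pair $(x, y)$ is negatively correlated, the variance of this estimate is smaller than that of $2n$ independent draws.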
Recall that if $Y$ is a random variable with density $g_Y$ and $h$ is a bounded measurable function, then $$\mathbb E[h(Y)] = \int_{\mathbb R} h(y)g_Y(y)\,\mathsf dy. $$
Moreover, if $Y\sim\mathcal U(0,1)$, then $a+(b-a)U\sim\mathcal U(a,b)$. So applying the change of variables $x=a+(b-a)u$ (with $a=0$, $b=\pi$) to the given integral, we have
$$I = \int_0^1 \frac{\pi}{\sqrt{2\pi}} e^{-\frac12\sin^2 (\pi u) }\,\mathsf du=\int_0^1 h(u)\,\mathsf du, $$
with $h(u)=\sqrt{\frac\pi 2} e^{-\frac12\sin^2 (\pi u) }$. It follows then that $I=\mathbb E[h(U)]$ with $U\sim\mathcal U(0,1)$. Let $U_i$ be i.i.d. $\mathcal U(0,1)$ random variables and set $X_i=h(U_i)$, then for each positive integer $n$ we have the point estimate
$$\newcommand{\overbar}[1]{\mkern 1.75mu\overline{\mkern-1.75mu#1\mkern-1.75mu}\mkern 1.75mu}
\widehat{I_n} =: \overbar X_n= \frac1n \sum_{i=1}^n X_i$$ and the approximate $1-\alpha$ confidence interval
$$\overbar X_n\pm t_{n-1,\alpha/2}\frac{S_n}{\sqrt n}, $$
where $$S_n = \sqrt{\frac1{n-1}\sum_{i=1}^n \left(X_i-\overbar X_n\right)^2} $$ is the sample standard deviation.
Here is some $\texttt R$ code to estimate an integral using the Monte Carlo method:
# Define the function h
hh <- function(u) {
  return(sqrt(0.5 * pi) * exp(-0.5 * sin(pi * u)^2))
}
n <- 1000 # Number of trials
alpha <- 0.05 # Significance level (the confidence level is 1 - alpha)
U <- runif(n) # Generate U(0,1) variates
X <- hh(U) # Compute X_i's
Xbar <- mean(X) # Compute sample mean
Sn <- sqrt(1/(n-1) * sum((X-Xbar)^2)) # Compute sample stdev
CI <- Xbar + c(-1, 1) * qt(1 - 0.5 * alpha, n - 1) * Sn / sqrt(n) # CI bounds
# Print results
cat(sprintf("Point estimate: %f\n", Xbar))
cat(sprintf("Confidence interval: (%f, %f)\n", CI[1], CI[2]))
For reference, the value of the integral (as computed by Mathematica) is $$e^{-\frac14}\sqrt{\frac{\pi }{2}}\, I_0\left(\frac{1}{4}\right) \approx 0.991393, $$
where $I_\cdot(\cdot)$ denotes the modified Bessel function of the first kind, i.e. $$I_0\left(\frac14\right) = \frac1\pi\int_0^\pi e^{\frac14\cos\theta}\,\mathsf d\theta. $$
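That closed form can be checked numerically from the integral representation of $I_0$ just given, for instance with a simple midpoint rule; this is a quick sketch, not part of the original answer:

```javascript
// Numerical check of e^{-1/4} * sqrt(pi/2) * I_0(1/4) using the representation
// I_0(1/4) = (1/pi) * ∫_0^pi e^{cos(theta)/4} dtheta, via a midpoint rule.
const m = 100000;
let s = 0;
for (let k = 0; k < m; k++) {
  const theta = (k + 0.5) * Math.PI / m;    // midpoint of the k-th subinterval
  s += Math.exp(Math.cos(theta) / 4);
}
const I0 = s / m;                           // midpoint average equals (1/pi) * integral
const value = Math.exp(-0.25) * Math.sqrt(Math.PI / 2) * I0;
console.log(value);                         // ≈ 0.991393
```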
Best Answer
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ The usual Monte Carlo procedure is given by $\ds{\int_{a}^{b}\mrm{P}\pars{x}\mrm{f}\pars{x}\dd x \approx {1 \over N}\sum_{k = 1}^{N}\mrm{f}\pars{x_{k}}}$ where the $x_{k}$ are sampled from the probability density $\mrm{P}\pars{x}$.
Given a particular integration $\ds{\int_{a}^{b}\phi\pars{x}\,\dd x}$, you write it as $$ \int_{a}^{b}\mrm{P}\pars{x}\bracks{\phi\pars{x} \over \mrm{P}\pars{x}}\,\dd x \approx {1 \over N}\sum_{k = 1}^{N}{\phi\pars{x_{k}} \over \mrm{P}\pars{x_{k}}}\,,\qquad N \gg 1 $$ where $\ds{\mrm{P}\pars{x}}$ is "conveniently chosen". Note that $\ds{\mrm{P}\pars{x} \geq 0\ \mbox{and}\ \int_{a}^{b}\mrm{P}\pars{x}\dd x = 1}$.
For example, let's apply this to the present case (in general, it's convenient to remove integrable singularities such as $\ds{1/\root{x}}$, but let's keep this one for the time being): \begin{align} \int_{0}^{\infty}{\dd x \over \pars{1 + x}\root{x}} & = \int_{0}^{\infty}\overbrace{1 \over \pars{x + 1}^{2}} ^{\ds{\mrm{P}\pars{x}}}\ {1 + x \over \root{x}}\,\dd x \approx {1 \over 10^{6}}\sum_{n = 0}^{10^{6} - 1} {1 + x_{n} \over \root{x_{n}}} \end{align}
The following ${\tt javascript}$ code performs the above task; a "typical run" yields $\ds{\large{3.143321704930537}}$.
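A minimal javascript sketch of that scheme, sampling from $\mrm{P}$ by inverting its CDF ($F\pars{x} = x/\pars{1 + x}$, so $F^{-1}\pars{u} = u/\pars{1 - u}$ for $u \sim \mc{U}\pars{0,1}$; the inverse-CDF step is an assumption not spelled out above), might look like:

```javascript
// Monte Carlo estimate of ∫_0^∞ dx / ((1+x)√x) = pi via importance sampling.
// Sampling density P(x) = 1/(1+x)^2 has CDF F(x) = x/(1+x), so F^{-1}(u) = u/(1-u).
const N = 1000000;
let sum = 0;
for (let k = 0; k < N; k++) {
  const u = Math.random();
  const x = u / (1 - u);              // draw x ~ P by inverse-CDF sampling
  sum += (1 + x) / Math.sqrt(x);      // phi(x) / P(x) = (1 + x) / sqrt(x)
}
const est = sum / N;
console.log(est);                     // should be close to pi ≈ 3.14159
```

Note that $\phi/\mrm{P}$ blows up like $1/\root{x}$ near the origin, which is exactly the integrable singularity the answer suggests removing for a better-behaved estimator.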