Monte Carlo – Expected Value and Variance of Monte Carlo Estimate of $\int_{0}^{1}e^{-x}\,dx$


Given the integral:

$I=\int_{0}^{1}e^{-x}dx$,

use standard Monte Carlo with 1000 random numbers and repeat the simulation 1000 times.

(a) What is the expected value and variance of the simple Monte Carlo estimate of $I$?

I wrote the following to calculate the Monte-Carlo approximation to the integral:

a = 0;                     % lower limit of integration
b = 1;                     % upper limit of integration
n = 1000;                  % number of sample points
x = a + (b-a)*rand(n,1);   % uniform samples on [a,b]
q = 1/(b-a);               % uniform density on [a,b]
f = exp(-x);               % integrand evaluated at the samples
i = mean(f./q)             % Monte Carlo estimate of the integral

How do I repeat the approximation 1000 times though, and how do I calculate the expected value and variance? Thank you!

Edit: repeating the approximation 1000 times, calculating expected value and variance.

m = 1000;                  % number of repetitions
n = 1000;                  % sample points per repetition
a = 0;
b = 1;
z = zeros(1,m);            % storage for the m estimates
for j = 1:m
    x = a + (b-a)*rand(n,1);
    q = 1/(b-a);
    f = exp(-x);
    z(j) = mean(f./q);     % j-th Monte Carlo estimate
end
expectedvalue = mean(z)    % sample mean of the estimates
variance = var(z)          % sample variance of the estimates

Best Answer
Let $X$ be a random variable uniformly distributed on the interval $[0,1]$. It has density function $1$ on the interval, and $0$ elsewhere. Let $Y=e^{-X}$. We first compute the mean and variance of the random variable $Y$.

We have $E(Y)=\int_0^1 e^{-x}\,dx=1-e^{-1}$. Call this $\mu$. It is the exact value of our integral. This is the whole point of the simulation: if we take a reasonably large sample, such as $1000$ points, the sample mean will probably be close to the true mean $\mu$, which is the exact value of the integral.

To find the variance of $Y$, we use the shortcut formula $\text{Var}(Y)=E(Y^2)-(E(Y))^2$. We have $E(Y^2)=E(e^{-2X})=\int_0^1 e^{-2x}\,dx=\frac{1}{2}(1-e^{-2})$. Now we can calculate the variance of $Y$, either as an exact expression or as a decimal approximation. Call it $\sigma^2$.
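Evaluating these exact expressions numerically:

$$\mu = 1-e^{-1} \approx 0.63212, \qquad \sigma^2 = \tfrac{1}{2}(1-e^{-2})-(1-e^{-1})^2 \approx 0.43233 - 0.39958 \approx 0.03276.$$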

In your program, you have $1000$ random variables $Y_1,Y_2,\dots, Y_{1000}$, each with the same distribution as the above $Y$. We are studying the random variable $$\bar{Y}=\frac{Y_1+Y_2+\cdots +Y_{1000}}{1000}.$$ This random variable $\bar{Y}$ also has mean $\mu$. It can be shown (this is a general phenomenon) that the variance of $\bar{Y}$ is $\frac{\sigma^2}{1000}$.
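The "general phenomenon" is just a consequence of the independence of the $Y_i$ (variances of independent random variables add, and constants come out squared):

$$\text{Var}(\bar{Y})=\text{Var}\!\left(\frac{1}{1000}\sum_{i=1}^{1000}Y_i\right)=\frac{1}{1000^2}\sum_{i=1}^{1000}\text{Var}(Y_i)=\frac{1000\,\sigma^2}{1000^2}=\frac{\sigma^2}{1000}.$$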

We know $\sigma^2$, so we know the variance of $\bar{Y}$.

Note that $\bar{Y}$ is the result of a single run of your program, that is, $1000$ points.

We can imagine repeating the simulation $1000$ times, that is, effectively, using $1$ million points.

The theory is much the same. The results of the simulations are random variables $\bar{Y}_1,\dots, \bar{Y}_{1000}$. If we average them, the result has variance equal to the variance of $\bar{Y}$ divided by $1000$, so it is $\frac{\sigma^2}{1000^2}$.
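As a sanity check, here is a stdlib-only Python translation of the MATLAB experiment (the variable names `z`, `m`, `n` mirror the MATLAB code; the fixed seed is an assumption added for reproducibility). It compares the sample mean and variance of the $1000$ estimates against the theoretical values $\mu$ and $\sigma^2/1000$:

```python
import math
import random
import statistics

random.seed(42)          # fixed seed so the run is reproducible
m, n = 1000, 1000        # m repetitions of an n-point estimate

z = []
for _ in range(m):
    # one Monte Carlo run: average of e^{-x} at n uniform points in [0, 1]
    z.append(sum(math.exp(-random.random()) for _ in range(n)) / n)

mu = 1 - math.exp(-1)                          # exact value of the integral
sigma2 = 0.5 * (1 - math.exp(-2)) - mu**2      # Var(e^{-X}), X ~ U(0, 1)

print(f"sample mean of estimates: {statistics.mean(z):.6f}  (theory {mu:.6f})")
print(f"sample variance         : {statistics.variance(z):.3e}  (theory {sigma2/n:.3e})")
```

The sample mean should land within a few multiples of $\sqrt{\sigma^2/10^6}\approx 1.8\times 10^{-4}$ of $\mu$, and the sample variance should be close to $\sigma^2/1000\approx 3.3\times 10^{-5}$.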
