The algorithm for the Monte Carlo approach is as follows.
- Generate $n$ uniform random points $x_i \in (0, 1)$.
- Set $\displaystyle I = \dfrac{1}{n} \sum_{i = 1}^n x_i^2$.
We can then compare the Monte Carlo result against the exact value. Depending on how the method is coded, we can also estimate the error.
Example 1: Using the method above with $n = 1000$ produces $I = 0.3323369730904328$.
We can also use other, improved Monte Carlo methods.
Example 2: A Quasi-Monte Carlo method with $n = 1000$ produces $I = 0.3332527188916689$.
Example 3: An adaptive Monte Carlo method with $n = 1000$ produces $I = 0.333727841861132$.
Example 4: An adaptive Quasi-Monte Carlo method with $n = 1000$ produces $I = 0.3333327629439999$.
In all cases we can compare that to the actual result of:
$$\displaystyle \int_0^1 x^2~ dx = \dfrac{1}{3}$$
This is a very good question.
For one-dimensional integrals, there are two ways we could approach it.
Method 1: This is the approach in the other link you listed.
- Pick $n$ randomly distributed points $x_1, x_2, \ldots, x_n$ in the interval $a \le x \le b$.
- Determine the average value of the function $\displaystyle f_{avg} = \dfrac{1}{n} \sum_{i = 1}^n f(x_i)$.
- Compute the approximation to the integral: $\displaystyle \int_a^b f(x)~dx \approx (b-a)\,f_{avg}$.
- An estimate of the error is Error $\approx (b-a) \sqrt{\dfrac{\overline{f^2} - (f_{avg})^2}{n}}$, where $\overline{f^2} = \dfrac{1}{n} \sum_{i = 1}^n f^2(x_i)$.
Notice that this approach uses the function values directly in the calculation. Just like other numerical integration techniques, you can interpret $\frac{b-a}{n} f(x_i)$ as the area of a rectangle of height $f(x_i)$ and base $\frac{b-a}{n}$, so you are adding up the areas of those rectangles. Typically, the more rectangles (the larger the $n$), the better the approximation.
Method 2: The way you are asking.
- First, we must determine the rectangular box $R$ containing the area $A$, where $R = \{ (x, y): a \le x \le b ~\mbox{and}~ 0 \le y \le d \}$ and $\displaystyle d = \max_{a \le x \le b} f(x)$.
- Second, randomly pick $n$ points $(x_i, y_i) \in R$, where the $x_i$ and $y_i$ are chosen from independent uniformly distributed random variables over $[a, b]$ and $[0, d]$, respectively.
- Third, calculate the ratio $\rho = \dfrac{m}{n}$, where $m$ is the number of points that lie in $A$ (that is, points with $y_i \le f(x_i)$).
- The area is then approximated as $A \approx \rho (b-a) d$.
- An estimate for the accuracy of the above computation is Error $\approx (b-a) d \sqrt{\dfrac{\rho - \rho^2}{n}}$.
It is worth noting that Method 2 makes it easier to move to higher-dimensional integrals, but it is probably overkill in $1$D.
Let's apply these methods to the Gamma function integral you mentioned in the other problem:
$$\displaystyle \Gamma(\beta) = \int_0^\infty x^{\beta - 1} e^{-x} dx$$
I will use $\beta = \pi$.
Approach 1: Since the integral extends to infinity, a naive approach is to look at a plot and determine a suitable finite range for the Monte Carlo simulation.
$$\displaystyle \Gamma(\pi) = \int_0^\infty x^{\pi - 1} e^{-x} dx$$
Plotting this function over $0 \le x \le 10$ shows that this is a reasonable range, given how fast the exponential approaches zero.
Using Method 1, with $n = 1000$, we get:
$$I = 2.28421$$
Using Method 2, with $n = 1000$, we get:
$$I = 2.29143$$
Calculating the exact value, we have:
$$\Gamma[\pi] = 2.288037795340033$$
Both approximations are not bad for only $1000$ samples, and more samples will improve the result, as the Central Limit Theorem suggests an error that shrinks like $1/\sqrt{n}$. Running multiple trials does show some variation in the result, but in all cases it is reasonable.
Lastly, we can use a more sophisticated adaptive Quasi-Monte Carlo method.
This approach yields a result of:
$$\Gamma[\pi] = 2.28804242000413$$
Update
Approach 2: In this approach we make a change of variables so as to map the infinite range of integration onto a finite one.
A good choice, which works for functions that decay toward infinity faster than $\frac{1}{x^2}$, is the substitution $x = \frac{1}{t}$:
$$\displaystyle \int_a^b f(x)~dx = \int_{\frac{1}{b}}^{\frac{1}{a}} \dfrac{1}{t^2}\,f\left(\dfrac{1}{t}\right)~dt$$
With this change of variables, we can now handle integrals where one limit is infinite and the other is nonzero with the same sign. If instead $a = 0$, we break the integral into two parts and use Monte Carlo on each. If both limits are infinite, we split the integral into three or more parts. So, for this problem, we split it into two pieces:
$$\displaystyle \Gamma(\pi) = \int_0^\infty x^{\pi - 1} e^{-x} dx = \int_0^1 x^{\pi - 1} e^{-x} dx + \int_0^1 \dfrac{1}{t^2} \left(\dfrac 1t\right)^{\pi - 1} e^{-\frac 1t} ~dt$$
Using the Monte Carlo for the first part, we get $I_1 = 0.14972956088031433$ and for the second part, we get $I_2 = 2.1272262548687735$, which yields:
$$ \displaystyle \Gamma(\pi) = \int_0^\infty x^{\pi - 1} e^{-x} dx = 2.276955815749088$$
Compare that to our previous results and to the exact value.
Approach 3: Again we make a change of variables to map the infinite range of integration onto a finite one. This time we use $u = \frac{1}{1+x}$, yielding:
$$\int_0^1 \left( \frac{1-u}{u}\right)^{\beta-1}e^{-\frac{1-u}{u}}\frac{1}{u^2}du$$
Using Method 1 with $n=1000$ yields:
$$I = 2.2895585648008825$$
Best Answer
This yields a binomial distribution for the points inside the circle. The probability that a point is inside the circle is $p = \left(\pi/4\right)/\left(1\times 1\right) = \pi/4$. Then, the average number of points inside the circle is $\overline{n} = Np = N\pi/4$, where $N$ is the total number of points. That means $$ \pi = 4\,{\overline{n} \over N} $$ In a computer, you generate $N$ points inside the unit square $\left(~N \gg 1~\right)$ and count how many are inside the circle (that is $\overline{n}$). This little script (in $\tt C++$) generates a Monte Carlo estimate of $\pi$: