Suppose we have a collection of samples $X_1,X_2,\dots,X_n$ that have been drawn from some fixed probability distribution (Poisson, exponential, normal, etc.) that depends on some unknown parameter $\mu$.
We often want to use the samples to estimate the value of $\mu$. To this end we come up with functions that take in $X_1,X_2,\dots,X_n$ and return an estimate of $\mu$. Any such function is called an estimator.
Now, some estimators are better or worse than others. For example,
$$f(X_1,X_2,\dots,X_n) = 0$$
is clearly a terrible estimator (unless it so happens that $\mu=0$), given that it doesn't use any information provided by the samples.
A basic criterion for an estimator to be any good is that it is unbiased, that is, that on average it gets the value of $\mu$ right. Formally, an estimator $f$ is unbiased iff
$$E[f(X_1,X_2,\dots,X_n)] =\mu.$$
In your case, the samples come from a Poisson distribution, whose parameter $\lambda$ is also its mean, and the estimator is the sample average, that is,
$$f(X_1,X_2,\dots,X_n)=\frac{1}{n}\sum_{i=1}^n X_i,$$
and it is unbiased since on average it guesses the unknown parameter $\lambda$ correctly. An example of a biased estimator would be
$$f(X_1,X_2,\dots,X_n)=1+\frac{1}{n}\sum_{i=1}^n X_i,$$
since $E[f(X_1,X_2,\dots,X_n)] = 1+\lambda$. On average it gets the value of $\lambda$ wrong by $1$; it has a bias of $1$.
Returning to the sample average, suppose that the samples are drawn from any distribution (not necessarily Poisson) with expected value (or mean) $\mu$. Then
$$E[f(X_1,X_2,\dots,X_n)] = E\left[\frac{1}{n}\sum_{i=1}^n X_i\right] = \frac{1}{n}\sum_{i=1}^n E[X_i] = \frac{1}{n}\sum_{i=1}^n \mu = \mu.$$
So in general, the sample average is an unbiased estimator of the expected value of the distribution from which the samples are drawn.
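If you want to see this numerically, here is a minimal Monte Carlo sketch in Python with NumPy, comparing the sample average with the shifted estimator above. The values $\lambda = 3$, $n = 20$, and the trial count are arbitrary choices for illustration, not anything from the question:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n, trials = 3.0, 20, 200_000  # illustrative values, not from the question

# Each row is one sample of size n from Poisson(lam).
samples = rng.poisson(lam, size=(trials, n))

sample_means = samples.mean(axis=1)  # the unbiased estimator
shifted = 1.0 + sample_means         # the biased estimator from above

print(sample_means.mean())  # ~ 3.0, i.e. ~ lambda (bias ~ 0)
print(shifted.mean())       # ~ 4.0, i.e. lambda + 1 (bias ~ 1)
```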
Let's consider the most natural estimator of the random area $\pi R^2$:
$$\pi\frac1n\sum_{i=1}^nX_i^2.$$
The question is whether this is an unbiased estimator, assuming that $X_i = R + e_i$, where the $e_i$ are independent normal random variables with mean $0$ and variance $\sigma^2$.
So, we need to calculate the expectation of our estimator above:
$$\pi E\left[\frac1n\sum_{i=1}^nX_i^2\right]=\frac{\pi}{n}\sum_{i=1}^n E\left[(R+e_i)^2\right]=\frac{\pi}{n}\sum_{i=1}^n\left(R^2+2R\,E[e_i]+E[e_i^2]\right)=\pi R^2+\pi\sigma^2,$$
since $E[e_i]=0$ and $E[e_i^2]=\operatorname{Var}(e_i)=\sigma^2$.
So, assuming $\sigma^2$ is known,
$$\pi\frac1n\sum_{i=1}^nX_i^2-\pi\sigma^2$$
is an unbiased estimator.
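As a sanity check, here is a short simulation sketch of the bias correction (Python/NumPy again; $R = 2$, $\sigma = 0.5$, and $n = 10$ are made-up illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)
R, sigma, n, trials = 2.0, 0.5, 10, 200_000  # made-up illustrative values

# Noisy radius measurements X_i = R + e_i, with e_i ~ N(0, sigma^2).
X = R + rng.normal(0.0, sigma, size=(trials, n))

naive = np.pi * (X**2).mean(axis=1)   # expectation: pi*R^2 + pi*sigma^2
corrected = naive - np.pi * sigma**2  # subtract the known bias term

print(naive.mean(), np.pi * (R**2 + sigma**2))  # both ~ 13.35
print(corrected.mean(), np.pi * R**2)           # both ~ 12.57
```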
Best Answer
Let us start by computing $\mathbb E\left[\left(X_i - \overline X\right)^4\right]$. To do that, we need the distribution of $X_i - \overline X$. This is not complicated, since the $X_i$ are normally distributed:
$$X_i - \overline X \sim \mathcal N\left(0, \left(1 - \frac1n\right)\sigma^2\right)$$
So $$\mathbb E\left[\left(X_i - \overline X\right)^4\right] = \left(1-\frac{1}{n}\right)^2\sigma^4\, \mathbb E\left[Z^4\right] = 3\left(1-\frac{1}{n}\right)^2\sigma^4,$$ where $Z\sim\mathcal N(0,1)$ and $\mathbb E\left[Z^4\right]=3$.
Now replace the true expectation by its empirical equivalent and $\sigma^4$ by its estimator (this is the method of moments, and here it leads to an unbiased estimator):
$$\frac1n \sum_{i=1}^n \left(X_i - \overline X\right)^4 = 3\frac{(n-1)^2}{n^2} \widehat{\sigma^4}$$
So if you want to have an unbiased estimator, you can take:
$$\widehat{\sigma^4} = \frac{n}{3\left(n-1\right)^2} \sum_{i=1}^n \left(X_i - \overline X\right)^4$$
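You can verify the unbiasedness empirically with a quick simulation sketch (Python/NumPy; $\sigma = 1.5$ and $n = 8$ are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, n, trials = 1.5, 8, 200_000  # made-up illustrative values

X = rng.normal(0.0, sigma, size=(trials, n))
dev = X - X.mean(axis=1, keepdims=True)  # X_i - Xbar within each sample

# Method-of-moments estimator of sigma^4 from the fourth central moments.
sigma4_hat = n / (3 * (n - 1)**2) * (dev**4).sum(axis=1)
print(sigma4_hat.mean(), sigma**4)  # both ~ 5.06
```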
Using $S^2 = \frac{1}{n-1}\sum_{i=1}^n \left(X_i - \overline X\right)^2$: as mentioned in the question, $$\mathbb E\left[\frac{n-1}{n+1}S^4 \right] = \sigma^4,$$ which leads to the following unbiased estimator:
$$\widehat{\sigma^4} = \frac{1}{n^2-1} \left(\sum_{i=1}^n \left(X_i - \overline X\right)^2\right)^2$$
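The same kind of check works for this second estimator (same made-up parameter values as in the sketch above):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, n, trials = 1.5, 8, 200_000  # same illustrative values as above

X = rng.normal(0.0, sigma, size=(trials, n))
ss = ((X - X.mean(axis=1, keepdims=True))**2).sum(axis=1)  # sum of squared deviations

# S^2-based estimator: ((n-1) S^2)^2 / (n^2 - 1) = ss^2 / (n^2 - 1).
sigma4_hat = ss**2 / (n**2 - 1)
print(sigma4_hat.mean(), sigma**4)  # both ~ 5.06
```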