Let's start by representing the sum $S$ using the definition of the sample autocorrelation function:

\begin{equation}
S = \sum_{h=1}^{n-1} \hat{\rho}(h) = \sum_{h=1}^{n-1} \left(\frac{\frac{1}{n}\sum_{t=1}^{n-h}(X_t-\bar{X})(X_{t+h}-\bar{X})}{\frac{1}{n}\sum_{t=1}^{n}(X_t-\bar{X})^2}\right)
\end{equation}

The denominator does not depend on $h$, so we can pull it out of the sum over $h$ and move the outer $\sum$ into the numerator, which gives us:
\begin{equation}
S = \frac{\sum_{h=1}^{n-1} \sum_{t=1}^{n-h} (X_t-\bar{X})(X_{t+h}-\bar{X})}{\sum_{t=1}^{n} (X_t-\bar{X})^2}
\end{equation}

Now consider the denominator. How do we rewrite it to obtain an expression similar to the numerator? Set $Y_t=X_t-\bar{X}$. Then $\sum_{t=1}^{n}Y_t=0$, and the denominator becomes $\sum_{t=1}^{n}Y_t^{2}$.
Expanding the square of the sum gives $\left(\sum_{t=1}^{n}Y_t\right)^2 = \sum_{t=1}^{n}Y_t^{2} + 2\sum_{h=1}^{n-1} \sum_{t=1}^{n-h}Y_t Y_{t+h}$, i.e. the squared terms plus twice the sum over all unique pairs. Rearranging, $\sum_{t=1}^{n}Y_t^{2} = \left(\sum_{t=1}^{n}Y_t\right)^2 - 2\sum_{h=1}^{n-1} \sum_{t=1}^{n-h}Y_t Y_{t+h}$, and because $\sum_{t=1}^{n}Y_t=0$, it follows that $\sum_{t=1}^{n}Y_t^{2} = - 2\sum_{h=1}^{n-1} \sum_{t=1}^{n-h}Y_t Y_{t+h}$.
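As a quick sanity check (not part of the derivation), the identity can be verified numerically; a minimal NumPy sketch, where the series and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=20)
Y = X - X.mean()          # centered series, so sum(Y) is ~0
n = len(Y)

# Sum over all unique lagged pairs: sum_{h=1}^{n-1} sum_{t=1}^{n-h} Y_t Y_{t+h}
pairs = sum(np.dot(Y[:n - h], Y[h:]) for h in range(1, n))

# Identity: sum Y_t^2 = (sum Y_t)^2 - 2 * pairs; with sum Y_t = 0 this is -2 * pairs
assert np.isclose(np.sum(Y**2), -2 * pairs)
```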

Substituting back in terms of $X$, the denominator becomes $- 2\sum_{h=1}^{n-1} \sum_{t=1}^{n-h}(X_t-\bar{X})(X_{t+h}-\bar{X})$. Then,

\begin{equation}
S=\frac{\sum_{h=1}^{n-1} \sum_{t=1}^{n-h}(X_t-\bar{X})(X_{t+h}-\bar{X})}{- 2\sum_{h=1}^{n-1} \sum_{t=1}^{n-h}(X_t-\bar{X})(X_{t+h}-\bar{X})}= -\frac{1}{2}
\end{equation}
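Since the result $S = -\tfrac{1}{2}$ is an algebraic identity, it holds for any series; a short NumPy check (the series and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=50)
n = len(X)
Y = X - X.mean()
denom = np.sum(Y**2)

# hat{rho}(h): the 1/n factors in numerator and denominator cancel
rho = [np.dot(Y[:n - h], Y[h:]) / denom for h in range(1, n)]

assert np.isclose(sum(rho), -0.5)  # holds for any non-constant series
```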

Hope this helps!

**Background information on estimation bias:** Before answering your specific questions, it is worth exploring the bias that occurs in the sample auto-covariance estimator in this kind of problem. To do this, we will consider the corresponding summation statistic that replaces the (unknown) true mean of the process with the sample mean:

$$\hat{S}(k) \equiv \sum_{t=1}^{n-k} (X_t - \bar{X})(X_{t+k} - \bar{X}) \quad \quad \quad \bar{X} \equiv \frac{1}{n} \sum_{t=1}^n X_t.$$
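For concreteness, the summation statistic can be computed directly; a minimal sketch (the example series is arbitrary):

```python
import numpy as np

def S_hat(X, k):
    # S_hat(k) = sum_{t=1}^{n-k} (X_t - Xbar)(X_{t+k} - Xbar)
    X = np.asarray(X, dtype=float)
    Y = X - X.mean()
    return np.dot(Y[:len(X) - k], Y[k:])

print(S_hat([1.0, 2.0, 4.0, 3.0], 1))  # 0.75 for this series
```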

To facilitate our analysis, we define the following quantity in terms of the true autocorrelation function $\gamma$:

$$\Pi_{n,k} \equiv \frac{1}{n(n-2k)} \sum_{i=1}^n \sum_{j=k+1}^{n-k} \gamma(|i-j|).$$

This quantity is the average correlation value in the $n \times n$ correlation matrix, with one dimension trimmed by $k$ entries on each side. It arises in our later analysis through the sample mean and the trimmed sample mean.
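The quantity $\Pi_{n,k}$ is straightforward to compute for a given autocorrelation function; a small NumPy sketch, where the AR(1)-style choice $\gamma(h) = \phi^h$ is purely illustrative:

```python
import numpy as np

def Pi(n, k, gamma):
    # Pi_{n,k} = (1 / (n (n - 2k))) * sum_{i=1}^{n} sum_{j=k+1}^{n-k} gamma(|i - j|)
    i = np.arange(1, n + 1)[:, None]
    j = np.arange(k + 1, n - k + 1)[None, :]
    return gamma(np.abs(i - j)).sum() / (n * (n - 2 * k))

# Illustrative AR(1)-type autocorrelation (an assumption, not from the post)
phi = 0.5
gamma = lambda h: phi ** h

print(Pi(100, 5, gamma))  # small when n is large, as the bias discussion requires
```

For iid noise, $\gamma(h) = \mathbb{1}(h = 0)$ and the only nonzero terms are the $n - 2k$ diagonal entries, so $\Pi_{n,k} = 1/n$ exactly.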

Given a stationary process with fixed mean and variance, we have $\mathbb{Cov}(X_i, X_j) = \sigma^2 \gamma(|i-j|)$ so that $\mathbb{E}(X_i X_j) = \sigma^2 \gamma(|i-j|) + \mu^2$. Hence, the expected value of our summation statistic is:

$$\begin{equation} \begin{aligned}
\mathbb{E}(\hat{S}(k))
&= \mathbb{E} \Bigg( \sum_{t=1}^{n-k} (X_t - \bar{X})(X_{t+k} - \bar{X}) \Bigg) \\[8pt]
&= \mathbb{E} \Bigg( \sum_{t=1}^{n-k} X_t X_{t+k} - \bar{X} \Bigg( \sum_{t=1}^{n-k} X_t + \sum_{t=1}^{n-k} X_{t+k} \Bigg) + (n-k) \bar{X}^2 \Bigg)
\\[8pt]
&= \mathbb{E} \Bigg( \sum_{t=1}^{n-k} X_t X_{t+k} - \bar{X} \Bigg( n\bar{X} + \sum_{t=k+1}^{n-k} X_t \Bigg) + (n-k) \bar{X}^2 \Bigg)
\\[8pt]
&= \mathbb{E} \Bigg( \sum_{t=1}^{n-k} X_t X_{t+k} - \bar{X} \sum_{t=k+1}^{n-k} X_t - k \bar{X}^2 \Bigg)
\\[8pt]
&= \mathbb{E} \Bigg( \sum_{t=1}^{n-k} X_t X_{t+k} \Bigg) - \mathbb{E} \Bigg( \bar{X} \sum_{t=k+1}^{n-k} X_t \Bigg) - k \mathbb{E} \Big( \bar{X}^2 \Big)
\\[8pt]
&= (n-k) \Big( \sigma^2 \gamma(k) + \mu^2 \Big) - (n-2k) \Big( \sigma^2 \Pi_{n,k} + \mu^2 \Big) - k \Big( \sigma^2 \Pi_{n,0} + \mu^2 \Big)
\\[8pt]
&= (n-k) \sigma^2 \gamma(k) - \sigma^2 \Big( (n-k) \Pi_{n,k} + k (\Pi_{n,0} - \Pi_{n,k}) \Big) \\[8pt]
&= (n-k) \sigma^2 \Bigg[ \gamma(k) - \Bigg( \Pi_{n,k} + \frac{k}{n-k} (\Pi_{n,0} - \Pi_{n,k}) \Bigg) \Bigg]. \\[8pt]
\end{aligned} \end{equation}$$

Now, consider the auto-covariance estimator:

$$\hat{C}(k) \equiv \frac{1}{n-k} \sum_{t=1}^{n-k} (X_t - \bar{X})(X_{t+k} - \bar{X}).$$

From the above results we have:

$$\frac{\mathbb{E} (\hat{C}(k))}{\sigma^2} = \gamma(k) - \Bigg( \Pi_{n,k} + \frac{k}{n-k} (\Pi_{n,0} - \Pi_{n,k}) \Bigg).$$

This expression shows that there is a clear bias term in our analysis. However, for most stationary processes of interest, the auto-correlation dissipates as observations become farther apart in time. This means that when $n$ is large, the average auto-correlation terms $\Pi_{n,k}$ become small, and so the bias term (the second term in the expression) also becomes small.
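One way to sanity-check the bias formula is a small simulation. For iid noise, $\gamma(k) = 0$ for $k \geq 1$ and $\Pi_{n,k} = 1/n$ for all $k$, so the formula predicts $\mathbb{E}(\hat{C}(k))/\sigma^2 = -1/n$. A Monte Carlo sketch (parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, reps = 20, 3, 200_000

# Simulate iid (white-noise) series with sigma^2 = 1: the formula predicts
# E(C_hat(k)) = -1/n = -0.05 here.
X = rng.normal(size=(reps, n))
Xb = X.mean(axis=1, keepdims=True)
C_hat = np.sum((X[:, :n - k] - Xb) * (X[:, k:] - Xb), axis=1) / (n - k)

print(C_hat.mean())  # Monte Carlo average, close to -0.05
```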

**Answers to your specific questions:** Here are my answers to your specific questions about this estimation problem:

1) It is common for analysts to use these standard estimators, substituting in the sample mean and variance. So long as the auto-correlation structure meets standard requirements (i.e., it dissipates as observations get farther apart in time), these estimators are asymptotically unbiased and consistent: although they are biased for finite $n$, the bias is small when $n$ is large, so they are reasonable estimators in that regime.

2) I am not sufficiently familiar with this area of statistics to know what other estimators have been proposed, or to know the relative estimation properties of different estimators. One other obvious estimation method would be a Bayesian estimator with priors on the parameters, which would presumably also yield reasonable estimates. I do not know what the "best" currently known estimator is.

3) Time series analysis is no different from other kinds of statistical analysis in regard to the importance/unimportance of bias in estimators. Bias is not entirely unimportant, but it is only one aspect of the performance of an estimator. Usually we are concerned with overall performance as judged by a metric like MSE; bias contributes to this, but it is not the entire story. In time-series analysis, the auto-correlation of observations makes it difficult to obtain unbiased estimators, so there is often a fall-back to biased estimators that are nonetheless consistent.

## Best Answer

As with ordinary correlation, it is undefined. Just as there is no point in computing the correlation with a constant variable (its variance, in the denominator, is zero), there is no point in calculating the autocorrelation of a constant series.