Solved – Variance of Random Matrix

probability, random-matrix, variance

Let's consider independent random vectors $\hat{\boldsymbol\theta}_i$, $i = 1, \dots, m$, which are all unbiased for $\boldsymbol\theta$ and satisfy
$$\mathbb{E}\left[\left(\hat{\boldsymbol\theta}_i -
\boldsymbol\theta\right)^{T}\left(\hat{\boldsymbol\theta}_i -
\boldsymbol\theta\right)\right] = \sigma^2\text{.}$$ Let
$\mathbf{1}_{n \times p}$ be the $n \times p$ matrix of all ones.

Consider the problem of finding
$$\mathbb{E}\left[\left(\hat{\boldsymbol\theta} -
\boldsymbol\theta\right)^{T}\left(\hat{\boldsymbol\theta} -
\boldsymbol\theta\right)\right]$$ where $$\hat{\boldsymbol\theta} =
\dfrac{1}{m}\sum_{i=1}^{m}\hat{\boldsymbol\theta}_i\text{.}$$

My attempt starts from the observation that $$\hat{\boldsymbol\theta} = \dfrac{1}{m}\underbrace{\begin{bmatrix}
\hat{\boldsymbol\theta}_1 & \hat{\boldsymbol\theta}_2 & \cdots & \hat{\boldsymbol\theta}_m
\end{bmatrix}}_{\mathbf{S}}\mathbf{1}_{m \times 1}$$
and thus
$$\text{Var}(\hat{\boldsymbol\theta}) = \dfrac{1}{m^2}\text{Var}(\mathbf{S}\mathbf{1}_{m \times 1})\text{.}$$
How does one find the variance of a random matrix times a constant vector? You may assume that I am familiar with finding variances of linear transformations of a random vector: i.e., if $\mathbf{x}$ is a random vector, $\mathbf{b}$ a vector of constants, and $\mathbf{A}$ a matrix of constants, assuming all are conformable,
$$\mathbb{E}[\mathbf{A}\mathbf{x}+\mathbf{b}] = \mathbf{A}\mathbb{E}[\mathbf{x}]+\mathbf{b}$$
$$\mathrm{Var}\left(\mathbf{A}\mathbf{x}+\mathbf{b}\right)=\mathbf{A}\mathrm{Var}(\mathbf{x})\mathbf{A}^{T}$$
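
(For concreteness, here is a small numerical check of these two rules. The distribution, dimensions, and the particular values of $\mathbf{A}$, $\mathbf{b}$, and the covariance below are arbitrary choices for illustration, not part of the question.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (all values are assumptions): x ~ N(mu, Sigma) in R^3,
# A is a 2x3 constant matrix, b a constant 2-vector.
mu = np.array([1.0, -2.0, 0.5])
L = rng.normal(size=(3, 3))
Sigma = L @ L.T                       # any PSD matrix serves as a covariance
A = rng.normal(size=(2, 3))
b = np.array([0.3, -0.7])

# Sample x many times and form y = A x + b row-wise.
x = rng.multivariate_normal(mu, Sigma, size=200_000)
y = x @ A.T + b

# Compare empirical moments with E[Ax + b] = A mu + b and Var = A Sigma A^T.
print(np.abs(y.mean(axis=0) - (A @ mu + b)).max())   # ~0, up to sampling noise
print(np.abs(np.cov(y.T) - A @ Sigma @ A.T).max())   # ~0, up to sampling noise
```
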

Best Answer

Using matrix computations only (although this solution is essentially the same as @DeltaIV's direct calculation), let me first slightly modify your definition of $\mathbf{S}$ to its centered version
$$\mathbf{S} = \begin{bmatrix}\hat{\boldsymbol\theta}_1 - \boldsymbol\theta & \cdots & \hat{\boldsymbol\theta}_m - \boldsymbol\theta\end{bmatrix}\text{.}$$
We can then proceed as follows:
\begin{align}
& \mathbb{E}\left[(\hat{\boldsymbol\theta} - \boldsymbol\theta)^T(\hat{\boldsymbol\theta} - \boldsymbol\theta)\right] \\
= {} & \frac{1}{m^2}\mathbb{E}\left[\mathbf{1}^T\mathbf{S}^T\mathbf{S}\mathbf{1}\right] \\
= {} & \frac{1}{m^2}\mathbf{1}^T \mathbb{E}\left[\mathbf{S}^T\mathbf{S}\right] \mathbf{1} \tag{1} \\
= {} & \frac{1}{m^2}\mathbf{1}^T
\begin{bmatrix}
\mathbb{E}[(\hat{\boldsymbol\theta}_1 - \boldsymbol\theta)^T(\hat{\boldsymbol\theta}_1 - \boldsymbol\theta)] & \cdots & \mathbb{E}[(\hat{\boldsymbol\theta}_1 - \boldsymbol\theta)^T(\hat{\boldsymbol\theta}_m - \boldsymbol\theta)] \\
\vdots & \ddots & \vdots \\
\mathbb{E}[(\hat{\boldsymbol\theta}_m - \boldsymbol\theta)^T(\hat{\boldsymbol\theta}_1 - \boldsymbol\theta)] & \cdots & \mathbb{E}[(\hat{\boldsymbol\theta}_m - \boldsymbol\theta)^T(\hat{\boldsymbol\theta}_m - \boldsymbol\theta)]
\end{bmatrix}
\mathbf{1} \\
= {} & \frac{1}{m^2}\mathbf{1}^T\operatorname{diag}(\sigma^2, \ldots, \sigma^2)\,\mathbf{1} \tag{2} \\
= {} & \frac{\sigma^2}{m}\text{.}
\end{align}

In $(1)$, we used the fact that for any conformable non-random matrices $\mathbf{A}$, $\mathbf{B}$ and random matrix $\mathbf{X}$, $\mathbb{E}[\mathbf{A}\mathbf{X}\mathbf{B}] = \mathbf{A}\mathbb{E}[\mathbf{X}]\mathbf{B}$.

In $(2)$, we applied the independence and unbiasedness assumptions: for $i \neq j$, $\mathbb{E}[(\hat{\boldsymbol\theta}_i - \boldsymbol\theta)^T(\hat{\boldsymbol\theta}_j - \boldsymbol\theta)] = \mathbb{E}[\hat{\boldsymbol\theta}_i - \boldsymbol\theta]^T\,\mathbb{E}[\hat{\boldsymbol\theta}_j - \boldsymbol\theta] = 0$, so the off-diagonal entries vanish.
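
If you want a quick numerical sanity check of the $\sigma^2/m$ result, here is a short Monte Carlo sketch. The Gaussian noise model and all parameter values below are illustrative assumptions (the question only fixes $\mathbb{E}[(\hat{\boldsymbol\theta}_i - \boldsymbol\theta)^T(\hat{\boldsymbol\theta}_i - \boldsymbol\theta)] = \sigma^2$, which the chosen noise scale satisfies).

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumptions: p, m, sigma2 are arbitrary; each estimator is
# theta plus N(0, (sigma2/p) I) noise, so E[||theta_hat_i - theta||^2]
# = p * (sigma2 / p) = sigma2, matching the question's condition.
p, m, sigma2 = 4, 10, 2.0
theta = rng.normal(size=p)

n_rep = 100_000
# Shape (n_rep, m, p): n_rep replications of the m independent estimators.
noise = rng.normal(scale=np.sqrt(sigma2 / p), size=(n_rep, m, p))
theta_hat = theta + noise.mean(axis=1)        # average of the m estimators

# (theta_hat - theta)^T (theta_hat - theta), averaged over replications.
mse = ((theta_hat - theta) ** 2).sum(axis=1)
print(mse.mean())        # ~ sigma2 / m = 0.2, up to Monte Carlo error
```
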