You were on the right track, but the likelihood is simpler than it looks in your post.
If $\boldsymbol X_i=(X_{ij})_{1\le j\le n}$ for every $i$, then the $\boldsymbol X_i$'s are i.i.d. $N_n(\mu\mathbf1_n,\Sigma)$ with
$$\Sigma=\sigma^2 I_n + \tau^2\mathbf1_n\mathbf1_n^T$$
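(If, as is standard, the model is $X_{ij}=\mu+\alpha_i+\varepsilon_{ij}$ with independent $\alpha_i\sim N(0,\tau^2)$ and $\varepsilon_{ij}\sim N(0,\sigma^2)$, this covariance follows from $\operatorname{Var}(X_{ij})=\sigma^2+\tau^2$ and $\operatorname{Cov}(X_{ij},X_{ik})=\tau^2$ for $j\ne k$, the shared $\alpha_i$ being the only source of within-group correlation.)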
So the joint pdf of the $X_{ij}$'s is the joint pdf of the $\boldsymbol X_i$'s, given by
\begin{align}
f_{\boldsymbol\theta}(\boldsymbol x_1,\ldots,\boldsymbol x_p)&\propto \frac1{(\det\Sigma)^{p/2}}\exp\left\{-\frac12 \sum_{i=1}^p (\boldsymbol x_i-\mu\mathbf1_n)^T\Sigma^{-1}(\boldsymbol x_i-\mu\mathbf1_n)\right\}
\\&=\frac1{(\det\Sigma)^{p/2}}\exp\left\{-\frac12 Q(\boldsymbol x_1,\ldots,\boldsymbol x_p;\boldsymbol\theta)\right\}
\end{align}
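The density above is written only up to the constant $(2\pi)^{-np/2}$. If you want a quick numerical sanity check (a minimal sketch assuming numpy and scipy, with arbitrary parameter values and data), restoring that constant reproduces `scipy.stats.multivariate_normal`:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
p, n, mu, sigma2, tau2 = 5, 4, 2.0, 1.5, 0.7   # arbitrary test values
x = rng.normal(size=(p, n))                     # arbitrary data

ones = np.ones(n)
Sigma = sigma2 * np.eye(n) + tau2 * np.outer(ones, ones)
Sigma_inv = np.linalg.inv(Sigma)

# log of the density above, with the dropped (2*pi)^(-np/2) restored
Q = sum((xi - mu * ones) @ Sigma_inv @ (xi - mu * ones) for xi in x)
log_f = (-0.5 * p * np.log(np.linalg.det(Sigma)) - 0.5 * Q
         - 0.5 * n * p * np.log(2 * np.pi))

log_f_scipy = multivariate_normal(mean=mu * ones, cov=Sigma).logpdf(x).sum()
print(np.allclose(log_f, log_f_scipy))  # True
```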
To find sufficient statistics for $\boldsymbol\theta=(\mu,\tau^2,\sigma^2)$, we only need to focus on the sum $Q$ in the exponent, since $\det\Sigma$ involves the parameters but not the data.
By the Sherman–Morrison formula,
$$\Sigma^{-1}=\frac1{\sigma^2}I_n-\frac{\tau^2}{\sigma^2(\sigma^2+n\tau^2)}\mathbf1_n\mathbf1_n^T$$
So,
$$
Q=\frac1{\sigma^2}\sum_{i,j} (x_{ij}-\mu)^2 - \frac{\tau^2}{\sigma^2(\sigma^2+n\tau^2)}\sum_i\left\{\sum_j (x_{ij}-\mu)\right\}^2
\tag{1}
$$
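Both the closed-form inverse and the expansion $(1)$ are easy to check numerically; here is a minimal sketch assuming numpy, again with arbitrary parameter values and data:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, mu, sigma2, tau2 = 5, 4, 2.0, 1.5, 0.7
x = rng.normal(size=(p, n))

ones = np.ones(n)
Sigma = sigma2 * np.eye(n) + tau2 * np.outer(ones, ones)

# closed-form inverse versus a direct numerical inverse
Sigma_inv = (np.eye(n) / sigma2
             - tau2 / (sigma2 * (sigma2 + n * tau2)) * np.outer(ones, ones))
print(np.allclose(Sigma_inv, np.linalg.inv(Sigma)))  # True

# matrix form of Q versus the expansion (1)
Q_matrix = sum((xi - mu * ones) @ Sigma_inv @ (xi - mu * ones) for xi in x)
row_sums = np.sum(x - mu, axis=1)               # sum_j (x_ij - mu), one per group
Q_expanded = (np.sum((x - mu) ** 2) / sigma2
              - tau2 / (sigma2 * (sigma2 + n * tau2)) * np.sum(row_sums ** 2))
print(np.allclose(Q_matrix, Q_expanded))  # True
```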
Define the group means and the grand mean
$$\overline x_{i\bullet}=\frac1n\sum_{j=1}^n x_{ij} \qquad\text{and}\qquad \overline x_{\bullet\bullet}=\frac1p \sum_{i=1}^p \overline x_{i\bullet}.$$
Recall the error sum of squares $\text{SSE}=\sum_{i,j}(x_{ij}-\overline x_{i\bullet})^2$ and the sum of squares due to the factor (say $A$) corresponding to $\alpha_i$, namely $\text{SSA}=n\sum_i(\overline x_{i\bullet}-\overline x_{\bullet\bullet})^2$. The right-hand side of $(1)$ can be written in terms of $\text{SSE}$, $\text{SSA}$, and an additional term involving $\mu$. Once you do that, you will notice that $f_{\boldsymbol\theta}$ is a member of a regular (full-rank) exponential family, and a sufficient statistic is then easy to identify.
Note that
\begin{align}
\sum_{i,j}(x_{ij}-\mu)^2&=\sum_{i,j}\{(x_{ij}-\overline x_{i\bullet})+(\overline x_{i\bullet}-\mu)\}^2
\\&=\sum_{i,j}(x_{ij}-\overline x_{i\bullet})^2 + n\sum_i (\overline x_{i\bullet}-\mu)^2,
\end{align}
where the cross term vanishes because $\sum_j (x_{ij}-\overline x_{i\bullet})=0$ for every $i$.
Similarly, with the cross term again summing to zero,
\begin{align}
\sum_i (\overline x_{i\bullet}-\mu)^2&=\sum_{i}\{(\overline x_{i\bullet}-\overline x_{\bullet\bullet})+(\overline x_{\bullet\bullet}-\mu)\}^2
\\&=\sum_{i}(\overline x_{i\bullet}-\overline x_{\bullet\bullet})^2 + p (\overline x_{\bullet\bullet}-\mu)^2
\end{align}
Therefore, continuing from $(1)$ and using $\sum_j(x_{ij}-\mu)=n(\overline x_{i\bullet}-\mu)$,
\begin{align}
Q&=\frac1{\sigma^2}\sum_{i,j} (x_{ij}-\mu)^2 - \frac{n^2\tau^2}{\sigma^2(\sigma^2+n\tau^2)}\sum_i (\overline x_{i\bullet}-\mu)^2
\\&=\frac1{\sigma^2}\sum_{i,j}(x_{ij}-\overline x_{i\bullet})^2 + \frac{n}{\sigma^2+n\tau^2} \sum_i (\overline x_{i\bullet}-\mu)^2
\\&=\frac1{\sigma^2}\sum_{i,j}(x_{ij}-\overline x_{i\bullet})^2 + \frac{n}{\sigma^2+n\tau^2}\sum_i(\overline x_{i\bullet}-\overline x_{\bullet\bullet})^2 + \frac{np}{\sigma^2+n\tau^2}(\overline x_{\bullet\bullet}-\mu)^2
\end{align}
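As a final sanity check (again a minimal numpy sketch with arbitrary data), the decomposed form of $Q$ agrees with $(1)$, and the computation yields exactly the statistics $\overline x_{\bullet\bullet}$, $\text{SSE}$ and $\text{SSA}$ that appear in the exponential-family factorization:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, mu, sigma2, tau2 = 5, 4, 2.0, 1.5, 0.7
x = rng.normal(size=(p, n))                 # arbitrary data; the identity is algebraic

xbar_i = x.mean(axis=1)                     # group means  \bar x_{i.}
xbar = xbar_i.mean()                        # grand mean   \bar x_{..}
SSE = np.sum((x - xbar_i[:, None]) ** 2)
SSA = n * np.sum((xbar_i - xbar) ** 2)

Q_decomposed = SSE / sigma2 + (SSA + n * p * (xbar - mu) ** 2) / (sigma2 + n * tau2)

# expanded form (1)
row_sums = np.sum(x - mu, axis=1)
Q_expanded = (np.sum((x - mu) ** 2) / sigma2
              - tau2 / (sigma2 * (sigma2 + n * tau2)) * np.sum(row_sums ** 2))
print(np.allclose(Q_decomposed, Q_expanded))  # True
```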
Expressing the likelihood in this form also helps in finding the MLE of $\boldsymbol\theta$.
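For instance, using $\det\Sigma=(\sigma^2)^{n-1}(\sigma^2+n\tau^2)$ and the reparametrization $\gamma=\sigma^2+n\tau^2$, the log-likelihood separates into a piece involving $(\sigma^2,\text{SSE})$ and a piece involving $(\gamma,\text{SSA},\mu)$, so the stationary point has a closed form. A hedged sketch in numpy (the function name is illustrative; the formulas give the unconstrained stationary point, and when $\hat\tau^2<0$ the constrained MLE lies on the boundary $\tau^2=0$):

```python
import numpy as np

def mle_one_way_random_effects(x):
    """Unconstrained MLE of (mu, tau^2, sigma^2) from data of shape (p, n).

    Uses gamma = sigma^2 + n*tau^2 and det(Sigma) = (sigma^2)^(n-1) * gamma.
    Caveat: tau2_hat can come out negative; the constrained MLE then
    sits on the boundary tau^2 = 0.
    """
    p, n = x.shape
    xbar_i = x.mean(axis=1)                  # group means
    xbar = xbar_i.mean()                     # grand mean
    SSE = np.sum((x - xbar_i[:, None]) ** 2)
    SSA = n * np.sum((xbar_i - xbar) ** 2)

    mu_hat = xbar
    sigma2_hat = SSE / (p * (n - 1))         # maximizes the sigma^2 piece
    gamma_hat = SSA / p                      # maximizes the gamma piece
    tau2_hat = (gamma_hat - sigma2_hat) / n
    return mu_hat, tau2_hat, sigma2_hat
```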