I think I have an answer.
First case, $\theta_P$.
The first observation is that since $M$ is uniform and independent over $\{1,2,3\}$, we have that
$$
\mathbb E[(\theta_P(X_1, X_2, X_3) - X_M)^2] = \frac 13 \mathbb E \left[\sum_{m=1}^3 (\theta_P(X_1, X_2, X_3) - X_m)^2 \right],
$$
where the right-hand side no longer involves the random index $M$.
The second fact is monotonicity of expectation: if $A \le B$ pointwise, then $\mathbb E[A] \le \mathbb E[B]$. Hence, we can try to find the solution by minimising pointwise, i.e.
$$
\theta_P(x_1, x_2, x_3) = \arg \min_t \left( \sum_{m=1}^3 (t-x_m)^2 \right)
$$
where $x_1, x_2, x_3$ are treated as constants. In other words, we minimise the objective separately for each fixed realisation of the data. By basic calculus, the optimal $t$ is then
$$
t = \frac{x_1 + x_2 + x_3}3,
$$
which is also the expression for the optimal $\theta_P(x_1, x_2, x_3)$.
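As a quick numerical sanity check (illustrative only, with arbitrarily chosen points), a grid search over $t$ confirms that the minimiser of $\sum_m (t - x_m)^2$ is the sample mean:

```python
import numpy as np

# Hypothetical fixed observations x1, x2, x3 (any values work).
xs = np.array([1.0, 4.0, 7.0])

# Brute-force the objective  sum_m (t - x_m)^2  on a fine grid.
grid = np.linspace(xs.min(), xs.max(), 100_001)
objective = ((grid[:, None] - xs) ** 2).sum(axis=1)
t_star = grid[np.argmin(objective)]

print(t_star, xs.mean())  # the grid minimiser matches the sample mean
```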
Surprisingly enough, the solution does not depend on the distribution of $(X_1, X_2, X_3)$.
Second case, $\xi_P$.
Again, we do optimisation for each triple of fixed values $(X_1, X_2, X_3)$ separately, i.e.
$$
\xi_P(x_1, x_2, x_3) := \arg \min_t \left( \max \left( (t-x_1)^2, (t-x_2)^2, (t-x_3)^2 \right) \right).
$$
We observe that
$$
\max \left( (t-x_1)^2, (t-x_2)^2, (t-x_3)^2 \right) = \max \left( (t-x_{\min})^2, (t-x_{\max})^2 \right),
$$
where $x_{\min} = \min(x_1, x_2, x_3)$ and $x_{\max} = \max(x_1, x_2, x_3)$. Finally,
$$
\max( (t-x_\min)^2, (t-x_\max)^2) = \begin{cases}
(t-x_\max)^2, & t < \frac{x_\min + x_\max}{2},\\
(t-x_\min)^2, & t \ge \frac{x_\min + x_\max}{2},
\end{cases}
$$
and the global minimum is achieved at $t = \frac{x_\min + x_\max}{2}$. Altogether,
$$
\xi_P(x_1, x_2, x_3) = \frac{\min(x_1, x_2, x_3) + \max(x_1, x_2, x_3)}2.
$$
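The same kind of grid-search sketch (again with arbitrary example points) confirms that the minimiser of the worst-case squared deviation is the midrange $(x_{\min} + x_{\max})/2$:

```python
import numpy as np

# Hypothetical fixed observations; note the deliberately asymmetric spread.
xs = np.array([1.0, 2.0, 10.0])

# Brute-force the objective  max_m (t - x_m)^2  on a fine grid.
grid = np.linspace(xs.min(), xs.max(), 100_001)
worst = np.max((grid[:, None] - xs) ** 2, axis=1)
t_star = grid[np.argmin(worst)]

midrange = (xs.min() + xs.max()) / 2
print(t_star, midrange)  # the grid minimiser matches the midrange, not the mean
```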
Again, the solution does not depend on the distribution of $(X_1, X_2, X_3)$.
Assume first that both $X$ and $Y$ have mean $0$.
You can re-express the squared distance as
\begin{align*}
\mathbb E[\| X-Y \|^2] &= \mathbb E[(X-Y)^T(X-Y)]\\
&= \mathbb E[X^TX] + \mathbb E[Y^TY]-2 \mathbb E[X^TY]
\end{align*}
To treat terms like $\mathbb E[X^T X]$ you can use the trick $X^T X = \text{Tr}(X^TX) = \text{Tr}(XX^T)$; this, together with the linearity of the trace, yields
\begin{align*}
\mathbb E[X^TX] &= \text{Tr} (\Sigma_X )\\
\mathbb E[Y^TY] &= \text{Tr} (\Sigma_Y )\\
\mathbb E[X^TY] &= \text{Tr} (\Sigma_{YX} )
\end{align*}
So in the end you obtain
\begin{align*}
\mathbb E[\| X-Y \|^2] &= \text{Tr} (\Sigma_X ) + \text{Tr} (\Sigma_Y ) -2 \text{Tr} (\Sigma_{YX} )
\end{align*}
If $X$ now has mean $\mu_X$ and $Y$ has mean $\mu_Y$, then
\begin{align*}
\| X-Y \|^2 &= \| (X-\mu_X)-(Y-\mu_Y)+\mu_X-\mu_Y \|^2\\
&= \| (X-\mu_X)-(Y-\mu_Y) \|^2 + 2 \langle (X-\mu_X)-(Y-\mu_Y),\mu_X-\mu_Y\rangle +\| \mu_X-\mu_Y \|^2
\end{align*}
Taking the expectation, the cross term vanishes since both $X-\mu_X$ and $Y-\mu_Y$ have mean $0$. So we get that
\begin{align*}
\mathbb E[\| X-Y \|^2] &= \text{Tr} (\Sigma_X ) + \text{Tr} (\Sigma_Y ) -2 \text{Tr} (\Sigma_{YX} ) + \| \mu_X-\mu_Y \|^2
\end{align*}
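A Monte Carlo sketch can check this identity; the means and covariance below are arbitrary hypothetical choices, with `cov` the joint covariance of the stacked vector $(X, Y)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint Gaussian model for (X, Y) in R^3; any means/covariances work.
mu_x = np.array([1.0, 0.0, -1.0])
mu_y = np.array([0.0, 2.0, 0.0])
A = rng.standard_normal((6, 6))
cov = A @ A.T                      # joint covariance of the stacked vector (X, Y)

samples = rng.multivariate_normal(np.concatenate([mu_x, mu_y]), cov, size=200_000)
X, Y = samples[:, :3], samples[:, 3:]

# Closed form: Tr(Sigma_X) + Tr(Sigma_Y) - 2 Tr(Sigma_YX) + ||mu_X - mu_Y||^2.
Sigma_X, Sigma_Y, Sigma_YX = cov[:3, :3], cov[3:, 3:], cov[3:, :3]
closed = (np.trace(Sigma_X) + np.trace(Sigma_Y)
          - 2 * np.trace(Sigma_YX) + np.sum((mu_x - mu_y) ** 2))

mc = np.mean(np.sum((X - Y) ** 2, axis=1))
print(mc, closed)  # Monte Carlo estimate vs closed-form value
```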
Your method would yield the right answer, but it's more complicated than it needs to be. From this answer, to generate a random hyperplane passing through the origin, first pick a point $u$ uniformly at random from the surface of the unit sphere in $\mathbb{R}^n$. The hyperplane is then defined by $\langle x,u\rangle=0$.
WLOG, let $\mathbf{p} = (1, \underbrace{0, ..., 0}_{n-1})$. The distance from the hyperplane defined by $\langle x,u\rangle=0$ to $\mathbf{p}$ is then $\left| \langle u, \mathbf{p}\rangle \right| = |u_1|$, since $u$ is a unit normal to the hyperplane. The expected distance is therefore the integral of $|x_1|$ over the unit sphere, divided by the sphere's surface area:
$$\mathbb E|u_1| = \frac{\frac{2\pi^{\frac{n-1}{2}}}{\Gamma\left(\frac{n+1}{2}\right)}}{\frac{2\pi^{\frac{n}{2}}}{\Gamma\left(\frac{n}{2}\right)}} = \frac{\Gamma\left( \frac{n}{2} \right)}{\Gamma\left(\frac{n+1}{2} \right)\sqrt{\pi}}$$
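A short Monte Carlo sketch (the dimension $n=5$ is an arbitrary choice) checks the closed form $\mathbb E|u_1| = \Gamma(n/2)/\big(\sqrt{\pi}\,\Gamma(\tfrac{n+1}{2})\big)$, using the standard trick of normalising Gaussian vectors to sample uniformly from the sphere:

```python
import numpy as np
from math import gamma, pi, sqrt

rng = np.random.default_rng(0)
n = 5  # hypothetical ambient dimension

# Uniform points on the unit sphere in R^n: normalised standard Gaussians.
u = rng.standard_normal((500_000, n))
u /= np.linalg.norm(u, axis=1, keepdims=True)

mc = np.mean(np.abs(u[:, 0]))                        # Monte Carlo E|u_1|
closed = gamma(n / 2) / (gamma((n + 1) / 2) * sqrt(pi))
print(mc, closed)  # for n = 5 the closed form is exactly 3/8
```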