Consider the following transformation
$$\begin{align} Y_1 &=X_1-\Sigma_{12}\Sigma_{22}^{-1}X_2 \\ Y_2 &=X_2\end{align}$$
Then $$\begin{pmatrix}
Y_1 \\
Y_2
\end{pmatrix} \sim\mathcal{N}_p\left[\begin{pmatrix}
\mu_1-\Sigma_{12}\Sigma_{22}^{-1}\mu_2 \\
\mu_2
\end{pmatrix} ,\begin{pmatrix}
\bar \Sigma & 0 \\
0 & \Sigma_{22}
\end{pmatrix} \right]
$$
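This block structure can be verified numerically. The following is a sketch with arbitrary illustrative covariance blocks: the linear map above is applied to $\Sigma$, and the resulting covariance is checked to be block diagonal with blocks $\bar \Sigma = \Sigma_{11}-\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$ and $\Sigma_{22}$.

```python
import numpy as np

# Arbitrary illustrative dimensions and positive-definite covariance
p1, p2 = 2, 3
rng = np.random.default_rng(0)
M = rng.standard_normal((p1 + p2, p1 + p2))
Sigma = M @ M.T + (p1 + p2) * np.eye(p1 + p2)  # positive definite

S11 = Sigma[:p1, :p1]
S12 = Sigma[:p1, p1:]
S21 = Sigma[p1:, :p1]
S22 = Sigma[p1:, p1:]

# Linear map (Y1, Y2) = T (X1, X2) with Y1 = X1 - S12 S22^{-1} X2, Y2 = X2
T = np.block([[np.eye(p1), -S12 @ np.linalg.inv(S22)],
              [np.zeros((p2, p1)), np.eye(p2)]])

cov_Y = T @ Sigma @ T.T
Sigma_bar = S11 - S12 @ np.linalg.inv(S22) @ S21  # Schur complement

assert np.allclose(cov_Y[:p1, p1:], 0)          # Cov(Y1, Y2) = 0
assert np.allclose(cov_Y[:p1, :p1], Sigma_bar)  # Var(Y1) = Sigma_bar
assert np.allclose(cov_Y[p1:, p1:], S22)        # Var(Y2) = Sigma_22
```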
Hence the PDF of
$\begin{pmatrix}
Y_1 \\
Y_2
\end{pmatrix}
$ is
$$\begin{align}n(y_1,y_2) &= n(y_1|\mu_1-\Sigma_{12}\Sigma_{22}^{-1}\mu_2, \bar \Sigma)\cdot n(y_2| \mu_2,\Sigma_{22}) \end{align}$$
This factorization holds because $Y_1 \sim \mathcal{N}_{p_1}(\mu_1-\Sigma_{12}\Sigma_{22}^{-1}\mu_2, \bar \Sigma)$ and $Y_2 \sim \mathcal{N}_{p-p_1}(\mu_2,\Sigma_{22})$ are independent: $\operatorname{Cov}(Y_1,Y_2)=0$, and for jointly normal vectors zero covariance implies independence.
The PDF of $\begin{pmatrix}
X_1 \\
X_2
\end{pmatrix}
$ is obtained by replacing $y_1$ with $x_1-\Sigma_{12}\Sigma_{22}^{-1}x_2$ and $y_2$ with $x_2$. Note that the Jacobian of this transformation is unity.
Hence the PDF of
$\begin{pmatrix}
X_1 \\
X_2
\end{pmatrix}
$ is
$$\begin{align}n(x_1,x_2) &= n((x_1-\Sigma_{12}\Sigma_{22}^{-1}x_2)|\mu_1-\Sigma_{12}\Sigma_{22}^{-1}\mu_2, \bar \Sigma)\cdot n(x_2| \mu_2,\Sigma_{22}) \end{align}$$
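This identity can be sanity-checked numerically by comparing both sides at a point. A sketch, with arbitrary illustrative parameters and `scipy.stats.multivariate_normal` used to evaluate the densities:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Arbitrary illustrative parameters with p1 = p2 = 2
p1, p2 = 2, 2
rng = np.random.default_rng(1)
M = rng.standard_normal((p1 + p2, p1 + p2))
Sigma = M @ M.T + (p1 + p2) * np.eye(p1 + p2)  # positive definite
mu = rng.standard_normal(p1 + p2)

S12 = Sigma[:p1, p1:]
S22 = Sigma[p1:, p1:]
C = S12 @ np.linalg.inv(S22)                  # Sigma_12 Sigma_22^{-1}
Sigma_bar = Sigma[:p1, :p1] - C @ Sigma[p1:, :p1]

x = rng.standard_normal(p1 + p2)              # arbitrary evaluation point
x1, x2 = x[:p1], x[p1:]
mu1, mu2 = mu[:p1], mu[p1:]

# Joint density vs. the claimed factorization (Jacobian = 1)
lhs = multivariate_normal(mu, Sigma).pdf(x)
rhs = (multivariate_normal(mu1 - C @ mu2, Sigma_bar).pdf(x1 - C @ x2)
       * multivariate_normal(mu2, S22).pdf(x2))
assert np.isclose(lhs, rhs)
```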
Hence the conditional PDF of $X_1$ given $X_2=a$ is
$$\begin{align}f_{X_1|X_2=a}(x_1) &= \dfrac{n(x_1,a)}{n(a|\mu_2,\Sigma_{22})} \\ &=n(x_1-\Sigma_{12}\Sigma_{22}^{-1}a\,|\,\mu_1-\Sigma_{12}\Sigma_{22}^{-1}\mu_2, \bar \Sigma) \\ &= \dfrac{1}{(2 \pi)^{p_1/2}\sqrt{|\bar \Sigma|}}\exp\left( -\frac{1}{2}\left[ (x_1-\mu_1- \Sigma_{12}\Sigma_{22}^{-1}(a-\mu_2))'\bar \Sigma^{-1}(x_1-\mu_1- \Sigma_{12}\Sigma_{22}^{-1}(a-\mu_2)) \right]\right) \end{align}$$
Hence $X_1|X_2=a \sim \mathcal{N}_{p_1}(\bar \mu, \bar \Sigma)$, where $\bar \mu = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(a-\mu_2)$.
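As a quick consistency check, in the bivariate case ($p_1 = p_2 = 1$) these formulas reduce to the familiar scalar result $X_1 \mid X_2 = a \sim \mathcal{N}\!\left(\mu_1 + \rho\frac{\sigma_1}{\sigma_2}(a-\mu_2),\; \sigma_1^2(1-\rho^2)\right)$. A sketch with made-up parameter values:

```python
import numpy as np

# Illustrative bivariate parameters
mu1, mu2 = 1.0, -2.0
s1, s2, rho = 2.0, 3.0, 0.5
Sigma12 = rho * s1 * s2
a = 0.7

# General formulas: mu_bar = mu1 + Sigma12 Sigma22^{-1} (a - mu2),
#                   Sigma_bar = Sigma11 - Sigma12 Sigma22^{-1} Sigma21
mu_bar = mu1 + Sigma12 / s2**2 * (a - mu2)
Sigma_bar = s1**2 - Sigma12**2 / s2**2

# Match against the textbook scalar conditional distribution
assert np.isclose(mu_bar, mu1 + rho * s1 / s2 * (a - mu2))
assert np.isclose(Sigma_bar, s1**2 * (1 - rho**2))
```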
I don't know where you could have seen that claimed, but it doesn't make sense unless $x$ is a fixed constant. If $x$ is a random variable, it's not clear what $y\sim N(0,x^2)$ would mean.
If $x$ is constant, then one can say that if $x^{-1}y\sim N(0,1)$ then $y\sim N(0,x^2)$. It is also true that if the conditional distribution of one random variable given another does not depend on the other, then the marginal (or "unconditional") distribution of the first is the same as the conditional distribution.
But going from $x^{-1}y\sim N(0,1)$ to $y\sim N(0,x^2)$ is wrong unless $x$ is equal to some constant with probability $1$.
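To illustrate the point: if $x$ and $z$ are independent standard normals and $y = xz$, then conditionally $y \mid x \sim N(0, x^2)$, but $y$ itself is not normal. The product of two independent standard normals has excess kurtosis $6$ (a normal has $0$), which a quick simulation shows clearly. A sketch, with an arbitrary seed and sample size:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500_000
x = rng.standard_normal(n)
z = rng.standard_normal(n)
y = x * z                      # conditionally on x, y ~ N(0, x^2)

# Excess kurtosis: E[y^4]/Var(y)^2 - 3. For a normal this is ~0;
# for the product of two independent standard normals it is 6.
excess_kurtosis = np.mean(y**4) / np.var(y)**2 - 3
assert excess_kurtosis > 3     # heavy tails: y is clearly not normal
```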
Best Answer
Let $W \sim \mathcal{N}(\mu, \Sigma)$ and write $W = \mu + \Sigma^{1/2}Z$, where $\Sigma^{1/2}$ is the unique symmetric positive-definite square root of $\Sigma$. Then $Z \sim \mathcal{N}(0, I)$. Now define $A, B, Q$ as
$$ A = P\Sigma^{1/2}, \qquad B = A^{T}(AA^{T})^{-1}, \qquad Q = BA = A^{T}(AA^{T})^{-1}A. $$
and notice that $Q = BA$ is symmetric and idempotent, i.e., the orthogonal projection onto the row space of $A$.
Now decompose $Z$ into the sum of $Z_{\perp} = QZ$ and $Z_{||} = (I-Q)Z$. These are uncorrelated normal vectors and hence independent. In terms of this decomposition, the conditioning equation $q = PW$ becomes
$$ q = PW = P\mu + A Z = P\mu + A Z_{\perp}, $$
since $A Z_{||} = A(I-Q)Z = 0$ (note that $AQ = A$).
Multiplying both sides by $B$ and using $Q^2 = Q$ (which holds because $Q$ is an orthogonal projection), we obtain
$$ B(q-P\mu) = BAZ_{\perp} = Q^2 Z = Q Z = Z_{\perp}$$
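The identities used here ($Q$ symmetric and idempotent, and $B$ recovering $Z_{\perp}$ from $AZ_{\perp}$) can be checked numerically. A sketch with arbitrary illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(7)
p, k = 5, 2                     # dim of W, number of rows of P (full row rank)
M = rng.standard_normal((p, p))
Sigma = M @ M.T + p * np.eye(p)  # positive definite
P = rng.standard_normal((k, p))

# Symmetric positive-definite square root of Sigma via eigendecomposition
w, V = np.linalg.eigh(Sigma)
Sigma_half = V @ np.diag(np.sqrt(w)) @ V.T

A = P @ Sigma_half
B = A.T @ np.linalg.inv(A @ A.T)
Q = B @ A

assert np.allclose(Q @ Q, Q)    # idempotent: Q^2 = Q
assert np.allclose(Q, Q.T)      # symmetric, hence an orthogonal projection

z = rng.standard_normal(p)
z_perp = Q @ z
assert np.allclose(B @ (A @ z_perp), z_perp)   # B A Z_perp = Q^2 Z = Z_perp
```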
and hence the condition $PW = q$ determines the value of $Z_{\perp}$. So
\begin{align*} (W \mid PW=q) &\stackrel{d}{=} (\mu + \Sigma^{1/2}(Z_{\perp} + Z_{||}) \mid Z_{\perp} = B(q-P\mu)) \\ &\stackrel{d}{=} \mu + \Sigma^{1/2}B(q-P\mu) + \Sigma^{1/2}(I-Q)Z. \tag{*} \end{align*}
The last line $\text{(*)}$ has several implications:
$\text{(*)}$ is an affine transformation of $Z \sim \mathcal{N}(0, I)$, hence it is again normal with
$$ (W \mid PW=q) \sim \mathcal{N}( \mu + \Sigma^{1/2}B(q-P\mu), \Sigma^{1/2}(I-Q)\Sigma^{1/2}). $$
Plugging in all the definitions, the mean of the conditional distribution $(W \mid PW=q)$ simplifies to
\begin{align*} \mathbb{E}[W \mid PW=q] &= \mu + \Sigma^{1/2}B(q-P\mu) \\ &= \mu + \Sigma P^{T} (P\Sigma P^{T})^{-1}(q-P\mu). \end{align*}
If we write $S = \Sigma^{1/2}B = \Sigma P^{T} (P\Sigma P^{T})^{-1}$, then $\text{(*)}$ simplifies to a formula involving only known quantities:
$$ (W \mid PW=q) \stackrel{d}{=} Sq + (I - SP) W. $$
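As a final numerical sanity check (again with arbitrary illustrative matrices): $S$ satisfies $PS = I$, so the right-hand side of the last display automatically satisfies the constraint $P(\cdot) = q$, and $\Sigma^{1/2}B$ indeed equals $\Sigma P^{T}(P\Sigma P^{T})^{-1}$ as claimed.

```python
import numpy as np

rng = np.random.default_rng(3)
p, k = 5, 2
M = rng.standard_normal((p, p))
Sigma = M @ M.T + p * np.eye(p)  # positive definite
P = rng.standard_normal((k, p))  # full row rank
q = rng.standard_normal(k)

# Symmetric positive-definite square root of Sigma
w, V = np.linalg.eigh(Sigma)
Sigma_half = V @ np.diag(np.sqrt(w)) @ V.T

A = P @ Sigma_half
B = A.T @ np.linalg.inv(A @ A.T)
S = Sigma @ P.T @ np.linalg.inv(P @ Sigma @ P.T)

assert np.allclose(Sigma_half @ B, S)      # Sigma^{1/2} B = Sigma P^T (P Sigma P^T)^{-1}
assert np.allclose(P @ S, np.eye(k))       # P S = I

w_sample = rng.standard_normal(p)          # any realization of W
cond = S @ q + (np.eye(p) - S @ P) @ w_sample
assert np.allclose(P @ cond, q)            # the conditioned vector satisfies P W = q
```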