I'll start from scratch, since the original post has some math typos like wrong signs, dropping the $V$ matrix, etc.
You've specified the prior $p(\beta)=\mathcal{N}( 0, \sigma^2 V )$ and the likelihood $p(y | \beta ) = \mathcal{N}( B\beta, \sigma^2I )$.
We can write each of these purely as expressions of terms inside the $\exp$ that depend on $\beta$, grouping all terms unrelated to $\beta$ into a single constant:
$\log p( \beta ) + \mbox{const} = -\frac{1}{2\sigma^2} \beta^T V^{-1} \beta$
$\log p( y | \beta ) + \mbox{const} = -\frac{1}{2\sigma^2}( \beta^T B^TB \beta - 2y^T B \beta ) \quad$ (note that $y^TB\beta = \beta^T B^T y$ always)
Adding these in log space and collecting like terms yields the unnormalized log posterior
$\log p( \beta | y ) + \mbox{const} = -\frac{1}{2\sigma^2}( \beta^T(V^{-1} + B^TB)\beta - 2y^T B \beta )\quad$ (1)
... here, we've used the standard identity $x^TAx + x^TCx = x^T(A+C)x$ for any vector $x$ and matrices $A,C$ of appropriate size.
OK, our goal is now to "complete" the square. We'd like an expression of the form below, which would indicate that the posterior for $\beta$ is Gaussian.
$\log p( \beta | y ) + \mbox{const} = -\frac{1}{2\sigma^2}(\beta - \mu_p)^T \Lambda_p (\beta - \mu_p )$
$\quad = -\frac{1}{2\sigma^2}( \beta^T \Lambda_p \beta - 2\mu_p^T \Lambda_p \beta + \mu_p^T \Lambda_p \mu_p )$
where $\mu_p$ is the posterior mean and $\Lambda_p/\sigma^2$ is the posterior inverse covariance (precision) matrix; that is, the posterior would be $\mathcal{N}(\mu_p, \sigma^2\Lambda_p^{-1})$.
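(If you'd like to verify this expansion mechanically rather than by hand, here is a minimal symbolic check; a sketch assuming sympy, with an arbitrary 2-dimensional case and made-up symbol names.)

```python
# Symbolic check of the completed-square expansion (sketch; 2-D for brevity).
import sympy as sp

b0, b1, m0, m1 = sp.symbols('b0 b1 m0 m1')
l00, l01, l11 = sp.symbols('l00 l01 l11')

beta = sp.Matrix([b0, b1])
mu = sp.Matrix([m0, m1])
Lam = sp.Matrix([[l00, l01], [l01, l11]])   # symmetric, like Lambda_p

lhs = ((beta - mu).T * Lam * (beta - mu))[0, 0]
rhs = (beta.T * Lam * beta)[0, 0] - 2 * (mu.T * Lam * beta)[0, 0] \
      + (mu.T * Lam * mu)[0, 0]
assert sp.expand(lhs - rhs) == 0            # identical polynomials in beta
```

Note the symmetry of $\Lambda_p$ is what lets the two cross terms merge into $-2\mu_p^T\Lambda_p\beta$.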
Well, by inspection eqn. (1) looks a lot like this form if we set
$\Lambda_p = V^{-1} + B^TB \quad$ and
$\quad \mu_p = \Lambda_p^{-1}B^Ty$
In detail, we can show that this substitution creates each necessary term from (1):
quadratic term: $\beta^T \Lambda_p \beta = \beta^T( V^{-1} + B^TB)\beta$
linear term: $\mu_p^T \Lambda_p \beta = ( \Lambda_p^{-1}B^Ty )^T \Lambda_p \beta = y^T B \Lambda_p^{-1} \Lambda_p \beta = y^T B \beta$
.... here we used the facts $(AB)^T = B^T A^T$ and $(\Lambda_p^{-1})^T =\Lambda_p^{-1}$; the latter holds because $\Lambda_p$ is symmetric, and the inverse of a symmetric matrix is symmetric.
However, this leaves us with a pesky extra term $\mu_p^T \Lambda_p \mu_p$. To avoid this, we just subtract this term from our final result. Thus, we can directly substitute our $\mu_p, \Lambda_p$ parameters into (1) to get
$\log p( \beta | y ) + \mbox{const} = -\frac{1}{2\sigma^2}[ (\beta-\mu_p)^T\Lambda_p(\beta-\mu_p) - \mu_p^T\Lambda_p\mu_p ]$
since that last term is constant with respect to $\beta$, we can just smash it into the big normalization constant on the left hand side and we've achieved our goal.
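To make the result concrete, here is a small numeric check (a sketch, assuming numpy; the toy values for $B$, $V$, $y$, $\sigma^2$ are mine, not from the original question) that the unnormalized log posterior and the completed-square form differ only by a $\beta$-free constant:

```python
# Numeric check: log prior + log likelihood vs. completed-square Gaussian form.
import numpy as np

rng = np.random.default_rng(0)
n, k = 20, 3
B = rng.normal(size=(n, k))
V = np.eye(k)              # prior: p(beta) = N(0, sigma^2 V)
sigma2 = 0.5
y = rng.normal(size=n)

Lam_p = np.linalg.inv(V) + B.T @ B        # posterior precision factor
mu_p = np.linalg.solve(Lam_p, B.T @ y)    # posterior mean

def log_post_unnorm(beta):
    # log prior + log likelihood, dropping beta-free constants, as in (1)
    return -0.5 / sigma2 * (beta @ np.linalg.inv(V) @ beta
                            + beta @ B.T @ B @ beta - 2 * y @ B @ beta)

def log_gauss_unnorm(beta):
    # completed-square form, again up to a beta-free constant
    d = beta - mu_p
    return -0.5 / sigma2 * (d @ Lam_p @ d)

# The two forms should differ by the same constant at every beta:
b1, b2 = rng.normal(size=k), rng.normal(size=k)
assert np.isclose(log_post_unnorm(b1) - log_gauss_unnorm(b1),
                  log_post_unnorm(b2) - log_gauss_unnorm(b2))
```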
[Extract from our book Bayesian Core, Chapter 3, pp.54-56]
3.2.1 Conjugate Priors
Specifying both the conditional prior on $\beta$,
$$
\beta|\sigma^2,X\sim\mathscr{N}_{k+1}(\tilde\beta,\sigma^2M^{-1})\,,
$$
where $M$ is a $(k+1,k+1)$ positive definite symmetric matrix, and the marginal prior on $\sigma^2$,
$$
\sigma^2|X\sim \mathscr{IG}(a,b),\qquad a,b>0,
$$
we indeed have conjugate priors in that the conditional posterior distribution
$\pi(\beta|\sigma^2,y,X)$ is
$$
\beta|\sigma^2,y,X\sim
\mathscr{N}_{k+1}\left((M+X^\text{T} X)^{-1}
\{(X^\text{T} X)\hat\beta+M\tilde\beta\},\sigma^2(M+X^\text{T} X)^{-1}\right),
$$
where $\hat\beta$ is the ordinary least squares (OLS) estimate, and the marginal posterior distribution
$\pi(\sigma^2|y,X)$ is
$$
\sigma^2|y,X\sim
\mathscr{IG}\left(\frac{n}{2}+a,b+\frac{s^2}{2}+\frac{(\tilde\beta-\hat\beta)^\text{T}
\left(M^{-1}+(X^\text{T} X)^{-1}\right)^{-1}(\tilde\beta-\hat\beta)}{2}\right)\,,
$$
where $s^2=(y-X\hat\beta)^\text{T} (y-X\hat\beta)$; these posteriors are of the same types as the prior distributions.
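A minimal numpy sketch of this conjugate update (the data $X$, $y$ and the hyperparameters $\tilde\beta$, $M$, $a$, $b$ below are illustrative placeholders, not from the book):

```python
# Conjugate posterior parameters for the normal / inverse-gamma prior above.
import numpy as np

rng = np.random.default_rng(1)
n, k1 = 50, 4                      # k1 = k + 1 (intercept included in X)
X = rng.normal(size=(n, k1))
y = rng.normal(size=n)

beta_tilde = np.zeros(k1)          # prior mean
M = np.eye(k1)                     # prior precision factor (positive definite)
a, b = 2.0, 1.0                    # inverse-gamma hyperparameters

XtX = X.T @ X
beta_hat = np.linalg.solve(XtX, X.T @ y)        # OLS estimate
s2 = (y - X @ beta_hat) @ (y - X @ beta_hat)    # residual sum of squares

# beta | sigma^2, y, X ~ N(post_mean, sigma^2 * inv(M + X'X))
post_mean = np.linalg.solve(M + XtX, XtX @ beta_hat + M @ beta_tilde)

# sigma^2 | y, X ~ IG(n/2 + a, b + s2/2 + quad/2)
d = beta_tilde - beta_hat
quad = d @ np.linalg.solve(np.linalg.inv(M) + np.linalg.inv(XtX), d)
post_shape = n / 2 + a
post_scale = b + s2 / 2 + quad / 2
```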
Integrating the conditional posterior of $\beta$ over the marginal posterior of $\sigma^2$ leads to a multivariate $t$ marginal posterior distribution on $\beta$, since
\begin{align*}
\pi(\beta|y,X) & \propto \left[(\beta-\{M+X^\text{T}X\}^{-1}[\{X^\text{T} X\}\hat\beta+M\tilde\beta])^\text{T} (M+X^\text{T} X)\right. \\
&\quad \times(\beta-\{M+X^\text{T} X\}^{-1}[\{X^\text{T} X\}\hat\beta+M\tilde\beta])+2b+s^2 \\
&\quad +\left.(\tilde\beta-\hat\beta)^\text{T} \left(M^{-1}+(X^\text{T} X)^{-1}\right)^{-1}
(\tilde\beta-\hat\beta)\right]^{-((n+k+1)/2+a)}
\end{align*}
(the computation is straightforward, if tedious, bookkeeping).
We recall that the density of a multivariate $\mathscr{T}_p(\nu,\theta,\Sigma)$ distribution on $\mathbb{R}^p$ is
$$
f(t|\nu,\theta,\Sigma)=\frac{\Gamma((\nu+p)/2)/\Gamma(\nu/2)}{\sqrt{\mbox{det}(\nu\pi\Sigma)}}\left[1+\frac{(t-\theta)^\text{T}
\Sigma^{-1}(t-\theta)}{\nu}\right]^{-(\nu+p)/2}\,.
$$
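A direct implementation of this density (a sketch, assuming numpy/scipy; the argument names just mirror the symbols in the formula):

```python
# Log-density of the multivariate t distribution T_p(nu, theta, Sigma).
import numpy as np
from scipy.special import gammaln

def mvt_logpdf(t, nu, theta, Sigma):
    p = len(theta)
    d = np.asarray(t) - theta
    quad = d @ np.linalg.solve(Sigma, d)   # (t-theta)' Sigma^{-1} (t-theta)
    _, logdet = np.linalg.slogdet(nu * np.pi * Sigma)
    return (gammaln((nu + p) / 2) - gammaln(nu / 2)
            - 0.5 * logdet
            - (nu + p) / 2 * np.log1p(quad / nu))

# Cross-check against scipy (requires scipy >= 1.6):
from scipy.stats import multivariate_t
theta, Sigma = np.zeros(2), np.eye(2)
x = np.array([0.3, -1.0])
assert np.isclose(mvt_logpdf(x, 5, theta, Sigma),
                  multivariate_t(theta, Sigma, df=5).logpdf(x))
```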
We thus have that, marginally and a posteriori,
$$
\beta|y,X \sim \mathscr{T}_{k+1}\left(n+2a,\hat\mu,\hat\Sigma\right),
$$
with
\begin{align*}
\hat\mu &= (M+X^\text{T} X)^{-1}((X^\text{T} X)\hat\beta+M\tilde\beta),\\
\hat\Sigma &= \frac{2b+s^2+(\tilde\beta-\hat\beta)^\text{T} \left(M^{-1}+(X^\text{T} X)^{-1}\right)^{-1}
(\tilde\beta-\hat\beta)}{n+2a}(M+X^\text{T} X)^{-1}.
\end{align*}
In this case, the posterior variance of $\beta$ is proportional to $\left(M+X^\text{T} X\right)^{-1}$.
The correlation structure is thus completely determined by the prior and the design matrix. The scale factor comes from the inverse gamma part: modulo an $(n+2a)/(n+2a-2)$ factor, it is the expectation of $\sigma^2$ under its marginal posterior.
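Continuing the numpy sketch from the conjugate update above (same placeholder data and hyperparameters), the marginal $t$ parameters are assembled as:

```python
# Parameters of the marginal t posterior on beta, reusing the earlier sketch.
nu = n + 2 * a                     # degrees of freedom
mu_hat = post_mean                 # same location as the conditional posterior
Sigma_hat = (2 * b + s2 + quad) / nu * np.linalg.inv(M + XtX)
# Posterior variance of beta (for nu > 2): nu / (nu - 2) * Sigma_hat,
# i.e. E[sigma^2 | y, X] * inv(M + X'X), matching the remark above.
```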
Best Answer
The normal conditional density's normalizing constant is not free of $\lambda$, so Wolfram is integrating the wrong thing. $$ (2\pi)^{-1}\int_0^\infty \underbrace{(2\pi\tau^2\lambda^2)^{-1/2}}_{\text{this part}}\exp\left[ - \frac{\beta^2}{2\tau^2\lambda^2} \right](1 + \lambda^2)^{-1} d \lambda. $$
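For what it's worth, a direct numerical quadrature that keeps the $\lambda$-dependent factor inside the integral behaves fine; a sketch assuming scipy, with arbitrary values for $\tau$ and $\beta$:

```python
# The (2*pi*tau^2*lambda^2)^(-1/2) factor depends on lambda, so it must
# stay inside the integral rather than be pulled out as a constant.
import numpy as np
from scipy.integrate import quad

tau, beta = 1.0, 0.7

def integrand(lam):
    norm_const = (2 * np.pi * tau**2 * lam**2) ** -0.5   # NOT free of lambda
    return norm_const * np.exp(-beta**2 / (2 * tau**2 * lam**2)) / (1 + lam**2)

val, err = quad(integrand, 0, np.inf)
print(val / (2 * np.pi))   # the marginal density, up to the outer constant
```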