I am trying to see whether a linear combination of multivariate t-distributed random variables again follows a multivariate t distribution.
In general, no, this is not the case, even with univariate t's (see here and here for example). Note that the difference of two t random variables is the sum of two t random variables in which the second component has the mean of the original variable multiplied by $-1$, so results for sums cover differences as well.
In some very particular cases, yes. Consider:
(i) in the limiting case of infinite degrees of freedom, linear combinations of multivariate normals are multivariate normal;
(ii) if the component t-variables are perfectly dependent (e.g. built on a single shared scaling variable), their sums will again be t;
(iii) in the univariate case, sums of independent Cauchy random variables will be Cauchy (illustrated in the sketch below). I haven't checked, but this may well extend to more of the multivariate case than just vectors of independent Cauchys (and the perfectly dependent case mentioned above);
(iv) in the limit of a very large number of components, where none of the components dominates variance-wise (that is, where no single coefficient-times-component-scale term becomes too large relative to the rest), you may be able to invoke a version of the central limit theorem.
In the case where the weights on the components are equal (effectively turning it into a scaled sum) and you're dealing with standard t variables (rather than ones with general means and variances), this paper has some information. Extending it to the case of a general mean is straightforward, but it doesn't deal with the general case of arbitrary scales, or equivalently arbitrary linear combinations.
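Here is a small simulation sketch of the two positive cases above (my own illustration, not from the paper; the sample size, seed, and the particular coefficient vector are arbitrary choices): a linear combination of components of a multivariate t built from one shared scaling variable is again t (case (ii)), and a sum of independent Cauchys is again Cauchy (case (iii)).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, nu = 500_000, 5

# Case (ii): a 3-component multivariate t_5 built from ONE shared
# chi-square scaling variable. Any linear combination of its components,
# rescaled by ||a||, is then univariate t_5 again.
z = rng.standard_normal((n, 3))
w = rng.chisquare(nu, n) / nu
x = z / np.sqrt(w)[:, None]               # rows ~ multivariate t_5
a = np.array([2.0, -1.0, 0.5])
lc = (x @ a) / np.linalg.norm(a)
print(stats.kstest(lc, "t", args=(nu,)))  # consistent with t_5

# Case (iii): the sum of two independent standard Cauchys is Cauchy(0, 2).
c = rng.standard_cauchy(n) + rng.standard_cauchy(n)
print(stats.kstest(c, "cauchy", args=(0, 2)))  # consistent with Cauchy(0, 2)

# By contrast, giving each t_5 component its OWN scaling variable makes
# the components independent, and their sum is no longer exactly t.
```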
Let's write the random variables $X$ and $Y$ as
$$
X = \mu + \varepsilon _x \\
Y = AX + b + \varepsilon _y
$$
with $ \varepsilon _x \sim \mathcal N (0, \Lambda ^{-1})$ and $ \varepsilon _y \sim \mathcal N (0, L ^{-1})$ independent. Plugging $X$ into the second equation above gives
$$
Y = A\mu + b + A\varepsilon _x + \varepsilon _y.
$$
This is a linear combination of normally distributed random variables and as such is itself normally distributed, with expectation $A\mu +b$ and covariance matrix $A\Lambda ^{-1}A^T + L^{-1}$ (using $\operatorname{var}(AX)=A\operatorname{var}(X)A^T$ and the independence of $\varepsilon_x$ and $\varepsilon_y$). From this you get $p(y)$.
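As a quick sanity check of this marginal (a hypothetical two-dimensional example; all the numbers below are arbitrary choices of mine, not from the question), one can simulate the model and compare the sample moments of $Y$ with the formulas:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

A = np.array([[1.0, 2.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])
mu = np.array([0.5, 2.0])
Lam = np.array([[2.0, 0.5], [0.5, 1.0]])   # precision of eps_x
L = np.array([[1.5, 0.0], [0.0, 3.0]])     # precision of eps_y

# Simulate X = mu + eps_x and Y = A X + b + eps_y:
x = rng.multivariate_normal(mu, np.linalg.inv(Lam), n)
y = x @ A.T + b + rng.multivariate_normal(np.zeros(2), np.linalg.inv(L), n)

print(y.mean(axis=0), A @ mu + b)                       # sample vs. A mu + b
print(np.cov(y.T))                                      # sample covariance
print(A @ np.linalg.inv(Lam) @ A.T + np.linalg.inv(L))  # A Lam^-1 A^T + L^-1
```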
The second fact is a bit more complicated and involves some tedious calculation. From Bayes' theorem it follows that
$$
p(x|y) \propto p(y|x)\,p(x) \\
\propto \exp\left(-\tfrac{1}{2}\left[(y-Ax-b)^TL(y-Ax-b) + (x-\mu)^T\Lambda(x-\mu)\right]\right).
$$
If you multiply everything out and then complete the square in $x$, you get to
something proportional to
$$
\exp\left(-\tfrac{1}{2}\,(x - m)^T(\Lambda + A^TLA)(x - m)\right),
\qquad
m = (\Lambda + A^TLA)^{-1}\left(A^TL(y-b) + \Lambda\mu\right),
$$
which is proportional to your given normal.
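One way to double-check the completed square (reusing the same arbitrary example matrices as in the sketch above) is to compare these posterior parameters with the standard Schur-complement conditioning formulas applied to the joint Gaussian of $(X, Y)$; the two must agree by the Woodbury identity:

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])
mu = np.array([0.5, 2.0])
Lam = np.array([[2.0, 0.5], [0.5, 1.0]])   # precision of eps_x
L = np.array([[1.5, 0.0], [0.0, 3.0]])     # precision of eps_y
y = np.array([3.0, 0.5])

# Posterior parameters from the completed square above:
S = np.linalg.inv(Lam + A.T @ L @ A)                 # posterior covariance
m = S @ (A.T @ L @ (y - b) + Lam @ mu)               # posterior mean

# The same posterior via Schur-complement conditioning of the joint (X, Y):
Sx = np.linalg.inv(Lam)
Syy = A @ Sx @ A.T + np.linalg.inv(L)
K = Sx @ A.T @ np.linalg.inv(Syy)                    # "gain" matrix
m2 = mu + K @ (y - A @ mu - b)
S2 = Sx - K @ A @ Sx

print(np.allclose(m, m2), np.allclose(S, S2))        # True True
```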
Best Answer
A multivariate Gaussian (or Normal) random variable $X=(X_1,X_2,\ldots,X_n)$ can be defined as an affine transformation of a tuple of independent standard Normal variates $Z=(Z_1,Z_2,\ldots, Z_m)$. This easily implies the desired result, because when we condition $X$, we impose linear constraints among the $Z_j$. (If this is not obvious, please read on through the details.) This merely reduces the number of "free" $Z_j$ contributing to the variation among the $X_i$--but those $X_i$ nevertheless remain affine combinations of independent standard Normals, QED.
We can obtain this result in three steps of increasing generality. First, the distribution of $X$ conditional on its first component is Normal. Second, this implies the distribution of $X$ conditional on some linear constraint $C^\prime X = d$ is Normal. Finally, that implies the distribution of $X$ conditional on any finite set of $r$ such linear constraints is Normal.
Details
By definition,
$$X = \mathbb{A} Z + B$$
for some $n\times m$ matrix $\mathbb{A} = (a_{ij})$ and $n$-vector $B = (b_1, b_2, \ldots, b_n)$. Because the composition of two affine transformations is again affine, any affine transformation of $X$ is therefore also Normal. This fact will be used repeatedly.
Fix a number $x_1$ in order to consider the distribution of $X$ conditional on $X_1=x_1$. Replacing $X_1$ by its definition produces
$$x_1 = X_1 = b_1 + a_{11}Z_1 + a_{12}Z_2 + \cdots + a_{1m}Z_m.$$
When all the $a_{1j}=0$, the two cases $x_1=b_1$ and $x_1\ne b_1$ are easy to dispose of, so let's move on to the alternative where at least one $a_{1k}\ne 0$. In that case, first make an orthogonal change of variables: let $Q$ be an orthogonal matrix whose first row is proportional to $(a_{11}, a_{12}, \ldots, a_{1m})$, and replace $Z$ by $QZ$ and $\mathbb{A}$ by $\mathbb{A}Q^\prime$. This changes neither $X$ nor the distribution of $Z$ (an orthogonal transformation of independent standard Normals is again a vector of independent standard Normals), and afterwards the first row of $\mathbb{A}$ is $(a_{11}, 0, \ldots, 0)$ with $a_{11}\ne 0$. The condition $X_1 = x_1$ now determines $Z_1$ alone:
$$Z_1 = \frac{x_1 - b_1}{a_{11}},$$
and because $Z_2, \ldots, Z_m$ are independent of $Z_1$, conditioning leaves their joint distribution untouched. Plugging this value of $Z_1$ into $\mathbb{A}Z + B$ produces an affine combination of the remaining $Z_j$, explicitly exhibiting the conditional distribution of $X$ as an affine combination of $m-1$ independent standard Normal variates, whence the conditional distribution is Normal.
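Here is a small numerical sketch of this step (the $3\times 3$ matrix, offset vector, and value of $x_1$ below are arbitrary choices of mine): carry out the rotation so that the constraint involves only the first coordinate, fix it, and check that the reduced affine map reproduces the usual conditional moments of the Normal with mean $B$ and covariance $\mathbb{A}\mathbb{A}^\prime$.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
B = np.array([1.0, 0.0, -1.0])
x1 = 2.5

# Build an orthogonal Q whose first row is proportional to A's first row.
q1 = A[0] / np.linalg.norm(A[0])
Q = np.linalg.qr(np.column_stack([q1, np.eye(3)[:, 1:]]))[0].T
if Q[0] @ q1 < 0:                  # QR may negate the first direction
    Q[0] = -Q[0]

A2 = A @ Q.T                       # X = A2 (QZ) + B, QZ again iid standard
assert np.allclose(A2[0, 1:], 0)   # constraint now involves Z'_1 only

z1 = (x1 - B[0]) / A2[0, 0]        # conditioning fixes Z'_1
cond_mean = B[1:] + A2[1:, 0] * z1
cond_cov = A2[1:, 1:] @ A2[1:, 1:].T

# Textbook conditioning of N(B, A A') on its first component:
S = A @ A.T
ref_mean = B[1:] + S[1:, 0] * (x1 - B[0]) / S[0, 0]
ref_cov = S[1:, 1:] - np.outer(S[1:, 0], S[0, 1:]) / S[0, 0]
print(np.allclose(cond_mean, ref_mean), np.allclose(cond_cov, ref_cov))
```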
Now consider any vector $C=(c_1, c_2, \ldots, c_n)$ and another constant $d$. To obtain the conditional distribution of $X$ given $C^\prime X = d$, construct the $(n+1)$-vector
$$Y = (Y_1,Y_2,\ldots, Y_{n+1})=(C^\prime X, X_1, X_2, \ldots, X_n).$$
It is an affine combination of the same $Z_j$: the matrix $\mathbb{A}$ is row-augmented (at the top) by $C^\prime \mathbb{A}$, giving an $(n+1)\times m$ matrix, and the vector $B$ is augmented at the beginning by the constant $C^\prime B$. Therefore, by definition, $Y$ is multivariate Normal. Applying the preceding result to $Y$ and the value $d$ immediately shows that $Y$, conditional on $Y_1 = d$, is multivariate Normal. Upon dropping the first component of $Y$ (which is an affine transformation!), that is precisely the distribution of $X$ conditional on $C^\prime X = d$.
The distribution of $X$ conditional on $\mathbb{C}X = D$ for an $r\times n$ matrix $\mathbb{C}$ and an $r$-vector $D$ is obtained inductively by applying the preceding construction one term at a time (working row-by-row through $\mathbb{C}$ and component-by-component through $D$). The conditionals are Normal at every step, whence the final conditional distribution is Normal, too.
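A numerical sketch of this induction (the helper `condition_once` and all example matrices below are my own illustrative choices, not part of the answer): impose the rows of $\mathbb{C}X = D$ one at a time, each step rotating $Z$ and fixing one coordinate as above, then compare with the closed-form formulas for a Gaussian conditioned on linear constraints.

```python
import numpy as np

def condition_once(A, B, c, d):
    """Return (A', B') with A'Z' + B' distributed as X | c'X = d."""
    row = c @ A                            # the constraint in Z-coordinates
    q1 = row / np.linalg.norm(row)
    m = A.shape[1]
    # Orthogonal Q whose first row is q1 (assumes q1[0] != 0, as here).
    Q = np.linalg.qr(np.column_stack([q1, np.eye(m)[:, 1:]]))[0].T
    if Q[0] @ q1 < 0:                      # fix the sign chosen by QR
        Q[0] = -Q[0]
    A2 = A @ Q.T                           # now only Z'_1 enters c'X
    z1 = (d - c @ B) / (row @ q1)          # the value conditioning fixes
    return A2[:, 1:], B + A2[:, 0] * z1

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal(4)
C = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, -1.0]])
D = np.array([0.5, 1.5])

Ai, Bi = A, B
for c, d in zip(C, D):                     # one constraint at a time
    Ai, Bi = condition_once(Ai, Bi, c, d)

# Closed-form conditioning of N(B, A A') on C X = D, for comparison:
S = A @ A.T
K = S @ C.T @ np.linalg.inv(C @ S @ C.T)
print(np.allclose(Bi, B + K @ (D - C @ B)),
      np.allclose(Ai @ Ai.T, S - K @ C @ S))   # True True
```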