Method 1: characteristic functions
Referring to (say) the Wikipedia article on the multivariate normal distribution, and using the one-dimensional technique from the article on sums of normal distributions, we find the logarithm of its characteristic function is
$$i t'\mu - \tfrac{1}{2} t' \Sigma t.$$
The cf of a sum is the product of the cfs, so the logarithms add. This tells us the cf of the sum of two independent MVN distributions (indexed by 1 and 2) has a logarithm equal to
$$i t'(\mu_1 + \mu_2) - \tfrac{1}{2} t' (\Sigma_1 + \Sigma_2) t.$$
Because the cf uniquely determines the distribution, we can immediately read off that the sum is MVN with mean $\mu_1 + \mu_2$ and covariance matrix $\Sigma_1 + \Sigma_2$.
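As a quick sanity check on this conclusion, here is a minimal simulation sketch (the means, covariance matrices, sample size, and tolerances below are arbitrary illustrative choices, not taken from the argument itself): drawing from two independent MVNs and summing should reproduce mean $\mu_1 + \mu_2$ and covariance $\Sigma_1 + \Sigma_2$.

```python
import numpy as np

# Illustrative parameters (not from the text): two independent bivariate normals.
rng = np.random.default_rng(0)
mu1, mu2 = np.array([1.0, -2.0]), np.array([0.5, 3.0])
Sigma1 = np.array([[2.0, 0.3], [0.3, 1.0]])
Sigma2 = np.array([[1.0, -0.4], [-0.4, 0.5]])

n = 200_000
x = rng.multivariate_normal(mu1, Sigma1, size=n)
y = rng.multivariate_normal(mu2, Sigma2, size=n)
s = x + y  # the sum whose distribution we are checking

# The empirical mean and covariance of the sum should match mu1 + mu2
# and Sigma1 + Sigma2 up to Monte Carlo error.
print(np.allclose(s.mean(axis=0), mu1 + mu2, atol=0.02))
print(np.allclose(np.cov(s.T), Sigma1 + Sigma2, atol=0.05))
```

This only illustrates the moment identities, of course; the cf argument above is what establishes that the sum is exactly MVN.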
Method 2: Linear combinations
View the pair of MVN distributions as being a single MVN with mean $(\mu_1, \mu_2)$ and covariance $\Sigma_1 \oplus \Sigma_2$. In block matrix form this is
$$\Sigma_1 \oplus \Sigma_2 = \pmatrix{\Sigma_1 & 0 \\ 0 & \Sigma_2}$$
where the zeros represent square matrices of zeros (indicating all covariances between any component of distribution 1 and any component of distribution 2 are zero).
The sum is obtained by applying the linear transformation $(x_1, x_2) \mapsto x_1 + x_2$ and therefore is MVN. The covariance again works out to $\Sigma_1 + \Sigma_2$. (See p. 2 #4 in course notes by the late Dr. E.B. Moser, LSU EXST 7037. Edit Jan 2017: alas, the university appears to have removed them from its Web site. A copy of the original PDF file is available on archive.org.)
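The covariance computation in Method 2 can be sketched numerically (the matrices below are illustrative placeholders): stack the two MVNs into one with block-diagonal covariance $\Sigma_1 \oplus \Sigma_2$, take the sum as the linear map $A = [I \mid I]$, and check that $A (\Sigma_1 \oplus \Sigma_2) A'$ equals $\Sigma_1 + \Sigma_2$.

```python
import numpy as np

# Illustrative covariance matrices (not from the text).
Sigma1 = np.array([[2.0, 0.3], [0.3, 1.0]])
Sigma2 = np.array([[1.0, -0.4], [-0.4, 0.5]])

d = Sigma1.shape[0]
# Sigma1 (+) Sigma2: block-diagonal covariance of the stacked vector.
block = np.block([[Sigma1, np.zeros((d, d))],
                  [np.zeros((d, d)), Sigma2]])
# A = [I | I] sums the two sub-vectors.
A = np.hstack([np.eye(d), np.eye(d)])

# Covariance of the linear image: A (Sigma1 (+) Sigma2) A' = Sigma1 + Sigma2.
print(np.allclose(A @ block @ A.T, Sigma1 + Sigma2))  # True
```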
It is well known that a linear combination of two normal random variables is also a normal random variable. Are there any common non-normal distribution families (e.g., Weibull) that also share this property?
The normal distribution satisfies a nice convolution identity: $X_1\sim N\left[\mu _1,\sigma _1^2\right],\ X_2\sim N\left[\mu _2,\sigma _2^2\right]\Longrightarrow X_1+X_2\sim N\left[\mu _1+\mu _2,\sigma _1^2+\sigma _2^2\right]$. If you are referring to the central limit theorem, then, for example, gamma distributions with the same scale parameter share that property and convolve to gamma distributions (the shape parameters add). Please see A cautionary note regarding invocation of the central limit theorem. In general, however, gamma distributions with unequal scale parameters "add" by a convolution that is not a gamma distribution, but rather a gamma function multiplying a hypergeometric function of the first kind, as found in Eq. (2) of convolution of two gamma distributions. The other definition of adding, forming a mixture distribution of unrelated processes, would not necessarily exhibit any central limit, for example, if the means are different.
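The gamma closure under convolution (which holds when the two gammas share a common scale parameter) can be checked via characteristic functions: the cf of $\mathrm{Gamma}(k, \theta)$ is $(1 - i\theta t)^{-k}$, so with a common $\theta$ the cfs multiply to another gamma cf with the shapes added. A short numerical sketch (the parameter values are arbitrary illustrative choices):

```python
import numpy as np

# Characteristic function of Gamma(shape=k, scale=theta): (1 - i*theta*t)^(-k).
def gamma_cf(k, theta, t):
    return (1 - 1j * theta * t) ** (-k)

# Illustrative parameters with a COMMON scale theta.
k1, k2, theta = 2.5, 4.0, 1.3
t = np.linspace(-5, 5, 101)

# cf of a sum of independent variables = product of the cfs.
lhs = gamma_cf(k1, theta, t) * gamma_cf(k2, theta, t)
# Candidate closure: Gamma(k1 + k2, theta).
rhs = gamma_cf(k1 + k2, theta, t)
print(np.allclose(lhs, rhs))  # True
```

With unequal scales the product of the cfs is no longer of the form $(1 - i\theta t)^{-k}$, which is why the convolution leaves the gamma family in that case.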
There are probably other examples; I haven't done an exhaustive search. Closure under convolution does not seem far-fetched. For linear combination, the product of a Pearson VII with a Pearson VII is another Pearson VII.
Best Answer
In general, no, this is not the case, even with univariate $t$'s (see here and here for example; note that the difference of two $t$ random variables is the sum of two $t$ random variables, with the second component's mean being that of the original random variable multiplied by $-1$).
In some very particular cases, yes. Consider:
(i) the limiting case of infinite degrees of freedom, linear combinations of multivariate normals are multivariate normal;
(ii) if the component t-variables are perfectly dependent their sums will be multivariate-t;
(iii) in the univariate case, sums of independent Cauchy random variables will be Cauchy. I haven't checked, but this may well extend to some subsets of the multivariate case beyond vectors of independent Cauchy components (and the perfectly dependent case mentioned above);
(iv) in the limit of very large numbers of components, where none of the components dominates variance-wise (that is, where the coefficient of each component times the variance of that component doesn't become too large), you may be able to invoke a version of the central limit theorem.
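Point (iii) can be verified the same way the normal case was, via characteristic functions: the cf of $\mathrm{Cauchy}(x_0, \gamma)$ is $\exp(i x_0 t - \gamma |t|)$, so the product of two independent Cauchy cfs is again a Cauchy cf with the locations and scales adding. A short numerical sketch (parameter values are arbitrary illustrative choices):

```python
import numpy as np

# Characteristic function of Cauchy(location=x0, scale=gamma).
def cauchy_cf(x0, gamma, t):
    return np.exp(1j * x0 * t - gamma * np.abs(t))

# Illustrative parameters for two independent Cauchy variables.
x01, g1 = 0.5, 1.0
x02, g2 = -1.0, 2.5
t = np.linspace(-5, 5, 101)

# cf of the sum = product of the cfs = cf of Cauchy(x01 + x02, g1 + g2).
lhs = cauchy_cf(x01, g1, t) * cauchy_cf(x02, g2, t)
rhs = cauchy_cf(x01 + x02, g1 + g2, t)
print(np.allclose(lhs, rhs))  # True
```

Note that this closure relies on the exponent being linear in $x_0$ and $\gamma$; for general $t$ distributions the cf has no such form, which is consistent with the negative answer above.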
In the case where the weights on the components are equal (effectively converting it to a scaled sum) and you're dealing with standard t (rather than ones with general means and variances), this paper has some information. Extending it to the case of a general mean is straightforward but it doesn't deal with the general case of arbitrary scales, or equivalently arbitrary linear combinations.