I think my comments have grown long enough for an answer...
One reason you might want to look at the multivariate case rather than separate univariate cases is when there is a lot of dependence between the response variables. It is quite possible for each univariate response to show "no effect" while the multivariate analysis shows a strong one. See this plot of a difference between two groups on just two dimensions.
Note that here, $y$ and $x$ are both DVs, and the grouping variable (red/black indicator) is the (lone) IV in the 'regression'.
The issue is that the thing whose mean really differs between the two groups is not the variable $X$ or the variable $Y$ (that is, $\mu_{X2}-\mu_{X1}$ is almost zero, same for $Y$), but a particular linear combination - in the example, $Y-X$ - on which the means of the two groups strongly differ.
In that case the univariate $t$ tests find nothing, but a multivariate test sees it easily (this can be done via multivariate regression with a single IV, the group indicator).
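As a hedged illustration of this point, here is a small Python sketch (all parameter values are invented for the demonstration, not taken from the answer): two strongly correlated DVs whose group means barely differ marginally, but which differ strongly along the combination $Y-X$. The univariate $t$ tests are weak, while a two-sample Hotelling $T^2$ test, the multivariate analogue, detects the difference.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50  # subjects per group (hypothetical)

# Two strongly correlated DVs; the groups differ mainly along Y - X.
cov = [[1.0, 0.95], [0.95, 1.0]]
g1 = rng.multivariate_normal([0.0, 0.0], cov, size=n)
g2 = rng.multivariate_normal([0.15, -0.15], cov, size=n)  # tiny marginal shifts

# Univariate t tests on each DV separately: little evidence of a difference.
p_x = stats.ttest_ind(g1[:, 0], g2[:, 0]).pvalue
p_y = stats.ttest_ind(g1[:, 1], g2[:, 1]).pvalue

# Two-sample Hotelling T^2, converted to an F statistic.
d = g1.mean(axis=0) - g2.mean(axis=0)            # mean difference vector
S = ((n - 1) * np.cov(g1.T) + (n - 1) * np.cov(g2.T)) / (2 * n - 2)  # pooled cov
t2 = (n * n / (2 * n)) * d @ np.linalg.solve(S, d)
p_dims = 2
f_stat = t2 * (2 * n - p_dims - 1) / ((2 * n - 2) * p_dims)
p_multi = stats.f.sf(f_stat, p_dims, 2 * n - p_dims - 1)

print("univariate p-values:", p_x, p_y)
print("multivariate (Hotelling) p-value:", p_multi)
```

The multivariate p-value comes out tiny because the test implicitly considers all linear combinations of the DVs, including $Y-X$, on which the group separation is large relative to its (small) variance.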
The same issue applies to other, less simple regressions.
Basically, can you do everything with the equivalent linear univariate regression model that you could with the multivariate model? I believe the answer is no.
If your goal is simply to estimate the effects (the parameters in $\mathbf{B}$) or to make predictions based on the model, then it does not matter which of the two formulations you adopt.
However, for statistical inference, and especially for classical significance testing, the multivariate formulation seems practically irreplaceable. More specifically, let me use a typical data analysis in psychology as an example. The data from $n$ subjects are expressed as
$$ \underset{n \times t}{\mathbf{Y}} = \underset{n \times k}{\mathbf{X}} \hspace{2mm}\underset{k \times t}{\mathbf{B}} + \underset{n \times t}{\mathbf{R}},
$$
where the $k-1$ between-subjects explanatory variables (factors and/or quantitative covariates) are coded as the columns of $\mathbf{X}$, while the $t$ levels of the repeated-measures (within-subject) factors are represented as simultaneous response variables, the columns of $\mathbf{Y}$.
With the above formulation, any general linear hypothesis can be easily expressed as
$$\mathbf{L} \mathbf{B} \mathbf{M} = \mathbf{C},$$
where $\mathbf{L}$ is composed of weights among the between-subjects explanatory variables, $\mathbf{M}$ contains weights among the levels of the repeated-measures factors, and $\mathbf{C}$ is a constant matrix, usually $\mathbf{0}$.
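To make the $\mathbf{L}\mathbf{B}\mathbf{M}$ machinery concrete, here is a hypothetical toy example in Python/numpy (the design, true parameter values, and contrast choices are my own, not from the answer): two between-subject groups, $t=3$ repeated-measure levels, with $\mathbf{L}$ selecting the group effect and $\mathbf{M}$ contrasting the within-subject levels, so $\mathbf{L}\mathbf{B}\mathbf{M}=\mathbf{0}$ corresponds to the group-by-time interaction being null.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical design: 2 between-subject groups, t = 3 repeated measures.
n, t = 40, 3
group = np.repeat([0, 1], n // 2)
X = np.column_stack([np.ones(n), group])          # n x k design matrix (k = 2)
B_true = np.array([[5.0, 5.0, 5.0],               # intercept at each level
                   [0.0, 1.0, 2.0]])              # group effect grows with level
Y = X @ B_true + rng.normal(scale=0.5, size=(n, t))

# Least-squares estimate of the k x t coefficient matrix B.
B_hat = np.linalg.lstsq(X, Y, rcond=None)[0]

# L picks out the between-subjects group effect (a row combination of B)...
L = np.array([[0.0, 1.0]])
# ...and the columns of M are contrasts among the within-subject levels.
M = np.array([[1.0, 1.0],
              [-1.0, 0.0],
              [0.0, -1.0]])

# The general linear hypothesis L B M = 0 here tests whether the group
# effect is constant across the repeated-measure levels (no interaction).
effect = L @ B_hat @ M
print(effect)  # far from zero, since the true interaction is nonzero
```

With the true $\mathbf{B}$ above, $\mathbf{L}\mathbf{B}\mathbf{M} = [-1, -2]$, so the estimated effect should land near that; a test statistic (e.g. Wilks' lambda or an $F$ approximation) would then be built from $\mathbf{L}\hat{\mathbf{B}}\mathbf{M}$ and the residual covariance.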
The beauty of the multivariate system lies in its separation between the two types of variables, between- and within-subject. It is this separation that allows for the easy formulation of three types of significance testing under the multivariate framework: classical multivariate testing, repeated-measures multivariate testing, and repeated-measures univariate testing. Furthermore, Mauchly's test for sphericity violation and the corresponding correction methods (Greenhouse-Geisser and Huynh-Feldt) also become natural for univariate testing in the multivariate system. This is exactly how statistical packages implement those tests, such as the car package in R, GLM in IBM SPSS Statistics, and the REPEATED statement in PROC GLM of SAS.
I'm not sure whether the formulation matters in Bayesian data analysis, but I doubt that the above testing capability could be formulated and implemented under the univariate platform.
Best Answer
Try the MRCE package in R, which implements "multivariate regression with covariance estimation".