I believe the problem is with your guess that the inverse gamma extends so easily to the multivariate case.
From the distributions appendix of Gelman et al., Bayesian Data Analysis (3rd edition), p. 582:

> The Inverse-Wishart distribution is the conjugate prior distribution for the multivariate normal covariance matrix. ...
> The Wishart distribution is the conjugate prior distribution for the inverse covariance matrix in a multivariate normal distribution and is a multivariate generalization of the gamma distribution.
I'm uncertain whether you'd like to proceed in your own investigation with this hint, or if you'd like me to spill the beans and post a full solution. (Though, turning to page 73 of the same text, we find the particular underlying algebra that you're interested in.)
First of all, the formulas are defined in terms of variance, not standard deviations.
Second, the variance of the posterior is not the variance of your data but the variance of the estimated parameter $\mu$. As you can see from the description ("Normal with known variance $\sigma^2$"), this is the formula for estimating $\mu$ when $\sigma^2$ is known. The prior parameters $\mu_0$ and $\sigma_0^2$ are parameters of the distribution of $\mu$, hence the assumed model is
$$
\begin{align}
X_i &\sim \mathrm{Normal}(\mu, \sigma^2) \\
\mu &\sim \mathrm{Normal}(\mu_0, \sigma_0^2)
\end{align}
$$
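To make the known-variance case concrete, here is a minimal sketch of the standard conjugate update for $\mu$ (precisions add; the posterior mean is a precision-weighted average of the prior mean and the data). The function name `posterior_mu` is just illustrative:

```python
import numpy as np

def posterior_mu(x, sigma2, mu0, sigma0_2):
    """Posterior of mu when the data variance sigma2 is known.

    Standard conjugate update: precisions add, and the posterior mean
    is a precision-weighted average of the prior mean and the data.
    """
    n = len(x)
    prec = 1.0 / sigma0_2 + n / sigma2                    # posterior precision
    mu_post = (mu0 / sigma0_2 + np.sum(x) / sigma2) / prec
    return mu_post, 1.0 / prec                            # posterior mean, variance

# With a very flat prior, the posterior mean approaches the sample mean:
x = np.array([1.0, 2.0, 3.0])
mu_post, var_post = posterior_mu(x, sigma2=1.0, mu0=0.0, sigma0_2=1e6)
```

Note that the returned variance is the posterior variance of $\mu$ itself, not of the data, which is the distinction made above.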
When both $\mu$ and $\sigma^2$ are unknown and are to be estimated, you need a slightly more complicated model (in the Wikipedia table, under "$\mu$ and $\sigma^2$ Assuming exchangeability"):
$$
\begin{align}
X_i &\sim \mathrm{Normal}(\mu, \sigma^2) \\
\mu &\sim \mathrm{Normal}(\mu_0, \tfrac{\sigma^2}{\nu}) \\
\sigma^2 &\sim \mathrm{IG}(\alpha, \beta)
\end{align}
$$
where we first update the parameters of the inverse gamma distribution for $\sigma^2$:
$$
\begin{align}
\alpha' &= \alpha + \frac{n}{2} \\
\beta' &= \beta + \frac{1}{2}\sum_{i=1}^n (x_i -\bar x)^2 +
\frac{n\nu(\bar x -\mu_0)^2}{2(n+\nu)}
\end{align}
$$
and then we can calculate the posterior mean of $\mu$ and the MAP point estimate of $\sigma^2$:
$$
\begin{align}
\mu &= \frac{ \mu_0\nu + \bar x n }{\nu + n} \\
\operatorname{Mode}(\sigma^2) &= \frac{ \beta' }{ \alpha' + 1 }
\end{align}
$$
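The two update steps above fit in a few lines of code. This is a sketch of the formulas as written (the function name `nig_update` is my own); it returns the posterior mean of $\mu$ and the MAP estimate of $\sigma^2$:

```python
import numpy as np

def nig_update(x, mu0, nu, alpha, beta):
    """Normal-Inverse-Gamma conjugate update, following the formulas above.

    Returns the posterior mean of mu and the MAP estimate of sigma^2.
    """
    n = len(x)
    xbar = np.mean(x)
    # Updated inverse-gamma parameters for sigma^2:
    alpha_p = alpha + n / 2.0
    beta_p = (beta
              + 0.5 * np.sum((x - xbar) ** 2)
              + n * nu * (xbar - mu0) ** 2 / (2.0 * (n + nu)))
    # Posterior mean of mu and MAP (mode) of sigma^2:
    mu_post = (mu0 * nu + xbar * n) / (nu + n)
    sigma2_map = beta_p / (alpha_p + 1.0)
    return mu_post, sigma2_map
```

For example, with $x = (1, 2, 3)$, $\mu_0 = 0$, $\nu = 1$, $\alpha = \beta = 1$: $\alpha' = 2.5$, $\beta' = 1 + 1 + 1.5 = 3.5$, so the posterior mean of $\mu$ is $1.5$ and the MAP of $\sigma^2$ is $3.5/3.5 = 1$.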
For learning more, refer to the paper "Conjugate Bayesian analysis of the Gaussian distribution" by Kevin Murphy, or the notes "The Conjugate Prior for the Normal Distribution" by Michael Jordan (notice that there are slight differences between those two sources, and that some formulas are given for the precision $\tau$ rather than the variance), and M. DeGroot, Optimal Statistical Decisions, McGraw-Hill, 1970 (pp. 169-171).
Best Answer
At least sort of. Let's look at them one at a time first (taking the other as given).
From the link (with the modification of following the convention of using Greek symbols for parameters):
$f(x|\mu,\tau) = \frac{1}{2\tau} \exp \left( -\frac{|x-\mu|}{\tau} \right) \,$
- scale parameter:
$\cal{L}(\tau) \propto \tau^{-k-1} e^{-\frac{S}{\tau}} \,$
for certain values of $k$ and $S$. That is, the likelihood is of inverse-gamma form.
So the scale parameter has a conjugate prior: by inspection, the conjugate prior is inverse gamma.
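Concretely, with $n$ observations the likelihood is $\tau^{-n}\exp(-S/\tau)$ with $S = \sum_i |x_i - \mu|$, so an $\mathrm{IG}(\alpha, \beta)$ prior on $\tau$ updates to $\mathrm{IG}(\alpha + n, \beta + S)$. A minimal sketch (function name is illustrative):

```python
import numpy as np

def laplace_scale_posterior(x, mu, alpha, beta):
    """Conjugate update for the Laplace scale tau, with mu taken as known.

    Prior: tau ~ InverseGamma(alpha, beta).
    Likelihood: prod_i 1/(2 tau) exp(-|x_i - mu|/tau) ∝ tau^{-n} e^{-S/tau},
    so the posterior is InverseGamma(alpha + n, beta + S), S = sum_i |x_i - mu|.
    """
    x = np.asarray(x)
    S = np.sum(np.abs(x - mu))
    return alpha + len(x), beta + S
```

For instance, $x = (0, 1, 3)$ with $\mu = 1$ gives $S = 3$, so $\mathrm{IG}(2, 1)$ updates to $\mathrm{IG}(5, 4)$.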
- location parameter
This is, indeed, more tricky, because $\sum_i|x_i-\mu|$ doesn't simplify into something convenient in $\mu$; I don't think there's any way to 'collect the terms' (well in a way there sort of is, but we don't need to anyway).
A uniform prior will simply truncate the posterior, which isn't so bad to work with if that seems plausible as a prior.
One interesting possibility that may occasionally be useful is that it's rather easy to include a Laplace prior (one with the same scale as the data) by use of a pseudo-observation. One might also approximate some other (tighter) prior via several pseudo-observations.
In fact, to generalize from that: if I were working with a Laplace, I'd be tempted to move from the constant-scale, constant-weight case to a weighted-observation version of the Laplace (equivalently, a potentially different scale for every data point). The log-likelihood is still a continuous piecewise-linear function, but the slope can now change by non-integer amounts at the join points. A convenient "conjugate" prior then exists: just another 'weighted' Laplace or, indeed, anything of the form $\exp(-\sum_j |\mu-\theta_j|/\phi_j)$ or $\exp(-\sum_j w^*_j|\mu-\theta_j|)$ (appropriately scaled to make an actual density). This is a very flexible family of distributions, it yields a posterior "of the same form" as the weighted-observation likelihood, and it is easy to work with and draw from; indeed, even the pseudo-observation trick still works.
It is also flexible enough that it can be used to approximate other priors.
(More generally still, one could work on the log-scale and use a continuous, piece-wise-linear log-concave prior and the posterior would also be of that form; this would include asymmetric Laplace as a special case)
Example
Just to show that it's pretty easy to deal with: the figure below shows a prior (dotted grey), likelihood (dashed, black), and posterior (solid, red) for the location parameter of a weighted Laplace (this was with known scales).
The weighted Laplace approach would work nicely in MCMC, I think.
--
I wonder if the resulting posterior's mode is a weighted median?
-- actually (to answer my own question), it looks like the answer to that is 'yes'. That makes it rather nice to work with.
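To see why the mode is a weighted median: the log-posterior is $-\sum_i |\mu - x_i|/\tau - \sum_j |\mu - \theta_j|/\phi_j$ up to a constant, and minimizing a weighted sum of absolute deviations is exactly the weighted-median problem. A sketch under those assumptions (data points weighted $1/\tau$, prior anchors $\theta_j$ weighted $1/\phi_j$; function names are my own):

```python
import numpy as np

def weighted_median(points, weights):
    """Weighted median: the point where cumulative weight first reaches
    half the total weight; this minimizes sum_i w_i |m - p_i|."""
    order = np.argsort(points)
    p, w = np.asarray(points, float)[order], np.asarray(weights, float)[order]
    cum = np.cumsum(w)
    return p[np.searchsorted(cum, 0.5 * cum[-1])]

def laplace_posterior_mode(x, tau, thetas, phis):
    """Posterior mode of mu for a Laplace likelihood (common scale tau)
    with a 'weighted Laplace' prior exp(-sum_j |mu - theta_j|/phi_j):
    the weighted median of the pooled data and prior anchor points."""
    pts = np.concatenate([np.asarray(x, float), np.asarray(thetas, float)])
    wts = np.concatenate([np.full(len(x), 1.0 / tau),
                          1.0 / np.asarray(phis, float)])
    return weighted_median(pts, wts)
```

A prior anchor with a large scale $\phi_j$ gets a small weight and barely moves the mode, which matches the pseudo-observation intuition above.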
--
Joint prior
The obvious approach would be to write $f(\mu,\tau)=f(\mu|\tau)f(\tau)$: it would be relatively easy to have $\mu|\tau$ in the same form as above - where $\tau$ could be a scaling factor on the prior, so the prior would be specified relative to $\tau$ - and then an inverse gamma prior on $\tau$, unconditionally.
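Sampling from that factored joint prior is straightforward: draw $\tau$ from the inverse gamma, then draw $\mu \mid \tau$ from the conditional prior scaled by $\tau$. A sketch, assuming the simplest version where $\mu \mid \tau$ is a single Laplace with scale exactly $\tau$ and a hypothetical anchor `mu0`:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_joint_prior(alpha, beta, mu0, size=1):
    """Sample from f(mu, tau) = f(mu | tau) f(tau):
    tau ~ InverseGamma(alpha, beta), then mu | tau ~ Laplace(mu0, tau).
    (Assumed choice: the conditional prior scale is exactly tau.)"""
    # Inverse-gamma draw via reciprocal of a Gamma(alpha, rate=beta) draw:
    tau = 1.0 / rng.gamma(alpha, 1.0 / beta, size)
    mu = rng.laplace(mu0, tau)       # mu | tau
    return mu, tau
```

The same scheme works with the weighted-Laplace conditional prior discussed above; only the $\mu \mid \tau$ draw changes.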
Doubtless something more general for the joint prior is quite possible, but I don't think I'll pursue the joint case further than that here.
--
I've never seen or heard of this weighted-Laplace prior approach before, but it was rather simple to come up with, so it's probably been done already. (References are welcome, if anyone knows of any.)
If nobody knows of any references at all, maybe I should write something up, but it would be astonishing if it hasn't been done before.