No reconciliation is needed. In one case you are referring to the sampling distribution of the maximum likelihood estimator, which is a function of the data. In the other, you are referring to the posterior distribution of the actual model parameter. Two different referents; two different solutions.
The advantage of conjugacy is that we get nice closed-form expressions for the posterior distribution, and this ability depends on the form of the likelihood function, not on the sampling distribution of the maximum likelihood estimator.
If we look at the Normal likelihood function, we see:
$$\mathcal{L}(\sigma^2) \propto \sigma^{-n}\exp\left(-{\sum X_i^2\over 2\sigma^2}\right)$$
Note how the $\sigma^2$ terms are all in the various denominators of ratios, not in the numerators. In order to maintain conjugacy, we need to find a distribution that looks similar:
$$p(\sigma^2) \propto \sigma^{-a}\exp\left(-{b\over\sigma^2}\right)$$
which will lead to a posterior that has the same form:
$$p(\sigma^2|X) \propto \sigma^{-(a+n)}\exp\left(-{b+\tfrac{1}{2}\sum X_i^2\over\sigma^2}\right)$$
... and that distribution is the inverse-Gamma.
If we were to use the precision $\beta = 1/\sigma^2$ as our parameter of choice, we'd have:
$$\mathcal{L}(\beta) \propto \beta^{n/2}\exp\left(-{\sum X_i^2\over 2}\beta\right)$$
and evidently the conjugate prior would be a Gamma distribution. Note that in the former case $\sum X_i^2$ and $\sigma^2$ sit in the numerator and denominator of a ratio, respectively; working through the math, a parameter appearing in the denominator of the exponent leads to the inverse-Gamma distribution. In the latter case $\beta$ sits in the numerator alongside $\sum X_i^2$, and we get Gamma distributions for both the prior and the posterior.
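As a quick numerical sanity check of the Gamma-prior-on-the-precision route, the sketch below (hyperparameter values are arbitrary choices for illustration) performs the conjugate update for zero-mean Normal data: the Gamma$(a_0, b_0)$ prior on $\beta = 1/\sigma^2$ combines with the likelihood $\beta^{n/2}\exp(-\beta\sum x_i^2/2)$ to give a Gamma$(a_0 + n/2,\; b_0 + \sum x_i^2/2)$ posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior hyperparameters for the Gamma prior on the precision beta
a0, b0 = 2.0, 1.0                # shape, rate
true_sigma = 1.5
x = rng.normal(0.0, true_sigma, size=500)   # zero-mean Normal data

# Conjugate update: beta^{n/2} exp(-beta * sum(x^2)/2) times the
# Gamma kernel beta^{a0-1} exp(-b0 * beta) stays in the Gamma family.
a_post = a0 + len(x) / 2.0
b_post = b0 + np.sum(x**2) / 2.0

# Posterior mean of beta should be close to the true precision 1/sigma^2
post_mean_precision = a_post / b_post
print(post_mean_precision, 1.0 / true_sigma**2)
```

With a few hundred observations the posterior mean of the precision lands close to $1/\sigma^2$, as expected.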
Yes, it does: the Generalized Beta prime distribution with shape parameter $p$ equal to 1.
We can get there fairly easily by integrating $\beta$ out of the joint distribution of $x$ and $\beta$:
$$f(x,\beta|\alpha, \alpha_0, \beta_0) = {\beta^{\alpha}x^{\alpha-1} \over \Gamma(\alpha)}e^{-\beta x}\,{\beta_0^{\alpha_0}\beta^{\alpha_0-1} \over \Gamma(\alpha_0)}e^{-\beta_0\beta}$$
Rearranging terms and ignoring everything that isn't related to either $\beta$ or $x$ (as they will all be handled by the constant of integration at the end) results in a great deal of simplification:
$$f(x,\beta|\cdot) \propto x^{\alpha-1}\beta^{\alpha+\alpha_0-1}e^{-(\beta_0+x)\beta}$$
We can integrate out $\beta$ easily enough by noting that the two terms involving $\beta$ are those of a Gamma-distributed variate with shape parameter $\alpha + \alpha_0$ and rate parameter $\beta_0 + x$, so the integral must equal the inverse of the constant of integration of such a distribution:
$$x^{\alpha-1}\int \beta^{\alpha+\alpha_0-1}e^{-(\beta_0+x)\beta}\text{d}\beta = {x^{\alpha-1} \Gamma(\alpha+\alpha_0) \over(\beta_0 + x)^{\alpha + \alpha_0}} \propto x^{\alpha-1} (\beta_0 + x)^{-\alpha - \alpha_0}$$
A slight rearrangement of terms and some minor algebra gets us to:
$$f(x|\cdot) \propto \left({x \over \beta_0}\right)^{\alpha-1} \left(1 + {x\over \beta_0}\right)^{-\alpha - \alpha_0}$$
which clearly matches the formula for the GBPD (with shape parameter $p=1$) as given by Wikipedia and reproduced here:
$$f(x;\alpha,\beta,p,q) = \frac{p \left(\frac x q \right)^{\alpha p-1} \left(1+ \left(\frac x q \right)^p\right)^{-\alpha -\beta}}{qB(\alpha,\beta)}$$
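The derivation above can be checked by simulation. The sketch below (parameter values are arbitrary choices for illustration) draws the rate $\beta$ from its Gamma$(\alpha_0, \beta_0)$ distribution, then draws $x\,|\,\beta$ from Gamma$(\alpha, \beta)$, and compares the resulting marginal samples against SciPy's `betaprime(alpha, alpha0)` scaled by $\beta_0$, which is exactly the $p=1$ case of the GBPD density above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical parameter choices for the compound-Gamma construction
alpha, alpha0, beta0 = 2.0, 3.0, 4.0
n = 100_000

# Step 1: draw the rate beta ~ Gamma(shape alpha0, rate beta0)
beta = rng.gamma(alpha0, 1.0 / beta0, size=n)
# Step 2: draw x | beta ~ Gamma(shape alpha, rate beta)
x = rng.gamma(alpha, 1.0 / beta)

# The marginal derived above is a Beta prime(alpha, alpha0) scaled by beta0,
# i.e. the Generalized Beta prime with p = 1; compare via a KS statistic.
ks = stats.kstest(x, stats.betaprime(alpha, alpha0, scale=beta0).cdf)
print(ks.statistic)
```

A Kolmogorov–Smirnov statistic near zero confirms the simulated compound-Gamma marginal agrees with the derived density.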
So the question seems to have stemmed from a discussion on this forum: http://social.microsoft.com/Forums/en-US/47b613d0-177d-4ce3-b54d-2476144ece6b/double-exponential-prior-migrated-from-communityresearchmicrosoftcom?forum=infer.net
The general idea, which the forum thread alludes to, is that you can build a distribution resembling a Laplace distribution from a Normal distribution by specifying the variance of the Normal distribution to follow a Gamma distribution, i.e.,
\begin{align} x|\sigma^2 &\sim N(\mu,\sigma^2)\\ \sigma^2&\sim\text{Gamma}(1,1) \end{align}
However, as the OP asks, it is not obvious how to specify the location ($\mu$) and scale ($b$) of the corresponding Laplace distribution. In fact, you can't, since the resulting distribution will not be exactly a Laplace distribution, just something very similar.
To better understand the question, we show (below) a plot of several Laplace distributions with differing locations ($\mu$) and scales ($b$).
And so generating random variables from a Normal distribution with variance following a Gamma(1,1) distribution will give us something that looks very close to a Laplace distribution. However, it is only an approximation, not an exact match.
On the other hand, there is a way to generate a Laplace distribution (with known location ($\mu$) and scale ($b$)) by using a Normal distribution whose standard deviation follows a Rayleigh distribution.
The result is the following:
If $X|Y \sim N(\mu,\sigma=Y)$ with $Y \sim \text{Rayleigh}(b)$ then $X \sim \text{Laplace}(\mu, b)$.
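This last result is easy to verify by simulation. The sketch below (location and scale values are arbitrary choices for illustration) draws the standard deviation $Y$ from a Rayleigh$(b)$ distribution, then draws $X\,|\,Y \sim N(\mu, \sigma = Y)$, and checks the marginal samples against the claimed Laplace$(\mu, b)$ distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

mu, b = 1.0, 2.0   # hypothetical Laplace location and scale
n = 100_000

# Draw the standard deviation Y ~ Rayleigh(b), then X | Y ~ N(mu, sd = Y)
y = stats.rayleigh(scale=b).rvs(size=n, random_state=rng)
x = rng.normal(mu, y)

# If the result is exact, X should match Laplace(mu, b); compare via KS
ks = stats.kstest(x, stats.laplace(loc=mu, scale=b).cdf)
print(ks.statistic)
```

The KS statistic stays near zero at this sample size, consistent with the mixture being an exact Laplace rather than merely an approximation.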