[Math] Conjugate prior for the Weibull distribution

bayesianstatistics

On Wikipedia we find a nice overview of conjugate prior distributions. I am interested in the conjugate prior for a random variable $X$ with density

$$f(x;\lambda,k) =
\begin{cases}
\frac{k}{\lambda}\left(\frac{x}{\lambda}\right)^{k-1}e^{-(x/\lambda)^{k}} & x \geq 0 ,\\
0 & x<0,
\end{cases}$$

the Weibull distribution. With known shape parameter $k$, the inverse Gamma distribution with density
$$g(\lambda; \alpha, \beta)
= \frac{\beta^\alpha}{\Gamma(\alpha)}
(1/\lambda)^{\alpha + 1}\exp\left(-\beta/\lambda\right)$$

is a conjugate prior for $\lambda$. The posterior distribution of $\lambda$ is then apparently $g$ with $\alpha_*=\alpha+n$ and $\beta_*=\beta+\sum_i x_i^k$ (with $i=1,\ldots,n$).

I cannot seem to show this. Is this result true? And is there a conjugate prior for $k$ as well?

Best Answer

I suggest you follow the advice to take $\theta =\lambda^k$ (as Lee David Chung Lin said),

but if you insist on keeping your notation, I will introduce one conjugate prior. (You should check the identifiability of your model.)

$$f(x_1,\cdots,x_n|\lambda)\propto \frac{1}{\lambda^{nk}} e^{-\frac{1}{\lambda^k}\sum x_{i}^{k}}$$

if we choose a prior $g(\lambda)$ then

$g(\lambda|x_1,\cdots ,x_n)\propto \frac{1}{\lambda^{nk}} e^{-\frac{1}{\lambda^k}\sum x_{i}^{k}} g(\lambda)$

so a conjugate prior should be of the form

$g(\lambda)\propto \frac{1}{\lambda^a} e^{-\frac{b}{\lambda^k}} \hspace{.5cm} \lambda>0$

I found the following integral (I. S. Gradshteyn and I. M. Ryzhik, *Table of Integrals, Series, and Products*, p. 337):

$$\int_{0}^{\infty} y^m e^{-b y^k}\, dy=\frac{\Gamma(\frac{m+1}{k})}{kb^{\frac{m+1}{k}}}$$

Substituting $y=1/\lambda$ (so that $dy=-\lambda^{-2}\,d\lambda$) in this integral gives

$$\int_{0}^{\infty} \frac{1}{\lambda^{m+2}} e^{-b \frac{1}{\lambda^{k}}}\, d\lambda=\frac{\Gamma(\frac{m+1}{k})}{kb^{\frac{m+1}{k}}},$$ so we can define a distribution
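As a quick numerical sanity check of this identity, here is a short sketch using `scipy.integrate.quad`; the values of $m$, $b$, and $k$ below are arbitrary test choices, not anything from the derivation:

```python
import math

from scipy.integrate import quad

m, b, k = 2.0, 1.5, 2.0  # arbitrary test values (assumption, not from the post)

def integrand(lam):
    # lambda^{-(m+2)} * exp(-b / lambda^k); vanishes rapidly as lambda -> 0
    if lam < 1e-6:
        return 0.0
    return lam ** (-(m + 2)) * math.exp(-b / lam ** k)

lhs, _ = quad(integrand, 0, math.inf)
rhs = math.gamma((m + 1) / k) / (k * b ** ((m + 1) / k))
print(abs(lhs - rhs) < 1e-6)  # → True
```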

$$g(\lambda)=NEWG(m,b)= \frac{\frac{1}{\lambda^{m+2}} e^{-b \frac{1}{\lambda^{k}}} }{\frac{\Gamma(\frac{m+1}{k})}{kb^{\frac{m+1}{k}}}} \hspace{.5cm} \lambda>0$$

so the posterior is

$$g(\lambda|x_1,\cdots ,x_n) \propto \frac{1}{\lambda^{m+nk+2}} e^{-(b+\sum x_{i}^{k}) \frac{1}{\lambda^{k}}},$$ which is $NEWG(m+nk,\, b+\sum x_{i}^{k})$.
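To check the update numerically, one can verify that the unnormalized likelihood times the $NEWG(m,b)$ prior is proportional to the $NEWG(m+nk,\, b+\sum x_i^k)$ density, i.e. that their ratio is the same constant at every $\lambda$. A Python sketch (the hyperparameters, shape, and simulated data below are arbitrary assumptions):

```python
import math

import numpy as np

rng = np.random.default_rng(0)
k = 2.0                                # known shape parameter
data = 1.3 * rng.weibull(k, size=50)   # simulated Weibull data, scale 1.3 (arbitrary)

m, b = 3.0, 2.0                        # NEWG(m, b) prior hyperparameters (arbitrary)
n, S = len(data), float(np.sum(data ** k))
m_post, b_post = m + n * k, b + S      # conjugate update claimed above

def newg_pdf(lam, m, b):
    """NEWG(m, b) density: lam^{-(m+2)} e^{-b/lam^k} over its normalizing constant."""
    const = math.gamma((m + 1) / k) / (k * b ** ((m + 1) / k))
    return lam ** (-(m + 2)) * math.exp(-b / lam ** k) / const

def unnormalized_posterior(lam):
    likelihood = lam ** (-n * k) * math.exp(-S / lam ** k)  # up to a constant in lam
    return likelihood * newg_pdf(lam, m, b)

# If the update is correct, this ratio is the same constant for every lambda.
ratios = [unnormalized_posterior(l) / newg_pdf(l, m_post, b_post) for l in (0.8, 1.2, 1.6)]
print(max(ratios) / min(ratios))  # ≈ 1.0
```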

Another option is to reparametrize via $m_2=mk$.

If you use the standard notation $\theta = \lambda^k$:

$$f(x|\theta)=\frac{kx^{k-1}}{\theta} e^{-\frac{x^k}{\theta}}$$ and use an inverse gamma prior $$g(\theta)\propto \frac{1}{\theta^{\alpha+1}} e^{-\frac{\beta}{\theta}},$$

posterior:

$$g(\theta|x_1,\cdots,x_n)\propto f(x_1,\cdots,x_n|\theta) g(\theta) =\frac{k^n (\prod_{i=1}^{n} x_i)^{k-1}}{\theta^n} e^{-\frac{\sum_{i=1}^{n} x_i^k}{\theta}} \frac{1}{\theta^{\alpha+1}} e^{-\frac{\beta}{\theta}}$$

$$\propto \frac{1}{\theta^{n+\alpha+1}} e^{-\frac{\beta+\sum_{i=1}^{n} x_i^k}{\theta}}$$

which is inverse gamma with parameters $(n+\alpha,\ \beta+\sum_{i=1}^{n} x_i^k)$.
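With $\theta = \lambda^k$ this is the standard inverse-gamma update, and it can be sketched with `scipy.stats.invgamma` (note SciPy parameterizes it as `invgamma(a, scale=beta)`; the shape, sample size, and hyperparameters below are arbitrary assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
k = 1.5                     # known shape
theta_true = 2.0            # theta = lambda^k, so the Weibull scale is theta^(1/k)
x = theta_true ** (1 / k) * rng.weibull(k, size=200)

alpha, beta = 2.0, 1.0      # Inv-Gamma(alpha, beta) prior (arbitrary hyperparameters)
alpha_post = alpha + len(x)
beta_post = beta + float(np.sum(x ** k))   # the conjugate update derived above

posterior = stats.invgamma(alpha_post, scale=beta_post)
print(posterior.mean())     # close to theta_true with this much data
```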