Solved – Mixture of Normal and Exponential distributions with unknown weights and parameters using JAGS


I am trying to model a process that generates a mixture of random normal and random exponential variables, with an unknown weight between the two distributions and unknown parameters for each distribution, using rjags in R.
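For concreteness, the kind of process I have in mind can be simulated like this (the weight and parameters here are arbitrary, just for illustration):

set.seed(1)
n <- 200
weight <- 0.6                        # unknown in my real problem
component <- rbinom(n, 1, weight)    # 1 = normal, 0 = exponential
x <- ifelse(component == 1,
            rnorm(n, mean = 5, sd = 1),
            rexp(n, rate = 1))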

I found this post answered by Matt Denwood that mixes normal and uniform distributions, but for some reason I was not able to adapt it to my problem (admittedly I only changed the uniform to exponential, without really understanding the 'ones trick' mentioned in the post).

Distribution of my data: [image omitted]

Output from JAGS: [image omitted]

My model:

model{

for(i in 1:N){

    # Log density for the normal part:
    ld_norm[i] <- logdensity.norm(x[i], mu, tau)

    # Log density for the exponential part:
    ld_exp[i] <- logdensity.exp(x[i], 1/lambda)

    # Select one of these two densities:
    density[i] <- exp(ld_norm[i]*norm_chosen[i] + ld_exp[i]*(1-norm_chosen[i]))

    # Generate a likelihood for the MCMC sampler:
    Ones[i] ~ dbern(density[i])

    # The latent class part as usual:
    norm_chosen[i] ~ dbern(prob)

}

# Priors:
lambda ~ dnorm(1, 10^-6)
prob ~ dbeta(1,1)
mu ~ dnorm(0, 10^-6)
tau <- pow(sigma, -2)
sigma ~ dnorm(1, 10^-6)

}

Any idea why this method doesn't work for me?

Best Answer

Your posteriors look suspiciously like your priors, which usually indicates that your model is not being fit to the data. My best guess is that you have not correctly included the vector "Ones" with your data, although this is only a guess, as that relates to your R code rather than to the JAGS model itself.
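For example, with rjags the "Ones" vector needs to appear in the data list alongside x and N. A minimal sketch (the file name and monitored variables are just placeholders for however you have set things up):

library('rjags')

N <- length(x)
data_list <- list(x = x, N = N, Ones = rep(1, N))

# 'mixture_model.txt' stands in for wherever the model from the question is saved
jm <- jags.model('mixture_model.txt', data = data_list, n.chains = 2)
update(jm, 1000)
samples <- coda.samples(jm, variable.names = c('mu', 'sigma', 'lambda', 'prob'),
                        n.iter = 10000)
summary(samples)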

But in any case I can try to explain the 'Ones trick', which might help to emphasise why the vector needs to be included as data. Basically it is a way to 'trick' JAGS/BUGS into fitting a customised likelihood function to your data. It involves calculating the likelihood for each data point using some arbitrary method - in your case density[i] is calculated from either a normal or an exponential distribution (depending on norm_chosen[i]), and the 'real data' (in your case x) and the parameters are included somewhere in this step. But this likelihood value is not actually connected to any samplers - it is just calculated.

In order to get JAGS to actually sample 'good' values of your parameters a further step is required: the calculated likelihood value is 'fit' to an 'observed' value of one (i.e. the number 1) using a Bernoulli distribution. This allows higher calculated likelihood values to be preferred by the sampler, by virtue of having a higher likelihood for a Bernoulli trial with a known outcome of 1. On the other hand, if the Ones vector is not observed (which is perfectly legal JAGS syntax) then the outcome of the Bernoulli trial is not known to JAGS, so very small calculated likelihoods are just as good as larger ones - i.e. your custom likelihood function is irrelevant.

The main disadvantage of this trick (and the similar Zeros trick, where a Poisson distribution is used in place of the Bernoulli trial) is that JAGS is forced to use a very inefficient sampling strategy.
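Incidentally, to spell out in symbols why the observed 1 does the work: a Bernoulli trial with success probability $p_i = \text{density}[i]$ contributes $\Pr(\text{Ones}[i] = 1 \mid p_i) = p_i$ to the likelihood once the outcome 1 is observed, so

$$\prod_{i=1}^{N} \Pr(\text{Ones}[i] = 1 \mid p_i) \;=\; \prod_{i=1}^{N} \text{density}[i],$$

which is exactly the custom likelihood (up to the priors) that you want the sampler to target. If Ones[i] is unobserved, this term simply never enters the posterior.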

For an illustration see the following 3 models:

m1 <- 'model{

    for(i in 1:N){      
        Y[i] ~ dnorm(mu, tau)       
    }

    mu ~ dnorm(0, 10^-6)
    tau ~ dgamma(0.01, 0.01)

    #data# N, Y
    #monitor# mu, tau

}'

m2 <- 'model{

    for(i in 1:N){      
        ll[i] <- logdensity.norm(Y[i], mu, tau)
        Ones[i] ~ dbern(exp(ll[i]))
    }

    mu ~ dnorm(0, 10^-6)
    tau ~ dgamma(0.01, 0.01)

    #data# N, Y
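    # (note: Ones is deliberately NOT supplied as data in this model)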
    #monitor# mu, tau

}'

m3 <- 'model{

    for(i in 1:N){      
        ll[i] <- logdensity.norm(Y[i], mu, tau)
        Ones[i] ~ dbern(exp(ll[i]))
    }

    mu ~ dnorm(0, 10^-6)
    tau ~ dgamma(0.01, 0.01)

    #data# N, Y, Ones
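    # (note: Ones IS supplied as data in this model, and defined below)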
    #monitor# mu, tau

}'

N <- 100
Y <- rnorm(N, 3, sqrt(1/2))

library('runjags')
run.jags(m1)
run.jags(m2)
Ones <- rep(1, N)
run.jags(m3)
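To see the difference in sampling efficiency directly, the results can be stored and the effective sample sizes compared - a sketch, assuming a reasonably recent version of runjags (the printed results include an SSeff column, and the chains can also be handed to coda):

res1 <- run.jags(m1)
res3 <- run.jags(m3)

library('coda')
effectiveSize(as.mcmc.list(res1))
effectiveSize(as.mcmc.list(res3))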

Models 1 and 3 are equivalent, but model 1 is more efficient (an effective sample size of 20000 compared to around 12000 for the two parameters mu and tau): this difference in efficiency will become more pronounced with more complex models. Model 2 includes the same 'data' Y but omits the 'Ones', and therefore gives results that don't make much sense.
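For completeness, here is a sketch of how the same structure might be applied to the normal/exponential mixture in the question. This is not taken from the models above: the dgamma priors and the scaling constant C are assumptions added here so that lambda and tau stay positive and so that density[i] stays below 1 (dbern requires a valid probability):

m4 <- 'model{

    for(i in 1:N){

        # Log densities of the two mixture components
        # (lambda is used directly as the exponential rate here):
        ld_norm[i] <- logdensity.norm(x[i], mu, tau)
        ld_exp[i] <- logdensity.exp(x[i], lambda)

        # Likelihood of point i under its chosen component, scaled by a
        # constant so that it is a valid Bernoulli probability:
        density[i] <- exp(ld_norm[i]*norm_chosen[i] + ld_exp[i]*(1-norm_chosen[i])) / C

        # The Ones trick - Ones must be supplied as data:
        Ones[i] ~ dbern(density[i])

        # The latent class part as usual:
        norm_chosen[i] ~ dbern(prob)
    }

    # Priors (positive-valued priors assumed here, unlike in the question):
    lambda ~ dgamma(0.01, 0.01)
    prob ~ dbeta(1, 1)
    mu ~ dnorm(0, 10^-6)
    tau ~ dgamma(0.01, 0.01)

    #data# N, x, Ones, C
    #monitor# mu, tau, lambda, prob

}'

# Simulated data with arbitrary illustrative parameters:
x <- c(rnorm(60, 5, 1), rexp(40, 1))
N <- length(x)
Ones <- rep(1, N)
C <- 10000    # any constant comfortably larger than the densities involved
run.jags(m4)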
