I did something similar recently. It wasn't so much about convincing others as doing a small project that gave me a little taste of BMA. I generated a dataset with a binary response, three independent variables that had an effect on the response, and seven variables that had no effect on it. I then compared the BMA results to the frequentist estimates from an ordinary logistic regression. At least in this case, the BMA approach appears to do quite well. If you want to make it more accessible, you could name the variables instead of calling them the generic $X$ and $y$.
The R code I used is below. I hope it can inspire you!
# The sample size
n <- 100
# The 'true' coefficient vector
Beta <- cbind(c(-1.5, 0.45, -3))
# Generate the explanatory variables which have an effect on the outcome
set.seed(1)
X <- cbind(rnorm(n, 0, 1), rnorm(n, 4, 2), rnorm(n, 0.5, 1))
# Convert this into probabilities
prob <- 1/(1+exp(-X %*% Beta))
# Draw uniform numbers; where an element is smaller than the corresponding element of prob, return 1 (i.e. Bernoulli draws with these probabilities)
set.seed(2)
runis <- runif(n, 0, 1)
y <- ifelse(runis < prob, 1, 0)
# Add the nonsense variables
X <- cbind(X, rpois(n, 3)) # Redundant variable 1 (x4)
X <- cbind(X, rexp(n, 10)) # Redundant variable 2 (x5)
X <- cbind(X, rbeta(n, 3, 10)) # Redundant variable 3 (x6)
X <- cbind(X, rbinom(n, 10, 0.5)) # Redundant variable 4 (x7)
X <- cbind(X, rpois(n, 40)) # Redundant variable 5 (x8)
X <- cbind(X, rgamma(n, 10, 20)) # Redundant variable 6 (x9)
X <- cbind(X, runif(n, 0, 1)) # Redundant variable 7 (x10)
# The BMA
library(BMA)
model <- bic.glm(X, y, glm.family = "binomial", factor.type = FALSE, thresProbne0 = 5, strict = FALSE)
# The frequentist model
model2 <- glm(y~X, family = "binomial")
old.par <- par(no.readonly = TRUE) # save only the settable parameters, so restoring does not warn
par(mar=c(3,2,3,1.5))
plot(model, mfrow=c(2,5))
par(old.par)
summary(model)
summary(model2)
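If you want the comparison in a more compact form than the two summaries, you can put the posterior inclusion probabilities from the BMA fit next to the frequentist p-values. A short sketch, continuing from the model and model2 objects above (probne0 and postmean are components of the object returned by bic.glm):

```r
# Posterior probability (in percent) that each coefficient is nonzero;
# the three real predictors should score high, the seven noise variables low
round(model$probne0, 1)

# Model-averaged posterior means of the coefficients
round(model$postmean, 3)

# Frequentist p-values from the ordinary logistic regression, for comparison
round(summary(model2)$coefficients[, 4], 3)
```

This makes it easy to see at a glance whether the variables BMA keeps are the same ones the p-values would flag.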