Bayesian neural nets (BNNs) are a very popular topic. With the development of variational approximation, it became possible to train such models much faster than with Monte Carlo sampling. BNNs offer appealing features such as natural regularisation and even uncertainty estimation. So the question is: why haven't we completely migrated to BNNs yet?
My guess is that variational inference does not provide enough accuracy. Is that the only reason?
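To make the setup concrete, here is a minimal sketch of what I mean by variational training: a mean-field Gaussian posterior over the weights of a single linear layer, trained with the reparameterization trick. This is only an illustrative example in PyTorch; the N(0, 1) prior, the KL weighting, and all names are my own assumptions, not a reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Linear layer with a mean-field Gaussian posterior over weights and biases."""

    def __init__(self, in_features, out_features):
        super().__init__()
        # Variational parameters: mean and (softplus-transformed) scale per weight.
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_rho = nn.Parameter(torch.full((out_features,), -5.0))

    def forward(self, x):
        # Reparameterization trick: sample weights as mu + sigma * eps.
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)
        return F.linear(x, w, b)

    def kl(self):
        # KL divergence between q(w) = N(mu, sigma^2) and the assumed N(0, 1) prior.
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        kl_w = 0.5 * (w_sigma**2 + self.w_mu**2 - 1 - 2 * torch.log(w_sigma)).sum()
        kl_b = 0.5 * (b_sigma**2 + self.b_mu**2 - 1 - 2 * torch.log(b_sigma)).sum()
        return kl_w + kl_b


# Training minimizes the negative ELBO: data-fit term plus KL (scaled by dataset size).
layer = BayesianLinear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = F.mse_loss(layer(x), y) + layer.kl() / 1000  # assumed dataset size of 1000
    loss.backward()
    opt.step()
```

At test time one would draw several weight samples per input and use the spread of the predictions as the uncertainty estimate I mentioned above.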
Best Answer
BNNs still have many disadvantages compared with standard NNs, as listed below: