Solved – Bayesian vs Frequentist: practical difference w.r.t. machine learning

bayesian, frequentist, machine-learning, modeling

I know that Bayesian and frequentist approaches differ in their definition of probability.

Practically, in machine learning a model is a formula with tunable parameters.

Then the difference between Bayesian and frequentist is:

That the parameters are assumed to be fixed (but unknown) numbers in the frequentist setting, whereas in the Bayesian setting the parameters have their own distributions.

Am I missing anything here, or have I misinterpreted something?

I am not asking for theoretical arguments, just the practical manifestation of frequentist vs Bayesian with respect to machine learning models and their optimization/fitting.

Best Answer

Once you've fitted the model, it will be what it will be, so I think the difference is prior to that. That is, the models / parameters are fitted differently between the Bayesian and Frequentist approaches. More specifically, the fitted Bayesian parameters will incorporate additional information outside of what is in the data. If you know something about what the parameters are likely to be (and you aren't wrong), that could boost the model's performance. Even if you use an 'uninformative' prior, you will typically find the fitted Bayesian parameters will be shrunk to some degree towards $0$ relative to the fitted Frequentist parameters.
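That shrinkage effect can be seen concretely in linear regression: a zero-mean Gaussian prior on the weights makes the MAP (maximum a posteriori) estimate equivalent to ridge regression, which pulls the coefficients toward $0$ relative to the maximum-likelihood (OLS) fit. The sketch below uses made-up toy data; the prior-strength value `tau` is an illustrative assumption, not something from the question.

```python
import numpy as np

# Toy data (illustrative, not from the original post).
rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=1.0, size=n)

# Frequentist fit: maximum likelihood (ordinary least squares).
# The parameters are treated as fixed unknown numbers.
w_mle = np.linalg.solve(X.T @ X, X.T @ y)

# Bayesian point estimate (MAP): a zero-mean Gaussian prior on the
# weights adds tau * I to the normal equations, i.e. ridge regression.
tau = 1.0  # ratio of noise variance to prior variance (assumed value)
w_map = np.linalg.solve(X.T @ X + tau * np.eye(p), X.T @ y)

# The MAP coefficients are shrunk toward 0 relative to the MLE.
print(np.linalg.norm(w_map) < np.linalg.norm(w_mle))
```

With an increasingly flat prior (`tau -> 0`) the MAP estimate converges to the OLS solution, which is one way to see why an 'uninformative' prior still produces a small amount of shrinkage in practice.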
