Solved – Likelihood ratio test – sample size issue

likelihood-ratio

I am a statistician running into an odd problem, and it feels like I am missing a key point. I have a true model (generated by me, so I know the truth) and a PREDICTED model that estimates certain parameters of the true model, and I want to test the validity/accuracy of the PREDICTED model.

I do the same exercise with two sample sizes: n = 200 and n = 50.

If I use the likelihood ratio test to assess my predicted model's goodness of fit with n = 200, I get a much lower likelihood than with n = 50. So, as the experiment is structured, the n = 200 model (the SAME model as the n = 50 one, just fit to more data) appears LESS ACCURATE than the n = 50 model, simply because the likelihoods are affected by the sample sizes.

What am I missing? Feeling quite silly.

Thanks in advance.

Best Answer

I realize that it has been eight years since this question was asked, but I only recently came across it, and I will attempt to provide an answer even though it may be too late for the original poster.

I assume that the null hypothesis for both tests (n = 200 and n = 50) is that the tested parameter equals zero, and that the alternative hypothesis is that it is not zero. Note first that raw likelihoods are not comparable across sample sizes: the log-likelihood is a sum over observations, so it is necessarily lower for n = 200 no matter how good the fit is. What matters is the likelihood ratio within each dataset. In this setting, a lower likelihood ratio means the data are less compatible with the null hypothesis (equivalently, the statistic -2 log Λ is larger). Assuming that the true parameter is not zero, a lower likelihood ratio with n = 200 therefore means that the model built with 200 observations is more accurate than the one built with 50 observations, not less.
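For concreteness, here is a minimal sketch, not the poster's actual models but an assumed Gaussian toy example: data are simulated with true mean 0.5 and known variance 1, and we test H0: mu = 0 against H1: mu ≠ 0 using the likelihood ratio statistic -2 log Λ = 2 (logL(H1) - logL(H0)), referred to a chi-squared distribution with 1 degree of freedom. The raw log-likelihood is lower (more negative) for n = 200 simply because it sums over more observations, yet the likelihood ratio statistic is larger, i.e. the evidence against the null is stronger.

```python
# Hypothetical toy example (Gaussian with known sigma = 1), not the poster's actual setup.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def lr_test(y):
    """Likelihood ratio test of H0: mu = 0 vs H1: mu = ybar, with sigma known (= 1)."""
    mu_hat = y.mean()                                           # MLE of mu under H1
    ll_null = stats.norm.logpdf(y, loc=0.0, scale=1.0).sum()    # log-likelihood under H0
    ll_alt = stats.norm.logpdf(y, loc=mu_hat, scale=1.0).sum()  # log-likelihood under H1
    lr_stat = 2.0 * (ll_alt - ll_null)                          # -2 log(Lambda)
    p_value = stats.chi2.sf(lr_stat, df=1)                      # chi2(1) reference distribution
    return ll_null, ll_alt, lr_stat, p_value

for n in (50, 200):
    y = rng.normal(loc=0.5, scale=1.0, size=n)                  # true mu = 0.5, so H0 is false
    ll0, ll1, lr, p = lr_test(y)
    print(f"n={n:3d}  logL(H1)={ll1:8.1f}  logL(H0)={ll0:8.1f}  LR stat={lr:6.1f}  p={p:.2g}")
```

In a typical run, both log-likelihoods are much lower for n = 200 than for n = 50 (as the poster observed), while the LR statistic grows roughly in proportion to n, so the p-value for n = 200 is far smaller: more data, stronger evidence against the null, more accurate inference.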
