Solved – How to measure the goodness-of-fit of a nonlinear model

fitting, goodness of fit, nonlinear, nonlinear regression, regression

Well… I did search for a while before asking, and after reading this and this I suspected my question itself might be fundamentally flawed, but I'm still not sure, so I decided to cry out loud :).

As someone unfamiliar with statistics, I was somewhat shocked to learn recently, through this post, that the R-squared value is not a suitable metric for nonlinear models, and I began to wonder what is appropriate for measuring the goodness of fit of a nonlinear model.

Another blog post in the previous link suggests using the standard error of the regression, but that looks like just one person's view. Is there any consensus on this issue? If not, how should one decide which metric to use? Simply post the specific model on this site and ask?

Best Answer

Regression involves the whole study: the type of data, the correct statistical inference, the correct functional form, and the right tests, to name a few. In other words, an R-squared value can be used, but it is not sufficient on its own; this is true even for linear models. What matters most is making sure the theory behind the model is sound, since you can have a good fit and still be far off in your theory. Hypothesis tests and other metrics should be used alongside R-squared or adjusted R-squared.

In other words, there is much more to regression than goodness-of-fit testing. Much of it is understanding the statistical inference and then applying the correct metrics to the specific data you are using. For example, is it cross-sectional data? Is the data qualitative? You can spend many a semester in graduate school studying this, so what I give you here is a mere taste.
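To make the "R-squared alone is not sufficient" point concrete, here is a minimal sketch of fitting a nonlinear model and reporting the residual standard error (S) alongside a naive R-squared. The model, data, and parameter values are made up for illustration; it assumes SciPy is available.

```python
# Sketch: fit a hypothetical nonlinear model and report two metrics.
# S (residual standard error) is in the units of the response and is
# the kind of metric the blog post linked in the question suggests;
# the "pseudo" R-squared can still be computed arithmetically, but it
# lacks its usual interpretation outside linear least squares.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    # Hypothetical nonlinear model: exponential decay
    return a * np.exp(-b * x)

# Simulated data (for illustration only)
rng = np.random.default_rng(0)
x = np.linspace(0, 4, 50)
y = model(x, 2.5, 1.3) + rng.normal(scale=0.1, size=x.size)

popt, _ = curve_fit(model, x, y, p0=(1.0, 1.0))
residuals = y - model(x, *popt)

n, p = x.size, len(popt)
# Residual standard error: typical vertical distance of the data
# from the fitted curve, penalized for the number of parameters.
s = np.sqrt(np.sum(residuals**2) / (n - p))

# Naive R-squared via the usual sum-of-squares decomposition,
# which is not guaranteed to behave sensibly for nonlinear fits.
ss_res = np.sum(residuals**2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"S = {s:.3f}, pseudo-R2 = {r2:.3f}")
```

Note that even a high pseudo-R2 here says nothing about whether the exponential form was the right theory for the data, which is the answer's main point.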