I’ve learned from Dr. Andrew Ng’s ML course (and observed myself) that training loss/error typically increases with training set size, while test error decreases.
I’ve recently run into what looks like an anomaly: both the training error and the test error curves were decreasing as the training set size increased.
Is this normal?
Some posts suggested regularization as the cause. In my case I use trainbr (Bayesian regularization backpropagation).
Could that be the reason?
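For context, here is a minimal sketch of how one might generate such learning curves with trainbr. This is not my actual setup: the dataset is synthetic, and the hidden layer size and training-set sizes are illustrative. As I understand it, trainbr minimizes a combination of the squared errors and the squared weights, which is why regularization seems like a plausible explanation:

```matlab
% Sketch: learning curves (train/test MSE vs. training set size) with trainbr.
% Synthetic regression data; substitute your own X/T here.
rng(0);                                     % reproducibility
X = rand(2, 500);                           % 2 features, 500 samples
T = sin(4*X(1,:)) + 0.5*X(2,:) + 0.05*randn(1, 500);

% Hold out a fixed test set so the test curve is comparable across sizes.
Xtest = X(:, 401:end);  Ttest = T(:, 401:end);

sizes    = 50:50:400;                       % increasing training-set sizes
trainErr = zeros(size(sizes));
testErr  = zeros(size(sizes));

for k = 1:numel(sizes)
    n   = sizes(k);
    Xtr = X(:, 1:n);  Ttr = T(:, 1:n);

    net = feedforwardnet(10, 'trainbr');    % Bayesian regularization
    net.trainParam.showWindow = false;      % suppress the training GUI
    net.divideFcn = 'dividetrain';          % trainbr does not use a validation set
    net = train(net, Xtr, Ttr);

    trainErr(k) = perform(net, Ttr, net(Xtr));      % MSE on training set
    testErr(k)  = perform(net, Ttest, net(Xtest));  % MSE on held-out test set
end

plot(sizes, trainErr, '-o', sizes, testErr, '-s');
xlabel('Training set size'); ylabel('MSE');
legend('Training error', 'Test error');
```

With this kind of setup, I would have expected the training curve to rise toward the test curve as n grows, but instead both fall.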
Thank you.