Solved – Fitting a non-linear model to a small data set

nls, nonlinear-regression, r, small-sample

I want to work through a small example of fitting a non-linear model.
Here are the data:

x <- c(10, 15, 25)
y <- c(5, 10, 15)

Firstly I fit a second degree polynom:

mod <- lm(y ~ x + I(x^2))

Then I try to fit a non-linear model:

nls(y ~ a * exp(-b * exp(-b1 * x)), start = c(b = 5, b1 = 0.1, a = 10), trace = TRUE)

But it did not converge. I assume this is due to the small data set. However, the parameter values shown in the trace are good enough for my goal:

0 :   6.9063593  0.1764925 16.3104236
0 :   6.9063593  0.1764925 16.3104236
0 :   6.9063593  0.1764925 16.3104236
Error in nls(y ~ a * exp(-b * exp(-b1 * x)), start = c(b = 5, b1 = 0.1,  : 
  number of iterations exceeded maximum of 50

The results can be shown in a figure (the points have to be plotted first so the curves can be added):

plot(x, y, xlim = c(7, 30))
curve(coef(mod)[1] + coef(mod)[2] * x + coef(mod)[3] * x^2, 10, 30, add = TRUE, col = "red")

# parameter values copied from the trace output above
b <- 6.9
b1 <- 0.1764
a <- 16.31
curve(a * exp(-b * exp(-b1 * x)), 7, 30, add = TRUE, col = 30)

[figure: the data with the fitted polynomial and non-linear curves]

The question is whether I can force nls to converge on such a small data set, or whether I can somehow keep the results from the trace of the non-converged model.

Best Answer

In the manual of the nls function you can read:

Warning

Do not use nls on artificial "zero-residual" data.

The nls function uses a relative-offset convergence criterion that compares the numerical imprecision at the current parameter estimates to the residual sum-of-squares. This performs well on data of the form

y = f(x, θ) + eps

(with var(eps) > 0). It fails to indicate convergence on data of the form

y = f(x, θ)

because the criterion amounts to comparing two components of the round-off error. If you wish to test nls on artificial data please add a noise component, as shown in the example below.
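
To illustrate that advice, here is a minimal sketch with invented data (not from the question): with more observations than parameters and a genuine noise term, the residuals are no longer zero and the relative-offset criterion behaves as intended, so nls should have no trouble:

set.seed(42)
x2 <- seq(5, 30, length.out = 20)                                   # more points than parameters
y2 <- 16 * exp(-7 * exp(-0.18 * x2)) + rnorm(length(x2), sd = 0.3)  # Gompertz curve plus noise
nls(y2 ~ a * exp(-b * exp(-b1 * x2)), start = c(b = 5, b1 = 0.1, a = 10))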

This is exactly the case with your data: you have three observations and three parameters, so the model can pass through the points exactly, the residual sum-of-squares is essentially zero, and the relative-offset criterion can never be satisfied.
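
As a side note on the second part of your question: nls.control has a warnOnly argument that turns the convergence failure into a warning instead of an error, so nls still returns the fit from the last iteration (the values you saw in the trace). A minimal sketch:

fit <- nls(y ~ a * exp(-b * exp(-b1 * x)),
           start = c(b = 5, b1 = 0.1, a = 10),
           control = nls.control(warnOnly = TRUE))
coef(fit)  # estimates from the last iteration, as in the trace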

Instead, you can simply use the optim function, which produces your desired output. You just have to specify a loss function to minimize (e.g. the sum of squared errors) and pass it to the optimizer:

# squared-error loss; param = c(a, b, b1) in the model y = a * exp(-b * exp(-b1 * x))
f <- function(param) sum((y - param[1] * exp(-param[2] * exp(-param[3] * x)))^2)

optim(c(0, 0, 0), f, method = "BFGS")$par
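
For completeness, a short sketch of overlaying the optim solution on the data, in the spirit of the curve() calls from the question (p follows the parameter order a, b, b1 used in f):

p <- optim(c(0, 0, 0), f, method = "BFGS")$par  # p[1] = a, p[2] = b, p[3] = b1
plot(x, y, xlim = c(7, 30))
curve(p[1] * exp(-p[2] * exp(-p[3] * x)), add = TRUE, col = "blue")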