MATLAB: Curve fitting of two equations


Hello,
I have two equations
f=0:0.2:3;
y=teta*exp((a+1)*f);
x=exp((a-1)*f-1)/(a-1)^2;
I have a set of x vs. y data:
x = [36.78, 37.53, 38.28, 39.06, 39.85, 40.65, 41.47, 42.3162082317748, 43.17, 44.04, 44.93, 45.84, 46.76, 47.71, 48.67, 49.65]
y = [0.01, 0.0152, 0.023, 0.035, 0.0536, 0.0816, 0.12, 0.1891, 0.287, 0.438, 0.6668, 1.01494, 1.544, 2.35097424365239, 3.578, 5.445]
and I want to fit the data points and find the constants teta and a.
I would appreciate any help.
Thanks

Best Answer

I'll expand on this answer in the morning. But, yes, you can use lsqcurvefit here, or lsqnonlin.
The y equation in particular suggests taking its log, however. Any time you see large differences in scale across the range of your problem, a log transform is likely to be beneficial. Essentially, this presumes that a proportional error model is appropriate for the fit.
If you don't log the y expression, the problem is that lsqnonlin presumes additive, homoscedastic error. It would then treat the point with y == 0.01 as having the same absolute importance as the point where y == 5.445.
The exponentials in the model also suggest taking logs.
x = [36.78, 37.53, 38.28, 39.06, 39.85, 40.65, 41.47, 42.3162082317748, 43.17, 44.04, 44.93, 45.84, 46.76, 47.71, 48.67, 49.65]
y = [0.01, 0.0152, 0.023, 0.035, 0.0536, 0.0816, 0.12, 0.1891, 0.287, 0.438, 0.6668, 1.01494, 1.544, 2.35097424365239, 3.578, 5.445]
f = 0:0.2:3;
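Just to quantify the scale issue mentioned above (this little check is my own addition): the largest y value is more than 500 times the smallest, so an unlogged fit would be dominated by the few largest points.
max(y)/min(y)   % about 544, i.e., y spans nearly three orders of magnitude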
So, you have TWO equations that allow you to predict x and y as functions of f and the parameters teta and a.
y=teta*exp((a+1)*f);
x=exp((a-1)*f-1)/(a-1)^2;
Logging the y equation gives us:
log(y) = log(teta) + (a+1)*f;
If we had no more than this equation, we could compute teta and a directly using simple linear regression.
The x equation need not be logged, because all the values of x are roughly the same magnitude, so additive versus proportional error is not a real factor there. I'll leave it alone, although there is no problem either way.
So, now let's formulate an objective function. I'll use lsqnonlin here, as I prefer it for this specific case; it is not a strong preference, though.
I'll put teta into params(1) and a into params(2).
fun = @(params) [x - exp((params(2)-1)*f-1)/(params(2)-1)^2, ...
    log(y) - log(params(1)) - (params(2)+1)*f];
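If you want to sanity check the objective before handing it to the solver (my own suggestion, using an arbitrary trial point), evaluate it once and confirm it returns a real residual vector with one entry per equation per data point, 2*16 = 32 here.
r = fun([0.01, 1.1]);   % hypothetical trial point: teta = 0.01, a = 1.1
size(r)                 % should be 1 by 32: 16 x residuals followed by 16 log(y) residuals
isreal(r)               % should be 1; complex residuals mean a bad parameter region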
Now, just call lsqnonlin on fun.
You probably want to put a lower bound of 0 on teta, as you don't want the solver trying values of teta less than zero. In fact, I'd suggest setting the lower bound for teta to something like 1e-10, to keep it from even trying to go negative. If teta does go negative, log(teta) becomes complex and lsqnonlin will fail.
Similarly, set the lower bound on a to something like -1+1e-10.
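Spelled out as vectors in the same order as params (the variable names here are my own; in the calls below the bounds are simply passed inline), that would be:
lb = [1e-10, -1+1e-10];   % keep teta strictly positive and a strictly above -1
ub = [];                  % no upper bounds are needed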
The essential idea here is to pack the x and log(y) equations together into one set of residuals, then use lsqnonlin to solve the entire set for the combined parameters teta and a.
Good starting values MIGHT be easily gained just by looking at the log(y) equation. Simply fit a linear (first-degree) polynomial to that curve.
p0 = polyfit(f,log(y),1)
p0 =
2.102 -4.6117
That implies
a+1 = 2.102
and
log(teta) = -4.6117
Solving for a and teta, we get
teta0 = exp(p0(2))
teta0 =
0.0099353
a0 = p0(1) - 1
a0 =
1.102
These are reasonable numbers. We know that a MUST be slightly smaller or larger than 1: at f = 0 the x equation reduces to exp(-1)/(a-1)^2, and for that to match x(1) = 36.78, (a-1)^2 must be about 0.01, so a is near 0.9 or 1.1. As a test, I'll solve the problem twice, starting a from slightly above and slightly below 1.
params = lsqnonlin(fun,[teta0,a0],[1e-10,-1+1e-10])
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the function tolerance.
<stopping criteria details>
params =
0.0099644 1.1
params = lsqnonlin(fun,[teta0,0.99],[1e-10,-1+1e-10])
Local minimum possible.
lsqnonlin stopped because the final change in the sum of squares relative to
its initial value is less than the default value of the function tolerance.
<stopping criteria details>
params =
0.013194 0.91282
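As a quick visual check of either fit (my own addition, not part of the accepted answer), evaluate both model equations at the fitted parameters and overlay them on the data:
teta = params(1); a = params(2);
xfit = exp((a-1)*f-1)/(a-1)^2;   % model prediction for x
yfit = teta*exp((a+1)*f);        % model prediction for y
plot(x,y,'o',xfit,yfit,'-')
set(gca,'YScale','log')          % log scale keeps the small-y points visible
xlabel('x'), ylabel('y')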