MATLAB: How does lsqnonlin fit a function to data?

3d point cloud, fitting, lsqnonlin

Hello,
I am trying to use lsqnonlin to find the parameters of primitives (plane, cone, etc.), such as their orientation.
I have the primitive's point set P, and my objective function uses the primitive's equation (e.g., the plane equation or the cylinder equation) to calculate the distances between the estimated primitive and the points in P. The input x consists of 3 variables that represent the orientation of the primitive (as Euler angles).
I understand that lsqnonlin searches for an optimal x starting from x0.
But how far does it search? Is there a limitation, such as searching only near x0? If the starting value x0 is far from the ground truth, will there be a problem finding the solution?
If lsqnonlin considered every possible x, the function should return the correct output, but the result comes out wrong.
The code looks like this:
% Supply the analytic Jacobian; trust-region-reflective handles the bound constraints
options = optimset('Jacobian', 'on', 'Algorithm', 'trust-region-reflective', 'Display', 'off');
x0 = [v1 v2 v3];   % initial guess for the three orientation variables
out = lsqnonlin(@(x)distance2primitive(x, P), x0, [-Inf -pi 0], [Inf pi pi], options);
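(For context: distance2primitive itself is not shown in the question. When 'Jacobian' is 'on', such a residual function must return both the residual vector and its Jacobian. Below is a purely hypothetical sketch of what one could look like for the plane case; the parameterization, an offset plus two spherical angles for the normal, is an assumption and need not match the asker's actual function.)

function [F, J] = distance2plane(x, P)
% Hypothetical residual function for a plane n'*p = d, where
% x(1) = d (offset), x(2) = phi (azimuth), x(3) = theta (polar angle),
% and P is an N-by-3 point set. Returns the signed point-to-plane
% distances F and the analytic Jacobian J (required by 'Jacobian','on').
d = x(1);  phi = x(2);  theta = x(3);
n = [sin(theta)*cos(phi); sin(theta)*sin(phi); cos(theta)];   % unit normal
F = P*n - d;                               % N-by-1 signed distances
if nargout > 1
    % Partial derivatives of the normal with respect to the two angles
    dn_dphi   = [-sin(theta)*sin(phi);  sin(theta)*cos(phi);  0];
    dn_dtheta = [ cos(theta)*cos(phi);  cos(theta)*sin(phi); -sin(theta)];
    J = [-ones(size(P,1),1), P*dn_dphi, P*dn_dtheta];         % N-by-3
end
end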

Best Answer

This is really just a question about understanding basic optimization methods, and nonlinear regression methods. In fact, you can find a very good description of Levenberg-Marquardt in doc lsqnonlin. Simply doing a good read of that will give you much information. (Look under "Least-Squares (Model Fitting) Algorithms" in the docs for lsqnonlin.)
Some basic comments:
  • No, lsqnonlin does not search "only" near the start point. It is an optimization scheme, based on a local, low-order approximation to the nonlinear function that you are trying to fit. This approximation is applied iteratively until the search converges to a solution.
  • There is no limit on how far the search can proceed. Iterations continue as long as useful improvement in the objective is seen (that is, as long as the norm of the residual vector keeps decreasing). When no further improvement in the residuals is possible, the search should have arrived at (at least) a local minimizer of the error metric. Again, this is basic optimization.
  • Must such a search always converge to a solution? Of course not. Divergence can occur, where some or all of the parameters can wander out towards infinity. Again, you would benefit greatly from reading and learning theory about basic optimization methodologies.
  • Will the search find the globally best solution? Again, no. Any such search is no better than the quality of the starting values you chose to provide. If you start in the wrong place, or too far away? While the optimization will usually converge to SOME local minimizer, there is no assurance that it will find the one you want to see. (A minimal multi-start sketch follows this list.)
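To illustrate that last point, a common remedy is a simple multi-start: rerun lsqnonlin from several starting points drawn from the feasible box and keep the best result. A minimal sketch, assuming the distance2primitive function and point set P from the question:

% Multi-start: try several random starting points and keep the best fit
opts = optimset('Jacobian', 'on', 'Algorithm', 'trust-region-reflective', 'Display', 'off');
lb = [-Inf -pi 0];
ub = [ Inf  pi pi];
bestResnorm = Inf;
bestX = [];
for k = 1:20
    % The first component is unbounded, so draw it from a normal
    % distribution; the two angles are drawn uniformly within their box.
    x0 = [randn, lb(2:3) + rand(1,2).*(ub(2:3) - lb(2:3))];
    [x, resnorm] = lsqnonlin(@(x)distance2primitive(x, P), x0, lb, ub, opts);
    if resnorm < bestResnorm   % keep the solution with the smallest residual norm
        bestResnorm = resnorm;
        bestX = x;
    end
end

If you have the Global Optimization Toolbox, its MultiStart solver automates this pattern for lsqnonlin problems.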
So I would strongly recommend you do some outside reading on the subject of optimization: optimization in general, and you might focus on a basic method used by lsqnonlin, Levenberg-Marquardt. (Reading about the other methods in lsqnonlin would probably take you deeper than you need to go at this point, without giving you much more usable information.)
So you might start with that documentation section, but there are many very good texts on optimization to be found, and on nonlinear regression.