MATLAB: How to find all solutions to a system of two nonlinear equations

Tags: equation, fsolve, MATLAB, roots of a nonlinear equation

I have a system of two nonlinear equations (f(x,y)=0 and g(x,y)=0) for which I want to find all roots over a region (say x from -5 to 5 and y from -5 to 5).
The problem with using fsolve is that I need to supply initial guesses, which may not be easy to come by. If the region where I am looking for solutions is small, I suppose I could try a large number of initial guesses and collect the resulting solutions. However, I want to know if a more robust method exists.
One approach I tried was to find the values of (x,y) where f(x,y) and g(x,y) simultaneously change sign and feed those as the initial guesses (see the rough sketch below). This works reasonably well until you supply an equation which doesn't change sign at a zero crossing (for example f(x,y) = x^2).
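Here is roughly what I mean, as a sketch only: the equations F and G below are just stand-ins for my real system, and the grid resolution and duplicate tolerance are arbitrary.
F = @(x,y) x.^2 + y.^2 - 4;              % stand-in for f(x,y) = 0
G = @(x,y) y - sin(x);                   % stand-in for g(x,y) = 0
xv = linspace(-5, 5, 101);
yv = linspace(-5, 5, 101);
[X, Y] = meshgrid(xv, yv);               % rows follow yv, columns follow xv
Fv = F(X, Y);
Gv = G(X, Y);
% a grid cell is a candidate if both residuals change sign across its corners
corners = @(V) cat(3, V(1:end-1,1:end-1), V(2:end,1:end-1), V(1:end-1,2:end), V(2:end,2:end));
Fc = corners(Fv);
Gc = corners(Gv);
hit = min(Fc,[],3) < 0 & max(Fc,[],3) > 0 & min(Gc,[],3) < 0 & max(Gc,[],3) > 0;
[ic, jc] = find(hit);
% polish each candidate cell centre with fsolve and keep the converged points
sys  = @(p) [F(p(1), p(2)); G(p(1), p(2))];
opts = optimoptions('fsolve', 'Display', 'off');
sols = [];
for k = 1:numel(ic)
    p0 = [(xv(jc(k)) + xv(jc(k)+1))/2; (yv(ic(k)) + yv(ic(k)+1))/2];
    [p, ~, flag] = fsolve(sys, p0, opts);
    if flag > 0
        sols(end+1, :) = p(:).';         %#ok<AGROW>
    end
end
sols = uniquetol(sols, 1e-6, 'ByRows', true)   % merge near-duplicate roots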
Any help/suggestion is appreciated!
Thanks,
Mohit

Best Answer

Here is the rub - you are in trouble IF you really want to find ALL solutions of a system of totally general nonlinear equations, regardless of the domain.
First, a problem can have infinitely many solutions, even over a finite interval. Consider sin(1/x), for example, which has infinitely many roots in any finite interval that contains zero. And while you can claim those solutions are describable analytically, it is easy enough to create a problem whose roots are not so easily described. Finding all roots of an arbitrary problem is therefore impossible.
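If a picture helps, a one-liner like this (the interval is chosen arbitrarily, stopping short of zero so fplot behaves) shows the crossings piling up as x approaches zero:
fplot(@(x) sin(1./x), [0.01, 0.5])   % zeros at x = 1/(k*pi) accumulate toward x = 0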
Do I really need to cook up an example that has no simple analytical description, yet has infinitely many roots? Quickly, this should work:
fplot(@(x) x/10 + besselj(1,1./x),[0,1])
Pick any interval that contains zero, or even an interval that does not explicitly contain zero, such as (0,eps]. So there are infinitely many solutions, and since x appears both inside and outside the Bessel function, I'll conjecture they have no analytical description.
Next, you mention the use of fsolve as being a problem, because tools like that return only a single solution, determined by the start point: change the start, and you may get a different result. For this, you may want to do some reading about global optimization methods, and how they differ from more classical ones.
But even there, a global method cannot find all roots. As noted above, there can trivially be infinitely many roots, so the solver must give up eventually; and then, which roots did it miss? The key issue is whether you can bound the derivatives of your function. If no bounds are available, then you can provably never find all roots using a numerical method and finite precision arithmetic, since it is trivially easy to create a function that is constant everywhere except on some immeasurably small interval where it bounces across zero (one such function is sketched below).
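To make that concrete, here is one such needle-in-a-haystack function; the spike location and width are arbitrary. On any reasonable sampling grid it looks identically equal to -1, yet it crosses zero twice inside the spike.
x0 = 0.123456789;                               % pretend this location is unknown
f  = @(x) 2*max(0, 1 - abs(x - x0)/1e-12) - 1;  % equals -1 except within 1e-12 of x0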
In the end, that leaves you at best with probabilistic measures of whether you have found any given solution, or all of the solutions, since you generally will not know how many solutions truly exist. A good idea may be to use multi-start methods, then use clustering schemes to decide how many distinct solutions you have actually found (a sketch follows below). Remember that after each solve, two solutions that are in theory the same will typically agree only to within the convergence tolerance, and that tolerance is affected by the local shape of the objectives.
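As one possible shape for that, assuming you have the Global Optimization Toolbox: recast the square system as a nonlinear least-squares problem so MultiStart can restart lsqnonlin from many random points inside your box, and let its tolerances merge repeated finds. The system below is only a stand-in, and the start count and tolerances are arbitrary.
sys = @(p) [p(1).^2 + p(2).^2 - 4; p(2) - sin(p(1))];   % stand-in for [f; g]
problem = createOptimProblem('lsqnonlin', 'objective', sys, ...
    'x0', [0; 0], 'lb', [-5; -5], 'ub', [5; 5]);
ms = MultiStart('Display', 'off', 'XTolerance', 1e-6);
[~, ~, ~, ~, allmins] = run(ms, problem, 500);          % 500 random start points
% MultiStart groups converged points for you; the roots of the original
% system are the local minima whose residual sum of squares is essentially zero
isroot = [allmins.Fval] < 1e-10;
sols = reshape([allmins(isroot).X], 2, []).'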
Again, do some reading. But don't expect to find a solver that will magically yield all of infinitely many solutions, or even a solver that will robustly always find all of a finite number of solutions on any general problem.