# MATLAB: Local optima problem in minimization problem

Tags: algorithm, global optima, local optima, optimization, pso

I tried to solve a single-objective optimization (minimization) problem, and somehow I got trapped in a local optimum. I am not able to escape it to find the desired global optimum.
My problem is stated below:
y=-0.433 + (5.233*10^-3)*a-(1.77*b) + (0.16*c);
Note: y is a parameter that cannot be zero.
Desired global optimum = [140 0.15 0.90]
Desired fitness = 0.1781.
But somehow I seem to be stuck at a local optimum, [100 0.15 0.50], with a fitness value of -0.0952.
I don't know what is wrong.
I tried several algorithms, such as Particle Swarm Optimization, Teaching Learning Based Optimization, and the Harris Hawks Optimizer, but they all return the same local optimum, [100 0.15 0.50].
How do I reach my desired global optimum, [140 0.15 0.9]?
Can anyone tell me what I should do to escape this local optimum and find the desired global optimum?

On problems like this, when you do not tell the complete story, we always find ourselves needing to read your mind. What is your problem? Yes, I know you stated a problem, but one that makes no sense at all. Sorry, but it does not. You tell us this:
y=-0.433 + (5.233*10^-3)*a-(1.77*b) + (0.16*c);
Note: y is a parameter that can not be zero
Desired global optima= [140 0.15 0.90]
Desired Fitness = 0.1781.
But if y is the objective function, then its minimum lies at -Inf. I can always change [a,b,c] in ways that make y as far negative as I wish. For example, since the coefficient of b is negative, I can make b as large as I want and continue to decrease y.
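A quick numerical check makes the unboundedness concrete. (A small sketch; the anonymous-function form of y here is mine.)

```matlab
y = @(a,b,c) -0.433 + (5.233*10^-3)*a - 1.77*b + 0.16*c;
% Hold a and c fixed and let b grow: y decreases without bound.
y(100, 1,   0.5)   % -1.5997
y(100, 10,  0.5)   % -17.5297
y(100, 1e6, 0.5)   % roughly -1.77e6
```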
So possibly you really have some data points, and you are trying to find a curve that passes through the points, estimating the parameters [a,b,c]. But then you would be minimizing a sum of squares of residuals, which can never be negative. So this possibility seems unlikely.
And that suggests you really are trying to minimize a linear objective. I'm not sure why, but we can look down that path.
```matlab
y = @(abc) -0.433 + (5.233*10^-3)*abc(1) - (1.77*abc(2)) + (0.16*abc(3));
y([100 0.15 0.50])
ans = -0.0952
```
So indeed, at the set of parameters you indicate, this function takes on the value you claim to get.
```matlab
y([140 0.15 0.90])
ans = 0.1781
```
And at the point in parameter space that you claim is the solution you want, we find 0.1781.
So next, consider that a MINIMIZATION tool tries to make something as close to -Inf as possible. Is -0.0952 smaller than 0.1781? Yes? Then it has been successful in some respect. That it did not go further, probably failing with an exit flag indicating your function may even be unbounded, is the real question in my mind. Depending on the optimizer, you would either get an unbounded warning of some type, or a warning that the problem was taking too many objective function evaluations. Regardless, it would have complained in some way. But you did not ask why the optimizer returned a strange warning message.
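For example, with linprog from the Optimization Toolbox, feeding in just the linear coefficients with no constraints at all produces exactly that kind of diagnostic (a sketch; exitflag -3 is linprog's code for an unbounded problem):

```matlab
f = [5.233e-3; -1.77; 0.16];        % coefficients of [a; b; c] in y
[x, fval, exitflag] = linprog(f, [], []);
% exitflag = -3: linprog reports the problem is unbounded
```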
So you must have imposed bound constraints on the parameters, yet you never indicated them in your question. Sigh. If you don't post the COMPLETE problem you tried to solve, how can we read your mind, or look inside your computer?
Again, since the optimizers found the point [100 0.15 0.50], look at the coefficients of a, b, and c respectively. I will repeat: y is LINEAR.
If I decrease a below 100, then since its coefficient is positive, I will decrease y yet more. So you MUST have imposed a lower bound of 100 on a.
Similarly, if I increase b above 0.15, then since the coefficient of b is negative, y will decrease. Therefore b MUST be bounded from above by 0.15.
Finally, we can assert that c must have a lower bound of 0.5, else the minimizer would have found a better point by decreasing c.
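Put those deduced bounds together, and any linear programming solver lands exactly where your heuristic optimizers did. A sketch with linprog (the constant -0.433 just shifts the objective, so it is added back afterward; the bounds here are my inference, not anything you stated):

```matlab
f  = [5.233e-3; -1.77; 0.16];   % linear coefficients of [a; b; c]
lb = [100; -Inf; 0.5];          % deduced lower bounds on a and c
ub = [Inf; 0.15; Inf];          % deduced upper bound on b
[abc, fval] = linprog(f, [], [], [], [], lb, ub);
% abc = [100; 0.15; 0.5]
y_min = fval - 0.433            % -0.0952, matching your "local optimum" fitness
```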
It looks like the MINIMIZATION tools did indeed solve the problem perfectly well. At least, they solved the problem that you apparently posed.
We don't know the complete problem you posed, why you are trying to do this, or why you think a linear objective can never be negative, since a truly linear objective can go to -Inf. We also do not know why you think this is a problem with a local optimum, since a LINEAR objective never has one on a well-posed convex bounded domain. But it is clear what the optimization tools did, at least.