MATLAB: Different results from fmincon with nonlcon


Hi, my problem is that an optimization with fmincon (see below) is not stable, i.e. repeated runs deliver different results. The optimization problem is described below:
Many thanks in advance for your help!
Best Wishes, Alex
The function to be optimized:
fun=@(x)x*meanOrRegressed
The non-linear constraint (the sum of all positive weights must not exceed 1.3, and the sum of all negative weights must not fall below -0.3):
function [c,ceq,gradc,gradceq] = nonlcon_g(x)
% Inequality constraints c <= 0 on all weights except the last asset:
% c(1): sum of positive weights must not exceed maxLong (1.3)
% c(2): sum of negative weights must not fall below -maxShort (-0.3)
maxLong = 1.3;
maxShort = 0.3;
nrAssets = size(x,1);
weightSumMax = sum(max(0,x(1:nrAssets-1)));
c(1) = weightSumMax - maxLong;
weightSumMin = sum(min(0,x(1:nrAssets-1)));
c(2) = -weightSumMin - maxShort;
% Subgradients: d/dx max(0,x) = (sign(x)+1)/2, d/dx(-min(0,x)) = (sign(x)-1)/2
gradc1 = 1/2*(sign(x(1:nrAssets-1))+ones(nrAssets-1,1));
gradc1 = vertcat(gradc1,0);
gradc2 = 1/2*(sign(x(1:nrAssets-1))-ones(nrAssets-1,1));
gradc2 = vertcat(gradc2,0);
gradc = horzcat(gradc1,gradc2);
ceq = [];
gradceq = [];
end
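For reference, a minimal Python/NumPy translation of the constraint values above (not from the original thread; the function and variable names are mine). It can be used to sanity-check c(1) and c(2) for a candidate weight vector:

```python
import numpy as np

def nonlcon_g(x, max_long=1.3, max_short=0.3):
    """Inequality constraints c <= 0 on all weights except the last asset.

    c[0]: sum of positive weights minus max_long.
    c[1]: magnitude of the sum of negative weights minus max_short.
    """
    w = np.asarray(x, dtype=float)[:-1]           # exclude the last asset
    return np.array([
        np.sum(np.maximum(w, 0.0)) - max_long,    # long exposure - 1.3
        -np.sum(np.minimum(w, 0.0)) - max_short,  # short exposure - 0.3
    ])

# Long exposure 1.2 and short exposure 0.2: both entries are <= 0 (feasible)
x = np.array([0.5, 0.7, -0.2, 0.0, 0.0, 0.0])
print(nonlcon_g(x))
```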
[weights,optimizedValue,exitflag] = fmincon(fun,initialWeights',A,b,Aeq,beq,lowerBound,upperBound,@nonlcon_g,options)
A = [-1 -1 -1 -1 -1  0
      1  1  1  1  1  0
      0  0  0  0  0 -1
      0  0  0  0  0  1];
b = [0.3 1.3 0.3 1.3];
Aeq = [1 1 1 1 1 1];
beq = 1;
lowerBound = [-0.3 -0.3 -0.3 -0.3 -0.3 -0.3];
upperBound = [1.3 1.3 1.3 1.3 1.3 1.3];
meanOrRegressed =
2.349096891796729e-004
-7.582259013820250e-005
1.190461785891006e-003
2.529756213317396e-003
1.066862350689632e-003
5.133561643835617e-005
options = (non-default fields only; all other fields were left empty, i.e. at their defaults)
MaxFunEvals: 60000
MaxIter: 40000
TolFun: 1.000000000000000e-010
Algorithm: 'active-set'
FinDiffType: 'forward'
GradConstr: 'on'
MaxSQPIter: 200000
TolCon: 1.000000000000000e-008
UseParallel: 'always'

Best Answer

Another solution, one which avoids the need for an epsilon-smoothing approximation, is to recognize that your original "nonlinear" constraints are really linear ones. In particular, one can show that
sum(max(x,0))<=K
is equivalent to the set of linear inequalities
sum(x(J))<=K
for every index subset J: the largest of these subset sums is attained at J = {i : x(i) > 0}, where it equals sum(max(x,0)) exactly. This can be expressed by adding more rows to your A,b data. I do so as follows and get a highly accurate result.
load AlexData
N = length(initialWeights);
AA = dec2bin(0:2^N-1,N)-'0';   % all 0/1 subset-indicator rows
AA(sum(AA,2)<=1,:) = [];       % drop empty/singleton subsets (already covered by the bounds)
M = size(AA,1);
bb(1:M,1) = maxLong;
bb(M+1:2*M) = maxShort;
A = [A;AA;-AA];                % append new constraints
b = [b(:);bb];
lowerBound = max(lowerBound,-maxShort);
upperBound = min(upperBound,maxLong);
options.Algorithm = 'interior-point';
options.TolX = 1e-20;
options.TolFun = 1e-20;
options.UseParallel = [];
[weights,optimizedValue,exitflag,output,lambda,grad,hessian] = ...
    fmincon(@(x)dot(-x,ret), initialWeights', A, b, Aeq, beq,...
            lowerBound, upperBound, [], options);
weightError = weights(:).' - [0, 1.3, -0.3, 0, 0, 0],
Unfortunately, this means that your constraint data A,b consume O(N*2^N) memory where N is the number of unknown weights. It should be fine, though, for small problems (N<=15 or so).
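The subset construction can be checked numerically. Below is a small Python/NumPy sketch of my own (not from the thread) that mirrors the `dec2bin`-style enumeration above: it builds every 0/1 subset-indicator row and confirms that the tightest subset sum equals sum(max(x,0)), so the 2^N linear inequalities really are equivalent to the single "nonlinear" one.

```python
import numpy as np

N = 6
rng = np.random.default_rng(0)

# All subsets of {1..N} as 0/1 rows (MATLAB: dec2bin(0:2^N-1, N) - '0')
AA = ((np.arange(2**N)[:, None] >> np.arange(N - 1, -1, -1)) & 1).astype(float)

for _ in range(100):
    x = rng.uniform(-1, 1, N)
    # The largest subset sum is attained on J = {i : x_i > 0} ...
    max_subset_sum = (AA @ x).max()
    # ... where it equals the long-exposure expression sum(max(x, 0))
    assert np.isclose(max_subset_sum, np.maximum(x, 0).sum())
```

Hence sum(max(x,0)) <= K holds if and only if sum(x(J)) <= K for every subset J, which is what appending the AA rows to A,b enforces.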