MATLAB: Problems with fmincon for design optimization

Tags: failure probability, fmincon, monte carlo, nonlinear, optimization

Hi everyone,
I am usign fmincon for the optimization of my problem. details and the source code of problem are provided below. As can be seen there are 12 design variables that should be optimized. There are several random variables and constraints. Two of the constraints “Pfo and Pfa” are for calculating failure probability using MonteCarlo method. My code is not written properly and for example these two constraints are not controled during optimization.
Any solution?
(Attached screenshots: fmincon-1.jpg, fmincon-2.jpg)
The MATLAB code is written in four separate files:
1- Main code:
clc;clear all;close all;
tic
objective = @objecfun;
% Optimization Variables: d=[a,b,c,B,hc,hd,t,r,Bt,al,as,at]
% Initial Guess
x0 = [2; 1; 6; 4.5; 4; 2.5; 1.75; 3.5; 4.5; 0.5; 0.5; 0.5];
% Statistical Model Parameters
% Hs = normrnd(2.5,0.11);
% T = normrnd(5,1.5);
% gamma = normrnd(26,0.11);
% Nod = normrnd(1.25,0.375);
% Kd = normrnd(1.075,0.0375);
% S = (2*pi().*Hs)/9.81./(T.^2);
% a = normrnd(2.5,0.25); a is x(1)
% b = normrnd(1.25,0.125); b is x(2)
% c = normrnd(6,0.5); c is x(3)
% B = normrnd(5.5,0.25); B is x(4)
% al = normrnd(0.59,0.0963); al is x(10)
% as = normrnd(0.59,0.0963); as is x(11)
% Variable Bounds
lb = [1; 0.5; 5; 3; 3; 1; 0.5; 1; 3; 0.4; 0.4; 0.4];
ub = [3; 1.5; 7; 6; 5; 4; 3; 6; 6; 0.59; 0.59; 0.59];
% Show Initial Objective
disp(['Initial Objective: ' num2str(objective(x0))])
% Linear Constraints
A = [0,0,-1,0,0,0,-1,0,0,0,0,0];
b = -5.99;
Aeq = [1,1,1,0,-1,0,0,0,0,0,0,0;
0,0,0,0,0,0,-2,1,0,0,0,0];
beq = [5.99;0];
% Nonlinear Constraints
nonlincon = @nonlconstr;
% Optimize with fmincon
%[X,FVAL,EXITFLAG,OUTPUT,LAMBDA,GRAD,HESSIAN]
% = fmincon(FUN,X0,A,B,Aeq,Beq,LB,UB,NONLCON,OPTIONS)
options = optimoptions('fmincon','Algorithm','active-set','PlotFcn',{@optimplotfval},'display','iter');
[x,fval,exitflag,output] = fmincon(objective,x0,A,b,Aeq,beq,lb,ub,nonlincon,options);
% Show Final Objective
disp(['Final Objective: ' num2str(objective(x))])
% Print Solution
disp('Solution')
disp(['a = ' num2str(x(1))])
disp(['b = ' num2str(x(2))])
disp(['c = ' num2str(x(3))])
disp(['B = ' num2str(x(4))])
disp(['hc = ' num2str(x(5))])
disp(['hd = ' num2str(x(6))])
disp(['t = ' num2str(x(7))])
disp(['r = ' num2str(x(8))])
disp(['Bt = ' num2str(x(9))])
disp(['al = ' num2str(x(10))])
disp(['as = ' num2str(x(11))])
disp(['at = ' num2str(x(12))])
output
toc
2- Objective function:
function f = objecfun(x)
Val = x(1).*(x(1)./tan(x(10)) + x(3)./tan(x(10))) + (x(1)./2).*(2.*x(4) - x(1)./tan(x(10)) - ...
x(1)./tan(x(11))) + x(1).*x(3)./sin(x(11));
ht = x(1)+x(2)+x(3)-x(5)-x(6);
Vul = x(2).*(x(2)./tan(x(10)) + x(3)./tan(x(10))) + (x(2)./2).*(2.*x(4) - 2*x(1)./tan(x(10)) - ...
2*x(1)./tan(x(11)) - x(2)./tan(x(10)) - x(2)./tan(x(11))) ...
+ x(2).*x(3)./sin(x(11)) + ht.*(x(9) + ht./tan(x(12)));
Vcl = x(7).*(x(8) + (x(1)+x(2))./sin(x(11))) + (x(3)+x(7)).*(x(4) - x(1)./tan(x(10)) - x(1)./tan(x(11)) - ...
x(2)./tan(x(10)) - x(2)./tan(x(11))...
+0.5*(x(3)+x(7))./tan(x(11))+0.5*(x(3)+x(7))./tan(x(10)));
f = 822*Val + 166*Vul + 131*Vcl;
3- Nonlinear constraints:
function [c,ceq] = nonlconstr(x)
Hs = normrnd(2.5,0.11);
T = normrnd(5,1.5);
gamma = normrnd(26,0.11);
S = (2*pi().*Hs)/9.81./(T.^2);
c = [9.81.*Hs.*T.*0.013.*exp(-22.*(x(5)./Hs).*sqrt(S./2./pi())./gamma) - 0.4;
cot(x(10))-3;
1.5-cot(x(10));
cot(x(11))-3;
1.5-cot(x(11));
cot(x(12))-3;
1.5-cot(x(12));
1-x(5);
x(5)-6;
Pfo(x(5))-0.001;
Pfa(x(1),x(10),x(3),x(4),x(11))-0.01];
ceq = [];
4- Functions for calculating the failure probabilities using the Monte Carlo method:
function p = Pfo(x)
n = 10^6;
Hs = normrnd(2.5,0.11,[1 n]);
T = normrnd(5,1.5,[1 n]);
gamma = normrnd(26,0.11,[1 n]);
Ss = (2*pi().*Hs)/9.81./(T.^2);
G = 0.4 - 9.81.*Hs.*T.*0.013.*exp(-22.*(x./Hs).*sqrt(Ss./2./pi())./gamma);
isuccess=zeros(1,n);
isuccess(G<=0)=1;
p = sum(isuccess)/n;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function p = Pfa(x1,x10,x3,x4,x11)
n = 10^6;
Hs = normrnd(2.5,0.11,[1 n]);
KD = normrnd(2.75,0.375,[1 n]);
Val = x1.*(x1./tan(x10) + x3./tan(x10)) + (x1./2).*(2.*x4 - x1./tan(x10) - ...
x1./tan(x11)) + x1.*x3./sin(x11);
Dn = (Val).^(1/3);
delta = 1.512;
G = delta*Dn.*(KD.*cot(x10)).^(1/3) - Hs;
isuccess=zeros(1,n);
isuccess(G<=0)=1;
p = sum(isuccess)/n;
end

Best Answer

Hi Soheil,
You have posted a lot of information and I certainly have not digested even a large fraction of it (it would take me days to figure out what this undocumented code is really doing), but I do have a few suggestions:
1. In your objective function, you could call the Monte Carlo routines and include penalties based on their results, something like this:
% inside objecfun:
failurep1 = Pfo(x(5));                            % hc, as called in nonlconstr
if failurep1 > 0.001
    Penalty1 = 1000 * (failurep1 - 0.001);        % it works well to have the penalty grow with failurep1
else
    Penalty1 = 0;
end
failurep2 = Pfa(x(1), x(10), x(3), x(4), x(11));  % same arguments as the call in nonlconstr
if failurep2 > 0.01
    Penalty2 = 1000 * (failurep2 - 0.01);
else
    Penalty2 = 0;
end
f = 822*Val + 166*Vul + 131*Vcl + Penalty1 + Penalty2;
% Note that now the parameter search routine will avoid parameter
% combinations that produce high failure probabilities, due to the
% penalties. If the 1000 multiplier is not enough to keep the search
% below your thresholds of 0.001 and 0.01, just increase it.
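A variant of the same idea (my own sketch, reusing the thresholds above): replacing the if/else with max() and squaring the violation gives a penalty that is zero and flat at the constraint boundary, which gradient-based algorithms such as active-set tend to handle better than a kinked one:

```matlab
% Sketch of a quadratic penalty, with the same 0.001 and 0.01 thresholds;
% the weight w is a guess and may need tuning to your problem's scale.
w = 1e6;
Penalty1 = w * max(0, failurep1 - 0.001)^2;
Penalty2 = w * max(0, failurep2 - 0.01)^2;
```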
2. Now there is the issue of repeatability, as John explained.
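To see the problem concretely, call nonlconstr twice at the same point (assuming your four files are on the path):

```matlab
% Because nonlconstr re-draws normrnd samples on every call, the same x
% produces different constraint values each time -- fmincon's finite
% differences and line search cannot work with a moving target.
x0 = [2; 1; 6; 4.5; 4; 2.5; 1.75; 3.5; 4.5; 0.5; 0.5; 0.5];
c1 = nonlconstr(x0);
c2 = nonlconstr(x0);
disp(max(abs(c1 - c2)))   % nonzero for the randomized constraints
```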
As far as I can see, all of your calls to normrnd use fixed distribution parameters, so the random samples do not have to be redrawn each time the optimization variables change. For example, you could generate a single Hs vector to use throughout the entire optimization run. Include this statement in the main program and make Hs global:
Hs = normrnd(2.5,0.11,[1 n]);
Correspondingly, remove these random calls from Pfo and Pfa. Then, each time Pfo and Pfa are called, they will use exactly the same Hs values. Do the same thing for all of the other random variables (T, gamma), and all of the functions will be repeatable. (If you think it is a bad idea for Pfo and Pfa to use the same Hs values, generate a separate vector like HsPfo and HsPfa for each routine to use.)
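One concrete way to wire this up without globals (a sketch of my own, not Soheil's original code): draw the samples once in the main program and capture them in an anonymous function handle, so every call sees the same draws. The routine below mirrors the limit-state formula from Pfo but takes the samples as arguments:

```matlab
% In the main program, before calling fmincon:
n     = 1e6;
Hs    = normrnd(2.5, 0.11, [1 n]);
T     = normrnd(5,   1.5,  [1 n]);
gamma = normrnd(26,  0.11, [1 n]);
PfoFixed = @(hc) PfoSamples(hc, Hs, T, gamma);  % samples frozen in the handle

% In its own file, PfoSamples.m:
function p = PfoSamples(hc, Hs, T, gamma)
% Same overtopping limit state as Pfo, evaluated on pre-drawn samples,
% so repeated calls with the same hc give identical results.
Ss = (2*pi.*Hs)/9.81./(T.^2);
G  = 0.4 - 9.81.*Hs.*T.*0.013.*exp(-22.*(hc./Hs).*sqrt(Ss./2./pi)./gamma);
p  = mean(G <= 0);
end
```

Since nonlconstr would then need the handle, pass it in by parameterizing the constraint function, e.g. nonlincon = @(x) nonlconstr(x, PfoFixed), and do the same for a fixed-sample version of Pfa.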
Keep in mind that the final parameter estimates that you get will be specific to the particular sets of random numbers that were drawn in the main program before the optimization run started. But you can see how much difference that makes by running the whole program again and again (each time basing the whole optimization run on new sets of random numbers).
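That check can be automated; a sketch (the run count and seed values are arbitrary):

```matlab
% Re-run the whole optimization on fresh sample sets and compare solutions.
nRuns = 5;
X = zeros(12, nRuns);
for k = 1:nRuns
    rng(k);   % reproducible, but different draws each run
    % ... regenerate Hs, T, gamma (and any other sample vectors) here ...
    X(:,k) = fmincon(objective, x0, A, b, Aeq, beq, lb, ub, nonlincon, options);
end
% Per-variable spread across runs; small values mean the Monte Carlo
% sampling barely moves the optimum.
spread = max(X, [], 2) - min(X, [], 2)
```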