MATLAB: L1 Optimization in MATLAB


Hi guys,
I am trying to solve a slightly modified L1 optimization problem in MATLAB:
argmin_x ||x - d||_2^2 + ||F*x||_1
where F is a low-rank matrix, d is a given vector, and x is the optimization variable. Could you suggest the best way to solve this in MATLAB?

Best Answer

Make some d and F just to test it.
d = [1;2;3;4;5];
F = [.1 .3 .5 .7 .9; .2 .4 .6 .8 1.0];
I can think of two ways.
1. Use FMINUNC. This is simple to set up, but for larger problems it will take some time, and you may need to set options such as MaxFunEvals with OPTIMSET to make it work.
V = @(x) norm(x-d)^2 + norm(F*x,1);   % objective: ||x-d||^2 + ||F*x||_1
xopt = fminunc(V,d)                   % use d as the initial guess
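If fminunc quits before converging on a larger problem, raising the limits with OPTIMSET looks roughly like this (the values 1e5 and 1e4 are placeholders; pick limits that suit your problem):
opts = optimset('MaxFunEvals',1e5,'MaxIter',1e4);
xopt = fminunc(V,d,opts)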
2. Use QUADPROG. This is more complicated to set up, but much faster and more accurate. Create slack variables to deal with the L1 part: introduce y = F*x and t >= |y| elementwise, so that sum(t) equals ||F*x||_1 at the optimum and the problem becomes a quadratic program in [x; y; t].
s = size(F,1);                              % number of rows of F (one slack pair per row)
nx = size(F,2);                             % number of entries in x
f = [-2*d; zeros(s,1); ones(s,1)];          % linear term: -2*d'*x + sum(t)
H = blkdiag(2*eye(nx),zeros(s),zeros(s));   % quadratic term: x'*x
Aeq = [F -eye(s) zeros(s)];                 % enforce y = F*x
beq = zeros(s,1);
A = [zeros(s,nx)  eye(s) -eye(s);           %  y <= t
     zeros(s,nx) -eye(s) -eye(s)];          % -y <= t
b = zeros(2*s,1);
[xopt,fval] = quadprog(H,f,A,b,Aeq,beq);    % note: fval omits the constant d'*d
xopt = xopt(1:nx)
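As a quick sanity check (assuming V, d, and F from approach 1 are still in the workspace), one way to compare the two solutions:
xqp = xopt;           % solution from quadprog
xfm = fminunc(V,d);   % solution from fminunc
[V(xqp) V(xfm)]       % the objective values should match closely
norm(xqp - xfm)       % and the solutions should be nearly identical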
Trying it out for d and F given above, I get the same answer either way.
xopt =
0.8500
1.6500
2.4500
3.2500
4.0500
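For this small example you can also check the answer by hand: where F*x has no zero entries, setting the subgradient 2*(x-d) + F'*sign(F*x) to zero gives x = d - 0.5*F'*sign(F*x), and with sign(F*x) = [1;1] here that reproduces the vector above.
xcheck = d - 0.5*F'*sign(F*d)   % sign(F*d) = sign(F*xopt) = [1;1] for this data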