MATLAB: Problem solving a linear system with variables in the coefficient matrix

linear system · symbolic matrix

Here is my problem. I have a quite complicated linear system with two syms variables in the coefficient matrix. It is of the form AX=0, so if det(A) is not equal to zero, the only answer is X=0. So I first calculate the values of my syms variables that lead to det(A)=0. Since det(A) is a high-order polynomial (degree 12), I couldn't find any way to do that except the following (I store the roots because I have also plotted one variable versus the other):
D = det(A);
fct = matlabFunction(D);        % function of the two symbolic variables
h = [];
for idx = 10:5:200
    g = fct(idx, nu);           % fix Ustar = idx, keep nu symbolic
    ps = roots(sym2poly(g));    % roots of the degree-12 polynomial in nu
    h = [h ps];                 % one column of roots per idx value
end
Now I want to solve my system for those values so that I don't get the trivial X=0 solution.
So I tried this:
Bk = zeros(6,1);
Ak = subs(A, {Ustar, nu}, {10, h(1,1)});   % Ustar = 10, nu = the first root found for idx = 10
temp = linsolve(Ak, Bk);
where Ustar and nu are the names of my two syms variables in matrix A. Since h(1,1) should correspond to the idx value 10, I should have det(A)=0 and thus a solution different from X=0 (defined up to a scaling of the components of X). I think the problem comes from the fact that my roots are only approximate, so further down the calculation det(A) is not exactly equal to 0.
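A quick sanity check (just a sketch, reusing the Ak from above) would be to look at how nearly singular Ak actually is:
Akd = double(Ak);   % the substituted matrix is purely numeric, so it converts to double
svd(Akd)            % the smallest singular value shows how close Ak is to singular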
  • Did I miss an obvious mistake in my code?
  • Is there any way around that? Can I force MATLAB to consider det(A)=0?
  • Or is there a completely different method I am not aware of?

Best Answer

If you have a general matrix A, then the general way to solve the problem
A*X = 0
is simple. It does NOT involve using linsolve, or backslash, or any of the methods you think of. It DOES require an understanding of linear algebra to see what to do, and see why it works. This is called a linear homogeneous system. The right hand side is zero. There is no constant term.
If A is nonsingular, then there is NO solution to the problem other than X==0. This should be clear.
If A is singular, then there are INFINITELY many solutions to the problem, and all zeros is one of them. If A is size nxn with rank m, where m<n, the solutions form a subspace of dimension n-m that contains the origin. In general, the solution is found by examining the nullspace of A. I'll give a simple example to show what happens and how to solve it.
syms u
A = sym(magic(3));
A(2,2) = u
A =
[ 8, 1, 6]
[ 3, u, 7]
[ 4, 9, 2]
In general, this matrix is non-singular for arbitrary values of u. In fact, there is only one value of u that makes it singular, since the determinant is linear in u.
det(A)
ans =
-8*u - 320
We can see that the value for which the matrix is singular is u=-40.
solve(det(A) == 0)
ans =
-40
So only when u takes on a value whereby A is singular is there a non-zero solution to the homogeneous equation. We get that solution from the nullspace of A. The nullspace of A is defined as the set of vectors for which A*X=0.
As = subs(A,u,-40);
Xs = null(As)
Xs =
-13/17
  2/17
     1
(For a larger matrix, once you have the values that force A to be singular, it is best to make the matrix a double precision one, then use null. While null will work on symbolic matrices, it will be slower.)
As you can see, this vector does indeed kill off A. Any multiple of Xs will also do so of course, so a typical way to scale that vector is to make it have unit norm (pick your favorite norm here.)
As*Xs
ans =
0
0
0
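If you want the unit-norm scaling I mentioned, one way to do it (just a sketch; Xs is symbolic here, so the result stays symbolic) is:
Xs = Xs/norm(Xs)    % rescale the nullspace vector to have 2-norm equal to 1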
A problem of course is if A is large. Then computing a symbolic determinant of A and solving the resulting polynomial problem may take some serious effort. If the matrix is purely numeric, then null does the work for you directly. And, finally, if the matrix is seriously rank deficient, then the nullspace will be represented by a set of orthogonal vectors that span the nullspace.
null(ones(3))
ans =
        0    0.8165
 -0.70711  -0.40825
  0.70711  -0.40825
Here of course, ones(3) has rank 1. So the nullspace is 2 dimensional.
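A quick way to confirm that dimension count:
rank(ones(3))    % returns 1, so the nullspace has dimension n - m = 3 - 1 = 2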
As to your conjecture that linsolve failed BECAUSE the matrix was not exactly singular: no, that is simply wrong. linsolve did not fail; it returned the zero solution because that IS a solution of the singular homogeneous problem. After all, there are INFINITELY many solutions. All zeros is one of them, and arguably the best. For example, it does have minimum norm.
You need to employ null to find the non-zero solution. (It can be done using other tools, but you need to understand why they would work, and how to use them. For example, a pivoted Gaussian elimination can solve the problem. You set one of the unknowns to some fixed non-zero value, then use back-substitution to solve for the remainder. Or, you can use the svd to solve the problem. In fact, this is how null works. Finally, you can use a pivoted QR to do the work. Again, these work if you know the linear algebra to make them work. Null is easier, since it is designed to solve exactly that problem.)
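To tie this back to your problem, here is a minimal sketch (assuming A, Ustar, nu, and h exactly as in your question, and that h(1,1) pairs with Ustar = 10). Since your roots are only approximate, the matrix may not be singular to within null's default tolerance, in which case the last right singular vector from the SVD is the natural fallback:
Ad = double(subs(A, {Ustar, nu}, {10, h(1,1)}));   % numeric matrix at the (approximately) singular point
Xd = null(Ad);               % numeric nullspace basis; empty if Ad is not quite singular
if isempty(Xd)               % roots were only approximate, so fall back to the SVD
    [~, ~, V] = svd(Ad);
    Xd = V(:, end);          % right singular vector for the smallest singular value
end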