I am afraid I have stumbled across a bug, or at least a confusing feature. It concerns imposing an assumption on a symbolic function. Consider the following code:
clear all
syms theta1(t)
assume(theta1(t),'real')
theta1' % checking if the conjugate transpose correctly reduces to the transpose for a real function
assumptions % gives a list of all assumptions
The outputs are
ans(t) = theta1(t)
ans = in(theta1(t), 'real')
Perfect. That is what we expect. But now consider a minor extension: we add one more symbolic function.
clear all
syms theta1(t)
syms theta2(t)
assume(theta1(t),'real')
assume(theta2(t),'real')
theta1'
theta2'
assumptions
The outputs are incorrect:
ans(t) = conj(theta1(t))
ans(t) = theta2(t)
ans = in(theta2(t), 'real')
It appears that upon imposing the second assumption, the first assumption is reset/removed.
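To double-check that the first assumption really disappears (and is not merely hidden from the global list), one could query the assumptions on theta1 directly. This is only a sketch, and it assumes that assumptions() accepts a symbolic function expression as its input:

clear all
syms theta1(t) theta2(t)
assume(theta1(t),'real')
assume(theta2(t),'real')
assumptions(theta1(t)) % expected: in(theta1(t), 'real'); presumably empty if the assumption was removed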
Note that here the assumptions are imposed on two distinct objects; this is not the same situation as imposing multiple assumptions on one and the same object, which is the case mentioned in the manual (illustrated below for comparison).
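For comparison, my reading of the documentation for the one-object case is that assume replaces any earlier assumption on that same object, whereas assumeAlso accumulates them. The expected outputs in the comments are my interpretation of the docs, not verified output:

clear all
syms x
assume(x,'real')
assume(x,'positive') % assume is documented to replace earlier assumptions on x
assumptions % expected: only the positivity assumption remains

clear all
syms x
assume(x,'real')
assumeAlso(x,'positive') % assumeAlso is documented to add to earlier assumptions on x
assumptions % expected: both the real and the positive assumption on x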
Let us now check that the outcome is correct if, instead of symbolic functions, we impose the assumptions on symbolic variables/expressions:
clear all
syms theta1
syms theta2
assume(theta1,'real')
assume(theta2,'real')
theta1'
theta2'
assumptions
The outputs are
ans = theta1
ans = theta2
ans = [in(theta1, 'real'), in(theta2, 'real')]
Correct. That is why I am tempted to conclude that the behaviour for symbolic functions is a bug.
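In case it helps anyone who runs into the same issue, a possible workaround (only a sketch, and tested on nothing beyond my own release) might be to use assumeAlso for the second function, since assumeAlso is documented to add assumptions rather than replace existing ones:

clear all
syms theta1(t) theta2(t)
assume(theta1(t),'real')
assumeAlso(theta2(t),'real') % add the second assumption without touching the first
theta1'
theta2'
assumptions % hopefully shows both in(theta1(t), 'real') and in(theta2(t), 'real')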