Unfortunately, your update doesn't convince me that the computation is correct. Based on your script, the variable x already holds the answer and is sitting in the workspace, so the second time your script is executed only one iteration is performed, not the whole Newton's method.
If you are allowed to use the Symbolic Math Toolbox to compute the Jacobian, there is a more general approach than a problem-specific script. Here is an old implementation of mine for Newton's method, modified to match your notation. It could easily be extended to a multi-dimensional nonlinear equation system by using varargin input and checking the size of the inline function passed to the following function.
function [s, J] = newton(f, p0, tol, MaxIter)
% NEWTON  Newton's method for a 2-by-1 nonlinear system.
%   f       - inline/anonymous function accepting a 1-by-2 point
%   p0      - initial guess (1-by-2)
%   tol     - tolerance on the infinity norm of the Newton step
%   MaxIter - maximum number of iterations allowed
format long
x = sym('x'); y = sym('y');
F = f([x, y]);
% Compute the Jacobian matrix symbolically
J = jacobian(F, [x, y]);
invJ = inv(J);
s = zeros(MaxIter, 2);
s(1,:) = p0;
dsnorm = inf;
iter = 1;
while dsnorm > tol && iter < MaxIter
    % Newton step: ds = -J(s)^(-1) f(s); double() converts the
    % substituted symbolic expression back to numeric values
    ds = -double(subs(invJ, [x y], s(iter,:))) * f(s(iter,:));
    s(iter+1,:) = s(iter,:) + ds';
    dsnorm = norm(ds, inf);
    iter = iter + 1;
end
s = s(1:iter, :);   % discard unused preallocated rows
end
This function accepts f as an inline function, p0 as the initial guess, tol as the tolerance, and MaxIter as the maximum number of iterations allowed. If the vectorized version of your input is:
>>f=inline('[p(:,1).^2+p(:,2).^2-2.12; ...
p(:,2).^2-p(:,1).^2.*p(:,2)-0.04]','p');
Then
>>s = newton(f,[1 1],0.4e-6,10)
would return:
s =
1.000000000000000 1.000000000000000
1.006666666666667 1.053333333333333
1.006852189012140 1.051784722315402
1.006852720492674 1.051783057117558
1.006852720493521 1.051783057115295
As you can see, the last iterate differs from yours, because what you had was essentially fsolve output. However, fsolve uses the trust-region-dogleg algorithm by default, not Newton's method, so if your output coincides with fsolve's result, your implementation most likely has a problem.
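For comparison, here is a numpy-only sketch of the same Newton iteration in Python. The Jacobian of F(x,y) = [x² + y² − 2.12, y² − x²y − 0.04] is written out by hand here (the MATLAB version above computes it symbolically), and the linear system is solved directly rather than forming the inverse Jacobian:

```python
import numpy as np

def F(p):
    x, y = p
    return np.array([x**2 + y**2 - 2.12,
                     y**2 - x**2*y - 0.04])

def J(p):
    # Jacobian of F, written out by hand for this particular system
    x, y = p
    return np.array([[2*x,    2*y],
                     [-2*x*y, 2*y - x**2]])

def newton(p0, tol=0.4e-6, max_iter=10):
    s = [np.asarray(p0, dtype=float)]
    for _ in range(max_iter):
        # Solve J(s) ds = -F(s) instead of forming inv(J) explicitly
        ds = np.linalg.solve(J(s[-1]), -F(s[-1]))
        s.append(s[-1] + ds)
        if np.linalg.norm(ds, np.inf) < tol:
            break
    return np.array(s)

s = newton([1.0, 1.0])
print(s[-1])   # converges to approximately [1.0068527, 1.0517831]
```

Starting from (1, 1) this reproduces the same iterates as the MATLAB run above, including the first step (1.006666..., 1.053333...).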
Short Answer: The backward Euler method is not A-stable for a nonlinear diffusion equation in which the diffusion coefficient depends "strongly" on the solution $u$ itself.
Long Answer (but rather heuristic): Backward Euler method (BEM) is stable for
$$
\frac{\partial u}{\partial t} = Au + f
$$
when the differential operator $A$ has all eigenvalues negative (or, more generally, all eigenvalues with negative real parts). An intuitive explanation is that the error introduced by the scheme at the current time step does not propagate to the next time step in a magnified fashion. Rather, the error at the $n$-th step shrinks as we march to the next time step, and the errors accumulated over all time steps add up to order $O(h^{k})$.
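This damping is easy to check on the scalar test problem $u' = \lambda u$: backward Euler gives $u_{n+1} = u_n/(1 - h\lambda)$, so the per-step amplification factor $1/(1 - h\lambda)$ has modulus less than one for any step size $h > 0$ whenever $\mathrm{Re}\,\lambda < 0$. A quick numerical check (the particular $\lambda$ and $h$ values below are arbitrary choices):

```python
# Backward Euler on u' = lam*u gives u_{n+1} = u_n / (1 - h*lam).
# For Re(lam) < 0 the amplification factor is < 1 in modulus for
# EVERY step size h > 0 -- this is exactly the A-stability property.
lams = [-0.1, -1.0, -50.0, complex(-2.0, 30.0)]
steps = [1e-3, 0.1, 1.0, 10.0]
for lam in lams:
    for h in steps:
        g = 1.0 / (1.0 - h*lam)     # per-step error amplification
        assert abs(g) < 1.0         # errors shrink, never magnify
print("backward Euler damps errors for all tested (lam, h) pairs")
```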
Your equation is:
$$
\frac{\partial u}{\partial t} = \nabla\cdot(\alpha(u)\nabla u) + f
$$
written in dimensionless form.
If $\alpha$ is a constant, or does not depend on $u$, and has a certain smoothness, and assuming you have a Dirichlet boundary condition, then the eigenvalue problem
$$
A u = \nabla\cdot(\alpha \nabla u) = \lambda u
$$
has only negative eigenvalues $\lambda$, and using BEM will converge nicely.
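The sign claim is easy to verify in one dimension: with homogeneous Dirichlet conditions on $[0,1]$ and constant $\alpha$, the eigenpairs of $Au = \alpha u''$ are $\sin(k\pi x)$ with $\lambda_k = -\alpha (k\pi)^2$, and the standard second-difference approximation of $A$ inherits strictly negative eigenvalues. A sketch (the grid size and $\alpha$ below are arbitrary choices):

```python
import numpy as np

alpha, n = 1.0, 50                    # diffusivity, interior points
h = 1.0 / (n + 1)
# Standard tridiagonal second-difference matrix for alpha * u''
# with homogeneous Dirichlet boundary conditions
A = (np.diag(-2.0*np.ones(n)) +
     np.diag(np.ones(n-1), 1) +
     np.diag(np.ones(n-1), -1)) * alpha / h**2
eigs = np.linalg.eigvalsh(A)          # A is symmetric
assert eigs.max() < 0                 # every eigenvalue is negative
# The largest (least negative) eigenvalue approximates
# -alpha*pi^2, matching lambda_1 of the continuous problem
print(eigs.max())
```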
If $\alpha = \alpha(u)$, the operator you have is nonlinear, $A(u)$. Rewriting the equation as:
$$
\frac{\partial u}{\partial t} = [A(u)]u + f = Au + [A(u)-A]u + f
$$
where $Au = \nabla\cdot(\hat{\alpha}(x) \nabla u)$ for some fixed coefficient $\hat{\alpha}(x)$ independent of $u$. If $\|A(u) - A\|$ is large, then BEM won't be stable, whereas an iterative method (relinearizing within each time step) still works.
Lastly, I suggest you try a semi-implicit Runge-Kutta scheme for the time dimension: at time step $t_n$, construct an approximation to the operator $A(u(t_n,x))$ using $A(u(t_{n-1},x))$, $A(u(t_{n-2},x))$, and so on.
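The simplest version of that lagged-coefficient idea is a first-order semi-implicit backward Euler step (rather than a full semi-implicit Runge-Kutta scheme): freeze $\alpha$ at the previous time level, so each step requires only one linear solve. In the sketch below, $\alpha(u) = 1 + u^2$, the 1-D grid, and the initial data are all arbitrary choices for illustration:

```python
import numpy as np

def step(u, dt, h, alpha):
    """One semi-implicit step of u_t = (alpha(u) u_x)_x, Dirichlet BCs."""
    n = u.size
    a = alpha(u)                          # coefficient frozen at level n
    af = 0.5*(a[:-1] + a[1:])             # face values a_{i+1/2}
    L = np.zeros((n, n))                  # frozen diffusion operator
    for i in range(1, n-1):
        L[i, i-1] =  af[i-1] / h**2
        L[i, i]   = -(af[i-1] + af[i]) / h**2
        L[i, i+1] =  af[i] / h**2
    # Backward Euler on the frozen (linear) operator:
    # (I - dt*L) u_new = u  -- one linear solve per time step
    u_new = np.linalg.solve(np.eye(n) - dt*L, u)
    u_new[0] = u_new[-1] = 0.0            # re-impose u = 0 at the ends
    return u_new

n = 41
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.sin(np.pi * x)                     # initial condition
for _ in range(20):
    u = step(u, dt=0.05, h=h, alpha=lambda v: 1.0 + v**2)
assert np.max(np.abs(u)) < 1.0            # diffusion keeps u bounded
```

Because the operator is linear within each step, the stability argument for the linear case applies step by step, at the price of an $O(\Delta t)$ splitting error from the lag.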
Best Answer
Further details of Newton's method for systems of nonlinear equations, including its advantages, disadvantages, and convergence properties, can be looked up in standard numerical analysis references.