[Math] Solve non-linear equations of 3 variables using the Newton-Raphson method, in terms of c, s and q.

algorithms, analysis, approximation, nonlinear system, numerical methods

The three non-linear equations are given by
\begin{equation}
c\left[(6.7\times10^8) + (1.2\times10^8)s + (1-q)(2.6\times10^8)\right] - 0.00114532 = 0
\end{equation}
\begin{equation}
s\left[2.001c + 835(1-q)\right] - 2.001c = 0
\end{equation}
\begin{equation}
q\left[2.73 + (5.98\times10^{10})c\right] - (5.98\times10^{10})c = 0
\end{equation}
Using the Newton-Raphson method, solve these equations for $c$, $s$ and $q$.

=> This question is really difficult for me because I don't know much about the Newton-Raphson method, and these non-linear equations contain 3 variables.

I have tried applying the Newton-Raphson method to each equation:
\begin{equation}
f(c,s,q) = c\left[(6.7\times10^8) + (1.2\times10^8)s + (1-q)(2.6\times10^8)\right] - 0.00114532 = 0
\end{equation}
\begin{equation}
g(c,s,q) = s\left[2.001c + 835(1-q)\right] - 2.001c = 0
\end{equation}
\begin{equation}
h(c,s,q) = q\left[2.73 + (5.98\times10^{10})c\right] - (5.98\times10^{10})c = 0
\end{equation}
Now I guess I need to work out $f'(c,s,q)$, $g'(c,s,q)$ and $h'(c,s,q)$, but I don't know how.

After working out $f'(c,s,q)$, $g'(c,s,q)$ and $h'(c,s,q)$, I think I need to use the Newton-Raphson iteration:

$c_{n+1} = c_n - \frac{f(c,s,q)}{f'(c,s,q)}$

but $f(c,s,q)$ and $f'(c,s,q)$ contain $s$ and $q$.

Similarly,

$s_{n+1} = s_n - \frac{g(c,s,q)}{g'(c,s,q)}$

will have $g(c,s,q)$ and $g'(c,s,q)$ containing $c$ and $q$, and

$q_{n+1} = q_n - \frac{h(c,s,q)}{h'(c,s,q)}$

will have $h(c,s,q)$ and $h'(c,s,q)$ containing $c$.

So I am not sure what to do. Please help me find the values of $c$, $s$ and $q$.

Best Answer

The Newton-Raphson method is based on considering the tangent line. We need some linear algebra to understand and implement the substitute for the tangent line in multiple dimensions.

A nonlinear system of $n$ equations with $n$ unknowns can be written in vector form as $\vec F(\vec x)=\vec 0$. The first-order partial derivatives of $\vec F$ form the Jacobian matrix $J$: put the components of $\vec F$ in a column, then take derivatives with respect to each variable. For example, if $$ \vec F(\vec x) = \begin{pmatrix} x_1^2e^{3x_2}-30 \\ x_1x_2-\sin(x_1+x_2^2) \end{pmatrix} $$ then $$ J = \begin{pmatrix} 2x_1e^{3x_2} & 3x_1^2e^{3x_2} \\ x_2-\cos(x_1+x_2^2) & x_1-2x_2\cos(x_1+x_2^2) \end{pmatrix} $$ The Jacobian matrix provides a linear approximation to $\vec F$: near a point $\vec x_0$ we have $$\vec F(\vec x) \approx \vec F(\vec x_0) + J(\vec x-\vec x_0) \tag1$$ This is the analog of the tangent line.
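To connect this with the question: stack $f$, $g$, $h$ into $\vec F(c,s,q)$ and differentiate each component with respect to $c$, $s$ and $q$ in turn. Each partial derivative here is straightforward because every equation is linear in each variable separately:
$$
J = \begin{pmatrix}
6.7\times10^8 + 1.2\times10^8\,s + (1-q)(2.6\times10^8) & 1.2\times10^8\,c & -2.6\times10^8\,c \\
2.001(s-1) & 2.001\,c + 835(1-q) & -835\,s \\
5.98\times10^{10}(q-1) & 0 & 2.73 + 5.98\times10^{10}\,c
\end{pmatrix}
$$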

Following the idea of the single-variable method, we equate the right-hand side of (1) to $\vec 0$ and solve: $$\vec x - \vec x_0 = -J^{-1} \vec F(\vec x_0) \tag2$$ Note that (2) merely expresses the theoretical approach; in practice we do not invert the matrix $J$. Rather, we let the software (Matlab or whatever) solve the linear system with matrix $J$ and right-hand side $-\vec F(\vec x_0)$, which it can do efficiently.

Having solved the system, you obtain the new point $\vec x$, which takes the role of $\vec x_0$ at the next step of the iteration. Continue until the norm $|\vec x - \vec x_0|$ becomes small... or until the allowed number of iterations runs out (indicating the method failed to converge).
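As a concrete illustration, here is a minimal Python sketch of the whole iteration applied to the system in the question. Two details are my own choices, not part of the original post: the starting guess $(c,s,q)=(10^{-12},0,0)$ (the first equation forces $c$ to be of order $10^{-12}$, since the coefficients multiplying it are of order $10^8$ and the constant is about $10^{-3}$), and the small Gaussian-elimination routine, which stands in for a library linear solver such as Matlab's backslash.

```python
def F(x):
    """The three equations from the question, as a vector F(c, s, q)."""
    c, s, q = x
    return [
        c * (6.7e8 + 1.2e8 * s + (1 - q) * 2.6e8) - 0.00114532,
        s * (2.001 * c + 835 * (1 - q)) - 2.001 * c,
        q * (2.73 + 5.98e10 * c) - 5.98e10 * c,
    ]

def J(x):
    """Jacobian of F: partial derivatives of each equation in c, s, q."""
    c, s, q = x
    return [
        [6.7e8 + 1.2e8 * s + (1 - q) * 2.6e8, 1.2e8 * c, -2.6e8 * c],
        [2.001 * (s - 1), 2.001 * c + 835 * (1 - q), -835 * s],
        [5.98e10 * (q - 1), 0.0, 2.73 + 5.98e10 * c],
    ]

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.
    This replaces inverting J, as discussed above; normally you would
    call a library routine instead."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):  # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def newton(x, tol=1e-12, max_iter=50):
    """Newton iteration: solve J(x) * delta = -F(x), update, repeat."""
    for _ in range(max_iter):
        delta = solve(J(x), [-fi for fi in F(x)])
        x = [xi + di for xi, di in zip(x, delta)]
        if max(abs(d) for d in delta) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Starting guess is an assumption on my part, informed by the scale of c.
root = newton([1e-12, 0.0, 0.0])
```

After convergence, plugging `root` back into `F` should give residuals that are tiny compared with the scale of each equation. Note how badly scaled this system is ($c$ of order $10^{-12}$ against coefficients up to $10^{10}$), which is why a sensible starting guess matters here.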