The regular Newton-Raphson method is initialized with a starting point $x_0$, after which one iterates $x_{n+1}=x_n-\dfrac{f(x_n)}{f'(x_n)}$.
In higher dimensions there is an exact analog. We define
$$F\begin{bmatrix}x\\y\\z\end{bmatrix} = \begin{bmatrix}f_1(x,y,z) \\ f_2(x,y,z) \\ f_3(x,y,z)\end{bmatrix} = \begin{bmatrix}\sqrt{(x-20000)^2+(y-19400)^2+(z-19740)^2}-20000 \\ \sqrt{(x-18700)^2+(y-1800)^2+(z-18500)^2}-19000 \\ \sqrt{(x-18900)^2+(y-17980)^2+(z-20000)^2}-19500 \end{bmatrix}$$
and seek a root, i.e. a point where all three components vanish simultaneously: $F(x,y,z)=\mathbf{0}$.
The derivative of this system is the $3\times 3$ Jacobian given by:
$$J(x,y,z) = \begin{bmatrix} \dfrac{\partial f_1}{\partial x} & \dfrac{\partial f_1}{\partial y} & \dfrac{\partial f_1}{\partial z}\\ \dfrac{\partial f_2}{\partial x} & \dfrac{\partial f_2}{\partial y} & \dfrac{\partial f_2}{\partial z} \\ \dfrac{\partial f_3}{\partial x} & \dfrac{\partial f_3}{\partial y} & \dfrac{\partial f_3}{\partial z}\end{bmatrix}$$
which works out to
$$\begin{bmatrix}
\frac{x-20000}{\sqrt{(x-20000)^2+(y-19400)^2+(z-19740)^2}} & \frac{y-19400}{\sqrt{(x-20000)^2+(y-19400)^2+(z-19740)^2}} & \frac{z-19740}{\sqrt{(x-20000)^2+(y-19400)^2+(z-19740)^2}} \\
\frac{x-18700}{\sqrt{(x-18700)^2+(y-1800)^2+(z-18500)^2}} & \frac{y-1800}{\sqrt{(x-18700)^2+(y-1800)^2+(z-18500)^2}} & \frac{z-18500}{\sqrt{(x-18700)^2+(y-1800)^2+(z-18500)^2}} \\
\frac{x-18900}{\sqrt{(x-18900)^2+(y-17980)^2+(z-20000)^2}} & \frac{y-17980}{\sqrt{(x-18900)^2+(y-17980)^2+(z-20000)^2}} & \frac{z-20000}{\sqrt{(x-18900)^2+(y-17980)^2+(z-20000)^2}} \\
\end{bmatrix}$$
The function $G$ is defined as:
$$G(x) = x - J(x)^{-1}F(x)$$
and the Newton-Raphson method for nonlinear systems is the fixed-point iteration obtained by selecting an initial $x^{(0)}$ and generating, for $k \ge 1$,
$$x^{(k)} = G(x^{(k-1)}) = x^{(k-1)} - J(x^{(k-1)})^{-1}F(x^{(k-1)}).$$
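As a sketch, this general iteration takes only a few lines of Python (NumPy assumed). Rather than forming $J^{-1}$ explicitly, it solves the linear system $J(x)\,y=-F(x)$ at each step, which is cheaper and numerically safer:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson for F(x) = 0 with Jacobian J.
    Each step solves J(x) y = -F(x) and updates x <- x + y."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = np.linalg.solve(J(x), -F(x))
        x = x + y
        if np.linalg.norm(y) < tol:
            break
    return x

# Toy illustration (not the system above): intersect the unit circle with y = x.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1, v[0] - v[1]])
J = lambda v: np.array([[2*v[0], 2*v[1]], [1.0, -1.0]])
print(newton_system(F, J, [1.0, 0.5]))  # converges to (1/sqrt(2), 1/sqrt(2))
```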
We can write this as:
$$\begin{bmatrix}x^{(k)}\\y^{(k)}\\z^{(k)}\end{bmatrix} = \begin{bmatrix}x^{(k-1)}\\y^{(k-1)}\\z^{(k-1)}\end{bmatrix} + \begin{bmatrix}y_1^{(k-1)}\\y_2^{(k-1)}\\y_3^{(k-1)}\end{bmatrix}$$
where:
$$\begin{bmatrix}y_1^{(k-1)}\\y_2^{(k-1)}\\y_3^{(k-1)}\end{bmatrix}= -\left(J\left(x^{(k-1)},y^{(k-1)},z^{(k-1)}\right)\right)^{-1}F\left(x^{(k-1)},y^{(k-1)},z^{(k-1)}\right)$$
Here is a starting vector:
$$x^{(0)} = \begin{bmatrix}x^{(0)}\\y^{(0)}\\z^{(0)}\end{bmatrix} = \begin{bmatrix}1\\1\\1\end{bmatrix}$$
The iterates will be:
- $x^{(0)} = (1,1,1)$
- $x^{(1)} = (17289.9,15734.3,-8508.52)$
- $x^{(2)} = (13614.9,9566.68,1369.85)$
- $x^{(3)} = (16370.9,11019.8,1760.91)$
- $x^{(4)} = (16274.0,10923.6,2010.87)$
- $x^{(5)} = (16276.2,10924.5,2011.53)$
- $x^{(6)} = (16276.2,10924.5,2011.53)$
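The table of iterates can be reproduced with a short NumPy sketch of the same scheme; the sphere centers and radii below are read directly off the definition of $F$ above:

```python
import numpy as np

# Centers and radii taken from the three equations defining F.
centers = np.array([[20000.0, 19400.0, 19740.0],
                    [18700.0,  1800.0, 18500.0],
                    [18900.0, 17980.0, 20000.0]])
radii = np.array([20000.0, 19000.0, 19500.0])

def F(v):
    # Distance from v to each center, minus the corresponding radius.
    return np.linalg.norm(v - centers, axis=1) - radii

def J(v):
    # Each row is the unit vector from a center toward v (as derived above).
    d = np.linalg.norm(v - centers, axis=1)
    return (v - centers) / d[:, None]

x = np.array([1.0, 1.0, 1.0])   # the starting vector x^(0)
for _ in range(10):
    x = x - np.linalg.solve(J(x), F(x))

print(np.round(x, 2))  # ≈ [16276.2, 10924.5, 2011.53]
```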
You should end up with a solution that looks something like:
$$\begin{bmatrix}x\\y\\z\end{bmatrix} = \begin{bmatrix} 16276.2 \\ 10924.5 \\ 2011.53 \end{bmatrix}$$
Of course, for a different starting vector you could get a different solution, or the iteration could fail to converge altogether.
That's one of the main reasons why linear algebra was invented!
First we translate the problem into matrices: if
$$
\mathbf{A}=\begin{bmatrix}
1 & 1 & 1 \\
1 & 1 & 2 \\
1 & 1 & 3
\end{bmatrix}
\qquad
\mathbf{x}=\begin{bmatrix} x \\ y \\ z \end{bmatrix}
\qquad
\mathbf{b}=\begin{bmatrix} 1 \\ 3 \\ -1 \end{bmatrix}
$$
then the system can be rewritten as $\mathbf{A}\mathbf{x}=\mathbf{b}$. This is not a great simplification in itself, but it lets us treat the unknowns as a single object.
A big advance is obtained by interpreting this in terms of linear maps. The matrix $\mathbf{A}$ induces a linear map $f_{\mathbf{A}}\colon\mathbb{R}^3\to\mathbb{R}^3$ defined by
$$
f_{\mathbf{A}}(\mathbf{v})=\mathbf{A}\mathbf{v}
$$
and now solvability of the linear system becomes the question
does the vector $\mathbf{b}$ belong to the image of $f_{\mathbf{A}}$?
The image $\operatorname{Im}(f_{\mathbf{A}})$ is a vector subspace of $\mathbb{R}^3$; if it has dimension $3$, then clearly the system is solvable. But what if the dimension is less than $3$?
This is the “obstruction” for the solvability: when the dimension of the image (the rank of the linear map and of the matrix $\mathbf{A}$) is less than the dimension of the codomain (in your case $3$) the system can be solvable or not, depending on whether $\mathbf{b}$ belongs to the image or not.
There is no general answer that lets you just look at $\mathbf{A}$ and $\mathbf{b}$ and tell whether the system is solvable. Rather, there are efficient techniques that show whether the system has a solution without actually solving it. A very good one is performing elementary row operations, because these correspond to multiplying both sides of the system by an invertible matrix. In the present case, we do
\begin{align}
\left[\begin{array}{ccc|c}
1 & 1 & 1 & 1 \\
1 & 1 & 2 & 3\\
1 & 1 & 3 & -1
\end{array}\right]
&\to
\left[\begin{array}{ccc|c}
1 & 1 & 1 & 1 \\
0 & 0 & 1 & 2\\
0 & 0 & 2 & -2
\end{array}\right]
&&\begin{aligned} R_2&\gets R_2-R_1 \\ R_3&\gets R_3-R_1 \end{aligned}
\\&\to
\left[\begin{array}{ccc|c}
1 & 1 & 1 & 1 \\
0 & 0 & 1 & 2\\
0 & 0 & 0 & -6
\end{array}\right]
&&R_3\gets R_3-2R_2
\end{align}
At this stage we know that the system is not solvable. We also know that the rank of $\mathbf{A}$ is $2$ and even that the image is spanned by the vectors
$$
\begin{bmatrix}1\\1\\1\end{bmatrix}
\qquad
\begin{bmatrix}1\\2\\3\end{bmatrix}
$$
This is easy for the present situation, but the method can be applied to systems of any size, not necessarily with as many equations as unknowns.
The same row elimination shows that if the vector $\mathbf{b}$ had been
$$\begin{bmatrix} 1 \\ 3 \\ 5 \end{bmatrix}$$
then the system would be solvable.
Seen in a different way, the system is solvable if and only if
$$
\mathbf{b}=\alpha\begin{bmatrix}1\\1\\1\end{bmatrix}
+\beta\begin{bmatrix}1\\2\\3\end{bmatrix}
$$
for some $\alpha$ and $\beta$.
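This solvability criterion is easy to check numerically: $\mathbf{b}$ lies in the image of $f_{\mathbf{A}}$ exactly when appending $\mathbf{b}$ as an extra column does not increase the rank. A quick NumPy sketch:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [1.0, 1.0, 3.0]])

def solvable(A, b):
    # Ax = b is solvable iff b lies in the column space (image) of A,
    # i.e. iff appending b does not raise the rank.
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

print(np.linalg.matrix_rank(A))                  # 2
print(solvable(A, np.array([1.0, 3.0, -1.0])))   # False
print(solvable(A, np.array([1.0, 3.0, 5.0])))    # True
```

Note that rank computations in floating point rely on a tolerance; for small integer matrices like this one the answer is unambiguous.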
So you will have $10$ answer pairs for the first equation.
Each of these is obtained by choosing a value of $z_1$, computing its inverse, and then solving $(8z_2 + 2) \equiv 3z_1^{-1} \bmod 11$, which works out to $z_2\equiv 7(3z_1^{-1}-2) \bmod 11$ since $8^{-1}\equiv 7 \bmod 11$.
You know in advance that $z_1\equiv 0 \bmod 11$ is not going to give a solution. Note that $z_2\equiv 0$ can be a solution though; in fact $(z_1,z_2)\equiv (7,0)\bmod 11$ is a solution for the first equation.
So for example with $z_1\equiv 6$, we know $z_1^{-1}\equiv 2$ so $z_2 \equiv 7(3\cdot 2 -2) \equiv 28\equiv 6 \bmod 11$.
Similarly you will have $12$ answer pairs for the second equation. Here of course $z_2\equiv 0 \bmod 13$ cannot be a solution, but this time $z_1\equiv 0$ is feasible and $(z_1,z_2)\equiv (0,4)\bmod 13$ satisfies the requirement.
Combining every solution to the first equation with every solution to the second through the Chinese Remainder Theorem, you will have $120$ answer pairs $\bmod 143$.
An example: combining $(6,6)\bmod 11$ with $(0,4)\bmod 13$ gives $(z_1,z_2)\equiv(39,17)\bmod 143$ as one of the answer pairs.
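For concreteness, here is a small Python sketch. The first congruence is reconstructed as $z_1(8z_2+2)\equiv 3 \pmod{11}$ (an assumption, but one consistent with the derivation and the pairs $(7,0)$ and $(6,6)$ above), and the CRT combination is done explicitly:

```python
# Brute-force the reconstructed first congruence: z1*(8*z2 + 2) ≡ 3 (mod 11).
sols_mod11 = [(z1, z2) for z1 in range(11) for z2 in range(11)
              if (z1 * (8 * z2 + 2)) % 11 == 3]
print(len(sols_mod11))        # 10 answer pairs, one per invertible z1
print((7, 0) in sols_mod11)   # True
print((6, 6) in sols_mod11)   # True

def crt(a, m, b, n):
    # Combine x ≡ a (mod m) and x ≡ b (mod n) for coprime m, n
    # into the unique residue mod m*n (pow(x, -1, m) is the modular inverse).
    return (a * n * pow(n, -1, m) + b * m * pow(m, -1, n)) % (m * n)

# Combine (6, 6) mod 11 with (0, 4) mod 13, coordinate by coordinate.
print(crt(6, 11, 0, 13), crt(6, 11, 4, 13))  # 39 17
```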