[Math] Newton Raphson Method in GPS


I'm writing a paper on GPS and how coordinates are found using triangulation methods. To find the coordinates on a 3D system, the Newton Raphson Method is needed. How would I do this and could an example be given as well?

This is the equation for triangulation:

$$\sqrt{(x-x_i)^2 + (y-y_i)^2 + (z-z_i)^2} - c\cdot {\rm d}T = d_i$$

Four of these equations are needed, one per satellite, where:

  • $(x,y,z)$ are the coordinates of the point needed (unknown).

  • ${\rm d}T$ is the time offset (unknown).

  • $(x_i,y_i,z_i)$ are the coordinates of the $i$-th satellite (known).

  • $d_i$ is the distance from satellite to point (known).

  • $c$ is the speed of light (a known constant).
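Stacking the four equations gives the vector form that Newton's method linearizes. With unknowns $(x,y,z,{\rm d}T)$, the system to solve is

$$f_i(x,y,z,{\rm d}T) = \sqrt{(x-x_i)^2 + (y-y_i)^2 + (z-z_i)^2} - c\cdot {\rm d}T - d_i = 0, \qquad i = 1,2,3,4$$

i.e. ${\bf F} = (f_1,f_2,f_3,f_4) = {\bf 0}$.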

Best Answer

To illustrate the method in its simplest form, let's take the 2D problem without the timing offset as an example. We have the equations

$$\sqrt{(x-x_1)^2+(y-y_1)^2} = d_1$$ $$\sqrt{(x-x_2)^2+(y-y_2)^2} = d_2$$

where $x,y$ are the unknowns. To solve this using Newton's method we rephrase it in the form ${\bf F}(x,y) = 0$ for some function ${\bf F}$. A natural choice is ${\bf F}(x,y) = (f_1(x,y),f_2(x,y))$ where

$$f_1(x,y) = \sqrt{(x-x_1)^2+(y-y_1)^2}-d_1$$ $$f_2(x,y) = \sqrt{(x-x_2)^2+(y-y_2)^2}-d_2$$

Now Newton's method says that given a starting guess $\vec{r}_0 = (x_0,y_0)$ we can find a new solution by iterating the recurrence

$${\bf J}(\vec{r}_n)(\vec{r}_{n+1}-\vec{r}_n) = -{\bf F}(\vec{r}_n)$$

or equivalently

$$\vec{r}_{n+1} = \vec{r}_n - {\bf J}^{-1}(\vec{r}_n){\bf F}(\vec{r}_n)$$

where ${\bf J}$ is the Jacobian matrix (and ${\bf J}^{-1}$ is the inverse Jacobian)

$${\bf J}(x,y) = \left(\matrix{\frac{\partial f_1}{\partial x} & \frac{\partial f_1}{\partial y}\\ \frac{\partial f_2}{\partial x} & \frac{\partial f_2}{\partial y}}\right) = \left(\matrix{\frac{x-x_1}{\sqrt{(x-x_1)^2+(y-y_1)^2}} & \frac{y-y_1}{\sqrt{(x-x_1)^2+(y-y_1)^2}}\\ \frac{x-x_2}{\sqrt{(x-x_2)^2+(y-y_2)^2}} & \frac{y-y_2}{\sqrt{(x-x_2)^2+(y-y_2)^2}}}\right)$$

and $\vec{r}_n = (x_n,y_n)$. As opposed to Newton's method in 1D, we need to solve a linear equation system ($A\vec{x} = \vec{b}$) at every step (unless we can invert the Jacobian analytically) to find the new iterate. To demonstrate the method in action, here is a Mathematica implementation:

(* Desired solution *)
xx = 2.4; yy = 4.2;

(* Constants *)
x1 = 0.5; x2 = 2.4; y1 = 0; y2 = -1.3;
d1 = Sqrt[(xx - x1)^2 + (yy - y1)^2];
d2 = Sqrt[(xx - x2)^2 + (yy - y2)^2];

(* F = 0 function *)
f1[x_, y_] = Sqrt[(x - x1)^2 + (y - y1)^2] - d1;
f2[x_, y_] = Sqrt[(x - x2)^2 + (y - y2)^2] - d2;

(* Jacobian matrix *)
J[x_, y_] = {{D[f1[x, y], x], D[f1[x, y], y]}, {D[f2[x, y], x], D[f2[x, y], y]}};

(* Random first guess *)
rn = {-3.7, 8.9};
Do[
  (* Iterate *)
  rn += -Inverse[J[rn[[1]], rn[[2]]]].{f1[rn[[1]], rn[[2]]], f2[rn[[1]], rn[[2]]]};
  Print["Step ",i," Solution = ", rn];
  , {i, 1, 10}];

Here is a test run. After only $6$ steps we have found the true solution to high accuracy:

Step 1 Solution = $\{9.4199,9.30668\}$

Step 2 Solution = $\{-0.0657397,6.9274\}$

Step 3 Solution = $\{3.95296,4.90711\}$

Step 4 Solution = $\{2.24599,4.40806\}$

Step 5 Solution = $\{2.4086,4.20223\}$

Step 6 Solution = $\{2.4,4.20001\}$

Step 7 Solution = $\{2.4,4.2\}$

I did not add a convergence criterion above. Usually one defines convergence as $|{\bf F}(\vec{r}_n)| < \epsilon$, where $\epsilon$ is some small pre-defined tolerance.
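As a sketch of the same 2D iteration with such a stopping rule (in Python/NumPy rather than Mathematica; the satellite positions and starting guess match the run above), one might write:

```python
import numpy as np

# Known "satellite" positions and noise-free distances to the true point (2.4, 4.2).
sats = np.array([[0.5, 0.0], [2.4, -1.3]])
true_pt = np.array([2.4, 4.2])
d = np.linalg.norm(true_pt - sats, axis=1)

def F(r):
    # Residuals f_i(r) = |r - s_i| - d_i
    return np.linalg.norm(r - sats, axis=1) - d

def J(r):
    # Analytic Jacobian: row i is the unit vector (r - s_i) / |r - s_i|
    diff = r - sats
    return diff / np.linalg.norm(diff, axis=1, keepdims=True)

r = np.array([-3.7, 8.9])              # same starting guess as above
eps = 1e-10
for step in range(50):
    Fr = F(r)
    if np.linalg.norm(Fr) < eps:       # convergence criterion |F(r_n)| < eps
        break
    r = r + np.linalg.solve(J(r), -Fr) # solve J*(r_{n+1}-r_n) = -F, no explicit inverse
```

Note the use of `np.linalg.solve` rather than forming ${\bf J}^{-1}$ explicitly, which is the usual way to take the Newton step in practice.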


To generalize this procedure to your problem, add two more unknowns ($z$ and ${\rm d}T$), add two more component functions to ${\bf F}$ (one per satellite, four in total), and compute the resulting $4\times 4$ Jacobian matrix.
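A minimal sketch of that generalization, again in Python/NumPy with made-up satellite positions and a synthetic true state (not real GPS data, and with units rescaled so that $c = 1$):

```python
import numpy as np

c = 1.0  # "speed of light" in rescaled units (assumption: distances chosen so c = 1)

# Four made-up satellite positions (x_i, y_i, z_i), for illustration only.
sats = np.array([
    [15.0,  2.0, 20.0],
    [-8.0, 12.0, 18.0],
    [ 5.0, -9.0, 22.0],
    [-3.0, -4.0, 19.0],
])

# Synthetic "true" receiver state (x, y, z, dT) used to manufacture the d_i.
true_state = np.array([2.4, 4.2, 0.5, 0.001])
d = np.linalg.norm(true_state[:3] - sats, axis=1) - c * true_state[3]

def F(s):
    # Components f_i = sqrt((x-x_i)^2 + (y-y_i)^2 + (z-z_i)^2) - c*dT - d_i
    return np.linalg.norm(s[:3] - sats, axis=1) - c * s[3] - d

def J(s):
    # 4x4 Jacobian: the first three entries of row i form the unit vector
    # from satellite i toward the receiver; the dT-column is the constant -c.
    diff = s[:3] - sats
    cols_xyz = diff / np.linalg.norm(diff, axis=1, keepdims=True)
    return np.hstack([cols_xyz, -c * np.ones((4, 1))])

s = np.zeros(4)  # starting guess (0, 0, 0, 0)
for _ in range(50):
    Fs = F(s)
    if np.linalg.norm(Fs) < 1e-12:      # convergence criterion |F| < eps
        break
    s = s + np.linalg.solve(J(s), -Fs)  # one Newton step: solve J*(ds) = -F
```

The structure is identical to the 2D case; only the dimension of ${\bf F}$ and ${\bf J}$ changes, plus the extra constant column $\partial f_i/\partial{\rm d}T = -c$.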
