I've come up with an optimal algorithm on my own! This is the most difficult task I've solved in my life, so stay focused.
So: first I'll explain the $d=2$ case and then generalize it.
We want to find the two farthest points in 2D. For each point, let's determine its distance from $(0;0)$: it's $|x_i|+|y_i|$. Given two points, we can sometimes compute the distance between them from their distances to $(0;0)$. For $(1;2)$ and $(6;7)$ those distances are 3 and 13, so the distance between the points is $13-3=10$. BUT for the points $(1;7)$ and $(6;-1)$ it doesn't work. Why? Because the segment joining the first pair looks like / while the second one looks like \ . In the first case it's fine, because the path from $(0;0)$ to the first point and the path from $(0;0)$ to the second point share their beginning, and in the second case they don't have to. How can we fix that? By flipping the X axis, i.e. negating all $x_i$. Then for $(1;7)$ and $(6;-1)$ we have $7-1=6$ and $-1-6=-7$ respectively, so the distance between them is $6-(-7)=13$, which is correct. Knowing that, we can find $min$ and $max$ of $y_i+x_i$, and $min2$ and $max2$ of $y_i-x_i$, in linear time. What can we do with them? Compute $max-min$ and $max2-min2$; the bigger of the two is our result (the largest distance between two points in 2D).
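To make this concrete, here's a minimal Python sketch of the 2D case (my own illustration, assuming the points come as $(x, y)$ pairs; it is not the linked ideone code):

```python
def max_manhattan_2d(points):
    """Largest Manhattan distance between any two (x, y) points, in linear
    time: the answer is the bigger of the spreads of y + x and y - x."""
    plus  = [y + x for x, y in points]   # handles "/"-shaped pairs
    minus = [y - x for x, y in points]   # handles "\"-shaped pairs (x negated)
    return max(max(plus) - min(plus), max(minus) - min(minus))

# The examples from above:
print(max_manhattan_2d([(1, 2), (6, 7), (1, 7), (6, -1)]))  # 13
```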
Now let's generalize this to $d>2$. First, we no longer compute only $y_i+x_i$ and $y_i-x_i$, but sums with all combinations of signs. For example, for the point 1 2 3 4 we have to check the sums $1+2+3+4$, $1+2+3-4$, $1+2-3+4$, $1+2-3-4$, $1-2+3+4$, $1-2+3-4$, $1-2-3+4$ and $1-2-3-4$. Why not $-1+2+3+4$? Because it's just the negation of $1-2-3-4$ (same value, different sign), so the sign of the first coordinate can stay fixed. We keep a max and a min for each such sign combination; there are $2^{d-1}$ maxes and $2^{d-1}$ mins. After reading the points we simply take the maximum of $max_i-min_i$ over $i \in \left[0;2^{d-1}\right)$ (counting from 0). It's worth adding that we enumerate the combinations with binary code: 0000, 0001, 0010, 0011, etc., where a 0 at a position means we negate that coordinate. The complexity is $O(n \cdot d \cdot 2^d)$, because for each point we check all $2^{d-1}$ combinations and for each of them we compute the sum in $\Theta(d)$ time. That's not the complexity I wanted, though! Let's go deeper.
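Here's a short Python sketch of this brute-force version (my own illustration; the author's original code is linked below):

```python
from itertools import product

def max_manhattan(points):
    """Largest Manhattan distance in d dimensions in O(n * d * 2^(d-1)):
    for every sign pattern (the first coordinate's sign stays fixed at +),
    take the spread between the largest and smallest signed sum."""
    d = len(points[0])
    best = 0
    for signs in product((1, -1), repeat=d - 1):
        sums = [p[0] + sum(s * c for s, c in zip(signs, p[1:])) for p in points]
        best = max(best, max(sums) - min(sums))
    return best
```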
Here's the code for the algorithm above: https://ideone.com/smc9dL, with $d \le 16$ (`wb` is `abs` in my language).
We can notice that we don't have to recompute the whole sum every time: if only one position changes, we can take the previous sum and add or subtract the right value. So instead of the binary code we can use Gray code, in which consecutive codes differ in exactly one position. But how do we move from one Gray code to the next? We can use the formula $\text{Gray}(x) = x \oplus \lfloor\frac{x}{2}\rfloor$, which gives the $x$-th Gray code. And how do we get the position at which the value changes? We can compute $\text{Gray}(x) \oplus \text{Gray}(x-1)$ (which is a power of 2: all zeroes with a single 1, like ...0001000...) and find the position of that 1 in $O(d)$ time. But then it has the same complexity as the previous algorithm! Yes, it does, except that these positions can be precomputed once: for each number in the range $\left[1;2^{d-1}\right)$ we spend $O(d)$ time, and later we look the position up in constant time. We also have to remember whether the bit switched from 0 to 1 or from 1 to 0: in the first case we add, in the second we subtract. Each point now costs $O(d + 2^{d-1})$ instead of $O(d \cdot 2^{d-1})$, so the whole algorithm runs in $O\left((n+d) \cdot 2^{d-1}\right)$ time.
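Here's a Python sketch of the Gray-code version (again my own illustration, not the linked code); the flip positions and directions are precomputed once, and each transition then updates the running sum in $O(1)$:

```python
def max_manhattan_gray(points):
    """Gray-code version: O(d * 2^(d-1)) precomputation, then O(1) per
    sign-pattern transition for each point."""
    d = len(points[0])
    half = 1 << (d - 1)
    gray = lambda x: x ^ (x >> 1)

    # Precompute once: which coordinate flips between Gray(x-1) and Gray(x),
    # and whether its bit goes 0 -> 1 (add) or 1 -> 0 (subtract).
    flips = []
    for x in range(1, half):
        diff = gray(x) ^ gray(x - 1)        # a power of two: ...0001000...
        pos = diff.bit_length() - 1         # O(d) here, O(1) at lookup time
        sign = 1 if gray(x) & diff else -1  # the new value of the flipped bit
        flips.append((pos, sign))

    lo = [float('inf')] * half
    hi = [float('-inf')] * half
    for p in points:
        # Gray(0) = 00...0: coordinate 0 is always taken with +, and a 0 bit
        # means "negate", so coordinates 1..d-1 start out negated.
        s = p[0] - sum(p[1:])
        lo[0] = min(lo[0], s); hi[0] = max(hi[0], s)
        for i, (pos, sign) in enumerate(flips, start=1):
            s += 2 * sign * p[pos + 1]      # bit pos controls coordinate pos+1
            lo[i] = min(lo[i], s); hi[i] = max(hi[i], s)

    return max(h - l for l, h in zip(lo, hi))
```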
The final algorithm is here: https://ideone.com/Lhvbac .
Hope I made everything clear.
Since the radius increases at a constant rate relative to the change of the angle $\theta$, the polar equation would be $r = r_1 + \frac{r_2-r_1}{\theta_1 - \theta_0}(\theta - \theta_0)$ where $r_2 > r_1$. To find the distance you need the integral $\int_{\theta_0}^{\theta_1}\sqrt{1+(\frac{\text{d}r}{\text{d}\theta})^2}\,\text{d}\theta$, where $\frac{\text{d}r}{\text{d}\theta}=\frac{r_2-r_1}{\theta_1 - \theta_0}$ is constant. This yields a length of $(\theta_1 - \theta_0)\sqrt{1 + (\frac{r_2-r_1}{\theta_1 - \theta_0})^2}$, or, even simpler, $\sqrt{(\theta_1 - \theta_0)^2 + (r_2-r_1)^2}$.
Doing this with angles, as Jyrki suggested, is cumbersome and difficult to generalize to different dimensions. Here is an answer that's essentially a generalization of WimC's, which also fixes an error in his answer. At the end, I show why this works, since the proof is simple and nice.
The algorithm
Given a distance matrix $D_{ij}$, define $$M_{ij} = \frac {D^2_{1j}+D^2_{i1}-D^2_{ij}} 2 \,.$$ If the dimensionality of the data that generated the distance matrix is not known, it is useful to know that the smallest (Euclidean) dimension in which the points can be embedded is given by the rank $k$ of the matrix $M$. No embedding is possible if $M$ is not positive semi-definite.
The coordinates of the points can now be obtained by eigenvalue decomposition: if we write $M = USU^T$, then the matrix $X = U \sqrt S$ (you can take the square root element by element) gives the positions of the points (each row corresponding to one point). Note that, if the data points can be embedded in $k$-dimensional space, only $k$ columns of $X$ will be non-zero (corresponding to $k$ non-zero eigenvalues of $M$).
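As a concrete sketch, here is the whole procedure in Python/NumPy (my own illustration: indices are 0-based, so point 0 plays the role of point 1 above, and the `1e-9` tolerance is an arbitrary choice):

```python
import numpy as np

def reconstruct_points(D):
    """Recover coordinates (up to rotation and translation) from a
    Euclidean distance matrix D via the eigendecomposition of M."""
    D2 = np.asarray(D, dtype=float) ** 2
    # M[i, j] = (D2[0, j] + D2[i, 0] - D2[i, j]) / 2, with point 0 as origin
    M = (D2[0][None, :] + D2[:, 0][:, None] - D2) / 2
    S, U = np.linalg.eigh(M)                 # M = U diag(S) U^T (M symmetric)
    if S.min() < -1e-9:
        raise ValueError("M not positive semi-definite: no embedding exists")
    S = S.clip(min=0)                        # clean up tiny negative round-off
    X = U * np.sqrt(S)                       # element-wise square root of S
    return X[:, S > 1e-9]                    # keep the k non-zero columns

# Round-trip check on some sample points:
pts = np.array([[0., 0.], [1., 0.], [0., 2.], [3., 4.]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
Y = reconstruct_points(D)
assert np.allclose(np.linalg.norm(Y[:, None] - Y[None, :], axis=-1), D)
```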
Why does this work?
If $D$ comes from distances between points, then there are $\mathbf x_i \in \mathbb R^m$ such that $$D_{ij}^2 = (\mathbf x_i - \mathbf x_j)^2 = \mathbf x_i^2 + \mathbf x_j^2 - 2\mathbf x_i \cdot \mathbf x_j \,.$$ Then the matrix $M$ defined above takes on a particularly nice form: $$M_{ij} = (\mathbf x_i - \mathbf x_1) \cdot (\mathbf x_j - \mathbf x_1) \equiv \sum_{a=1}^m \tilde x_{ia} \tilde x_{ja}\,,$$ where the elements $\tilde x_{ia} = x_{ia} - x_{1a}$ can be assembled into an $n \times m$ matrix $\tilde X$. In matrix form, $$M = \tilde X \tilde X^T \,.$$ Such a matrix is called a Gram matrix. Since the original vectors were given in $m$ dimensions, the rank of $M$ is at most $m$ (assuming $m \le n$).
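A quick numeric check of $M = \tilde X \tilde X^T$, with arbitrary sample points of my choosing:

```python
import numpy as np

pts = np.array([[0., 0.], [3., 0.], [0., 4.]])            # arbitrary points
D2 = ((pts[:, None] - pts[None, :]) ** 2).sum(axis=-1)    # squared distances
M = (D2[0][None, :] + D2[:, 0][:, None] - D2) / 2
Xt = pts - pts[0]                                         # rows are x_i - x_1
assert np.allclose(M, Xt @ Xt.T)                          # M is the Gram matrix
```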
The points we get by the eigenvalue decomposition described above need not exactly match the points that were put into the calculation of the distance matrix. However, they can be obtained from them by a rotation and a translation. This can be proved for example by doing a singular value decomposition of $\tilde X$, and showing that if $\tilde X \tilde X^T = X X^T$ (where $X$ can be obtained from the eigenvalue decomposition, as above, $X = U\sqrt S$), then $X$ must be the same as $\tilde X$ up to an orthogonal transformation.