Since the paper referred to by Hagen Knaf is published by Springer, it may not be available to one and all. The (publicly viewable) MathSciNet reference is: MR355836.
It's a very short paper (7 pages) and the main theorem is:
**Theorem.** A metric space can be embedded in Euclidean $n$-space if and only if the metric space is flat and of dimension less than or equal to $n$.
Clearly, the two terms "flat" and "dimension" need expanding. To define these, Morgan considers simplices in the metric space; that is, an n-simplex is simply an ordered (n+1)-tuple of elements of the metric space. Given such an n-tuple, say $(x_0, \dots, x_n)$, Morgan defines $D(x_0,\dots,x_n)$ to be the determinant of the matrix whose $(i,j)$th entry is
$$
\frac{1}{2}\left(d(x_0,x_i)^2 + d(x_0,x_j)^2 - d(x_i,x_j)^2\right)
$$
A metric space is flat if this determinant is nonnegative for every simplex (it must be allowed to vanish, since any degenerate simplex, e.g. one with a repeated point, gives zero). If the space is flat, its dimension (when this exists) is the largest $n$ for which some $n$-simplex has this determinant strictly positive.
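To make the definitions concrete, here is a small numerical check (my own sketch, not from Morgan's paper; the names `morgan_det` and `euclid` are invented) using points of $\mathbb{R}^2$ with the Euclidean metric: a nondegenerate triangle gives a positive determinant, while collinear points give a degenerate 2-simplex.

```python
import math

import numpy as np

def euclid(a, b):
    return math.dist(a, b)

def morgan_det(simplex, d):
    """D(x_0, ..., x_n): determinant of the n x n matrix whose (i, j)
    entry is (d(x0,xi)^2 + d(x0,xj)^2 - d(xi,xj)^2) / 2, for i, j = 1..n."""
    x0, rest = simplex[0], simplex[1:]
    n = len(rest)
    M = [[0.5 * (d(x0, rest[i]) ** 2 + d(x0, rest[j]) ** 2
                 - d(rest[i], rest[j]) ** 2)
          for j in range(n)] for i in range(n)]
    return float(np.linalg.det(M))

print(morgan_det([(0, 0), (1, 0), (0, 1)], euclid))  # → 1.0
# Collinear points: degenerate simplex, determinant vanishes (up to rounding)
print(morgan_det([(0, 0), (1, 0), (2, 0)], euclid))
```

For Euclidean points this matrix is just the Gram matrix of the vectors $x_i - x_0$, which is why its determinant is zero exactly when the simplex is affinely degenerate; in particular any 3-simplex in the plane gives zero, matching "dimension $\le 2$".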
The argument (on a skim read) is quite cunning. Rather than go for a one-shot embedding, Morgan defines a map into $\mathbb{R}^n$ of the appropriate dimension and then uses that map to define an inner product on $\mathbb{R}^n$ with respect to which the map is an embedding. Standard linear algebra then completes the argument.
Now finite dimension, in this sense, is clearly very strong. It basically says that the metric is controlled by n points. The flatness condition says that those n points embed properly (and presumably rules out the examples in the question and the first answer). But then, that's probably to be expected since embeddability into Euclidean space is similarly a strong condition.
The Gromov–Hausdorff distance is good only for defining a topology; i.e., one should not care about the exact distance between particular spaces. But since you insist, I will answer an easier question which is closely related.
There is a modified distance $d'_{GH}(X,Y)$, defined as the infimum of all numbers $\varepsilon>0$ for which there are maps $f_1\colon X\to Y$ and $f_2\colon Y\to X$ satisfying
$$|f_i(x)-f_i(y)|\ge |x-y|-\varepsilon.$$
This distance $d^\prime_{GH}$ is equivalent to $d_{GH}$, and it is usually easier to compute the value of $d^\prime_{GH}$.
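For finite metric spaces this value can be brute-forced straight from the definition (a sketch with my own function names; note that the two conditions involve $f_1$ and $f_2$ separately, so the minimisation splits into two independent one-sided problems):

```python
import itertools

def one_sided(X, Y, dX, dY):
    """Smallest eps such that some f: X -> Y has
    dY(f(x), f(y)) >= dX(x, y) - eps for all x, y in X."""
    best = float("inf")
    for f in itertools.product(Y, repeat=len(X)):
        g = dict(zip(X, f))
        eps = max((dX(x, y) - dY(g[x], g[y]) for x in X for y in X),
                  default=0.0)
        best = min(best, eps)
    return max(best, 0.0)

def d_GH_prime(X, Y, dX, dY):
    return max(one_sided(X, Y, dX, dY), one_sided(Y, X, dY, dX))

# Two points at distance 1 vs. a single point: f_1 must collapse the pair
print(d_GH_prime([0, 1], [0],
                 lambda a, b: abs(a - b),
                 lambda a, b: 0.0))  # → 1.0
```

This is exponential in the sizes of the spaces, so it is only a check of the definition on toy examples, not a practical algorithm.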
If $ p < q < p^2$ then it is easy to see that
$$ d^\prime_{GH} ( \mathbb Z_{p},\mathbb Z_{q}) = \tfrac{p-1}{p}. $$
Further, if $ p^2 < q < p^3$ then
$$ d^\prime_{GH} ( \mathbb Z_{p},\mathbb Z_{q}) = \tfrac{p^2-1}{p^2}$$
and so on.
Firstly, if you just have pairwise distances rather than coordinates of points in $\mathbb{R}^3$, then you can attempt to embed the mesh into $\mathbb{R}^3$ using Isomap. Under certain conditions, such as if you have a triangulated convex polyhedron, the isometric embedding into $\mathbb{R}^3$ is unique up to isometries of $\mathbb{R}^3$.
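The final step of Isomap is classical multidimensional scaling on a matrix of distances; here is a minimal NumPy sketch of that step alone, assuming you already have the full pairwise-distance matrix (in practice you would use an off-the-shelf Isomap implementation, e.g. scikit-learn's, which also builds the neighbourhood graph and geodesic distances):

```python
import numpy as np

def classical_mds(D, dim=3):
    """Embed a full (n x n) distance matrix D into R^dim, when possible.

    Double-centre the squared distances to recover a Gram matrix,
    then factor it via its top eigenpairs."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J        # Gram matrix of the centred points
    w, V = np.linalg.eigh(B)           # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]    # keep the dim largest
    scale = np.sqrt(np.clip(w[idx], 0.0, None))
    return V[:, idx] * scale           # rows are the embedded points
```

If $D$ really is the distance matrix of points in $\mathbb{R}^3$, the output's pairwise distances reproduce $D$, and the embedding is determined up to a rigid motion.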
After you embed your meshes $X, Y$ as finite labelled subsets of $\mathbb{R}^3$, the problem of finding the optimal scaling and isometry which minimises the error $\sum_i \lVert X_i - Y_i \rVert_2^2$ is called Procrustes analysis. After applying this isometry, you get a canonical (root-mean-square) distance between the sets $X, Y$.
The way that Procrustes analysis usually proceeds is to subtract the mean, then scale to have unit variance, and then use singular value decomposition to find the optimal orthogonal matrix transforming $X$ to match $Y$ as closely as possible. The last part is called the Kabsch algorithm.
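The pipeline just described can be sketched in a few lines of NumPy (my own minimal version; the closed-form scale factor follows the standard least-squares solution, and `procrustes_align` is an invented name):

```python
import numpy as np

def procrustes_align(X, Y):
    """Scale s, rotation R, translation t minimising sum_i ||s R X_i + t - Y_i||^2.

    X, Y: (n, 3) arrays of corresponding (labelled) points."""
    muX, muY = X.mean(axis=0), Y.mean(axis=0)
    X0, Y0 = X - muX, Y - muY                # subtract the means
    # Kabsch step: SVD of the cross-covariance matrix
    U, S, Vt = np.linalg.svd(X0.T @ Y0)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))
    correction = np.array([1.0, 1.0, sign])  # rule out reflections
    R = Vt.T @ np.diag(correction) @ U.T
    s = (S * correction).sum() / (X0 ** 2).sum()
    t = muY - s * R @ muX
    return s, R, t
```

After aligning, the root-mean-square of the residuals $\lVert s\,R X_i + t - Y_i \rVert$ is the canonical distance between the two labelled point sets.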