I posted this answer to a similar question on sci.math; I will transcribe the question and a summary of the solution below. For this problem, we don't need to compute $r$: just set it to $1$.
Least-Squares Conformal Multilinear Regression
Given $\{ P_j : 1 \le j \le m \}$ and $\{ Q_j : 1 \le j \le m \}$, two sets of
points, we want to find a conformal map, defined by a linear map, $M$,
and a vector, $R$, which maps one set of points to the other via
$$
Q = P M + R\tag{1}
$$
where we require that $M M^T = r^2 I$ and that the squared residual
$$
\sum_{j=1}^m\left|P_jM+R-Q_j\right|^2\tag{2}
$$
is minimal. Note that $(1)$ requires that $P$ and $Q$ are row vectors.
Summary of the Method
To find the least squares solution to $P M + R = Q$ for a given set of
$\{ P_j \}$ and $\{ Q_j \}$, under the restriction that the map be conformal,
we first compute the centroids
$$
\overline{P}=\frac1m\sum_{j=1}^mP_j\qquad\text{and}\qquad
\overline{Q}=\frac1m\sum_{j=1}^mQ_j
$$
Next, compute the matrix
$$
\begin{align}
S
&=\sum_{j=1}^m\left(Q_j-\overline{Q}\right)^T\left(P_j-\overline{P}\right)\\
&=\sum_{j=1}^mQ_j^TP_j-m\overline{Q}^T\overline{P}
\end{align}
$$
Let the Singular Value Decomposition of $S$ be
$$
S=UDV^T
$$
Next compute $\{ c_k \}$ with
$$
\begin{align}
c_k
&=\sum_{j=1}^m\left[\left(P_j-\overline{P}\right)V\right]_k\left[\left(Q_j-\overline{Q}\right)U\right]_k\\
&=\sum_{j=1}^m\left[P_jV\right]_k\left[Q_jU\right]_k-m\left[\overline{P}V\right]_k\left[\overline{Q}U\right]_k
\end{align}
$$
and define
$$
a_k = \mathrm{sgn}( c_k )
$$
Let $I_k$ be the matrix with the $(k,k)$ element set to $1$ and all the
other elements set to $0$. Then calculate
$$
E=\sum_{k=1}^na_kI_k
$$
Compute the orthogonal matrix
$$
W=VEU^T
$$
If $\det(W) < 0$ but $\det(W) > 0$ is required, change the sign of the $a_k$
associated with the $c_k$ with the smallest absolute value.
If required, compute $r$ by
$$
r\sum_{j=1}^m\left|P_j-\overline{P}\right|^2=\sum_{j=1}^m\left\langle\left(P_j-\overline{P}\right)W,Q_j-\overline{Q}\right\rangle
$$
or equivalently
$$
r\left(\sum_{j=1}^m\left|P_j\right|^2-m\left|\overline{P}\right|^2\right)
=\sum_{j=1}^m\left\langle P_jW,Q_j\right\rangle-m\left\langle\overline{P}W,\overline{Q}\right\rangle
$$
Finally, we have the desired conformal map $Q = P M + R$ where
$$
M = r W
$$
and
$$
R = \overline{Q} - \overline{P} M
$$
More information, easier computation
Suppose you want to map $\{P_i\}_{i=1}^3$ to $\{Q_i\}_{i=1}^3$, where the pairwise distances among the $P_i$'s equal the corresponding pairwise distances among the $Q_i$'s (so the two triangles are congruent and a rigid map exists). Compute a fourth point by
$$
P_4=P_1+(P_2-P_1)\times(P_3-P_1)
$$
and
$$
Q_4=Q_1+(Q_2-Q_1)\times(Q_3-Q_1)
$$
Then create the matrix $P$ whose columns are $P_2-P_1$, $P_3-P_1$, and $P_4-P_1$.
Also create the matrix $Q$ whose columns are $Q_2-Q_1$, $Q_3-Q_1$, and $Q_4-Q_1$.
Then $x\mapsto QP^{-1}x+(Q_1-QP^{-1}P_1)$ maps the source points to the destination points.
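This three-point construction can be written out directly in NumPy. A minimal sketch, assuming 3D points as length-3 arrays and non-collinear $P_1, P_2, P_3$; the function name `map_from_three_points` is my own.

```python
import numpy as np

def map_from_three_points(P1, P2, P3, Q1, Q2, Q3):
    """Map x -> A x + b sending P_i to Q_i (column-vector convention).

    Assumes the two triangles are congruent, so the map is rigid.
    """
    # Fourth point via the cross product, as in the text
    P4 = P1 + np.cross(P2 - P1, P3 - P1)
    Q4 = Q1 + np.cross(Q2 - Q1, Q3 - Q1)

    # Matrices whose columns are the difference vectors
    P = np.column_stack([P2 - P1, P3 - P1, P4 - P1])
    Q = np.column_stack([Q2 - Q1, Q3 - Q1, Q4 - Q1])

    A = Q @ np.linalg.inv(P)   # A = Q P^{-1}
    b = Q1 - A @ P1            # so that A P1 + b = Q1
    return A, b
```

Because the congruence makes $Q = AP$ for a rotation $A$, the computed $A$ is that rotation and $b$ the translation.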
Let $\mathbf{R}_1$ describe the rotation from global coordinate system to the first local coordinate system, and $\mathbf{R}_2$ describe the rotation from global coordinate system to the second local coordinate system.
Then, $\mathbf{R}_1^{-1}$ describes the rotation from first local coordinate system back to the global coordinate system, and $\mathbf{R}_2^{-1}$ describes the rotation from the second local coordinate system back to the global coordinate system.
When matrix $\mathbf{R}$ is a pure rotation matrix, it is orthonormal, and its inverse is its transpose, $\mathbf{R}^{-1} = \mathbf{R}^T$.
Let's assume right- or post-multiplication between matrices and vectors.
This means that if vector $\vec{v}_g$ in global coordinate system corresponds to vector $\vec{v}_1$ in the first local coordinate system, and to vector $\vec{v}_2$ in the second local coordinate system, then
$$\vec{v}_1 = \mathbf{R}_1 \vec{v}_g \tag{1}\label{EQ1}$$
$$\vec{v}_2 = \mathbf{R}_2 \vec{v}_g \tag{2}\label{EQ2}$$
$$\vec{v}_g = \mathbf{R}_1^{-1} \vec{v}_1 \tag{3}\label{EQ3}$$
$$\vec{v}_g = \mathbf{R}_2^{-1} \vec{v}_2 \tag{4}\label{EQ4}$$
We can substitute $\eqref{EQ4}$ into $\eqref{EQ1}$ to get
$$\vec{v}_1 = \mathbf{R}_1 \mathbf{R}_2^{-1} \vec{v}_2$$
or $\eqref{EQ3}$ into $\eqref{EQ2}$ to get
$$\vec{v}_2 = \mathbf{R}_2 \mathbf{R}_1^{-1} \vec{v}_1$$
This means that to rotate a vector from the first local coordinate system to the second coordinate system, we need to use rotation matrix $\mathbf{R}_{1 \to 2}$,
$$\mathbf{R}_{1 \to 2} = \mathbf{R}_2 \mathbf{R}_1^{-1} \tag{5}\label{EQ5}$$
and from second local coordinate system back to first, $\mathbf{R}_{2 \to 1}$,
$$\mathbf{R}_{2 \to 1} = \mathbf{R}_1 \mathbf{R}_2^{-1} \tag{6}\label{EQ6}$$
Note that due to both matrices being orthonormal,
$$\begin{aligned}
\mathbf{R}_{2 \to 1} &= \mathbf{R}_{1 \to 2}^{-1} \\
\mathbf{R}_{1 \to 2} &= \mathbf{R}_{2 \to 1}^{-1} \\
\end{aligned}$$
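Equations $(5)$ and $(6)$ are easy to verify numerically. A small NumPy sketch; the frames, the helper `rot_z`, and all angles are made up for illustration:

```python
import numpy as np

def rot_z(t):
    """Elementary rotation about the z axis (example frames only)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Arbitrary example: R1 and R2 rotate global vectors into local frames 1 and 2.
R1, R2 = rot_z(0.3), rot_z(1.1)

R_1_to_2 = R2 @ R1.T   # eq. (5), using R1^{-1} = R1^T
R_2_to_1 = R1 @ R2.T   # eq. (6)

v_g = np.array([1.0, 2.0, 3.0])
v_1, v_2 = R1 @ v_g, R2 @ v_g          # the same vector in each local frame
assert np.allclose(R_1_to_2 @ v_1, v_2)
assert np.allclose(R_2_to_1, R_1_to_2.T)  # each is the other's inverse
```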
There is no need to repeat this procedure for every single case. From the above, we can derive a few rules that will help in this kind of situation immensely.
We consider only column vectors, i.e. $\vec{v} = \left [ \begin{matrix} v_1 \\ v_2 \\ v_3 \end{matrix} \right ]$.
We consider only pure rotation and reflection matrices, i.e. orthonormal matrices, for which $\mathbf{R}^{-1} = \mathbf{R}^{T}$; rotation by $\mathbf{R}^{T}$ undoes rotation by $\mathbf{R}$.
We use right- or post-multiplication of vectors for matrix-vector multiplication. This means that to rotate vector $\vec{v}$ by matrix $\mathbf{R}$, we obtain result, or rotated vector $\vec{r}$, using $\vec{r} = \mathbf{R} \vec{v}$.
We can chain rotations by multiplying the rotation matrices. If the matrices are orthonormal, the result is orthonormal as well. The first rotation is the rightmost matrix in the product, and the last rotation is the leftmost matrix in the product.
To calculate the rotation between two local coordinate systems, we first "unwind" the starting coordinate system's rotations by applying the inverse rotation matrices in reverse order, and then apply the rotation matrices, in normal order, from the global coordinate system to the target local coordinate system.
As an example, assume we have two local coordinate systems whose rotations from the global coordinate system are $\mathbf{R}_1$ and $\mathbf{R}_2$, and we have two further local coordinate systems, $\mathbf{R}_{11}$ and $\mathbf{R}_{12}$ on top of (or relative to) $\mathbf{R}_1$, and two further local coordinate systems $\mathbf{R}_{21}$ and $\mathbf{R}_{22}$ on top of (or relative to) $\mathbf{R}_2$.
In other words, vector $\vec{v}_0$ in the global coordinate system corresponds to:
$$\begin{aligned}
\vec{v}_1 &= \mathbf{R}_1 \vec{v}_0 \\
\vec{v}_2 &= \mathbf{R}_2 \vec{v}_0 \\
\vec{v}_{11} &= \mathbf{R}_{11} \mathbf{R}_1 \vec{v}_0 \\
\vec{v}_{12} &= \mathbf{R}_{12} \mathbf{R}_1 \vec{v}_0 \\
\vec{v}_{21} &= \mathbf{R}_{21} \mathbf{R}_2 \vec{v}_0 \\
\vec{v}_{22} &= \mathbf{R}_{22} \mathbf{R}_2 \vec{v}_0 \\
\end{aligned}$$
then
$$\begin{aligned}
\mathbf{R}_{1 \to 2} &= \mathbf{R}_2 \mathbf{R}_1^{T} \\
\mathbf{R}_{22 \to 2} &= \mathbf{R}_{22}^{T} \\
\mathbf{R}_{21 \to 12} &= \mathbf{R}_{12} \mathbf{R}_{1} \mathbf{R}_2^{T} \mathbf{R}_{21}^{T} \\
\end{aligned}$$
and so on. Simples!
Note that we could write $\mathbf{R}_{22 \to 2} = \mathbf{R}_2 \mathbf{R}_2^{T} \mathbf{R}_{22}^{T}$, but since $\mathbf{R} \mathbf{R}^{T} = \mathbf{R}^{T} \mathbf{R} = \mathbf{I}$ (identity matrix; "no change") for orthonormal matrices, we can omit the identity or "no change" part, and just use $\mathbf{R}_{22 \to 2} = \mathbf{R}_{22}^{T}$.
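As a sanity check on the chaining rule, here is a small NumPy sketch of the $\mathbf{R}_{21 \to 12}$ case. The elementary rotations `rot_x`/`rot_z` and all angles are made up for illustration:

```python
import numpy as np

def rot_x(t):
    """Elementary rotation about the x axis (for illustration only)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(t):
    """Elementary rotation about the z axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Arbitrary example frames: R12 sits on top of R1, R21 on top of R2.
R1, R2 = rot_x(0.4), rot_z(-0.7)
R12, R21 = rot_z(0.2), rot_x(1.3)

v0 = np.array([0.5, -1.0, 2.0])   # a vector in the global frame
v12 = R12 @ R1 @ v0               # the same vector seen from frame 12
v21 = R21 @ R2 @ v0               # ... and from frame 21

# Unwind frame 21 back to global, then wind up into frame 12.
R_21_to_12 = R12 @ R1 @ R2.T @ R21.T
assert np.allclose(R_21_to_12 @ v21, v12)
```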
Best Answer
With
$$
P_1 = \begin{bmatrix} \mathbf{R}_1 & \mathbf{t}_1\\ \mathbf{0} & 1 \end{bmatrix},
\qquad
P_2 = \begin{bmatrix} \mathbf{R}_2 & \mathbf{t}_2\\ \mathbf{0} & 1 \end{bmatrix},
$$
we have
\begin{align}
P_2 = P_2 P^{-1}_1 P_1
= P_2 \begin{bmatrix} \mathbf{R}^{\mathrm{T}}_1 & -\mathbf{R}^{\mathrm{T}}_1\mathbf{t}_1\\ \mathbf{0} & 1 \end{bmatrix} P_1
= \begin{bmatrix} \mathbf{R}_2\mathbf{R}^{\mathrm{T}}_1 & -\mathbf{R}_2\mathbf{R}^{\mathrm{T}}_1\mathbf{t}_1+\mathbf{t}_2\\ \mathbf{0} & 1 \end{bmatrix} P_1.
\end{align}
Notice that since the $\mathbf{R}_i$ are rotation matrices, $\mathbf{R}^{\mathrm{T}}_i\mathbf{R}_i = \mathbf{I}$.
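The same identity can be checked numerically with $4 \times 4$ homogeneous matrices. A minimal NumPy sketch; the helper functions `pose` and `rot_z` and the example poses are made up for illustration:

```python
import numpy as np

def rot_z(t):
    """Elementary rotation about the z axis (example only)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def pose(R, t):
    """Build the 4x4 homogeneous matrix [[R, t], [0, 1]]."""
    P = np.eye(4)
    P[:3, :3] = R
    P[:3, 3] = t
    return P

# Arbitrary example poses (angles and translations made up).
R1, t1 = rot_z(0.5), np.array([1.0, 0.0, 2.0])
R2, t2 = rot_z(-0.2), np.array([0.0, 3.0, 1.0])
P1, P2 = pose(R1, t1), pose(R2, t2)

# P2 P1^{-1} in closed form, using R^{-1} = R^T for rotations.
P_rel = pose(R2 @ R1.T, -R2 @ R1.T @ t1 + t2)
assert np.allclose(P_rel @ P1, P2)
assert np.allclose(P_rel, P2 @ np.linalg.inv(P1))
```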