For the 3D rotation operation $R^{-1}(R(\omega_0)\,R(\omega))$, how can we compute the derivative with respect to $\omega$?

differential-geometry, matrix-exponential, matrix-calculus, rotations

How can we compute the Jacobian derivative of the function:

$$f(\omega) = R^{-1}\big(R(\omega_0)\, R(\omega)\big)$$

with respect to $\omega$, where $\omega_0 \in \mathbb{R}^3$ is some fixed/constant vector, the function
$$R(\omega) \triangleq \exp \big( [\omega]_{\times} \big) \triangleq \exp
\left(
\begin{bmatrix}
0 & -\omega_z & \omega_y \\
\omega_z & 0 & -\omega_x \\
-\omega_y & \omega_x & 0 \\
\end{bmatrix}
\right)
$$

is the Rodrigues-vector-to-rotation mapping, and $R^{-1}(\cdot)$ is the corresponding inverse function?

For the sake of this question, we are primarily interested in taking the derivative about the point $\omega=0$, since other values of $\omega$ can be absorbed into the constant $\omega_0$ with a little extra effort. So everything should be expressible in terms of $\omega_0$.
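For concreteness, here is a small numerical sketch of the two maps (my own, using NumPy/SciPy; the helper names `skew`, `rodrigues`, and `inv_rodrigues` are not from the question):

```python
import numpy as np
from scipy.linalg import expm
from scipy.spatial.transform import Rotation

def skew(w):
    """Cross-product matrix [w]_x, so that skew(w) @ v == np.cross(w, v)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def rodrigues(w):
    """Rodrigues vector -> rotation matrix: R(w) = exp([w]_x)."""
    return expm(skew(w))

def inv_rodrigues(Rm):
    """Inverse map R^{-1}: rotation matrix -> Rodrigues vector."""
    return Rotation.from_matrix(Rm).as_rotvec()

w = np.array([0.1, -0.4, 0.2])
Rm = rodrigues(w)
assert np.allclose(Rm @ Rm.T, np.eye(3))          # orthogonal
assert np.isclose(np.linalg.det(Rm), 1.0)         # proper rotation
assert np.allclose(inv_rodrigues(Rm), w)          # round trip
```

(The round trip is well defined here because $\|w\| < \pi$, away from the branch of the logarithm.)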


Edit: Using some dirty empirical methods, I was able to get the first few coefficients of the Taylor expansion as:
$$
\frac{d f}{d \omega} = Z^0 + \frac{1}{2!} Z^1 + \frac{1}{2\,(3!)}Z^2 - \frac{1}{6\,(5!)}Z^4 + \frac{1}{6\,(7!)} Z^6 - \cdots
$$

where $Z=[\omega_0]_{\times}$. (Evidently, there are no higher-order odd terms beyond the first.) So this appears to have a similar form to Rodrigues, as:
$$\frac{d f}{d \omega} = I + \frac{1}{2} Z + \beta(\omega_0) Z^2$$
but getting a functional form for $\beta$ seems nontrivial. (I'd venture a guess that $\beta$ is purely a function of $\|\omega_0\|$.)
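That guess is supported by the Cayley–Hamilton identity for a skew-symmetric matrix, $Z^3 = -\|\omega_0\|^2 Z$, which folds every even power of $Z$ onto $Z^2$ (and every odd power onto $Z$), so the series coefficients above combine into a scalar function of $\|\omega_0\|$ multiplying $Z^2$. A quick numerical sanity check of the identity (my own sketch, not from the question):

```python
import numpy as np

def skew(w):
    """Cross-product matrix [w]_x."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

w0 = np.array([0.3, -0.2, 0.5])
Z = skew(w0)
th2 = w0 @ w0  # ||w0||^2

# Cayley-Hamilton for skew(w0): Z^3 = -||w0||^2 Z,
# hence Z^4 = -||w0||^2 Z^2 and Z^6 = +||w0||^4 Z^2.
assert np.allclose(Z @ Z @ Z, -th2 * Z)
assert np.allclose(np.linalg.matrix_power(Z, 4), -th2 * (Z @ Z))
assert np.allclose(np.linalg.matrix_power(Z, 6), th2**2 * (Z @ Z))
```

Folding the stated coefficients through this identity gives $\beta(\theta) = \tfrac{1}{12} + \tfrac{\theta^2}{720} + \tfrac{\theta^4}{30240} + \dots$ with $\theta = \|\omega_0\|$, consistent with the guess.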

Best Answer

To begin with, let's get rid of the logarithm map, and hit both sides with an arbitrary test vector $\mathbf{v}$: $$R_{f(\omega)}\mathbf{v} = R_{\omega_0} R_{\omega}\mathbf{v}.$$

Now we can differentiate both sides, making use of the formulas (derived using geometric arguments) in Gallego and Yezzi, A Compact Formula for the Derivative of a 3-D Rotation in Exponential Coordinates, Journal of Mathematical Imaging and Vision, Volume 51 Issue 3, March 2015, 378--384:

$$\left[\left(d R_{\omega}\right) \delta \omega\right]\mathbf{v} = -R_{\omega} [\mathbf{v}]_\times T(\omega)\delta \omega,$$ for variation $\delta\omega$ and $$T(\omega) = \begin{cases}I, & \|\omega\| = 0\\ \frac{\omega\omega^T + (R_{-\omega}-I)[\omega]_\times}{\|\omega\|^2}, & \|\omega\| > 0.\end{cases}$$
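The piecewise formula for $T(\omega)$ and the differential identity above can be checked numerically; the sketch below (my own, using NumPy/SciPy) compares the right-hand side against a central finite difference of $R_{\omega}\mathbf{v}$:

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """Cross-product matrix [w]_x."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def R(w):
    """Rodrigues vector -> rotation matrix."""
    return expm(skew(w))

def T(w):
    """Gallego-Yezzi matrix T(w); note T(0) = I."""
    th2 = w @ w
    if th2 == 0.0:
        return np.eye(3)
    return (np.outer(w, w) + (R(-w) - np.eye(3)) @ skew(w)) / th2

# Check [(dR_w) dw] v == -R(w) [v]_x T(w) dw via finite differences.
w  = np.array([0.4, -0.1, 0.7])
v  = np.array([1.0,  2.0, -0.5])
dw = np.array([1e-6, -2e-6, 0.5e-6])
lhs = (R(w + dw) - R(w - dw)) @ v / 2.0
rhs = -R(w) @ skew(v) @ T(w) @ dw
assert np.allclose(lhs, rhs, atol=1e-10)
```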

We have $$\left[\left(dR_{f(\omega)}\right) df(\omega)\,\delta\omega\right] \mathbf{v} = -R_{\omega_0} R_{\omega} [\mathbf{v}]_{\times} T(\omega)\, \delta \omega,$$ and applying the same formula to the left-hand side, $$-R_{f(\omega)} \left[\mathbf{v}\right]_{\times} T[f(\omega)]\, df(\omega)\, \delta \omega = -R_{\omega_0} R_{\omega} [\mathbf{v}]_{\times} T(\omega)\, \delta \omega.$$ The rotations at the beginning cancel, since $R_{f(\omega)} = R_{\omega_0} R_{\omega}$, and since the variation $\delta\omega$ and the test vector $\mathbf{v}$ are arbitrary, we must have equality of matrices $$df(\omega) = T[f(\omega)]^{-1}\,T(\omega),$$ with the special case $$df(0) = T(\omega_0)^{-1},$$ because $f(0) = \omega_0$ and $T(0) = I$.
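The closed form $df(0) = T(\omega_0)^{-1}$ can be verified against a finite-difference Jacobian of $f$, and against the truncated empirical series from the question. A numerical sketch (my own; NumPy/SciPy, with the log map taken via `scipy.spatial.transform.Rotation`):

```python
import numpy as np
from scipy.linalg import expm
from scipy.spatial.transform import Rotation

def skew(w):
    """Cross-product matrix [w]_x."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def R(w):
    return expm(skew(w))

def T(w):
    th2 = w @ w
    if th2 == 0.0:
        return np.eye(3)
    return (np.outer(w, w) + (R(-w) - np.eye(3)) @ skew(w)) / th2

def f(w, w0):
    """f(w) = R^{-1}(R(w0) R(w)) as a Rodrigues vector."""
    return Rotation.from_matrix(R(w0) @ R(w)).as_rotvec()

w0 = np.array([0.3, -0.2, 0.5])

# Central finite-difference Jacobian of f at w = 0.
eps = 1e-6
J = np.column_stack([
    (f(eps * e, w0) - f(-eps * e, w0)) / (2 * eps)
    for e in np.eye(3)
])
assert np.allclose(J, np.linalg.inv(T(w0)), atol=1e-6)

# It also matches the truncated series found empirically in the question.
Z = skew(w0)
series = (np.eye(3) + Z / 2 + (Z @ Z) / 12
          - np.linalg.matrix_power(Z, 4) / 720
          + np.linalg.matrix_power(Z, 6) / 30240)
assert np.allclose(series, np.linalg.inv(T(w0)), atol=1e-6)
```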