I just saw an engineering paper which claims that $$\frac{\partial \hat{\mathbf{x}}}{\partial \hat{\mathbf{x}}} \stackrel{?}{=} -S(\hat{\mathbf{x}})^2 = I - \hat{\mathbf{x}}\hat{\mathbf{x}}^T$$ where $\hat{\mathbf{x}}$ is a unit vector, $S(\hat{\mathbf{x}})$ is the skew-symmetric matrix packing of $\hat{\mathbf{x}}$ for use in the cross product, and I've used the $\stackrel{?}{=}$ symbol to represent the equality I'm calling into question. Is this right? I would have guessed that $$\frac{\partial \hat{\mathbf{x}}}{\partial \hat{\mathbf{x}}} = I,$$ just like it is for an unconstrained vector $\mathbf{x}$, but I'm not sure whether the fact that $\hat{\mathbf{x}}$ is a constrained vector somehow explains the appearance of the $-\hat{\mathbf{x}}\hat{\mathbf{x}}^T$ term. Any confirmation or correction is greatly appreciated.
The derivative of a unit vector with respect to itself
Tags: derivatives, matrix-calculus, multivariable-calculus, projection-matrices
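As a sanity check on the algebra in the question (independent of the derivative claim itself), the matrix identity $-S(\hat{\mathbf{x}})^2 = I - \hat{\mathbf{x}}\hat{\mathbf{x}}^T$ for a unit vector can be verified numerically. A minimal NumPy sketch; the `skew` helper is an illustrative name, not from the paper:

```python
import numpy as np

def skew(v):
    """S(v): the skew-symmetric matrix with skew(v) @ w == np.cross(v, w)."""
    return np.array([
        [0.0,  -v[2],  v[1]],
        [v[2],  0.0,  -v[0]],
        [-v[1], v[0],  0.0],
    ])

rng = np.random.default_rng(42)
xhat = rng.standard_normal(3)
xhat /= np.linalg.norm(xhat)  # make it a unit vector

# S(v) really implements the cross product
w = rng.standard_normal(3)
assert np.allclose(skew(xhat) @ w, np.cross(xhat, w))

# For a unit vector, -S(xhat)^2 equals the projector I - xhat xhat^T
assert np.allclose(-skew(xhat) @ skew(xhat),
                   np.eye(3) - np.outer(xhat, xhat))
```

This uses $S(\mathbf v)^2 = \mathbf v\mathbf v^T - (\mathbf v^T\mathbf v)I$, which reduces to $\hat{\mathbf{x}}\hat{\mathbf{x}}^T - I$ when $\|\hat{\mathbf{x}}\| = 1$.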
Related Solutions
Try a different way: find the expressions of the Cartesian unit vectors in terms of the cylindrical unit vectors. We have to invert this:
$$\left\{ \begin{matrix} \hat\rho=\cos\phi\hat i+\sin\phi\hat j\\ \hat\phi=-\sin\phi\hat i+\cos\phi\hat j\\ \hat k=\hat k\\ \end{matrix} \right.$$
From this:
$$ \begin{matrix} \sin\phi\hat\rho=\sin\phi\cos\phi\hat i+\sin^2\phi\hat j & \cos\phi\hat\rho=\cos^2\phi\hat i+\cos\phi\sin\phi\hat j\\ \cos\phi\hat\phi=-\cos\phi\sin\phi\hat i+\cos^2\phi\hat j & -\sin\phi\hat\phi=\sin^2\phi\hat i-\sin\phi\cos\phi\hat j\\ \end{matrix} $$
we end up with
$$\left\{ \begin{matrix} \hat i=\cos\phi\hat\rho-\sin\phi\hat\phi\\ \hat j=\sin\phi\hat\rho+\cos\phi\hat\phi\\ \hat k=\hat k\\ \end{matrix} \right.$$
Now simply substitute, using $x=\rho\cos\phi$ and $y=\rho\sin\phi$:
$$\mathbf A(x,y,z) = z\,\hat i - 2x\,\hat j + y\,\hat k = z(\cos\phi\hat\rho-\sin\phi\hat\phi)-2\rho\cos\phi(\sin\phi\hat\rho+\cos\phi\hat\phi)+\rho\sin\phi\hat k$$
Rearrange and we are done.
$$\mathbf A(\rho,\phi,z) =(z\cos\phi-\rho\sin(2\phi))\hat\rho-(z\sin\phi+2\rho\cos^2\phi)\hat\phi+\rho\sin\phi\hat k$$
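The converted field can be spot-checked by evaluating the cylindrical components at a sample point and mapping them back to Cartesian through the unit vectors above. A minimal NumPy sketch; the point and variable names are illustrative:

```python
import numpy as np

# Sample Cartesian point and its cylindrical coordinates
x, y, z = 1.3, -0.7, 2.1
rho, phi = np.hypot(x, y), np.arctan2(y, x)

# Original Cartesian field: A = z i - 2x j + y k
A_cart = np.array([z, -2 * x, y])

# Cylindrical components from the final formula
# (note 2*rho*cos(phi)*sin(phi) = rho*sin(2*phi))
A_rho = z * np.cos(phi) - rho * np.sin(2 * phi)
A_phi = -(z * np.sin(phi) + 2 * rho * np.cos(phi) ** 2)
A_z = rho * np.sin(phi)

# Cylindrical unit vectors expressed in Cartesian components
e_rho = np.array([np.cos(phi), np.sin(phi), 0.0])
e_phi = np.array([-np.sin(phi), np.cos(phi), 0.0])
e_z = np.array([0.0, 0.0, 1.0])

# Reassembling the field must reproduce the Cartesian expression
A_back = A_rho * e_rho + A_phi * e_phi + A_z * e_z
assert np.allclose(A_back, A_cart)
```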
The symbol $\delta$ in this context is known as the Kronecker delta. It denotes a function $\mathbb Z\times\mathbb Z\to\{0,1\}$ whose value at $(i,j)$ is written as $\delta_{ij}$. This function is defined by the following rule: $\delta_{ij}=1$ if $i=j$, and $0$ otherwise.
The symbol $\varepsilon$ is known as the Levi-Civita symbol. It denotes the function $\{1,2,3\}\times\{1,2,3\}\times\{1,2,3\}\to\{-1,0,1\}$ given by $$ \varepsilon_{ijk} = \begin{cases} +1 & \text{if } (i,j,k) \text{ is } (1,2,3), (2,3,1), \text{ or } (3,1,2), \\ -1 & \text{if } (i,j,k) \text{ is } (3,2,1), (1,3,2), \text{ or } (2,1,3), \\ 0 & \text{if } i = j, \text{ or } j = k, \text{ or } k = i. \end{cases} $$ Put differently, $\varepsilon_{ijk}$ equals $0$ when there are any repeated entries, equals $1$ for even permutations of $(1,2,3)$, and equals $-1$ for odd permutations of $(1,2,3)$. The Levi-Civita symbol can also be defined more generally for $n$ indices $i_1i_2\dots i_n$, so that $\varepsilon_{i_1i_2\dots i_n}$ equals $0$ when there are any repeated entries, equals $1$ for even permutations of $(1,\dots,n)$, and equals $-1$ for odd permutations.
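The defining rule translates directly into code. A small Python sketch; the `levi_civita` name and the inversion-counting approach to parity are one choice among several:

```python
import itertools

def levi_civita(*idx):
    """Levi-Civita symbol for n indices drawn from 1..n."""
    if len(set(idx)) != len(idx):
        return 0  # any repeated index gives 0
    # Parity of the permutation = parity of the number of inversions
    inversions = sum(
        1
        for a, b in itertools.combinations(range(len(idx)), 2)
        if idx[a] > idx[b]
    )
    return 1 if inversions % 2 == 0 else -1

# Even permutations of (1,2,3) give +1, odd give -1
assert levi_civita(1, 2, 3) == 1
assert levi_civita(2, 3, 1) == 1
assert levi_civita(3, 2, 1) == -1
assert levi_civita(1, 1, 2) == 0
```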
Both of these notations are often used in conjunction with the summation convention, also known as Einstein notation. For instance, if $A$ is a $3\times 3$ matrix, we can express its determinant as $$ \det(A)=\varepsilon_{ijk}a_{1i}a_{2j}a_{3k} \, , $$ where the double appearance of the indices $i,j,k$ indicates that the above is a shorthand for $$ \det(A)=\sum_{i=1}^{3}\sum_{j=1}^{3}\sum_{k=1}^{3}\varepsilon_{ijk}a_{1i}a_{2j}a_{3k} \, . $$
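The determinant formula can be checked numerically by materializing $\varepsilon$ as a $3\times3\times3$ array and letting `np.einsum` carry out the implied triple sum (0-based indices replace the 1-based ones above). A sketch for illustration, not an efficient way to compute determinants:

```python
import numpy as np

# Build eps[i,j,k] for i,j,k in 0..2
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
for i, j, k in [(2, 1, 0), (0, 2, 1), (1, 0, 2)]:
    eps[i, j, k] = -1.0  # odd permutations

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# det(A) = eps_{ijk} a_{1i} a_{2j} a_{3k}: contract eps with the three rows
det = np.einsum("ijk,i,j,k->", eps, A[0], A[1], A[2])
assert np.isclose(det, np.linalg.det(A))
```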
Best Answer
$ \def\l{\lambda} \def\x{{\hat x}} \def\qiq{\quad\implies\quad} \def\c#1{\color{red}{#1}} $Let $\x$ be the direction and $\lambda$ the length of an unconstrained vector $x$.
Calculate the differential of $\x$ as follows $$\eqalign{ \l^2 &= x^Tx \qiq \l\,d\l = x^Tdx \\ \x &= \l^{-1}x \\ d\x &= \c{\l^{-1}dx} - x\l^{-2}d\l \\ &= \l^{-1}I\,dx - \l^{-3}x\l\,d\l \\ &= \l^{-1}(I - \l^{-2}xx^T)\,dx \\ &= \l^{-1}(I - \x\x^T)\,dx \\ }$$ Rearranging the terms yields $$\eqalign{ d\x &= (I - \x\x^T)\,(\c{\l^{-1}dx}) \\ &= (I - \x\x^T)\,(\c{d\x+x\l^{-2}d\l}) \\ &= (I - \x\x^T)\,d\x + (I-\x\x^T)\,\x\l^{-1}d\l \\ &= (I - \x\x^T)\,d\x + (\x-\x\x^T\x)\,\l^{-1}d\l \\ &= (I - \x\x^T)\,d\x + (0)\,\l^{-1}d\l \\ &= (I - \x\x^T)\,d\x \\ \frac{\partial\x}{\partial\x} &= (I - \x\x^T) \\ }$$ which is the desired result.
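The intermediate identity $d\hat x = \lambda^{-1}(I-\hat x\hat x^T)\,dx$ can be verified with finite differences on the normalization map $x \mapsto x/\|x\|$. A minimal NumPy sketch; names are illustrative:

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x)

rng = np.random.default_rng(1)
x = rng.standard_normal(3)
lam = np.linalg.norm(x)
xhat = x / lam

# Analytic Jacobian from the derivation: d(xhat)/dx = (I - xhat xhat^T) / lam
J_analytic = (np.eye(3) - np.outer(xhat, xhat)) / lam

# Central finite differences, column by column
h = 1e-6
J_fd = np.zeros((3, 3))
for j in range(3):
    e = np.zeros(3)
    e[j] = h
    J_fd[:, j] = (normalize(x + e) - normalize(x - e)) / (2 * h)

assert np.allclose(J_fd, J_analytic, atol=1e-8)

# The projector annihilates xhat itself: radial perturbations (pure d-lambda)
# do not change the direction, which is why the d-lambda term vanished above.
assert np.allclose((np.eye(3) - np.outer(xhat, xhat)) @ xhat, 0.0)
```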