Vector derivative of a function of a unit normal vector

differential-geometry, matrix-calculus, multivariable-calculus, vector-analysis, vectors

I am struggling to find the partial vector derivative of a function of a unit normal vector of the form $E=\hat{\mathbf{n}}\cdot\hat{\mathbf{v}} = \hat{\mathbf{n}}^T\hat{\mathbf{v}}$. Let's say we have column vectors $\mathbf{r}_1$, $\mathbf{r}_2$ and $\mathbf{r}_3$, and $\hat{\mathbf{n}} = \frac{(\mathbf{r}_3 - \mathbf{r}_1)\times (\mathbf{r}_2 - \mathbf{r}_1)}{|(\mathbf{r}_3 - \mathbf{r}_1)\times (\mathbf{r}_2 - \mathbf{r}_1)|}$. $\hat{\mathbf{v}}$ is another unit column vector that is not relevant until the end.

The partial derivative with respect to $\mathbf{r}_1$ would be given by $\partial_{\mathbf{r}_1} E = (\partial_{\mathbf{r}_1}\hat{\mathbf{n}})\,\hat{\mathbf{v}}$,
where

$\partial_{\mathbf{r}_1}\hat{\mathbf{n}} = \frac{1}{|(\mathbf{r}_3 - \mathbf{r}_1)\times (\mathbf{r}_2 - \mathbf{r}_1)|}\,\partial_{\mathbf{r}_1}\big[(\mathbf{r}_3 - \mathbf{r}_1)\times(\mathbf{r}_2 - \mathbf{r}_1)\big] + (\mathbf{r}_3 - \mathbf{r}_1)\times(\mathbf{r}_2 - \mathbf{r}_1)\,\partial_{\mathbf{r}_1}\frac{1}{|(\mathbf{r}_3 - \mathbf{r}_1)\times (\mathbf{r}_2 - \mathbf{r}_1)|}$.

By considering the skew-symmetric cross-product matrix, $\mathbf{a} \times \mathbf{b} = [\mathbf{a}]_\times\mathbf{b} = -[\mathbf{b}]_\times\mathbf{a}$, and then, if $\mathbf{a} = \mathbf{a}(\mathbf{r}_1)$ and $\mathbf{b} = \mathbf{b}(\mathbf{r}_1)$,
$\partial_{\mathbf{r}_1}(\mathbf{a}\times\mathbf{b}) = [\mathbf{a}]_\times(\partial_{\mathbf{r}_1}\mathbf{b}) - [\mathbf{b}]_\times(\partial_{\mathbf{r}_1}\mathbf{a})$.
Consequently, $\partial_{\mathbf{r}_1}\big[(\mathbf{r}_3 - \mathbf{r}_1)\times(\mathbf{r}_2 - \mathbf{r}_1)\big] = [\mathbf{r}_3 - \mathbf{r}_2]_\times$. My problem comes later, when finding the derivative of the modulus,

$\partial_{\mathbf{r}_1}\frac{1}{|(\mathbf{r}_3 - \mathbf{r}_1)\times (\mathbf{r}_2 - \mathbf{r}_1)|} = -\frac{(\mathbf{r}_3 - \mathbf{r}_1)\times (\mathbf{r}_2 - \mathbf{r}_1)}{|(\mathbf{r}_3 - \mathbf{r}_1)\times (\mathbf{r}_2 - \mathbf{r}_1)|^3}[\mathbf{r}_3-\mathbf{r}_2]_\times = \frac{(\mathbf{r}_3 - \mathbf{r}_2)\times[(\mathbf{r}_3 - \mathbf{r}_1)\times (\mathbf{r}_2 - \mathbf{r}_1)]^T}{|(\mathbf{r}_3 - \mathbf{r}_1)\times (\mathbf{r}_2 - \mathbf{r}_1)|^3}$

where in the last equality I have transposed the numerator to convert the skew matrix into a cross product. The transpose in the second cross product is kept for clarity. My main problem is in evaluating

$(\mathbf{r}_3 - \mathbf{r}_1)\times(\mathbf{r}_2 - \mathbf{r}_1)\,\partial_{\mathbf{r}_1}\frac{1}{|(\mathbf{r}_3 - \mathbf{r}_1)\times (\mathbf{r}_2 - \mathbf{r}_1)|}$.

I think that the transposition leads to a row vector, so that the previous product results in a matrix. Without the transposition the dimensions do not agree. I find this confusing. Finally, the multiplication by $\hat{\mathbf{v}}$ leads to a vector, as expected from the vector derivative of a scalar.

Can someone clarify the problem or show me where my mistakes are?

Thank you so much.

Best Answer

For ease of typing let's use $\{x,y,z\}$ in place of $\{r_1,r_2,r_3\}$.
And $\{a,b\}$ for the differences: $\;a=(z-x),\;b=(y-x)$.

The vector we wish to analyze is $(a\times b)$.
$$\eqalign{ p &= a\times b &\implies dp = a\times db - b\times da \\ \lambda^2 &= p\cdot p &\implies \lambda\,d\lambda = p\cdot dp, \quad \lambda=\|p\| \\ n &= \lambda^{-1}p &\implies d\lambda=n\cdot dp, \quad {\tt 1}=\|n\| \\ }$$ Dot an arbitrary vector $c$ with $dp$ and recall the triple scalar product rule: $\;c\cdot(a\times b) = (c\times a)\cdot b$ $$\eqalign{ c\cdot dp &= c\cdot(a\times db) - c\cdot(b\times da) \\ &= (c\times a)\cdot db - (c\times b)\cdot da \\ &= (Ca)\cdot db - (Cb)\cdot da \\ }$$ where the uppercase letter $C$ denotes the skew-symmetric cross-product matrix for $c$.
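The triple scalar product rule and the skew-matrix rewriting above are easy to spot-check numerically. A minimal sketch with NumPy (the `skew` helper and the random test vectors are my own additions, not part of the original derivation):

```python
import numpy as np

def skew(c):
    """Cross-product matrix C such that C @ a == np.cross(c, a)."""
    return np.array([[ 0.0, -c[2],  c[1]],
                     [ c[2],  0.0, -c[0]],
                     [-c[1],  c[0],  0.0]])

rng = np.random.default_rng(0)
c, a, b = rng.standard_normal((3, 3))

# Triple scalar product rule: c.(a x b) = (c x a).b
assert np.isclose(c @ np.cross(a, b), np.cross(c, a) @ b)

# The skew matrix reproduces the cross product: C a = c x a,
# which justifies c.dp = (Ca).db - (Cb).da for dp = a x db - b x da.
C = skew(c)
assert np.allclose(C @ a, np.cross(c, a))
```

Both assertions hold for any vectors, which is what lets the arbitrary dotted vector be pulled inside the differentials.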

Now we're ready to tackle the derivative of the ${\cal E}$ function. $$\eqalign{ {\cal E} &= v\cdot n = \lambda^{-1}\;v\cdot p \\ d{\cal E} &= \lambda^{-1}\;v\cdot dp - \lambda^{-2}\,d\lambda\;v\cdot p \\ &= \lambda^{-1}\,(v\cdot dp - {\cal E}\,d\lambda) \\ &= \lambda^{-1}\,(v\cdot dp-{\cal E}\,n\cdot dp) \\ &= \lambda^{-1}\,(v-{\cal E}n)\cdot dp \\ &= \lambda^{-1}\,\big[(V-{\cal E}N)\,a\cdot db - (V-{\cal E}N)\,b\cdot da\big] \\ }$$ where $\{V,N\}$ are the skew-symmetric cross-product matrices for $\{v,n\}$.

If $\{y,z\}$ are held constant we can substitute $\;da=db=-dx\;$ $$\eqalign{ d{\cal E} &= \lambda^{-1}\,(V-{\cal E}N)(b-a)\cdot dx \\ &= \lambda^{-1}\,(v-{\cal E}n)\times(y-z)\cdot dx \\ \frac{\partial{\cal E}}{\partial x} &= \lambda^{-1}\,(v-{\cal E}n)\times(y-z) \\ }$$ Conversely, if we hold $\{x,z\}$ constant, then $\,da=0,\,db=dy$.
While holding $\{x,y\}$ constant means $\;da=dz,\,db=0$. Reading those two cases off the same expression for $d{\cal E}$ yields $$\frac{\partial{\cal E}}{\partial y} = \lambda^{-1}\,(v-{\cal E}n)\times(z-x), \qquad \frac{\partial{\cal E}}{\partial z} = -\lambda^{-1}\,(v-{\cal E}n)\times(y-x).$$
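As a sanity check, the closed-form gradients can be compared against central finite differences. A minimal NumPy sketch (the random test vectors and the step size `h` are my own choices, not from the answer):

```python
import numpy as np

def E(x, y, z, v):
    """E = v . n, with n the unit normal of p = (z - x) x (y - x)."""
    p = np.cross(z - x, y - x)
    return v @ (p / np.linalg.norm(p))

rng = np.random.default_rng(1)
x, y, z, v = rng.standard_normal((4, 3))
v /= np.linalg.norm(v)                    # unit v, as in the question

p = np.cross(z - x, y - x)
lam = np.linalg.norm(p)
n = p / lam
w = (v - E(x, y, z, v) * n) / lam         # w = (v - E n) / lambda

grad_x = np.cross(w, y - z)               # da = db = -dx  ->  dE/dx = w x (y - z)
grad_y = np.cross(w, z - x)               # da = 0, db = dy ->  dE/dy = w x a
grad_z = -np.cross(w, y - x)              # da = dz, db = 0 ->  dE/dz = -w x b

def fd_grad(f, u, h=1e-6):
    """Central finite-difference gradient of the scalar function f at u."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        g[i] = (f(u + e) - f(u - e)) / (2 * h)
    return g

assert np.allclose(grad_x, fd_grad(lambda u: E(u, y, z, v), x), atol=1e-5)
assert np.allclose(grad_y, fd_grad(lambda u: E(x, u, z, v), y), atol=1e-5)
assert np.allclose(grad_z, fd_grad(lambda u: E(x, y, u, v), z), atol=1e-5)
```

The three assertions confirm that each gradient falls out of the single differential $d{\cal E}$ just by choosing which of $da, db$ is active.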