So-called pseudovectors pop up in physics when discussing quantities defined by cross products, such as angular momentum $\mathbf L=\mathbf r\times\mathbf p$. Under the active transformation $\mathbf x \mapsto -\mathbf x$, we claim that such a vector gets mapped to itself because $(-\mathbf r) \times (-\mathbf p) = \mathbf r\times\mathbf p$. (Under the equivalent passive transformation, a pseudovector turns into its negative.) But it seems like we're just pretending that a linear transformation $T$ preserves cross products, so that $T(\mathbf a \times \mathbf b) = T(\mathbf a) \times T(\mathbf b)$, and then when things don't go as expected we label the result a pseudovector. Is there more to the story?
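For concreteness, the claimed invariance is easy to check numerically (a quick sketch with numpy; the particular vectors are arbitrary choices):

```python
import numpy as np

# Arbitrary position and momentum vectors (hypothetical values)
r = np.array([1.0, -2.0, 3.0])
p = np.array([0.5, 4.0, -1.0])

L = np.cross(r, p)        # angular momentum L = r x p
L_inv = np.cross(-r, -p)  # after the active inversion x -> -x

# The two sign flips cancel, so L is unchanged
assert np.allclose(L, L_inv)
```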
Cross Product and Pseudovector – Understanding the Confusion
Tags: linear-algebra, physics
Related Solutions
The Leibniz rule indeed holds even in this case.
One quick way to see this is to recall that the cross product can be equivalently defined by $$ A \times B := (A^{\flat} \wedge B^{\flat})^{\sharp} $$
Assuming that the covariant derivative behaves as expected, we may write the following lines $$ \begin{align} \nabla_{v}(A \times B) &= (\nabla_{v}A^{\flat} \wedge B^{\flat} + A^{\flat} \wedge \nabla_{v}B^{\flat})^{\sharp} \\ &= (\nabla_{v}A^{\flat} \wedge B^{\flat})^{\sharp} + (A^{\flat} \wedge \nabla_{v}B^{\flat})^{\sharp} \\ &= \nabla_{v}A \times B + A \times \nabla_{v}B \end{align} $$
If we look at this calculation more carefully, we will observe that different covariant derivatives are involved! The normal connection acts on normal fields such as $A \times B$ here, while the "intrinsic" covariant derivative acts on the tangential fields $A$ and $B$.
We should have adorned our $\nabla$'s with some marks to distinguish the bundles they act on, but it is quite customary in differential geometry to use the same $\nabla$ for all bundles involved in a calculation, provided the reader knows where the sections are taken from.
In fact, if one diligently writes out all the definitions, it becomes clearly visible that the coordinate presentations of the two operations differ strikingly.
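In flat $\mathbb{R}^3$, where these connections all reduce to the ordinary directional derivative, the Leibniz rule above is easy to sanity-check numerically (a sketch; the component functions are arbitrary choices):

```python
import numpy as np

# Arbitrary smooth curves of vectors (hypothetical component functions)
def A(t):
    return np.array([np.sin(t), t**2, np.exp(t)])

def B(t):
    return np.array([t, np.cos(t), 1.0 / (1.0 + t**2)])

t0, h = 0.7, 1e-6

# Central-difference derivatives of A and B
dA = (A(t0 + h) - A(t0 - h)) / (2 * h)
dB = (B(t0 + h) - B(t0 - h)) / (2 * h)

# d/dt (A x B) computed directly ...
lhs = (np.cross(A(t0 + h), B(t0 + h)) - np.cross(A(t0 - h), B(t0 - h))) / (2 * h)
# ... and via the Leibniz rule
rhs = np.cross(dA, B(t0)) + np.cross(A(t0), dB)

assert np.allclose(lhs, rhs, atol=1e-5)
```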
Let $A$, $B$ be the foci and $P$ the point on the ellipse, so that $\vec r =\vec{AP}$. I'll use $a$ and $b$ to denote the length of the semi-axes, and $c=\sqrt{a^2-b^2}=AB/2$.
Your vector $\vec q$ (blue in the figure below) is nothing but $-\vec{PN}$, where $N$ is the point where the normal at $P$ intersects the major axis.
To see why, notice first of all that those two vectors have the same direction and that the normal at $P$ is the bisector of $\angle APB$, with $\varphi=\angle APN=\pi/2-\theta$ so that $\sin\theta=\cos\varphi$. We can then apply the well-known result for the length of the bisector in a triangle, to obtain: $$\displaystyle PN={b\over a}\sqrt{r(2a-r)}.$$
From the cosine rule applied to triangle $ABP$ we also get $$\displaystyle \cos\varphi={b\over\sqrt{r(2a-r)}},$$ so that $$\displaystyle PN={b^2\over a\cos\varphi}={\ell\over\cos\varphi}$$ and the vectors also have the same length.
As a consequence, $\vec q-\vec r=\vec{NA}$, which indeed lies on the major axis. Finally, from the angle bisector theorem we have $NA:(2c-NA)=r:(2a-r)$, whence one readily obtains: $$ NA={c\over a}r=er, $$ as was to be proved.
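Both lengths are straightforward to confirm numerically. The sketch below uses an arbitrary ellipse and point, together with the standard fact that the normal at $P=(x,y)$ meets the major axis at $(e^2 x,\,0)$:

```python
import numpy as np

a, b = 2.0, 1.0                # semi-axes (arbitrary choice)
c = np.sqrt(a**2 - b**2)
e = c / a                      # eccentricity
A = np.array([-c, 0.0])        # focus A

theta = 1.1                    # arbitrary parameter for P
P = np.array([a * np.cos(theta), b * np.sin(theta)])

# The normal at P = (x, y) has direction (x/a^2, y/b^2) and meets
# the major axis (y = 0) at N = (e^2 * x, 0).
N = np.array([e**2 * P[0], 0.0])

r = np.linalg.norm(P - A)      # r = |AP|
PN = np.linalg.norm(N - P)
NA = np.linalg.norm(A - N)

assert np.isclose(PN, (b / a) * np.sqrt(r * (2 * a - r)))
assert np.isclose(NA, e * r)
```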
Best Answer
In three dimensions, pseudovectors are a simple way to treat bivectors, oriented planar subspaces. True vectors are oriented linear subspaces with a weight (their magnitudes); bivectors are planar instead of linear. The normal vectors to these oriented subspaces are what we usually call pseudovectors, and it is for this reason that various operations (like reflections or inversions through the origin) produce "wrong" results.
Notationally, we deal directly with a bivector by forming a wedge product of vectors. That is, the bivector formed by vectors $a,b$ is $a \wedge b$. Given a linear operator $\underline T$, we define the action of the linear operator on a bivector by the following law:
$$\underline T(a \wedge b) \equiv \underline T(a) \wedge \underline T(b)$$
Let us consider the simple case of $\underline T(a) = -a$ for any $a$. Then the associated bivector transforms as $\underline T(a \wedge b) = (-a) \wedge (-b) = a \wedge b$, as you observe. Doing it this way--by defining the action of a linear operator on a bivector--makes it sensible, rather than saying simply that pseudovectors transform differently from regular vectors. Here, you build the operator according to a specific rule, and the result is deterministic.
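This transformation law can be verified concretely by encoding the bivector $a \wedge b$ as the antisymmetric matrix with entries $a_i b_j - a_j b_i$ (a sketch in numpy; the vectors are arbitrary choices):

```python
import numpy as np

def wedge(a, b):
    # a ^ b encoded as an antisymmetric matrix: (a ^ b)_ij = a_i b_j - a_j b_i
    return np.outer(a, b) - np.outer(b, a)

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])

T = -np.eye(3)             # inversion through the origin

# T acts on the bivector through its action on the factors
W  = wedge(a, b)
TW = wedge(T @ a, T @ b)

assert np.allclose(TW, W)  # a ^ b is invariant under inversion
```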
Note that we can continue to build things with wedges that traditional formulations of vector algebra and calculus tend to gloss over. We can define the action of a linear operator on three vectors wedged together:
$$\underline T(a \wedge b \wedge c) = \underline T(a) \wedge \underline T(b) \wedge \underline T(c)$$
The quantity $a \wedge b \wedge c$ is called a trivector or pseudoscalar. In three dimensions, there is only one linearly independent unit trivector, $\hat x \wedge \hat y \wedge \hat z$. The action of $\underline T$ on this object is very interesting. It happens that
$$\underline T(\hat x \wedge \hat y \wedge \hat z) = \hat x \wedge \hat y \wedge \hat z \, \det \underline T$$
This can be taken as a definition of the determinant, defined in a wholly geometric way.
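In coordinates, the coefficient of $a \wedge b \wedge c$ relative to $\hat x \wedge \hat y \wedge \hat z$ is the scalar triple product $a \cdot (b \times c)$, so this geometric definition of the determinant can be sanity-checked numerically (a sketch; the operator is just a random matrix):

```python
import numpy as np

def triple(a, b, c):
    # Coefficient of a ^ b ^ c relative to x ^ y ^ z: the scalar triple product
    return np.dot(a, np.cross(b, c))

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))  # an arbitrary linear operator

ex, ey, ez = np.eye(3)
# T(x) ^ T(y) ^ T(z) = (det T) x ^ y ^ z
scale = triple(T @ ex, T @ ey, T @ ez)

assert np.isclose(scale, np.linalg.det(T))
```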
Ultimately, though, yes, linear operators should act individually on the vectors that make them up--they should preserve wedge products. Cross products are related to wedges, however, so most of the time it is sensible to apply a linear operator as if it preserved crosses, but there are times (inversions and reflections among them) when it is not.
Edit: about the relations between operators on duals and duals of operators. The Hodge star is much, much better treated in geometric algebra as multiplication by the pseudoscalar. We define $i \equiv \hat x \wedge \hat y \wedge \hat z$ and make sense of expressions like $\star a = i a$ and $\star (a \wedge b) = -i (a \wedge b)$ through the geometric product. The canonical properties of the geometric product are that it is associative, that it distributes over addition, and that the square of any vector is a scalar, $aa = a \cdot a$; for vectors these give the decomposition $ab = a \cdot b + a \wedge b$.
You should be able to show then that $i = \hat x \hat y \hat z$ and that $\star a = i a$ captures the Hodge star operation on a vector.
Now, why bother with this stuff? Because it makes formulas that would be ugly and clumsy with the Hodge star very simple. There exists a simple formula relating the adjoint (in Euclidean space, the transpose) of an operator to its inverse. That is,
$$\overline T^{-1}(a) = [\underline T(i)]^{-1} \underline T(ia)$$
for any multivector $a$, where $\overline T$ is the adjoint operator to $\underline T$. Written with Hodge stars, we would need a term of $(-1)^k$ that would alternate based on grade, and it would all be a royal mess. This formula, however, written in geometric algebra, is entirely simple.
Now then, rotations and reflections all belong to the group of orthogonal linear operators, obeying $\overline T^{-1} = \underline T$, so for rotations and reflections we get instead,
$$\underline T(a) = \frac{1}{\det \underline T} i^{-1} \underline T(ia)$$
or, more simply,
$$(\det \underline T) i \underline T(a) = \underline T(ia)$$
In Hodge star notation, for any vector $a$,
$$(\det \underline T) \star[\underline T(a)] = \underline T(\star a)$$
For a rotation, the determinant is $+1$, and as such, the $i$ just pulls out. Rotating the vector and then finding the dual is the same as rotating the dual. For an inversion, the determinant is $-1$, and you can see how the inversion of the vector gets canceled by the determinant's factor.
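Translated into cross-product language, the relation $(\det \underline T) \star[\underline T(a)] = \underline T(\star a)$ for orthogonal $\underline T$ becomes the familiar identity $\underline T(a) \times \underline T(b) = (\det \underline T)\, \underline T(a \times b)$, which we can check for both a rotation and an inversion (a sketch; the particular vectors and rotation angle are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 0.5])

th = 0.8
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])  # rotation, det = +1
S = -np.eye(3)                                   # inversion, det = -1

for T in (R, S):
    d = np.linalg.det(T)
    # T(a) x T(b) = det(T) * T(a x b) for orthogonal T
    assert np.allclose(np.cross(T @ a, T @ b), d * (T @ np.cross(a, b)))
```

For the rotation the determinant factor is invisible, while for the inversion it supplies exactly the sign flip that distinguishes pseudovectors from true vectors.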