In three dimensions, pseudovectors are a simple way to treat bivectors: oriented planar subspaces. True vectors are oriented linear subspaces with a weight (their magnitudes); bivectors are planar instead of linear. What we usually call pseudovectors are the normal vectors to these oriented planes, and it is this identification that makes various operations (like reflections or inversions through the origin) seem to produce "wrong" results.
Notationally, we deal directly with a bivector by forming a wedge product of vectors. That is, the bivector formed by vectors $a,b$ is $a \wedge b$. Given a linear operator $\underline T$, we define the action of the linear operator on a bivector by the following law:
$$\underline T(a \wedge b) \equiv \underline T(a) \wedge \underline T(b)$$
Let us consider the simple case of $\underline T(a) = -a$ for any $a$, an inversion through the origin. Then the associated bivector transforms as $\underline T(a \wedge b) = (-a) \wedge (-b) = a \wedge b$, as you observe. Doing it this way--by defining the action of a linear operator on a bivector--makes the behavior sensible, rather than simply declaring that pseudovectors transform differently from regular vectors. Here, you build the operator according to a specific rule, and the result is deterministic.
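A quick numerical check of this, using the usual 3d identification of a bivector's components with those of the cross product $a \times b$ (just a sketch with numpy; the variable names are mine):

```python
import numpy as np

# Under inversion T(a) = -a, every vector flips, but the bivector a ^ b
# does not: its components (read off via the cross product) are unchanged.
a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])

T = -np.eye(3)  # inversion through the origin

bivector_before = np.cross(a, b)         # components of a ^ b
bivector_after = np.cross(T @ a, T @ b)  # components of T(a) ^ T(b)

assert np.allclose(T @ a, -a)                        # vectors flip,
assert np.allclose(bivector_after, bivector_before)  # the bivector doesn't
```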
Note that we can continue to build things with wedges that traditional formulations of vector algebra and calculus tend to gloss over. We can define the action of a linear operator on three vectors wedged together.
$$\underline T(a \wedge b \wedge c) = \underline T(a) \wedge \underline T(b) \wedge \underline T(c)$$
The quantity $a \wedge b \wedge c$ is called a trivector or pseudoscalar. In three dimensions, there is only one linearly independent unit trivector, $\hat x \wedge \hat y \wedge \hat z$. The action of $\underline T$ on this object is very interesting. It happens that
$$\underline T(\hat x \wedge \hat y \wedge \hat z) = \hat x \wedge \hat y \wedge \hat z \, \det \underline T$$
This can be taken as a definition of the determinant, defined in a wholly geometric way.
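To see this geometric definition in action, here is a small numpy check (the weight of $\hat x \wedge \hat y \wedge \hat z$ after transformation is the scalar triple product of the transformed basis vectors, which should equal the determinant):

```python
import numpy as np

# det(T) as the factor by which T scales the unit trivector x ^ y ^ z.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

x, y, z = np.eye(3)
# weight of T(x) ^ T(y) ^ T(z), via the scalar triple product
volume = np.dot(T @ x, np.cross(T @ y, T @ z))

assert np.isclose(volume, np.linalg.det(T))  # both equal 7 here
```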
Ultimately, though, yes, linear operators should act individually on the vectors that make up a bivector--they should preserve wedge products. Cross products are related to wedges, however, so most of the time it is sensible to assume a linear operator distributes over crosses as well, but there are cases (inversions and reflections among them) where it does not.
Edit: about the relations between operators on duals and duals of operators. The Hodge star is much, much better treated in geometric algebra as multiplication by the pseudoscalar. We define $i \equiv \hat x \wedge \hat y \wedge \hat z$ and make sense of expressions like $\star a = i a$ and $\star (a \wedge b) = -i (a \wedge b)$ through the geometric product. Here are the canonical properties of the geometric product:
- $\hat u \hat u = 1$ for some unit vector $\hat u$
- $\hat u \hat v = - \hat v \hat u$ for two orthogonal unit vectors $\hat u, \hat v$
- $(ab)c = a(bc)$--that is, associativity--for three vectors $a, b, c$
You should be able to show then that $i = \hat x \hat y \hat z$ and that $\star a = i a$ captures the Hodge star operation on a vector.
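These three properties are enough to implement the whole algebra. Here is a minimal sketch of $Cl(3,0)$ in plain Python (the helper names `blade_product` and `gp` are mine, not standard): a multivector is a dict mapping a basis-blade bitmask (bit 0 for $\hat x$, bit 1 for $\hat y$, bit 2 for $\hat z$) to a coefficient, and the sign comes from counting the swaps needed to merge two blades.

```python
def blade_product(a, b):
    """Geometric product of two basis blades; returns (sign, blade)."""
    s, t = a >> 1, 0
    while s:                        # count generator swaps needed to
        t += bin(s & b).count("1")  # merge the two blades in order
        s >>= 1
    return (-1 if t % 2 else 1), a ^ b  # equal generators square to +1

def gp(u, v):
    """Geometric product of two multivectors (dicts: blade -> coeff)."""
    out = {}
    for a, x in u.items():
        for b, y in v.items():
            sign, blade = blade_product(a, b)
            out[blade] = out.get(blade, 0) + sign * x * y
    return {k: c for k, c in out.items() if c != 0}

X, Y, Z = {0b001: 1}, {0b010: 1}, {0b100: 1}
i = gp(gp(X, Y), Z)  # the unit pseudoscalar

assert i == {0b111: 1}          # i = xhat yhat zhat
assert gp(i, i) == {0b000: -1}  # i squared is -1
assert gp(i, X) == {0b110: 1}   # i xhat = yhat zhat = star(xhat)
```

The last assertion is exactly the claim above: multiplying a vector by $i$ produces its Hodge dual bivector.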
Now, why bother with this stuff? Because it makes formulas that would be ugly and clumsy with the Hodge star very simple. There is a simple formula relating the adjoint (in Euclidean space, the transpose) of an operator to its inverse. That is,
$$\overline T^{-1}(a) = [\underline T(i)]^{-1} \underline T(ia)$$
for any multivector $a$, where $\overline T$ is the adjoint operator to $\underline T$. Written with Hodge stars, we would need a term of $(-1)^k$ that would alternate based on grade, and it would all be a royal mess. This formula, however, written in geometric algebra, is entirely simple.
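For a concrete feel, when applied to the dual of a vector this amounts to the classical cofactor identity $\underline T(a) \times \underline T(b) = (\det \underline T)\, \overline T^{-1}(a \times b)$, which is easy to check numerically (a sketch with numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))  # a generic (almost surely invertible) map
a, b = rng.standard_normal(3), rng.standard_normal(3)

# Cofactor identity: T(a) x T(b) = det(T) * (adjoint-inverse of T)(a x b).
# In Euclidean space the adjoint is the transpose, so its inverse is inv(T).T.
lhs = np.cross(T @ a, T @ b)
rhs = np.linalg.det(T) * (np.linalg.inv(T).T @ np.cross(a, b))

assert np.allclose(lhs, rhs)
```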
Now then, rotations and reflections all belong to the group of orthogonal linear operators, obeying $\overline T^{-1} = \underline T$, so for rotations and reflections we get instead,
$$\underline T(a) = \frac{1}{\det \underline T} i^{-1} \underline T(ia)$$
or, more simply,
$$(\det \underline T) i \underline T(a) = \underline T(ia)$$
In Hodge star notation, for any vector $a$,
$$(\det \underline T) \star[\underline T(a)] = \underline T(\star a)$$
For a rotation, the determinant is $+1$, and as such, the $i$ just pulls out. Rotating the vector and then finding the dual is the same as rotating the dual. For an inversion, the determinant is $-1$, and you can see how the inversion of the vector gets canceled by the determinant's factor.
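On vectors, the boxed relation reads $\underline T(a) \times \underline T(b) = (\det \underline T)\, \underline T(a \times b)$ for orthogonal $\underline T$. A quick numpy check of both cases (a rotation with determinant $+1$ and the inversion $-I$ with determinant $-1$):

```python
import numpy as np

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # rotation about z
P = -np.eye(3)                                        # inversion, det -1

a = np.array([1.0, 2.0, 0.5])
b = np.array([0.0, 1.0, -1.0])

for T in (R, P):
    d = np.linalg.det(T)
    # rotating then taking the cross product = det(T) * (cross then rotating)
    assert np.allclose(np.cross(T @ a, T @ b), d * (T @ np.cross(a, b)))
```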
Let $f=f(x,y,z)$ be a scalar function and $\mathbf F=\langle F_1(x,y,z),F_2(x,y,z),F_3(x,y,z)\rangle$ be a vector field in $\mathbb{R}^3$. Then we can think of $f$ or $\mathbf F$ (as appropriate) as the inputs to the operators grad, div, curl, and even laplacian with the resulting outputs indicated:
\begin{align}
f\longrightarrow &\ \color{blue}{{\LARGE\boxed{\nabla}}} \longrightarrow \langle f_x,f_y,f_z\rangle\\
\mathbf F\longrightarrow &\ \color{blue}{{\LARGE\boxed{\nabla\cdot}}} \longrightarrow {\partial F_1\over \partial x}+{\partial F_2\over \partial y}+{\partial F_3\over \partial z}\\
\mathbf F\longrightarrow &\ \color{blue}{{\LARGE\boxed{\nabla\times}}} \longrightarrow \left\langle {\partial F_3\over \partial y}-{\partial F_2\over \partial z},{\partial F_1\over \partial z}-{\partial F_3\over \partial x},{\partial F_2\over \partial x}-{\partial F_1\over \partial y}\right\rangle\\
f\longrightarrow &\ \color{blue}{{\LARGE\boxed{\nabla\cdot\nabla}}} \longrightarrow f_{xx}+f_{yy}+f_{zz}.\\
\end{align}
Thus $\nabla$ is not a vector, but rather indicates an operator whose action on the input $f$ results in the output $\langle f_x,f_y,f_z\rangle$. Similarly for the others.
If you find the del notation counterproductive, just abandon that notation/nomenclature for this:
\begin{align}
f\longrightarrow &\ \color{blue}{{\LARGE\boxed{\text{grad}}}} \longrightarrow \langle f_x,f_y,f_z\rangle\\
\mathbf F\longrightarrow &\ \color{blue}{{\LARGE\boxed{\text{div}}}} \longrightarrow {\partial F_1\over \partial x}+{\partial F_2\over \partial y}+{\partial F_3\over \partial z}\\
\mathbf F\longrightarrow &\ \color{blue}{{\LARGE\boxed{\text{curl}}}} \longrightarrow \left\langle {\partial F_3\over \partial y}-{\partial F_2\over \partial z},{\partial F_1\over \partial z}-{\partial F_3\over \partial x},{\partial F_2\over \partial x}-{\partial F_1\over \partial y}\right\rangle\\
f\longrightarrow &\ \color{blue}{{\LARGE\boxed{\text{lap}}}} \longrightarrow f_{xx}+f_{yy}+f_{zz}.\\
\end{align}
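The four boxes above translate directly into code. Here is a sketch using sympy (assuming sympy is available; the helper names `grad`, `div`, `curl`, `lap` mirror the boxes):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f):  return [sp.diff(f, v) for v in (x, y, z)]
def div(F):   return sum(sp.diff(Fi, v) for Fi, v in zip(F, (x, y, z)))
def curl(F):
    F1, F2, F3 = F
    return [sp.diff(F3, y) - sp.diff(F2, z),
            sp.diff(F1, z) - sp.diff(F3, x),
            sp.diff(F2, x) - sp.diff(F1, y)]
def lap(f):   return div(grad(f))  # div of grad, matching the last box

f = x**2 * y + sp.sin(z)
F = [x * y, y * z, z * x]

assert grad(f) == [2*x*y, x**2, sp.cos(z)]
assert div(F) == x + y + z
assert curl(grad(f)) == [0, 0, 0]  # curl of a gradient vanishes
assert sp.simplify(lap(f) - (2*y - sp.sin(z))) == 0
```

Note that each operator is just a function from an input ($f$ or $\mathbf F$) to an output, which is exactly the point of the diagrams above.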
The concepts of pseudovectors and pseudoscalars arise from a clumsy attempt to make all geometric objects seem like vectors and scalars when they're not.
In 3d, pseudovectors and pseudoscalars are better understood as bivectors and trivectors instead.
Bivectors and trivectors in the 3d exterior algebra: direct representations of areas and volumes
Exterior algebra takes the vectors and scalars you know and builds from them bivectors and trivectors. The product needed for this is called the wedge product, and its properties are simple: if $a$, $b$, and $c$ are vectors, then
$$a \wedge b = - b \wedge a, \quad (a \wedge b) \wedge c = a \wedge (b \wedge c)$$
So it's anticommutative (like the cross product) but also associative (unlike the cross product). $a \wedge b$ is a bivector, and $a \wedge b \wedge c$ is a trivector.
Bivectors correspond directly to weighted, oriented planes or areas the way vectors correspond to weighted, oriented lines or directions. For instance, $\hat x \wedge \hat y$ corresponds to the $xy$-plane. You can multiply this by a scalar, so for instance, $2 \hat x \wedge \hat y$ also corresponds to the $xy$-plane, but with a different magnitude or weight. This is not different from regular vectors, as $2 \hat x$ corresponds to the $x$-direction just as much as $\hat x$ does. But having different magnitudes allows you to do addition and subtraction like usual.
In fact, bivectors form a vector space of their own, just as vectors form their own vector space. In 3d, there are three basis bivectors--$\hat x \wedge \hat y, \hat y \wedge \hat z, \hat z \wedge \hat x$--and it's for this reason that many people identify bivectors with vectors that transform "differently" than regular vectors.
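One concrete representation (chosen here for illustration, not the only option) stores $a \wedge b$ as the antisymmetric matrix $ab^T - ba^T$; its three independent entries are the components on the basis bivectors, and they coincide with the components of $a \times b$. A numpy sketch:

```python
import numpy as np

def wedge(a, b):
    """Represent the bivector a ^ b as an antisymmetric 3x3 matrix."""
    return np.outer(a, b) - np.outer(b, a)

a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])

B = wedge(a, b)
assert np.allclose(B, -wedge(b, a))  # a ^ b = -(b ^ a)
assert np.allclose(B, -B.T)          # antisymmetric: 3 independent entries

# The entries above the diagonal are the components on y^z, z^x, x^y --
# the same three numbers as the cross product a x b.
components = np.array([B[1, 2], B[2, 0], B[0, 1]])
assert np.allclose(components, np.cross(a, b))
```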
Do bivectors transform differently? Yes. The basic law of how linear maps ("matrices") act on a bivector is like so: if $T$ is a linear map, then
$$T(a \wedge b) \equiv T(a) \wedge T(b)$$
Consider the case $T(a) = -a$, which is an inversion through the origin. All vectors point the opposite direction under inversion, but a bivector doesn't change:
$$T(a \wedge b) = T(a) \wedge T(b) = (-a) \wedge (-b) = a \wedge b$$
So this is part of what's different about bivectors compared to ordinary vectors.
Trivectors, too, are different from ordinary scalars. Every trivector can be written as a scalar multiple of $\epsilon \equiv \hat x \wedge \hat y \wedge \hat z$, and we can interpret this as an oriented volume. If a trivector $\tau = \alpha \epsilon$ with $\alpha > 0$, then $\tau$ is right-handed. If $\alpha < 0$, then $\tau$ is left-handed.
The definition above for linear maps acting on bivectors generalizes to trivectors, and you should see pretty quickly that $\epsilon$ picks up a minus sign on inversion: inversion turns a right-handed volume into a left-handed one, and vice versa. This is called an orientation-reversing transformation. (Though, note that bivectors were not reversed on inversion, and so whether a transformation reverses orientation can depend on what you're talking about.)
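Numerically, the weight of $a \wedge b \wedge c$ is the scalar triple product, so the minus sign under inversion is easy to see with numpy (each of the three vectors flips, contributing $(-1)^3 = -1$):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
c = np.array([1.0, 1.0, 1.0])

vol = np.dot(a, np.cross(b, c))              # right-handed: positive
vol_inverted = np.dot(-a, np.cross(-b, -c))  # after T(v) = -v

assert np.isclose(vol_inverted, -vol)               # orientation reversed,
assert np.allclose(np.cross(-a, -b), np.cross(a, b))  # bivectors unchanged
```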
Relating bivectors and trivectors to common vectors and scalars: the clifford algebra
Exterior algebra doesn't have an operation to convert bivectors and trivectors to vectors and scalars, but the clifford algebra does. It defines a geometric product of vectors that incorporates the dot product. If $u,v, w$ are orthogonal vectors, then
$$uu = u \cdot u, \quad uv = -vu = u \wedge v, \quad (uv)w = u(vw)$$
With this in mind, we can write $\epsilon = \hat x \hat y \hat z$ under the geometric product, and we can use it with the geometric product to turn bivectors into vectors and trivectors into scalars. We can actually write an expression relating the cross product and triple scalar product to our clifford algebra stuff:
$$a \times b = -\epsilon (a \wedge b), \quad a \cdot (b \times c) = -\epsilon(a \wedge b \wedge c)$$
(If you began to suspect $\epsilon$ has to do with the Levi-Civita tensor, you'd be right! The components of $\epsilon$ in some coordinate system are exactly those of the Levi-Civita tensor. And that's why I denote it $\epsilon$.)
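The component statement is easy to verify: contracting the Levi-Civita array with two vectors gives the cross product, and the full contraction with three vectors gives the triple product. A numpy sketch:

```python
import numpy as np

# Build the Levi-Civita symbol as a rank-3 array: +1 on even permutations
# of (0, 1, 2), -1 on odd ones, 0 elsewhere.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([1.0, 0.0, 2.0])

# eps_ijk a_j b_k = (a x b)_i
assert np.allclose(np.einsum('ijk,j,k->i', eps, a, b), np.cross(a, b))
# eps_ijk a_i b_j c_k = a . (b x c)
assert np.isclose(np.einsum('ijk,i,j,k', eps, a, b, c),
                  np.dot(a, np.cross(b, c)))
```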
So, for any bivector, multiplying by $\epsilon$ generates a corresponding vector--in fact, it is the normal vector to the plane represented by the bivector, and the convention adopted here ensures that the normal vector is related to the bivector by the usual right-hand rule.
Now, as we established, a bivector doesn't change under inversions, and so its normal vector doesn't change under inversions. This is what physicists often call "pseudovector" behavior, since all vectors should change under inversions. At first, this may seem paradoxical. The resolution to the paradox is simple: the transformation law of $\epsilon$ has been ignored. Inversion makes $\epsilon$ left-handed, but people kept using the right-hand rule to find the normal vector, even after transformation, which makes the normal vector seem different than regular vectors. The same explanation applies for pseudoscalars.
You can read further about exterior and clifford algebras on the internet in various places. You can do calculus with them either using the common formalism of "differential forms" or the calculus of clifford algebra, "geometric calculus." Bivectors and trivectors arise in many situations in physics, though most conventional texts ignore how quantities could be viewed this way. Angular momentum is a simple example of a bivector quantity, and electric flux through a surface can be viewed as a trivector. Though common problems seldom exploit these properties, they can be handy to remember when doing coordinate transformations.