Why do we say that the tensor product of vector spaces is commutative, but the tensor product of vectors is not?

commutative-algebra, noncommutative-algebra, tensor-products, tensors, terminology

The Wikipedia article on the tensor product says

The tensor product of two vector spaces $V$ and $W$ is commutative in the sense that there is a canonical isomorphism $V \otimes W \cong W \otimes V$ that maps $v \otimes w$ to $w \otimes v$.

On the other hand, even when $V = W$, the tensor product of vectors is not commutative; that is, $v \otimes w \neq w \otimes v$ in general.

I've seen similar usage in other sources.

I'm a little confused by this usage of the word "commutative". It seems to me that the situations for the tensor product of vectors and for the tensor product of vector spaces are parallel: in both cases, the product in one order and the product in the other order are not literally equal, but they are identified by a canonical isomorphism.
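To spell out the parallel I have in mind: the canonical isomorphism the article refers to is the linear extension of the swap map
$$\tau \colon V \otimes W \to W \otimes V, \qquad \tau(v \otimes w) = w \otimes v,$$
and when $V = W$ this same $\tau$ carries the element $v \otimes w$ to the element $w \otimes v$ of $V \otimes V$, even though the two elements are generally unequal.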

So why do we say that the tensor product of vectors is not commutative, but the tensor product of vector spaces is? Doesn't consistency of terminology require that we either say that both operations are commutative, or neither is, depending on whether we are allowing equivalence up to canonical isomorphism?

It's possible that I'm reading too much into the distinction between "tensor product of vectors" and "tensor product of vector spaces" in the Wikipedia article's presentation, and the answer is simply that there's one sense in which both operations are commutative and another sense in which neither operation is.

Best Answer

Upon further reflection, here's how I think about this. Warning: I'm coming at this from a physicist's perspective, and I suspect that mathematicians won't like this approach, but I think it's useful for physicists.

The tensor product $V \otimes W$ is best thought of as taking two different vector spaces $V$ and $W$ as input. They might happen to be isomorphic, but there's no canonical isomorphism between them; choosing one would impose additional algebraic structure beyond that of the tensor product itself.

So you shouldn't think about putting the exact same vector space $V$ into both slots of a tensor product. Instead, it's better to think of them as two different vector spaces - call them $V_A$ and $V_B$ - which might happen to be isomorphic, but can't be identically equal. Plugging the same vector space $V$ into both slots is essentially equivalent to specifying a canonical isomorphism between the two input spaces, but (as mentioned above) this is best thought of as imposing additional structure beyond that of the tensor product itself.

The fundamental "point" of the tensor product is to start from a collection of "buckets" of vectors (each bucket being a different vector space) and take linear combinations of products of exactly one vector from each bucket, while tracking which bucket each vector came from. Instead of tracking this by the order in which the vectors are listed, we could just as easily use subscripts or something: instead of $v \otimes w$, we could write $v_A w_B$. Then any tensor product operators like $X_A Y_B$ act within each bucket, e.g. $(X_A Y_B)(v_A w_B) = (X_A v_A)(Y_B w_B)$. Written this way, it's clear that the order doesn't really matter, as long as we carry the subscript "bucket labels" along with each vector or operator: $v_A w_B = w_B v_A$, but $v_A w_B \neq w_A v_B$; indeed, the latter expression isn't even well-defined without specifying an isomorphism between $V_A$ and $V_B$.
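To make this concrete, here's a minimal NumPy sketch (my own illustration, not anything from the quoted article): `np.einsum` tracks each factor by an explicit index letter rather than by its position in the argument list, so the index letters play exactly the role of the bucket labels $A$ and $B$ above.

```python
import numpy as np

# Two "buckets": v lives in V_A, w lives in V_B (here both happen to be R^2).
v = np.array([1.0, 2.0])   # vector in bucket A
w = np.array([3.0, 5.0])   # vector in bucket B

# v_A w_B: index 'a' labels bucket A, index 'b' labels bucket B.
t1 = np.einsum('a,b->ab', v, w)   # v tensor w
# w_B v_A: the same labeled factors, merely listed in the other order.
t2 = np.einsum('b,a->ab', w, v)   # identical tensor: factor order is irrelevant
# w_A v_B: the labels themselves have been swapped.
t3 = np.einsum('a,b->ab', w, v)   # generally a different tensor

print(np.array_equal(t1, t2))  # True:  v_A w_B == w_B v_A
print(np.array_equal(t1, t3))  # False: v_A w_B != w_A v_B in general

# Operators act within each bucket: (X_A Y_B)(v_A w_B) = (X_A v_A)(Y_B w_B).
X = np.array([[0.0, 1.0], [1.0, 0.0]])  # operator on bucket A
Y = np.array([[2.0, 0.0], [0.0, 3.0]])  # operator on bucket B
lhs = np.einsum('ac,bd,cd->ab', X, Y, t1)  # (X tensor Y) applied to v tensor w
rhs = np.einsum('a,b->ab', X @ v, Y @ w)   # (Xv) tensor (Yw)
print(np.allclose(lhs, rhs))  # True
```

Reordering the labeled factors leaves the tensor unchanged, while swapping which vector carries which label produces a genuinely different tensor; the third expression is only meaningful here because both buckets happen to be $\mathbb{R}^2$, which is exactly the "extra isomorphism" point made above.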

So if you think of each input vector as carrying around its "bucket" index internally, then the tensor product of vectors is trivially commutative, and if you don't, then the notion of interchanging the order of vectors within a tensor product isn't well-defined (without separately specifying an isomorphism between the input vector spaces).
