[Math] When to pick a basis

linear-algebra, matrices

Picking a specific basis is often looked upon with disdain when one makes statements about basis-independent quantities. For example, one might define the trace of an operator to be the sum of the diagonal entries of its matrix, but many mathematicians would never state the definition this way, since it presupposes a choice of basis. For someone working on algorithms, however, this might be a very natural perspective.

What are the advantages and disadvantages of choosing a specific basis? Are there situations where the "right" proof requires choosing a basis? (By "right" I mean the proof with the most clarity and insight; this is subjective, of course.) What about the opposite situation, where the right proof never picks a basis? Or can one argue quite generally that any proof done in one manner can easily be translated to the other setting? Are there examples of results where the only known proof relies on choosing a basis?

Best Answer

One answer to your question is already hinted at in the question itself. At the level of algorithms, basis-independent vector spaces don't really exist. If you want to compute a linear map $L:V \to W$, then you're not really computing anything unless both $V$ and $W$ have a basis. This is a useful reminder in our own area, quantum computation, where it has come up in discussions with one of my students. In that context, a quantum algorithm might compute $L$ as a unitary operator between Hilbert spaces $V$ and $W$. But the Hilbert spaces have to be implemented in qubits, which then determine a computational basis. So again, nothing is being computed unless both Hilbert spaces have distinguished orthonormal bases. The reminder is perhaps more useful quantumly than classically, since serious quantum computers don't yet exist.
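
As a concrete illustration (a minimal sketch of our own, not from the answer itself): take $D = d/dx$ acting on polynomials of degree at most 2. There is nothing to compute until bases are fixed; once we choose the monomial basis $\{1, x, x^2\}$ for both the domain and the codomain, $D$ becomes a matrix and "computing $D$" is a matrix-vector product.

```python
import numpy as np

# D = d/dx on polynomials of degree <= 2, in the monomial basis {1, x, x^2}.
# The columns are the coordinates of D(1) = 0, D(x) = 1, D(x^2) = 2x.
D = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])

# p(x) = 3 + 5x + 7x^2, as a coordinate vector relative to the chosen basis.
p = np.array([3.0, 5.0, 7.0])

print(D @ p)  # [ 5. 14.  0.], i.e. p'(x) = 5 + 14x
```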

On the other hand, when proving a basis-independent theorem, it is almost never enlightening (for me at least) to choose bases for vector spaces. The reason has to do with data typing: It is better to write formulas in such a way that the two sides of an incorrect equation are unlikely to even be of the same type. In algebra, there is a trend towards using bases as sparingly as possible. For instance, there is widespread use of direct sum decompositions and tensor decompositions as a way to have partial bases.
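
The data-typing point can even be made literal in code (a toy sketch; the names `VecV` and `VecW` are our own). If elements of $V$ and $W$ carry distinct types, then a static type checker rejects a composition or equation whose two sides live in different spaces before anything is evaluated:

```python
from typing import NewType

# Distinct types for elements of V and W; at runtime these are plain tuples,
# but a static type checker (e.g. mypy) treats them as incompatible.
VecV = NewType("VecV", tuple)
VecW = NewType("VecW", tuple)

def L(v: VecV) -> VecW:
    """A linear map L: V -> W, typed by its domain and codomain."""
    x, y = v
    return VecW((x + y, x - y, 2 * x))

def M(w: VecW) -> VecV:
    """A linear map M: W -> V."""
    a, b, c = w
    return VecV((a + c, b))

v = VecV((1.0, 2.0))
print(M(L(v)))    # well-typed: V -> W -> V
# L(L(v))         # rejected by the type checker: L expects VecV, got VecW
```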

I think that your question about examples of proofs can't have an explicit answer. No basis-independent result needs a basis, and yet all of them do. If you have a reason to break down and choose a basis, it means that the basis-independent formalism is incomplete. On the other hand, anything that is used to build that formalism (like the definition of determinant and trace and the fact that they are basis-independent) needs a basis.
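
For example (a quick numerical check, not part of the original answer), the basis-independence of trace and determinant is precisely their invariance under conjugation by a change-of-basis matrix $P$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # a linear map written in one basis
P = rng.standard_normal((4, 4))   # a (generically invertible) change of basis
B = np.linalg.inv(P) @ A @ P      # the same map written in the new basis

print(np.trace(A), np.trace(B))            # equal up to rounding error
print(np.linalg.det(A), np.linalg.det(B))  # likewise
```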

There is an exception to the point about algorithms. A symbolic mathematics package can have a category-theoretic layer in which vector spaces don't have bases. In fact, defining objects in categories is a big part of the interest in modern symbolic math packages such as Magma and SAGE.
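
A toy sketch of what such a layer might look like (entirely our own construction, not the actual design of Magma or SAGE): vector spaces are opaque objects, and linear maps are formal arrows supporting only composition. No basis, dimension, or matrix appears anywhere.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Space:
    """An abstract vector space: an object with no dimension and no basis."""
    name: str

@dataclass(frozen=True)
class Map:
    """A formal arrow between spaces; only composition is defined."""
    dom: Space
    cod: Space
    name: str

    def __matmul__(self, other: "Map") -> "Map":
        # Formal composition self . other, checked only at the level of objects.
        assert other.cod == self.dom, "domain/codomain mismatch"
        return Map(other.dom, self.cod, f"{self.name} . {other.name}")

V, W, U = Space("V"), Space("W"), Space("U")
L = Map(V, W, "L")
M = Map(W, U, "M")
print((M @ L).name)  # "M . L", composed without ever choosing a basis
```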
