[Math] Why a subspace of a vector space is useful

linear-algebra, vector-spaces

I'm in a linear algebra class and am having a hard time wrapping my head around what subspaces of a vector space are useful for (among many other things!). My understanding of a vector space is that, simplistically, it defines a coordinate plane that you can plot points on and figure out some useful things about the relationship between vectors/points.

I think what I'm curious about is more the application of some of these ideas. For instance, is a subspace useful for any reason other than that you don't have to look at the entire space something lives in (one way I've been thinking about it: if you want to make a map of a city, you don't necessarily need a map of the whole state it's in), or am I wrong about even that much? Also, even though I feel like I should know this by now: if the subspace is linearly independent, is it still a subspace? If it is, what exactly does that describe, and why is that still useful? If it's not, is it still useful for something?

I think the most difficult part of this for me is that I'm having a hard time visualizing what exactly we're talking about, and I have a hard time thinking that abstractly. I know one or two examples might be too specific and won't generalize the concept enough, but I think if I have some example to relate back to when applying the idea to new things, it might be helpful.

Best Answer

It can help to think of these concepts geometrically.

In the context of our 3d world, subspaces might be thought of as lines or planes (through the origin).
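As a concrete check that a plane through the origin really is a subspace, here is a minimal sketch in plain Python (the plane z = 0 is a hypothetical example chosen for illustration): it contains the zero vector and is closed under addition and scalar multiplication.

```python
# Illustrative example: the plane z = 0 inside 3d space is a subspace.

def in_plane(v):
    """Membership test for the example plane z = 0."""
    return v[2] == 0

def add(u, v):
    """Add two vectors componentwise."""
    return [a + b for a, b in zip(u, v)]

def scale(c, v):
    """Scale a vector by the scalar c."""
    return [c * x for x in v]

u, v = [1, 2, 0], [-3, 5, 0]        # two vectors in the plane
assert in_plane([0, 0, 0])          # contains the zero vector
assert in_plane(add(u, v))          # closed under addition
assert in_plane(scale(7, u))        # closed under scaling
```

A line through the origin passes the same three checks; a line or plane *not* through the origin fails the zero-vector check, which is why subspaces must pass through the origin.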

Why do we care about subspaces? Again, a geometric picture of linear transformations (which is what we use matrices to model) helps with these ideas. A linear transformation (matrix) might leave certain lines invariant: it simply maps the line to the same line, up to a scaling factor. Any nonzero vector on such a line is an eigenvector, and the scale factor by which the line is magnified or shrunk is the eigenvalue.
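To make the invariant-line picture concrete, here is a small sketch (the matrix is a hypothetical example, not from the question): it stretches the x-axis by 3 and leaves the y-axis alone, so any nonzero vector on the x-axis is an eigenvector with eigenvalue 3.

```python
def mat_vec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Example matrix: scales the x-direction by 3, fixes the y-direction.
A = [[3, 0],
     [0, 1]]

v = [2, 0]           # a vector on the x-axis
Av = mat_vec(A, v)   # stays on the same line, scaled by the eigenvalue 3
print(Av)            # [6, 0]
```

Note that the *line* is what is invariant: the individual vector moves along it, but never off of it.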

A linear transformation (matrix) might, even when given any vector in 3d, only spit out vectors on a certain plane or line. The set of vectors spat out in this way is the image of the transformation, and its dimension matches the geometric picture: if the image is a plane, the image has dimension 2, and so on.
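A minimal sketch of this, using a hypothetical projection matrix onto the xy-plane: whatever 3d vector goes in, the output's z-coordinate is 0, so the image is a plane (dimension 2).

```python
def mat_vec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Projection onto the xy-plane: every output lands on the plane z = 0.
P = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 0]]

print(mat_vec(P, [4, -2, 7]))   # [4, -2, 0]
```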

For inputs to the transformation (matrix), some lines or planes might be wholly annihilated by the transformation---the transformation (matrix) forces them to zero. These lines or planes form the kernel of the transformation (matrix).
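The same hypothetical projection matrix illustrates the kernel: every vector on the z-axis is sent to zero, so the kernel is a line (dimension 1).

```python
def mat_vec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Projection onto the xy-plane again: its kernel is the z-axis.
P = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 0]]

print(mat_vec(P, [0, 0, 5]))    # [0, 0, 0] -- the z-axis is annihilated
```

Notice how the dimensions bookkeep: the kernel (a line, dimension 1) and the image (a plane, dimension 2) add up to the dimension of the input space, 3.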

Now, something you might not be taught about is whole planes that are left invariant under a transformation, even though no individual line in the plane is kept invariant. These planes might be scaled by some factor, and so they can be thought of as "eigenplanes". Rotation maps are an example of transformations that leave whole planes invariant without leaving any individual (real) line in that plane invariant.
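A sketch of that last point, using a rotation by 90° about the z-axis (a standard example, not specific to this question): vectors in the xy-plane stay in the xy-plane, but each one lands on a different line than it started on.

```python
import math

def mat_vec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

t = math.pi / 2   # rotate 90 degrees about the z-axis
R = [[math.cos(t), -math.sin(t), 0],
     [math.sin(t),  math.cos(t), 0],
     [0,            0,           1]]

v = [1, 0, 0]        # a vector in the xy-plane, on the x-axis
w = mat_vec(R, v)    # approximately [0, 1, 0]: still in the xy-plane,
                     # but now on the y-axis -- a different line
```

So the xy-plane as a whole is invariant under R, yet no individual real line inside it is (the corresponding eigenvalues are complex).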
