I understand the concept of $\mathbb{Z}/n\mathbb{Z}$, but I am having a really hard time understanding how this concept of quotients applies to vector spaces. Suppose $V = \mathbb{F}[x]$ is a vector space and $U \le V$. What exactly does $V/U$ represent?
[Math] What’s an intuitive way of looking at quotient spaces
Tags: intuition, linear-algebra
Related Solutions
Your trouble with determinants is pretty common. They’re a hard thing to teach well, too, for two main reasons that I can see: the formulas you learn for computing them are messy and complicated, and there’s no “natural” way to interpret the value of the determinant, the way it’s easy to interpret the derivatives you do in calculus at first as the slope of the tangent line. It’s hard to believe things like the invertibility condition you’ve stated when it’s not even clear what the numbers mean and where they come from.
Rather than show that the many usual definitions are all the same by comparing them to each other, I’m going to state some general properties of the determinant that I claim are enough to specify uniquely what number you should get when you put in a given matrix. Then it’s not too bad to check that all of the definitions for determinant that you’ve seen satisfy those properties I’ll state.
The first thing to think about if you want an “abstract” definition of the determinant to unify all those others is that it’s not an array of numbers with bars on the side. What we’re really looking for is a function that takes N vectors (the N rows of the matrix) and returns a number. Let’s assume we’re working with real numbers for now.
Remember how those operations you mentioned change the value of the determinant?
Switching two rows or columns changes the sign.
Multiplying one row by a constant multiplies the whole determinant by that constant.
The general fact that number two draws from: the determinant is linear in each row. That is, if you think of it as a function $\det: \mathbb{R}^{n^2} \rightarrow \mathbb{R}$, then $$ \det(a \vec v_1 +b \vec w_1 , \vec v_2 ,\ldots,\vec v_n ) = a \det(\vec v_1,\vec v_2,\ldots,\vec v_n) + b \det(\vec w_1, \vec v_2, \ldots,\vec v_n),$$ and the corresponding condition in each other slot.
The determinant of the identity matrix $I$ is $1$.
I claim that these facts are enough to define a unique function that takes in N vectors (each of length N) and returns a real number, the determinant of the matrix given by those vectors. I won’t prove that, but I’ll show you how it helps with some other interpretations of the determinant.
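If you'd like to see these properties in action rather than just stated, here's a small numerical sketch. It uses NumPy's built-in determinant as a black box, so it checks the properties rather than defining the determinant by them, and floating-point equality is only approximate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
v = rng.standard_normal((n, n))   # a matrix with rows v[0], ..., v[n-1]
w = rng.standard_normal(n)        # a replacement first row
a, b = 2.0, -3.0

# Swapping two rows changes the sign.
swapped = v.copy()
swapped[[0, 1]] = swapped[[1, 0]]
assert np.isclose(np.linalg.det(swapped), -np.linalg.det(v))

# Linearity in the first row (the other rows held fixed).
combo = v.copy()
combo[0] = a * v[0] + b * w
v_w = v.copy()
v_w[0] = w
assert np.isclose(np.linalg.det(combo),
                  a * np.linalg.det(v) + b * np.linalg.det(v_w))

# The determinant of the identity matrix is 1.
assert np.isclose(np.linalg.det(np.eye(n)), 1.0)
```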
In particular, there’s a nice geometric way to think of a determinant. Consider the unit cube in N-dimensional space: the region spanned by the N standard basis vectors, i.e. the set of points whose coordinates all lie between 0 and 1. The determinant of the linear transformation (matrix) T is the signed volume of the region gotten by applying T to the unit cube. (Don’t worry too much if you don’t know what the “signed” part means, for now.)
How does that follow from our abstract definition?
Well, if you apply the identity to the unit cube, you get back the unit cube. And the volume of the unit cube is 1.
If you stretch the cube by a constant factor in one direction only, the new volume is that constant. And if you stack two blocks together aligned on the same direction, their combined volume is the sum of their volumes: this all shows that the signed volume we have is linear in each coordinate when considered as a function of the input vectors.
Finally, when you switch two of the vectors that define the unit cube, you flip the orientation. (Again, this is something to come back to later if you don’t know what that means).
So there are ways to think about the determinant that aren’t symbol-pushing. If you’ve studied multivariable calculus, you could think about, with this geometric definition of determinant, why determinants (the Jacobian) pop up when we change coordinates doing integration. Hint: a derivative is a linear approximation of the associated function, and consider a “differential volume element” in your starting coordinate system.
It’s not too much work to check that the signed area of the parallelogram formed by the vectors $(a,b)$ and $(c,d)$ is $\begin{vmatrix} a & b \\ c & d \end{vmatrix}$ either: you might try that to get a sense for things.
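As a quick numerical version of that exercise (a sketch assuming NumPy; the specific vectors are just an example), here the area is computed geometrically as base times height, independently of the determinant, and then compared against it:

```python
import numpy as np

def parallelogram_area(u, v):
    """Base times height: |u| times the length of the component of v
    perpendicular to u."""
    base = np.linalg.norm(u)
    perp = v - (v @ u) / (u @ u) * u   # v minus its projection onto u
    return base * np.linalg.norm(perp)

u, v = np.array([3.0, 1.0]), np.array([1.0, 2.0])
det = np.linalg.det(np.array([u, v]))   # matrix with rows u and v
assert np.isclose(abs(det), parallelogram_area(u, v))  # both equal 5
```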
We begin by noting that for vector spaces $X,Y$, a subspace $S \subset X$ and a linear map $F \colon X \to Y$, we have an induced map
$$\tilde{F} \colon X/S \to Y$$
that satisfies $\tilde{F}(x+S) = F(x)$ for all $x\in X$ if and only if $S \subset \ker F$. [prove it, or cite a theorem]
Then we apply the above to the situation $S = U$, $X = V$, and $Y = V/U$, with $F = \pi\circ T$, where $\pi \colon V \to V/U$ is the canonical projection.
For the remaining part, note that $\widetilde{\pi\circ p(T)} = p(\widetilde{\pi\circ T}) = p(B)$ for all polynomials $p\in K[X]$. [prove it, or cite a theorem]
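To make the abstract statement concrete, here is a low-dimensional sketch of my own (the matrix, the polynomial, and the helper names are illustrative, not taken from the answer). Take $V = \mathbb{R}^2$ and $U = \operatorname{span}\{e_1\}$, so $V/U$ is identified with $\mathbb{R}$ via the second coordinate and $\pi(x,y) = y$:

```python
import numpy as np

# T is upper triangular, so T(U) ⊆ U, i.e. U ⊆ ker(pi ∘ T), and the
# induced map B = (pi ∘ T)~ on V/U ≅ R is multiplication by T[1, 1].
T = np.array([[2.0, 5.0],
              [0.0, 3.0]])

def p_matrix(M):
    """The example polynomial p(X) = X^2 - 4X + 1 applied to a matrix."""
    return M @ M - 4 * M + np.eye(2)

def p_scalar(t):
    """The same polynomial applied to a scalar."""
    return t * t - 4 * t + 1

B = T[1, 1]
# (pi ∘ p(T))~ = p(B): the bottom-right entry of p(T) equals p(T[1, 1]).
assert np.isclose(p_matrix(T)[1, 1], p_scalar(B))
```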
Best Answer
The way I think about quotient spaces (or quotient algebraic structures in general) is as an identification of things which differ by some subspace/subgroup/subring/... of the original structure. Then the quotient space (resp. algebraic structure) can be thought of as what you get when you squash the subspace (resp. substructure) to a point and extend to the rest of the space (resp. structure).
To illustrate this, with $\mathbb{Z}/n\mathbb{Z}$, we identify integers which differ by a multiple of $n$. What we do is squash $n\mathbb{Z}$ to a point, namely $0$, and this extends to the rest of the space by squashing $a+n\mathbb{Z}$ to $a$ for $0 \le a < n$. Then addition and the other operations are inherited naturally from the original ones.
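This squashing can be modeled in a few lines (a sketch; `squash` is just a hypothetical name for the map sending each coset to its representative in $\{0, \ldots, n-1\}$):

```python
n = 5

def squash(a):
    """Send the coset a + nZ to its representative in {0, ..., n-1}."""
    return a % n

# 7 and 12 differ by a multiple of 5, so they land on the same point.
assert squash(7) == squash(12) == 2

# Addition is well-defined: swapping representatives of each coset
# (3 -> 13, 4 -> -1) does not change the sum's coset.
assert squash(3 + 4) == squash(13 + (-1))
```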
As another example, with $\mathbb{R}^2/\langle (1,1) \rangle$ we identify things which lie on the same line lying at $45^{\circ}$ to the (positive) horizontal, i.e. we identify vectors which differ by some scalar multiple of the vector $(1,1)$. We can visualise this as contracting $\langle (1,1) \rangle$ to a point, which when you extend linearly contracts $\mathbb{R}^2$ to a line through the origin, leaving you with $\langle (1,-1) \rangle$. (Imagine squashing the whole plane down towards the origin perpendicular to the line $\langle (1,1) \rangle$). Another perspective is that the 'points' in $\mathbb{R}^2/\langle (1,1) \rangle$ can be thought of as precisely the lines in $\mathbb{R}^2$ with gradient $1$.
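The contraction picture for $\mathbb{R}^2/\langle (1,1) \rangle$ can also be sketched numerically (my own illustration, assuming NumPy): send each vector to its component perpendicular to $(1,1)$, which lies on $\langle (1,-1) \rangle$, and check that vectors in the same coset land on the same point:

```python
import numpy as np

d = np.array([1.0, 1.0])    # the direction being squashed to a point

def squash(v):
    """Remove the component of v along (1,1); what remains lies on
    span{(1,-1)} and represents the coset v + <(1,1)>."""
    return v - (v @ d) / (d @ d) * d

v = np.array([3.0, 1.0])
for c in (0.0, 2.0, -7.5):
    # adding any multiple of (1,1) does not change the image
    assert np.allclose(squash(v + c * d), squash(v))
```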
More generally, if $V$ is a vector space and $U \le V$ is a subspace then you can think of $V/U$ as the space you get when you identify two elements $v, v' \in V$ if $v'=v+u$ for some $u \in U$. What it 'looks like' is what you get when you contract $U$ to a point and extend linearly to the rest of the space.
I hope this wasn't too waffly.