[Math] How to stop worrying about root systems and decomposition theorems (for reductive groups)

ag.algebraic-geometry, algebraic-groups, lie-algebras, lie-groups, rt.representation-theory

I apologize for this being a very, very vague question.

Just as personal experience, I never feel that I have fully grasped the theory of root systems in Lie algebras and Lie/algebraic groups (I shall call these $\textbf{LAG}$ objects for short). My "psychological" response to these objects is: they seem really "ugly" and unnatural; where do they come from? Of course, we can say: OK, let's work out the theory of $\mathfrak{sl}_2$, and then everything is just a "natural" generalization of that easier case. But this explanation just doesn't satisfy me.

I've been worrying about such things for quite a long time. Probably the real question is: do these root systems have some (really nice, really simple) geometric interpretation? For example, is it possible that they are somehow related to some nice topological space, some sheaf, some cohomology, etc.? And do they have nice counterparts in other branches of mathematics? (It seems to me they only appear around LAG objects.) Do they show up in other, perhaps surprising, places?

Actually, as we know, the study of LAG objects themselves is closely related to their representation theory. And their representation theory also consists of objects that seem hard to visualize in any "easy" geometric way. I know we can probably organize these things into a category and do some abstract algebraic geometry with them. But still, we cannot avoid using these ad hoc techniques (root systems, Dynkin diagrams, etc.).

Perhaps I should stop here. In any case, I hope this question will not be closed as spam, and I hope someone can understand some of my confusion and shed some light (geometry) on it.
Thank you.

Best Answer

One thing to keep in mind is that the process that starts with LAGs and ends up with root systems is not "functorial". To keep everything very definite, let me restrict attention to finite-dimensional semisimple Lie algebras over $\mathbb C$. Then there are maps $\{\text{isomorphism classes of s.s. complex Lie algebras}\} \rightleftarrows \{\text{isomorphism classes of Dynkin diagrams}\}$, and at the level of isomorphism classes the two maps are inverse to each other. In fact, there is a wonderfully functorial map going $\leftarrow$, i.e. from Dynkin diagrams to algebras, which was worked out by Serre, I think (maybe Chevalley). But the $\rightarrow$ map requires making all sorts of choices: pick a Cartan subalgebra, pick a notion of "positive" for it. Let $\mathfrak g$ be the Lie algebra and $\mathfrak h$ the chosen Cartan; then the group $\operatorname{Aut}(\mathfrak g)$ acts transitively on choices of simple system, and the stabilizer is precisely $\mathfrak h$. (Or, rather, in $\operatorname{Aut}(\mathfrak g)$ there are "inner" automorphisms and "outer" automorphisms; the inner ones already act transitively, and $\operatorname{Out} = \operatorname{Aut}/\operatorname{Inn}$ acts by nontrivial diagram automorphisms. By "precisely $\mathfrak h$" I mean that the stabilizer is $\exp \mathfrak h \subset \operatorname{Inn}$.) So the space of choices is a homogeneous space for $\operatorname{Inn}(\mathfrak g)$ (which is the smallest group integrating $\mathfrak g$), modeled on $\operatorname{Inn}(\mathfrak g)/\exp(\mathfrak h)$. But, anyway, the point is that $\exp(\mathfrak h)$ acts nontrivially on $\mathfrak g$ but, as I've said, trivially on the Dynkin diagram, and hence trivially on the group that you construct from the Dynkin diagram.
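To see the last point in the smallest case, here is a sketch for $\mathfrak{sl}_2$ with the standard basis $E, F, H$ and the usual normalizations:

```latex
\[
\mathfrak{g} = \mathfrak{sl}_2,\qquad
\mathfrak{h} = \mathbb{C}H,\qquad
[H, E] = 2E,\quad [H, F] = -2F.
\]
% The adjoint action of the one-parameter subgroup exp(tH) is then
\[
\operatorname{Ad}\bigl(\exp(tH)\bigr)\colon\;
E \mapsto e^{2t}E,\qquad F \mapsto e^{-2t}F,\qquad H \mapsto H,
\]
% so exp(h) acts nontrivially on g, yet it fixes the Cartan, the simple
% system, and hence the one-node diagram A_1 pointwise.
```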

This might be why you don't like the notion of root systems: you really do need to make choices to identify algebras with their root systems. It's something like picking a basis for a vector space: great for computations, but not very geometric. As a precise example: for $\mathfrak{sl}(V) = \{x\in \operatorname{End}(V) \text{ s.t. }\operatorname{tr}(x)=0\}$, a choice of root system is the same as a choice of (ordered) basis for the vector space $V$.
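Here is a minimal computational sketch of that dependence on choices (pure Python, names like `E` and `bracket` are my own): once an ordered basis of $V$ is chosen, the Cartan subalgebra of $\mathfrak{sl}_n$ is the traceless diagonal matrices, and the roots $e_i - e_j$ can be read off as $\operatorname{ad}(h)$-eigenvalues on the elementary matrices $E_{ij}$.

```python
n = 3  # work in sl_3, relative to the standard ordered basis of V

def E(i, j):
    """Elementary n x n matrix with a 1 in position (i, j)."""
    return [[1.0 if (r, c) == (i, j) else 0.0 for c in range(n)] for r in range(n)]

def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def bracket(a, b):
    """Lie bracket [a, b] = ab - ba."""
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[r][c] - ba[r][c] for c in range(n)] for r in range(n)]

# A generic traceless diagonal element h = diag(h_1, ..., h_n):
h_entries = [2.0, -0.5, -1.5]
h = [[h_entries[r] if r == c else 0.0 for c in range(n)] for r in range(n)]

roots = {}
for i in range(n):
    for j in range(n):
        if i != j:
            lhs = bracket(h, E(i, j))
            eigenvalue = h_entries[i] - h_entries[j]  # the root (e_i - e_j)(h)
            scaled = [[eigenvalue * x for x in row] for row in E(i, j)]
            assert lhs == scaled  # [h, E_ij] = (h_i - h_j) E_ij
            roots[(i, j)] = eigenvalue

print(f"sl_{n} has {len(roots)} roots relative to this choice")  # 6 for sl_3
```

Conjugating by a change of basis of $V$ moves the Cartan and permutes these eigenvalues, which is exactly the non-canonicity being described above.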

On the other hand, I claim that you should like the representation theory of a LAG. One way to study this representation theory (I might go so far as to say "the best way") is to pick a root system for your LAG and look at how $\mathfrak h$ acts, etc. Then, for example, finite-dimensional irreducible representations of a semisimple Lie algebra $\mathfrak g$ correspond bijectively with ways to label the Dynkin diagram with nonnegative integers. So you can really get your hands on the representation theory.
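For $\mathfrak{sl}_3$, whose Dynkin diagram has two nodes, this labeling is completely explicit: the Weyl dimension formula specializes to a closed form in the two labels $(a, b)$. A small sketch (the helper name `dim_sl3_irrep` is mine):

```python
def dim_sl3_irrep(a, b):
    """Dimension of the irreducible sl_3-representation whose Dynkin
    diagram is labeled (a, b), via the Weyl dimension formula:
    dim = (a+1)(b+1)(a+b+2)/2."""
    return (a + 1) * (b + 1) * (a + b + 2) // 2

print(dim_sl3_irrep(0, 0))  # trivial representation: 1
print(dim_sl3_irrep(1, 0))  # standard representation: 3
print(dim_sl3_irrep(0, 1))  # dual of the standard representation: 3
print(dim_sl3_irrep(1, 1))  # adjoint representation: 8
```

So each pair of nonnegative integers really does name one irreducible representation, with familiar small representations appearing at the first few labels.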

But the representation theory of a group $G$ is a very geometric thing. There is some sort of "space", called "$BG$" or "$\{\text{pt}\}/G$", for which the representations of $G$ are the same as vector bundles on this space. If you don't like thinking about "one $G$th of a point", there are homotopy-theoretic models of $BG$.

You should also think of categories as geometric. Think about the case when $G$ is a (finite) abelian group. Then you might remember Pontrjagin duality: the irreducible representations of $G$ are the same as the points of the dual group $G^\vee$. So, at least for $G$ a semisimple LAG, you might think of its finite-dimensional representations as being like the points of some "space" $G^\vee$. The difference is that in the abelian case all the points of $G^\vee$ correspond to one-dimensional representations, whereas in the nonabelian semisimple case the points are in general "bigger". The points are parameterized by the positive weight lattice, but they aren't actually the positive weight lattice: the "space" $G^\vee$ is only noncanonically isomorphic to the positive weight lattice. Again, this is like how the Euclidean plane is noncanonically isomorphic to the Cartesian plane.
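The abelian case above can be checked by hand. A minimal sketch for $G = \mathbb{Z}/n$ (the helper name `chi` is mine): its irreducible representations are the $n$ characters $\chi_k(m) = e^{2\pi i k m/n}$, and pointwise multiplication of characters corresponds to addition of labels, so the dual really is a group isomorphic to $\mathbb{Z}/n$.

```python
import cmath

n = 5  # the cyclic group Z/5

def chi(k):
    """The k-th character of Z/n: a one-dimensional representation m -> e^{2 pi i k m / n}."""
    return lambda m: cmath.exp(2j * cmath.pi * k * m / n)

# Each chi(k) is a homomorphism Z/n -> C^*, and multiplying characters
# pointwise adds their labels mod n: chi(j) * chi(k) = chi(j + k mod n).
for j in range(n):
    for k in range(n):
        for m in range(n):
            assert abs(chi(j)(m) * chi(k)(m) - chi((j + k) % n)(m)) < 1e-12

print(f"Z/{n} has exactly {n} irreducible representations, all 1-dimensional")
```

In the nonabelian semisimple case the analogous "points" have dimensions bigger than one, which is the difference being emphasized above.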
