[Math] Definitions of Reductive and Semisimple Groups

lie-groups, rt.representation-theory

I'm a graduate student. I've been reading Knapp's two books Representation Theory of Semisimple Groups and Lie Groups Beyond an Introduction. He seems to give wildly different definitions for the basic terms 'reductive' and 'semisimple', and this leaves me unsure what to think when I read 'reductive group' or 'semisimple group' on its own in the literature.

To be more specific, Knapp doesn't actually define "reductive" in Representation Theory; he defines "linear connected reductive group". But in his other book, one of the reasons he gives for introducing reductive groups is to handle the disconnected groups that arise in the analysis. Their disconnectedness is "not too wild", and so much of the structure theory developed for semisimple groups goes through. The definitions in Lie Groups make more sense to me.

I'm similarly confused about whether semisimple groups have finite center, because Knapp seems to disagree with himself on that.

How should I think about what these terms mean? Is there an authoritative definition? The definitions in Representation Theory seem the strangest to me, yet the resulting theory he develops is very familiar from everything else I've read. I'm curious why Knapp chose such strange definitions for his Representation Theory in the first place. Does the substance of his book go through if I replace his definitions with something more standard? (I'm aware this last bit is rather vague.)

Best Answer

Rather than imitate Matt Emerton's multiple comment style, I'll just make my answer community-wiki (since there is much to say from different viewpoints). The first lesson is that you have to insist on definitions in each source you consult, since people sometimes use terms a bit differently. (As someone once said, the beginning of wisdom is total confusion.)

It's always important to define your context, as others have indicated. Historically, Lie groups (and Lie algebras) come first, initially over $\mathbb{R}$ and then in complex versions. There's no need to retrace all the history, since it took quite some time for the local theory of analytic and Lie groups to take global form and then be translated into Lie algebra language. But there is always a problem about dealing with non-connected Lie groups, which can even have infinitely many connected components. Here you just have to be cautious.

Initially Lie groups tended to divide into solvable and semisimple types, with a Lie group called semisimple precisely when its solvable radical is trivial, or equivalently its Lie algebra is semisimple (has zero radical). The Lie algebra doesn't see connected components other than the one containing the identity, being essentially the tangent space to the group at that point. While the real case is most natural at first, the study of structure tends to translate mostly into the Lie algebra setting where complexified versions are much easier to start with. (Then you have to go back to real forms, etc.) But the algebraic theory in the semisimple case is fairly elementary though tricky and leads to a good classification.
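To pin down the convention being used above (this is just the standard textbook definition, not anything particular to Knapp): a finite dimensional Lie algebra $\mathfrak{g}$ over $\mathbb{R}$ or $\mathbb{C}$ is semisimple when its solvable radical vanishes,
$$\operatorname{rad}(\mathfrak{g}) = 0,$$
which by Cartan's criterion is equivalent to nondegeneracy of the Killing form $B(x,y) = \operatorname{tr}(\operatorname{ad}x \, \operatorname{ad}y)$. A connected Lie group is then called semisimple when its Lie algebra is.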

In the Lie algebra setting, it's easiest to define a real or complex finite dimensional Lie algebra to be reductive if its solvable radical equals its center; this definition can then be used for connected Lie groups, though the disconnected case tends to get messy. The word "reductive" and the motivation for its use arise in group theory via the notion of complete reducibility of (finite dimensional) representations. But as you can see in the introductory pages of one of Knapp's books, it's tricky to apply the term successfully to Lie groups. In his 1965 book The Structure of Lie Groups, Hochschild defined a complex analytic group to be reductive just when it has a faithful finite dimensional analytic linear representation and moreover all such representations are semisimple (= completely reducible). That's the spirit in which Hochschild and Mostow studied such groups, characterizing the real linear ones in terms of symmetry under transpose; but here you need a fixed linear realization.
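To make the intrinsic definition just mentioned concrete (again only the standard convention, not a quotation from any of the books above): $\mathfrak{g}$ is reductive when $\operatorname{rad}(\mathfrak{g}) = \mathfrak{z}(\mathfrak{g})$, which is equivalent to a direct sum decomposition
$$\mathfrak{g} = [\mathfrak{g},\mathfrak{g}] \oplus \mathfrak{z}(\mathfrak{g})$$
with $[\mathfrak{g},\mathfrak{g}]$ semisimple. The basic example is $\mathfrak{gl}_n = \mathfrak{sl}_n \oplus \{\text{scalar matrices}\}$, which is reductive but not semisimple.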

Anyway, it's usually best to proceed with an intrinsic characterization of the groups or Lie algebras. While most of the structure theory focuses on the simple or semisimple cases, modern work (especially involving Harish-Chandra's program and the Langlands program) works best with reductive groups by allowing induction from parabolic subgroups with reductive Levi factors to play a major role.
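A concrete illustration of that last point (a standard example, not drawn from Knapp): in $\mathrm{GL}_n$ the standard parabolic subgroups are the block upper triangular subgroups, each with a Levi decomposition $P = MN$ whose unipotent radical $N$ consists of the blocks above the diagonal and whose Levi factor is
$$M \cong \mathrm{GL}_{n_1} \times \cdots \times \mathrm{GL}_{n_k}, \qquad n_1 + \cdots + n_k = n.$$
Such an $M$ is reductive but not semisimple, so even if one begins with a semisimple group like $\mathrm{SL}_n$, parabolic induction immediately brings reductive groups into the picture.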

P.S. In the algebraic group setting, much but not all of the structure theory goes through in a similar way, but in prime characteristic finite dimensional representations are usually not completely reducible; even so, "reductive" groups are commonly used along with "semisimple" groups and people get confused. In the algebraic version, reductive doesn't require connected but does require that the unipotent radical be trivial (while the solvable radical of an algebraic group is the semidirect product of a torus and a unipotent normal subgroup). In any case, radicals are required to be connected normal subgroups.
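For orientation in the algebraic group setting (standard examples, not part of the original answer): $\mathrm{GL}_n$ is reductive but not semisimple, since its solvable radical is the central torus of scalar matrices, which contains no nontrivial unipotent elements, while $\mathrm{SL}_n$ and $\mathrm{PGL}_n$ are semisimple. By contrast, the Borel subgroup of upper triangular matrices in $\mathrm{GL}_n$ is not reductive: its unipotent radical is the group of upper triangular matrices with $1$'s on the diagonal.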
