Rather than imitate Matt Emerton's multiple comment style, I'll just make my answer community-wiki (since there is much to say from different viewpoints). The first lesson is that you have to insist on definitions in each source you consult, since people sometimes use terms a bit differently. (As someone once said, the beginning of wisdom is total confusion.)
It's always important to define your context, as others have indicated. Historically, Lie groups (and Lie algebras) come first, initially over $\mathbb{R}$ and then in complex versions. There's no need to retrace all the history, since it took quite some time for the local theory of analytic and Lie groups to take global form and then be translated into Lie algebra language. But there is always a problem about dealing with non-connected Lie groups, which can even have infinitely many connected components. Here you just have to be cautious.
Initially Lie groups tended to divide into solvable and semisimple types, with
a Lie group being called semisimple precisely when its solvable radical is trivial, or equivalently when its Lie algebra is semisimple (has zero radical). The Lie algebra doesn't see connected components other than the one containing the identity, being essentially the tangent space to the group at the identity element. While the real case is most natural at first, the study of structure tends to translate mostly into the Lie algebra setting, where complexified versions are much easier to start with. (Then you have to go back to real forms, etc.) But the algebraic theory in the semisimple case is fairly elementary, though tricky, and leads to a good classification.
In the Lie algebra setting, it's easiest to define a real or complex finite dimensional Lie algebra to be reductive if its solvable radical equals its center; this definition can then be used for connected Lie groups, though the disconnected case tends to get messy. The word "reductive" and the motivation for its use arise in group theory via the notion of complete reducibility of (finite dimensional) representations. But as you can see in the introductory pages of one of Knapp's books, it's tricky to apply the term successfully to Lie groups. In his 1965 book *Structure of Lie Groups*, Hochschild defined a complex analytic group to be reductive just when it has a faithful finite dimensional analytic linear representation and moreover all such representations are semisimple (= completely reducible). That's the spirit in which Hochschild and Mostow studied the groups, characterizing the real linear ones in terms of symmetry under transpose; but here you need a fixed linear realization.
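As a sanity check on the definition (radical equals center), the standard first example can be written out explicitly:

```latex
% gl_n is reductive but not semisimple: its solvable radical
% coincides with its center, the scalar matrices. (For n >= 2
% the complement sl_n is semisimple.)
\mathfrak{gl}_n(\mathbb{C}) \;=\; \mathfrak{sl}_n(\mathbb{C}) \,\oplus\, \mathbb{C}I_n,
\qquad
\operatorname{rad}\mathfrak{gl}_n(\mathbb{C}) \;=\; Z\bigl(\mathfrak{gl}_n(\mathbb{C})\bigr) \;=\; \mathbb{C}I_n .
```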
Anyway, it's usually best to proceed with an intrinsic characterization of the groups or Lie algebras. While most of the structure theory focuses on the simple or semisimple cases, modern work (especially involving Harish-Chandra's program and the Langlands program) works best with reductive groups by allowing induction from parabolic subgroups with reductive Levi factors to play a major role.
P.S. In the algebraic group setting, much but not all of the structure theory goes through in a similar way, but in prime characteristic finite dimensional representations are usually not completely reducible; even so, "reductive" groups are commonly used along with "semisimple" groups, and people get confused. In the algebraic version, reductive doesn't require connectedness but does require that the unipotent radical be trivial (the solvable radical of a connected algebraic group being the semidirect product of a torus and a unipotent normal subgroup). In any case, radicals are required to be connected normal subgroups.
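To see the two radicals come apart in the standard examples (here $B_n$ denotes the upper triangular subgroup of $GL_n$ and $U_n$ its strictly upper triangular unipotent subgroup):

```latex
% GL_n: the solvable radical is the central torus of scalars, while
% the unipotent radical is trivial, so GL_n is reductive but not semisimple.
R(GL_n) \;=\; \{\lambda I_n : \lambda \neq 0\} \;\cong\; \mathbb{G}_m,
\qquad
R_u(GL_n) \;=\; \{1\}.

% B_n is connected and solvable, so it equals its own radical; its
% unipotent radical U_n is nontrivial for n >= 2, so B_n is not reductive.
R(B_n) \;=\; B_n,
\qquad
R_u(B_n) \;=\; U_n \;\neq\; \{1\} \quad (n \ge 2).
```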
Best Answer
This is an old question, probably abandoned because its answer would require a short article (complete with references). While I can't supply such an article, I can point to some of Borel's writings which involve a definition of "real reductive group". In an old comment, Brian Conrad already referred to the added survey in Section 24C of Borel's second edition of Linear Algebraic Groups (GTM 126, Springer, 1991). There is also a set of his lecture notes intended for a 2003 Chinese summer school, published (along with other lectures) as: Lie groups and linear algebraic groups. I. Complex and real groups. Lie groups and automorphic forms, 1–49, AMS/IP Stud. Adv. Math., 37, Amer. Math. Soc., Providence, RI, 2006. (See Sections 5-6.)
The definition he uses is in some ways the simplest and most natural, I think. In the general framework of finite dimensional Lie algebras over an arbitrary field $K$ of characteristic 0, there are elementary definitions of semisimple and reductive Lie algebras. When you take $K = \mathbb{R}$ and work in the classical framework of real Lie groups, it's then natural to define a connected Lie group $G$ to be reductive if its Lie algebra is. Of course, the Lie algebra only sees local behavior, so one could leave it at that. However, disconnected Lie groups come up immediately when Lie group theory is combined with linear algebraic groups in the study of representations, automorphic forms, etc. So in this case Borel adds the extra requirement that $G$ have only finitely many connected components in the Euclidean topology.
Where does this condition come from? Starting with a connected linear algebraic group (scheme) $H$ over $\mathbb{R}$, the resulting group $G := H(\mathbb{R})$ is a real Lie group but need not be connected. An obvious example is the multiplicative group $\mathbb{G}_m$, whose group of real points $\mathbb{R}^\times$ is disconnected. But a basic theorem states that such a Lie group has only finitely many components in the Euclidean topology. The theorem follows, for instance, from Whitney's older work on real affine varieties, but it is also a consequence of a finiteness theorem in Galois cohomology proved in the work of Borel-Serre. (I'm not sure what the best modern proof of the theorem is, but that's another question.)
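For the multiplicative group, the disconnectedness of the real points can be spelled out explicitly:

```latex
% G_m is connected as an algebraic group over R, but its group of
% real points splits into exactly two Euclidean components -- finitely
% many, as the finiteness theorem predicts.
\mathbb{G}_m(\mathbb{R}) \;=\; \mathbb{R}^{\times}
\;=\; \mathbb{R}_{<0} \,\sqcup\, \mathbb{R}_{>0}.
```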
Borel's notes were intended partly to prepare for Wallach's related lectures. In any case, his approach is closely related to some of the other proposed definitions of real reductive group in the question. But it strikes me as a more straightforward starting point.