If $X$ and $Y$ are any random variables, and $a$ and $b$ are constants, then
$$E(aX+bY)=aE(X)+bE(Y).$$
If $X$ and $Y$ are independent random variables, and $a$ and $b$ are constants, then
$$\text{Var}(aX+bY)=a^2\text{Var}(X)+b^2\text{Var}(Y).$$
In our case we have $a=b=1$, but the general expressions may be useful to you later.
So you don't need to find the distribution of the random variable $X_1+X_2$; all you need are the formulas for the mean and variance of an exponential random variable $T$ with parameter $\lambda$. These are $\dfrac{1}{\lambda}$ and $\dfrac{1}{\lambda^2}$, respectively. Thus your mean and variance are respectively
$$\frac{1}{\lambda_1}+\frac{1}{\lambda_2}\quad\text{and}\quad \frac{1}{\lambda_1^2}+\frac{1}{\lambda_2^2}.$$
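If you want a numerical sanity check, here is a minimal simulation sketch; the rates $\lambda_1=2$ and $\lambda_2=3$ are arbitrary illustrative choices, not values from the question:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative rates, chosen arbitrarily for this check.
lam1, lam2 = 2.0, 3.0
n = 1_000_000

# numpy's exponential takes the scale 1/lambda, not the rate lambda.
x1 = rng.exponential(scale=1 / lam1, size=n)
x2 = rng.exponential(scale=1 / lam2, size=n)
total = x1 + x2

print(total.mean())  # should be close to 1/lam1 + 1/lam2 = 0.8333...
print(total.var())   # should be close to 1/lam1**2 + 1/lam2**2 = 0.3611...
```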
Edit: The question was changed. It turns out that $\lambda_1=\lambda_2=\lambda$; that is just a special case of the above formulas.
It is not clear whether you were also asking how to compute the individual means and variances, or whether you already know these.
We need $E(T)$ and $E(T^2)$, since $\text{Var}(T)=E(T^2)-(E(T))^2$.
The expectations can be found as usual by integration. There are some shortcuts, but they tend to involve more advanced notions. In case you are interested, let me mention the term moment generating function.
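For instance, the first moment takes one integration by parts:
$$E(T)=\int_0^\infty t\,\lambda e^{-\lambda t}\,dt=\Big[-te^{-\lambda t}\Big]_0^\infty+\int_0^\infty e^{-\lambda t}\,dt=0+\frac{1}{\lambda},$$
and $E(T^2)=\int_0^\infty t^2\,\lambda e^{-\lambda t}\,dt=\dfrac{2}{\lambda^2}$ comes out the same way, with two integrations by parts.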
Added: We compute the moment generating function $m(s)$ of our random variable $T$ (sorry about $s$, the traditional $t$ is already taken). So we want
$$E(e^{Ts})=\int_0^\infty e^{ts}\lambda e^{-\lambda t}\,dt=\int_0^\infty \lambda e^{-(\lambda-s)t}\,dt.$$
Integrate by substitution, easy (note the integral converges only for $s<\lambda$). We get
$$m(s)=\frac{\lambda}{\lambda-s}=\frac{1}{1-\frac{s}{\lambda}}.$$
So $m(s)$ has a very nice Taylor expansion $m(s)=1+\frac{1}{\lambda}s+\frac{1}{\lambda^2}s^2+\cdots$. Since the coefficient of $s^k$ in the mgf is $\frac{E(T^k)}{k!}$, we can pick up $E(T^k)$ for any $k$. In particular, $E(T)=\frac{1}{\lambda}$, and $E(T^2)=\frac{2!}{\lambda^2}$.
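In particular, this recovers the variance quoted earlier:
$$\text{Var}(T)=E(T^2)-(E(T))^2=\frac{2}{\lambda^2}-\frac{1}{\lambda^2}=\frac{1}{\lambda^2}.$$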
We can even get the moment generating function of $X_1+X_2$, since the mgf of a sum of independent random variables is the product of their mgfs.
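Concretely, in our setting
$$m_{X_1+X_2}(s)=\frac{\lambda_1}{\lambda_1-s}\cdot\frac{\lambda_2}{\lambda_2-s},$$
and when $\lambda_1=\lambda_2=\lambda$ this becomes $\left(\frac{\lambda}{\lambda-s}\right)^2$, the mgf of a Gamma$(2,\lambda)$ random variable.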
Having said that, I also believe that $\max(T,V) \mid V \leq T$ should have the same distribution as the random variable $T+\min(T,V)$, which is apparently wrong.
Of course it's wrong: if you know that $\ V \leq T$, then you know that the maximum is $T$, hence $\max(T,V) \ \big| \ V \leq T$ has the same distribution as $T$.
Edited:
"Of course", the above it's totally wrong. Sorry. Knowing $\ V \leq T$ not only informs us that the maximum is $T$, but also tell us something about the value of the maximum (and it should push it expected value up ).
I think you are confusing two kinds of knowledge (conditioning): which variable is the minimum, and what the value of the minimum is. If you know that the minimum is the variable $V$, and that its value is $v$ (don't confuse the random variables with their values), then you know that $T\ge v$, and in that case the distribution of the conditioned variable shifts by $v$.
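For an exponential $T$ this shift is exactly the memoryless property:
$$P(T>v+t\mid T>v)=\frac{e^{-\lambda(v+t)}}{e^{-\lambda v}}=e^{-\lambda t},$$
so conditionally $T$ is distributed as $v$ plus a fresh exponential with the same rate.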
Best Answer
There is nothing particularly interesting going on here. It follows from the memoryless property for $X$; moreover, all that matters about $S$ is that it is a non-negative RV independent of $X$, not that it is exponential:
$$\begin{aligned}
P(X>S+t\mid X> S) &= \frac{P(X>S+t)}{P(X>S)}\\
&=\frac{\int P(X>s+t)f_S(s)\,ds}{\int P(X>s)f_S(s)\,ds}\\
&=\frac{\int P(X>t)P(X>s)f_S(s)\,ds}{\int P(X>s)f_S(s)\,ds}\\
&=P(X>t),
\end{aligned}$$
where the first equality uses $\{X>S+t\}\subseteq\{X>S\}$ for $t\ge 0$, and the second-to-last line uses the memoryless property $P(X>s+t)=P(X>t)P(X>s)$.
Intuitively, since the conditional distribution is (functionally) independent of the current wait time, it really shouldn't matter whether the wait time is random or not (as long as it is independent of $X$).
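A quick simulation sketch supports this; the rates and the value of $t$ below are arbitrary illustrative choices, and $S$ is taken exponential here only for convenience (any independent non-negative $S$ would do):

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary illustrative parameters: X ~ Exp(lam), S ~ Exp(mu) independent.
lam, mu, t = 1.5, 0.7, 0.4
n = 2_000_000

x = rng.exponential(scale=1 / lam, size=n)
s = rng.exponential(scale=1 / mu, size=n)

alive = x > s                                  # condition on the event X > S
cond_prob = np.mean(x[alive] > s[alive] + t)   # P(X > S + t | X > S)

print(cond_prob)            # empirical conditional survival probability
print(np.exp(-lam * t))     # P(X > t) = e^{-lam*t} = 0.5488...
```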