If $X$ is a random matrix, the eigenvectors and eigenvalues of $X$ are random as well. Thus the vector $v$ and the real number $\lambda$ in your example are random.
Edit: At the risk of belaboring the obvious, in this context, the random matrix $X$ is a function $X:\Omega\to\mathcal M_n(\mathbb R)$ and an eigenvector and an eigenvalue of $X$ are functions $v:\Omega\to\mathbb R^n\setminus\{0\}$ and $\lambda:\Omega\to\mathbb C$ such that, for every $\omega$ in $\Omega$, $X(\omega)v(\omega)=\lambda(\omega)v(\omega)$. For example, the first coordinate of the vector $X(\omega)v(\omega)$ is $\lambda(\omega)v_1(\omega)$.
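To make this concrete, here is a minimal numpy sketch (the Gaussian entries and the dimension are arbitrary choices of mine): each draw of $\omega$ produces a fixed matrix $X(\omega)$ with its own eigenpair, and the defining relation holds realization by realization.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Two draws of omega give two fixed matrices X(omega), each with its
# own eigenpair (lambda(omega), v(omega)) satisfying X v = lambda v.
for omega in range(2):
    X = rng.standard_normal((n, n))         # X(omega), one realization of X
    eigvals, eigvecs = np.linalg.eig(X)
    lam, v = eigvals[0], eigvecs[:, 0]      # lambda(omega), v(omega)
    assert np.allclose(X @ v, lam * v)      # the relation holds pointwise
    print(f"realization {omega}: lambda = {lam:.4f}")
```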
A note on how to "derive" the convolution formula (this is not the most general version). Let $X,Y$ be independent random variables with continuous densities $f_X, f_Y$. Conditioning on $Y$, we get
$$\mathbb P(X+Y\leq t)
=\int_{\mathbb R}\mathbb P(X\leq t-y)f_Y(y)\,dy
=\int_{\mathbb R}F_X(t-y)f_Y(y)\,dy.$$
One can then apply Leibniz's rule for differentiating under the integral sign to get
$$
\begin{align*}
f_{X+Y}(t)
&=\frac{d}{dt}\mathbb P(X+Y\leq t)
=\frac{d}{dt}\int_{\mathbb R}F_X(t-y)f_Y(y)\,dy\\
&=\int_{\mathbb R}\Big(\frac{\partial}{\partial t}F_X(t-y)\Big)f_Y(y)\,dy
=\int_{\mathbb R}f_X(t-y)f_Y(y)\,dy.
\end{align*}$$
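As a numerical sanity check of this formula (not part of the derivation), one can compare the convolution integral against a case where the density of the sum is known in closed form; taking $X,Y\sim\mathcal N(0,1)$, so that $X+Y\sim\mathcal N(0,2)$, is an arbitrary choice of mine.

```python
import numpy as np

# Check f_{X+Y}(t) = ∫ f_X(t-y) f_Y(y) dy in a case with a known answer:
# X, Y ~ N(0,1) independent, so X+Y ~ N(0,2). Plain Riemann-sum quadrature.
def phi(x, var=1.0):
    return np.exp(-x**2 / (2*var)) / np.sqrt(2*np.pi*var)

y = np.linspace(-10, 10, 20001)
dy = y[1] - y[0]
for t in (0.0, 0.7, 1.5):
    conv = np.sum(phi(t - y) * phi(y)) * dy   # right-hand side, numerically
    exact = phi(t, var=2.0)                   # N(0,2) density
    print(f"t={t}: convolution={conv:.6f}, exact={exact:.6f}")
```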
Now let $X\sim\mathrm{Exp}(\lambda_1)$ and $Y\sim\mathrm{Exp}(\lambda_2)$ with $\lambda_1\neq\lambda_2$. Since the exponential distribution has density $f(t)=\lambda \exp(-\lambda t)\mathbb 1_{t\geq0}$, we have for $t\in\mathbb R^+$
$$
\begin{align*}
f_{X+Y}(t)
&=\int_{\mathbb R}f_X(t-y)f_Y(y)\,dy
=\lambda_1\lambda_2\int_{\mathbb R}e^{-\lambda_1(t-y)}e^{-\lambda_2y}\mathbb 1_{0\leq y\leq t}\,dy\\
&=\lambda_1\lambda_2e^{-\lambda_1 t}\int_{0}^te^{-(\lambda_2-\lambda_1)y}\,dy
=\frac{\lambda_1\lambda_2}{\lambda_2-\lambda_1}\Big(e^{-\lambda_1t}-e^{-\lambda_2t}\Big)\\
&\xrightarrow{\lambda_1\rightarrow\lambda_2}\lambda^2 t\exp(-\lambda t).
\end{align*}$$
Remember: the sum of two independent exponential random variables with the same parameter $\lambda$ is Gamma-distributed (this is the limit above). That is not the case here, since $\lambda_1\neq\lambda_2$.
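If you want to double-check the derived density by simulation, here is a small numpy sketch; the parameter values are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(1)
lam1, lam2, n = 1.0, 2.5, 10**6             # arbitrary parameters (my choice)

# numpy's exponential sampler takes the scale 1/lambda, not the rate.
s = rng.exponential(1/lam1, n) + rng.exponential(1/lam2, n)

def f_sum(t):                                # the derived density
    return lam1*lam2/(lam2 - lam1) * (np.exp(-lam1*t) - np.exp(-lam2*t))

hist, edges = np.histogram(s, bins=200, range=(0, 8), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
print("max abs deviation:", np.max(np.abs(hist - f_sum(mids))))  # ~1e-2
```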
For the second exercise: You need an expression for $f_{X|X+Y=a}(x)$. But
$$\begin{align*}
f_{X|X+Y}(x|a)
&=\frac{f_{X,X+Y}(x,a)}{f_{X+Y}(a)}
=\frac{f_{X+Y,X}(a,x)}{f_X(x)}\frac{f_X(x)}{f_{X+Y}(a)}\\
&=f_{X+Y|X}(a|x)\frac{f_X(x)}{f_{X+Y}(a)}
=f_{Y}(a-x)\frac{f_X(x)}{f_{X+Y}(a)}\\
&=\lambda_1\lambda_2\,\frac{e^{-\lambda_2(a-x)}\,e^{-\lambda_1 x}}{\frac{\lambda_1\lambda_2}{\lambda_2-\lambda_1}\Big(e^{-\lambda_1 a}-e^{-\lambda_2 a}\Big)}\\
&=(\lambda_2-\lambda_1)\frac{e^{-\lambda_2 a}}{e^{-\lambda_1 a}-e^{-\lambda_2 a}}e^{(\lambda_2-\lambda_1)x}
\xrightarrow{\lambda_1\rightarrow\lambda_2}\frac{1}{a}.
\end{align*}$$
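This conditional density can be sanity-checked by crude rejection sampling: keep only the samples with $X+Y\approx a$ and compare the histogram of $X$ against the formula. A sketch, again with arbitrary parameter values of mine:

```python
import numpy as np

rng = np.random.default_rng(2)
lam1, lam2, a, eps = 1.0, 2.5, 2.0, 0.01    # arbitrary parameters (my choice)

x = rng.exponential(1/lam1, 10**7)          # X ~ Exp(lam1)
y = rng.exponential(1/lam2, 10**7)          # Y ~ Exp(lam2)
xc = x[np.abs(x + y - a) < eps]             # crude conditioning on X+Y ≈ a

def f_cond(u):                              # the derived density on [0, a]
    c = (lam2 - lam1) * np.exp(-lam2*a) / (np.exp(-lam1*a) - np.exp(-lam2*a))
    return c * np.exp((lam2 - lam1) * u)

hist, edges = np.histogram(xc, bins=40, range=(0, a), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
print("max abs deviation:", np.max(np.abs(hist - f_cond(mids))))  # ~1e-2
```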
Taking the expected value is now easy (note that the conditional distribution is supported on the finite interval $[0,a]$). We thus get
$$
\begin{align*}
\mathbb E(X\,|\,X+Y=a)
&=\int_0^ax\,f_{X|X+Y}(x|a)\,dx\\
&=\frac{a\,e^{a\lambda_2}}{e^{a\lambda_2}-e^{a\lambda_1}}+\frac{1}{\lambda_1-\lambda_2}
\xrightarrow{\lambda_1\rightarrow\lambda_2}\frac{a}{2}
\end{align*}$$
Finally, the above is the general answer.
Sanity check: for $\lambda_1=\lambda_2$, the sum is Erlang-distributed, the conditional distribution is the uniform distribution on $[0,a]$, and the expectation is thus $a/2$.
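The same crude conditioning also checks the expectation formula numerically; again a sketch with arbitrary parameters of mine:

```python
import numpy as np

rng = np.random.default_rng(3)
lam1, lam2, a, eps = 1.0, 2.5, 2.0, 0.01   # arbitrary parameters (my choice)

x = rng.exponential(1/lam1, 10**7)
y = rng.exponential(1/lam2, 10**7)
mc = x[np.abs(x + y - a) < eps].mean()      # E(X | X+Y ≈ a), Monte Carlo

formula = a*np.exp(a*lam2)/(np.exp(a*lam2) - np.exp(a*lam1)) + 1/(lam1 - lam2)
print(f"Monte Carlo: {mc:.4f}, formula: {formula:.4f}")  # agree to ~1e-2
```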
NOTE: I leave the integrals to you, as well as the check that the densities above are actually densities. I used Wolfram Alpha for most of the integration and verification. After all, this problem is a lot more involved than I remembered...
Okay, on closer inspection, this isn't actually that bad. For the sake of convenience, let's assume instead that $Z\sim \mathcal{N}(0,I)$ when viewed as an $\mathbb{R}^2$-valued random variable. Denote its density by $f$.
Define $P:\mathbb{R}^2\setminus \{(x,0):x\geq 0\}\to (0,\infty)\times (0,2\pi)$ to be the standard polar coordinate transformation. Then, since $\{(x,0):x\geq 0\}$ is a $\mathcal{N}(0,I)$-null set, we can apply the change-of-variables theorem (the Jacobian of $P^{-1}$ has determinant $r$) to get that $(R,\Theta):=P(Z)$ has density $$rf(P^{-1}(r,\theta))=rf(r\cos(\theta),r\sin(\theta))=\frac{r}{2\pi}\exp\Big(-\frac{r^2}{2}\Big)=\frac{1}{2\pi}\cdot r\exp\Big(-\frac{r^2}{2}\Big),$$ which is clearly a factorisation of the density, implying that $R$ and $\Theta$ are independent and that $\Theta$ is uniformly distributed on $(0,2\pi)$. Note that $R$ and $\Theta$ clearly have moments of all orders.
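A quick empirical check of this factorisation (my own sketch; the correlation at the end is only a cheap necessary condition for independence, not a proof of it):

```python
import numpy as np

rng = np.random.default_rng(4)
z = rng.standard_normal((10**6, 2))                      # Z ~ N(0, I) on R^2
r = np.hypot(z[:, 0], z[:, 1])                           # R = |Z|
theta = np.mod(np.arctan2(z[:, 1], z[:, 0]), 2*np.pi)    # Theta in (0, 2*pi)

# Theta should be (approximately) uniform on (0, 2*pi):
hist, _ = np.histogram(theta, bins=16, range=(0, 2*np.pi), density=True)
print("max deviation from 1/(2*pi):", np.max(np.abs(hist - 1/(2*np.pi))))

# Near-zero correlation of R and Theta (only a necessary condition
# for independence, but a cheap one to check):
print("corr(R, Theta):", np.corrcoef(r, theta)[0, 1])
```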
Accordingly, writing $Z=Re^{i\Theta}$ and using the independence of $R$ and $\Theta$, we get $$ E Z^m \overline{Z}^k=E(R^{m+k} e^{i (m-k)\Theta})=E(R^{m+k})E(e^{i(m-k)\Theta}), $$ and $$ E(e^{i(m-k)\Theta})=\frac{1}{2\pi}\left(\int_0^{2\pi} \cos((m-k)\theta)\textrm{d}\theta+i\int_0^{2\pi}\sin((m-k)\theta)\textrm{d}\theta\right)=0, $$ since $m\neq k$. This yields the desired result.
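For what it's worth, a direct Monte Carlo check of the vanishing mixed moments (the $(m,k)$ pairs are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10**6
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # Z ~ N(0, I), complex view

for m, k in [(1, 0), (2, 1), (3, 1)]:       # arbitrary pairs with m != k
    moment = np.mean(z**m * np.conj(z)**k)  # estimates E Z^m conj(Z)^k
    print(f"m={m}, k={k}: {moment:.4f}")    # ≈ 0 up to Monte Carlo noise
```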