Note that
$$
\mathbb{E}[Y] \overset{\star}= \mathbb{E}[0.9 X]
$$
does not in general imply $Y = 0.9X$. And what about the deductible? As you already noted, we actually have the following relationship between $X$ and $Y$:
\begin{align*}
Y = \boldsymbol{1}_{\{X\geq d\}} (X-d).
\end{align*}
The idea is to use $\star$ to determine $d$, and then calculate $\text{Var}[Y]$ more or less directly.
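Purely as an illustration (a quick sketch; NumPy is assumed, and the mean `lam` and deductible `d` below are arbitrary choices, not values from the problem), the indicator form above is the same thing as $\max(X-d,0)$:

```python
# Illustration only: Y = 1_{X >= d} * (X - d) coincides with max(X - d, 0).
# The mean "lam" and the deductible "d" are arbitrary choices for this demo.
import numpy as np

rng = np.random.default_rng(0)
lam, d = 2.0, 0.5
x = rng.exponential(scale=lam, size=10)

y_indicator = np.where(x >= d, x - d, 0.0)   # 1_{X >= d} * (X - d)
y_max = np.maximum(x - d, 0.0)               # max(X - d, 0)
print(np.allclose(y_indicator, y_max))       # True
```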
Proof
Note that $X \sim \text{exp}(\lambda)$, where $\lambda$ denotes the mean, so in particular $\mathbb{E}[X] = \lambda$. To simplify matters, let us first note that, conditionally on $\{X \geq d\}$, the random variable $Y$ again follows an $\text{exp}(\lambda)$-distribution. This may be seen from the calculation
\begin{align*}
\mathbb{P}(Y \leq y \, | \, X \geq d)
&=
\mathbb{P}(X - d \leq y \, | \, X \geq d) \\
&=
\frac{\mathbb{P}(d \leq X \leq y + d)}{\mathbb{P}(d \leq X)} \\
&=
\frac{e^{-\frac{d}{\lambda}}-e^{-\frac{y+d}{\lambda}}}{e^{-\frac{d}{\lambda}}} \\
&= 1 - e^{-\frac{y}{\lambda}}
\end{align*}
for $y > 0$. This is in fact a direct consequence of the so-called memoryless property of the exponential distribution.
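If you want to see the memoryless property numerically, here is a small Monte Carlo sketch (assuming NumPy; the mean `lam` and deductible `d` are arbitrary choices): the conditional sample of $X - d$ given $X \geq d$ behaves like a fresh $\text{exp}(\lambda)$ sample.

```python
# Monte Carlo sanity check of memorylessness (not part of the proof):
# (X - d | X >= d) should be distributed like X itself.
import numpy as np

rng = np.random.default_rng(0)
lam, d = 2.0, 0.5                       # arbitrary mean and deductible
x = rng.exponential(scale=lam, size=1_000_000)

excess = x[x >= d] - d                  # samples of (X - d | X >= d)
print(excess.mean(), lam)               # both close to 2.0
print(np.quantile(excess, 0.9), -lam * np.log(0.1))  # 90% quantiles agree
```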
For $k\in\mathbb{N}$, we obtain from the law of total expectation that
\begin{align*}
\mathbb{E}[Y^k]
&=
\mathbb{E}[Y^k \, | \, X \geq d]\cdot\mathbb{P}(X \geq d) + \mathbb{E}[Y^k \, | \, X < d]\cdot\mathbb{P}(X < d) \\
&=
\mathbb{E}[Y^k \, | \, X \geq d]\cdot\mathbb{P}(X \geq d) \\
&=
k! \lambda^k e^{-\frac{d}{\lambda}},
\end{align*}
since $Y = 0$ on the event $\{X < d\}$ and the $k$-th moment of an exponential distribution with mean $\lambda$ is $k!\lambda^k$.
In particular,
\begin{align*}
\mathbb{E}[Y]
&=
\lambda e^{-\frac{d}{\lambda}}.
\end{align*}
Thus imposing $\star$, the deductible $d$ is given as the solution to
\begin{align*}
\lambda e^{-\frac{d}{\lambda}} = 0.9\mathbb{E}[X] = 0.9\lambda,
\end{align*}
which yields $d = -\lambda\log(0.9) \approx 0.105 \lambda$. (You may think about why $d>0.1 \lambda$ is reasonable.)
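As a quick numerical check (a sketch, with $\lambda$ chosen arbitrarily and NumPy assumed), this choice of $d$ does reproduce $\mathbb{E}[Y] = 0.9\,\mathbb{E}[X]$:

```python
# With d = -lam * log(0.9), the mean payout E[max(X - d, 0)] should be 0.9 * lam.
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0                               # arbitrary mean of X
d = -lam * np.log(0.9)                  # the deductible derived above (~0.105 * lam)
x = rng.exponential(scale=lam, size=2_000_000)
y = np.maximum(x - d, 0.0)
print(y.mean(), 0.9 * lam)              # both close to 1.8
```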
Using this value of $d$, which exactly yields $e^{-\frac{d}{\lambda}} = 0.9$, we find that
\begin{align*}
\text{Var}[Y]
&=
\mathbb{E}[Y^2] - \mathbb{E}[Y]^2 \\
&=
2\lambda^2 \cdot 0.9 - (0.9 \lambda)^2 \\
&=
0.99 \lambda^2 \\
&=
0.99 \text{Var}[X],
\end{align*}
as desired.
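The same simulation idea (again just a sketch with an arbitrary $\lambda$) confirms the factor $0.99$:

```python
# Empirical check: Var[Y] should be close to 0.99 * Var[X] = 0.99 * lam**2.
import numpy as np

rng = np.random.default_rng(2)
lam = 2.0
d = -lam * np.log(0.9)
x = rng.exponential(scale=lam, size=2_000_000)
y = np.maximum(x - d, 0.0)
print(x.var(), lam**2)                  # Var[X]
print(y.var(), 0.99 * lam**2)           # both close to 3.96
```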
You wrote
Because $\mathbb E[Y] = (X-d) \Pr[X > d] = 0.9 \mathbb E[X]$
which is not correct. You should write
$$\mathbb E[Y] = \mathbb E[X-d \mid X > d] \Pr[X > d] = \mathbb E[X]\Pr[X > d] = 0.9 \mathbb E[X];$$ that is to say, you omitted the expectation operator, and the expectation on the right-hand side is conditional on $X > d$. Then, since $X$ is memoryless, $(X - d \mid X > d) \sim X$; this is what allows us to claim $\mathbb E[X - d \mid X > d] = \mathbb E[X]$, and ultimately $\Pr[X > d] = 0.9$. None of the earlier work is necessary. If you wish to perform the computation explicitly, then
$$\begin{align}
\mathbb E[Y] &= \int_{x=0}^\infty \max(x - d, 0) f_X(x) \, dx \\
&= \int_{x=d}^\infty (x-d) \frac{1}{\theta} e^{-x/\theta} \, dx \\
&= \int_{y=0}^\infty y \frac{1}{\theta} e^{-(y+d)/\theta} \, dy \tag{$x = y + d$} \\
&= e^{-d/\theta} \int_{y=0}^\infty \frac{y}{\theta} e^{-y/\theta} \, dy \\
&= \theta e^{-d/\theta} \\
&= \mathbb E[X] \Pr[X > d].
\end{align}$$
The purpose of memorylessness is to avoid this computation, but either way, it is not difficult.
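If you do want to check the integral itself, a numerical quadrature sketch (assuming SciPy; $\theta$ and $d$ are arbitrary choices) agrees with the closed form $\theta e^{-d/\theta}$:

```python
# Numerical check of E[Y] = integral over [d, inf) of (x - d) * f_X(x) dx.
import numpy as np
from scipy.integrate import quad

theta, d = 2.0, 0.5                     # arbitrary mean and deductible

def integrand(x):
    # (x - d) * f_X(x), with f_X the density of an exponential with mean theta
    return (x - d) * np.exp(-x / theta) / theta

value, _ = quad(integrand, d, np.inf)
print(value, theta * np.exp(-d / theta))   # both approximately 1.5576
```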
To calculate the variance of $Y$, we first compute the second moment in the same way as we did the first:
$$\mathbb E[Y^2] = \mathbb E[(X-d)^2 \mid X > d]\Pr[X > d] = \mathbb E[X^2] \Pr[X > d].$$ Again, we use the fact that $X$ is memoryless, hence $\left((X - d)^2 \mid X > d\right) \sim X^2$. So $$\mathbb E[Y^2] = 2\theta^2 \Pr[X > d] = 1.8 \theta^2,$$ and
$$\operatorname{Var}[Y] = 1.8 \theta^2 - (0.9)^2 \theta^2 = 0.99 \theta^2.$$
It is easy to see in the general case that
$$\mathbb E[Y^k] = \mathbb E[(X - d)^k \mid X > d]\Pr[X > d] + \mathbb E[0 \mid X \le d]\Pr[X \le d] = \mathbb E[X^k] \Pr[X > d].$$ This is just a consequence of the memorylessness property.
Then the moments are simply $$\mathbb E[X^k] = \int_{x=0}^\infty x^k \frac{1}{\theta} e^{-x/\theta} \, dx = \theta^{k-1} \int_{x=0}^\infty (x/\theta)^k e^{-x/\theta} \, dx = \theta^k \int_{z=0}^\infty z^k e^{-z} \, dz = \theta^k k!.$$ Alternatively, we can reason that
$$M_X(t) = \mathbb E[e^{tX}] = \int_{x=0}^\infty \frac{1}{\theta} e^{tx} e^{-x/\theta} \, dx = \frac{1}{\theta(1/\theta - t)} \int_{x=0}^\infty (1/\theta - t) e^{-(1/\theta - t)x} \, dx = \frac{1}{1 - \theta t},$$ for $t < 1/\theta$. But by series expansion and linearity of expectation, $$\mathbb E[e^{tX}] = \sum_{k=0}^\infty \mathbb E \left[\frac{(tX)^k}{k!}\right] = \sum_{k=0}^\infty \frac{\mathbb E[X^k]}{k!} t^k,$$ hence
$$\frac{1}{1 -\theta t} = \sum_{k=0}^\infty (\theta t)^k = \sum_{k=0}^\infty \frac{\mathbb E[X^k]}{k!} t^k,$$ and by comparing coefficients, we obtain
$$\mathbb E[X^k] = \theta^k k!.$$
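If you prefer to let a computer algebra system do the bookkeeping, here is a small symbolic sketch (assuming SymPy) of the same moment formula:

```python
# Symbolic check: the integral of x**k * f_X(x) over [0, inf) equals theta**k * k!.
import sympy as sp

x, theta = sp.symbols('x theta', positive=True)
f = sp.exp(-x / theta) / theta          # density of X with mean theta

for k in range(5):
    moment = sp.integrate(x**k * f, (x, 0, sp.oo))
    print(k, sp.simplify(moment - theta**k * sp.factorial(k)))  # 0 for every k
```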
Best Answer
The problem is that radiation and chemotherapy are not mutually exclusive: by independence, there is a $0.9\cdot 0.4=0.36$ probability of receiving both radiation and chemotherapy. So you have:
$0.36=0.9\cdot 0.4$ probability for both radiation and chemotherapy (cost $2+3=5$)
$0.04=0.40-0.36$ probability for chemotherapy only (cost $3$)
$0.54=0.9-0.36$ probability for radiation only (cost $2$)
$0.06=1-0.36-0.04-0.54$ probability for neither radiation nor chemotherapy (cost $0$)
So for one patient, the variance of the cost $C$ is $\mathbb{E}[C^2]-\mathbb{E}[C]^2$:
$5^2\cdot 0.36+3^2\cdot 0.04 + 2^2\cdot 0.54 - (5\cdot 0.36+3\cdot 0.04 + 2\cdot 0.54)^2 = 2.52$
Since the $15$ patients are independent, the variance for all patients is $15\cdot 2.52 = 37.8$.
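A direct check of the arithmetic (just a sketch; the probabilities and costs are the ones listed above):

```python
# Variance of the cost for one patient, then for 15 independent patients.
costs = [5, 3, 2, 0]                    # both, chemo only, radiation only, neither
probs = [0.36, 0.04, 0.54, 0.06]

mean = sum(c * p for c, p in zip(costs, probs))              # 3.0
second_moment = sum(c**2 * p for c, p in zip(costs, probs))  # 11.52
var_one = second_moment - mean**2
print(var_one, 15 * var_one)            # about 2.52 and 37.8
```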