I don't know whether you are looking for a closed-form solution in terms of $a$ and $c$, but one approach that works for problems of this kind involving conditional expectation is as follows:
As you specified in your question, throughout the solution I assume that $X$ and $Y$ are independent and defined on the unit interval; however, the approach does not really depend on independence or on a specific range.
First, find the conditional density:
$$f_{X|Z>c}(x|z>c) = \frac{f_{X,Y}(x,y)}{P(Z>c)}=\frac{f_{X,Y}(x,y)}{P(aX+(1-a)Y>c)}.$$
Don't read too much into the notation I have used for $f_{X|Z>c}(x|z>c)$: it is just the conditional density given the event $Z>c$, and it is a function of $x$ and $y$, so for simplicity you can call it $g(x,y).$
Note that the numerator is just the joint density of $X$ and $Y$, $f_{X,Y}(x,y)$.
If $X$ and $Y$ are independent, then
$$f_{X|Z>c}(x|z>c) = \frac{f_{X}(x)f_Y(y)}{P(aX+(1-a)Y>c)}.$$
The denominator can be calculated as
$$P(aX+(1-a)Y>c) = \iint\limits_{D} f_X(x) f_Y(y)\,dx\,dy.$$
Calculating the above double integral is straightforward once you have identified the region $D$. It depends on the values of $a$ and $c$, but the bounds of integration are easy to find, because $D$ is the part of the unit square lying above the line $y= -\frac{a}{1-a}x + \frac{c}{1-a}$.
For example, if $a=\frac 12$ and $c=\frac 13$, then we can draw the region and calculate the integral like this:
$$P(aX+(1-a)Y>c) = \int\limits_{0}^{\frac 23} \int\limits_{\frac 23-x}^{1} f_X(x) f_Y(y) \,dy\, dx+ \int\limits_{\frac 23}^{1} \int\limits_{0}^{1} f_X(x) f_Y(y) \,dy\, dx.$$
After we find the conditional density, we have
$$\mathbb{E}[X | Z>c] = \iint\limits_{D} x f_{X|Z>c}(x|z>c)\,dx \,dy.$$
So combining everything gives,
$$\mathbb{E}[X | Z>c] = \iint\limits_{D} \frac{x\,f_X(x)\,f_Y(y)}{\iint\limits_{D} f_X(s) f_Y(t)\, ds\, dt}\,dx \,dy,$$
where $D = [0,1]\times[0,1]\, \bigcap \, \{(x,y): ax+(1-a)y>c\}.$
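If a closed form in $a$ and $c$ gets messy, both integrals can also be evaluated numerically. Here is a minimal sketch (assuming Python with SciPy, which is not part of the original answer), with uniform marginals as placeholders for $f_X$ and $f_Y$; the region $D$ enters only through the lower $y$-bound $\max\!\left(0, \frac{c-ax}{1-a}\right)$:

```python
import numpy as np
from scipy.integrate import dblquad

a, c = 0.5, 1.0 / 3.0          # illustrative values, as in the example above

# Placeholder marginals: Uniform(0,1). Swap in any densities on [0,1].
f_X = lambda x: 1.0
f_Y = lambda y: 1.0

# Lower y-bound of D = [0,1]^2 intersected with {a x + (1-a) y > c}
y_lo = lambda x: max(0.0, (c - a * x) / (1.0 - a))

# dblquad integrates func(y, x): x over [0, 1], y over [y_lo(x), 1]
denom, _ = dblquad(lambda y, x: f_X(x) * f_Y(y), 0, 1, y_lo, 1)
numer, _ = dblquad(lambda y, x: x * f_X(x) * f_Y(y), 0, 1, y_lo, 1)

print(denom)          # P(Z > c); should match the closed form derived below
print(numer / denom)  # E[X | Z > c]
```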
For a concrete example, assume that $X$ and $Y$ are independent and uniformly distributed over $[0,1]$ and $a=\frac 12$ and $c=\frac 13$.
Then,
$$P\left(\tfrac{1}{2}X+\tfrac{1}{2}Y>\tfrac{1}{3}\right) = \int\limits_{0}^{\frac 23} \int\limits_{\frac 23-x}^{1} 1 \,dy\, dx+ \int\limits_{\frac 23}^{1} \int\limits_{0}^{1} 1 \,dy\, dx = \frac 79.$$
So,
$$\mathbb{E}\left[X \,\middle|\, Z>\tfrac 13\right] = \int\limits_{0}^{\frac 23} \int\limits_{\frac 23-x}^{1} \frac{9x}{7} \,dy\, dx+ \int\limits_{\frac 23}^{1} \int\limits_{0}^{1} \frac{9x}{7} \,dy \,dx = \frac{73}{126}.$$
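As a quick sanity check on these numbers, a Monte Carlo estimate (a Python/NumPy sketch; the library choice is mine, not the answer's) should land near both values:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.uniform(size=(2, 1_000_000))   # X, Y iid Uniform(0,1)

hit = 0.5 * x + 0.5 * y > 1.0 / 3.0       # the event {Z > 1/3}
print(hit.mean())                          # should be near 7/9 ~ 0.7778
print(x[hit].mean())                       # should be near 73/126 ~ 0.5794
```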
More general cases: Consider random variables $X$ and $Y$ with joint density $f_{X,Y}(x,y)$ defined on a region $\Omega$, and suppose we want to calculate $\mathbb{E}[g(X,Y) \mid A]$ for some function $g$:
Case 1: $P(A) > 0$:
Then the conditional density is $\frac{f_{X,Y}(x,y)}{P(A)}$ and therefore,
$\mathbb{E}[g(X,Y) \mid A] = \iint\limits_{A\cap\Omega} g(x,y) \frac{f_{X,Y}(x,y)}{P(A)}\, dx\,dy =\frac{1}{P(A)}\,\iint\limits_{A\cap\Omega} g(x,y) f_{X,Y}(x,y)\, dx\,dy.$
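In simulation terms, Case 1 says: average $g$ over exactly those draws where $A$ occurs. A short sketch (Python/NumPy assumed; the event and $g(x,y)=xy$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.uniform(size=(2, 1_000_000))   # stand-in for sampling f_{X,Y}

A = 0.5 * x + 0.5 * y > 1.0 / 3.0         # any event with P(A) > 0
g = x * y                                  # any integrable g(X, Y)

print(g[A].mean())   # Monte Carlo estimate of E[g(X,Y) | A]
```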
Case 2: $P(A)=0$.
It depends on what $A$ is. For example, if $A=\{X=c\}$ then the conditional density is $\frac{f_{X,Y}(c,y)}{f_X(c)}$, where $f_X(x)$ is the marginal density of $X$, and therefore $\mathbb{E}[g(X,Y) \mid X=c] = \int\limits_{\{y:\,(c,y)\in\Omega\}} g(c,y) \frac{f_{X,Y}(c,y)}{f_X(c)}\, dy = \frac{1}{f_X(c)} \int\limits_{\{y:\,(c,y)\in\Omega\}} g(c,y) f_{X,Y}(c,y)\, dy. $
Similarly,
$\mathbb{E}[g(X,Y) \mid Y=c] =\frac{1}{f_Y(c)} \int\limits_{\{x:\,(x,c)\in\Omega\}} g(x,c) f_{X,Y}(x,c)\, dx.$
For other cases such as $\mathbb{E}[g(X,Y) | X+Y=c]$, we need to calculate the density of $X+Y$ first and then
$\mathbb{E}[g(X,Y) \mid X+Y=c] =\frac{1}{f_{X+Y}(c)} \int\limits_{\{x:\,(x,c-x)\in\Omega\}} g(x,c-x) f_{X,Y}(x,c-x)\, dx.$
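Since $P(A)=0$ in this case, a direct simulation has to condition on a thin window instead: keep the draws with $|X+Y-c|<\varepsilon$ and let $\varepsilon$ shrink. A hedged sketch (uniform $X,Y$ and $g(x,y)=x^2$ are my illustrative choices, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.uniform(size=(2, 5_000_000))

c, eps = 0.8, 0.005
near = np.abs(x + y - c) < eps    # epsilon-window proxy for {X + Y = c}

print((x[near] ** 2).mean())      # ~ E[X^2 | X+Y = c] = c**2 / 3 here
                                  # (cf. the closed form in the answer below)
```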
Intuition behind the conditional density:
When you condition on an event, say $aX+(1-a)Y>c$, your new region is $D = [0,1]\times[0,1]\, \bigcap \, \{(x,y): ax+(1-a)y>c\}$, and the conditional expectation $E[X \mid Z>c]$ is just the usual expectation $E[X^*]$ over the new region, where $X^*=X|_{Z>c}$ is the restriction of $X$ to that region.
So $E[X^*] = \int x f^*(x)\, dx$, where $f^*(x)$ is the density of $X^*$. This $f^*(x)$ is exactly the conditional density: it is what you get by restricting the original density to the new region and renormalizing so that it integrates to $1$ (here the original region is $[0,1] \times [0,1]$). To summarize, the conditional density is just the density of the new random variable restricted to the new region.
Let us work in a probability space $(\Omega, \Sigma, \mathbb{P})$ and suppose
$$X : (\Omega, \Sigma) \to (\Lambda, \mathcal{F})$$
$$Y : (\Omega, \Sigma) \to (\Gamma, \mathcal{G})$$
$$Z : (\Omega, \Sigma) \to (\Gamma, \mathcal{G})$$
are mutually independent random variables. Define
$$f(u, v) = \mathbf{1}_{F \times G}(u, v) = \mathbf{1}_{F}(u) \mathbf{1}_{G}(v)$$
with $F \in \mathcal{F}$ and $G \in \mathcal{G}$. Observe
$$\begin{aligned}
\mathbb{E}\left[ f(X, Y) f(X, Z) \mid X \right] &= \mathbb{E}\left[ \mathbf{1}_{F}(X) \mathbf{1}_G(Y) \mathbf{1}_{F}(X) \mathbf{1}_G(Z) \mid X \right] \\
&\overset{(1)}{=} \mathbf{1}_{F}(X) \mathbf{1}_{F}(X) \mathbb{E}\left[\mathbf{1}_G(Y) \mathbf{1}_G(Z) \mid X \right] \\
&\overset{(2)}{=} \mathbf{1}_{F}(X) \mathbf{1}_{F}(X) \mathbb{E}\left[\mathbf{1}_G(Y) \mathbf{1}_G(Z) \right] \\
&\overset{(3)}{=} \mathbf{1}_{F}(X) \mathbf{1}_{F}(X) \mathbb{E}\left[\mathbf{1}_G(Y) \right] \mathbb{E}\left[ \mathbf{1}_G(Z) \right]
\end{aligned}$$
The numbered equalities are justified at the end. Furthermore we have
$$\mathbb{E}\left[ f(X, Y) \mid X \right] = \mathbb{E}\left[ \mathbf{1}_{F}(X) \mathbf{1}_G(Y) \mid X \right] \overset{(1)}{=} \mathbf{1}_{F}(X) \mathbb{E}\left[ \mathbf{1}_G(Y) \mid X \right] \overset{(4)}{=} \mathbf{1}_{F}(X) \mathbb{E}\left[\mathbf{1}_G(Y) \right]$$
$$\mathbb{E}\left[ f(X, Z) \mid X \right] = \mathbb{E}\left[ \mathbf{1}_{F}(X) \mathbf{1}_G(Z) \mid X \right] = \mathbf{1}_{F}(X) \mathbb{E}\left[ \mathbf{1}_G(Z) \mid X \right] = \mathbf{1}_{F}(X) \mathbb{E}\left[\mathbf{1}_G(Z) \right]$$
so evidently
$$\mathbb{E}\left[ f(X, Y) f(X, Z) \mid X \right] = \mathbb{E}\left[ f(X, Y) \mid X \right] \mathbb{E}\left[ f(X, Z) \mid X \right]$$
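Every conditional expectation in the display above reduces to a function of $\mathbf{1}_F(X)$ alone, so the identity can be checked numerically by splitting samples on whether $X \in F$. A small sketch (Python/NumPy; the intervals chosen for $F$ and $G$ and the uniform laws are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
X, Y, Z = rng.uniform(size=(3, 1_000_000))   # mutually independent

in_F = X < 0.5                    # F = [0, 0.5)
g_Y = Y < 0.3                     # 1_G(Y) with G = [0, 0.3)
g_Z = Z < 0.3                     # 1_G(Z)

# On {X in F} both sides should equal E[1_G(Y)] E[1_G(Z)] = 0.09;
# on the complement both sides vanish identically.
m = in_F
print((g_Y & g_Z)[m].mean())            # E[f(X,Y) f(X,Z) | X in F]
print(g_Y[m].mean() * g_Z[m].mean())    # E[f(X,Y)|X in F] E[f(X,Z)|X in F]
```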
Proof of $(1)$. It suffices to show $\mathbf{1}_{F}(X)$ is $\sigma(X)$-measurable (being bounded, it can then be pulled out of the conditional expectation). For any Borel set $A \in \mathcal{B}_{\mathbb{R}}$
$$\mathbf{1}_{F}(X)^{-1}(A) = \left( X^{-1} \circ \mathbf{1}_F^{-1} \right)(A) = X^{-1}\left( \left\{ u \in \Lambda : \mathbf{1}_{F}(u) \in A \right\} \right)$$
We can consider four cases:
$$\{0, 1\} \cap A = \varnothing \implies \left\{ u \in \Lambda : \mathbf{1}_{F}(u) \in A \right\} = \varnothing \in \mathcal{F}$$
$$\{0, 1\} \cap A = \{0\} \implies \left\{ u \in \Lambda : \mathbf{1}_{F}(u) \in A \right\} = F^\complement \in \mathcal{F}$$
$$\{0, 1\} \cap A = \{1\} \implies \left\{ u \in \Lambda : \mathbf{1}_{F}(u) \in A \right\} = F \in \mathcal{F}$$
$$\{0, 1\} \cap A = \{0, 1\} \implies \left\{ u \in \Lambda : \mathbf{1}_{F}(u) \in A \right\} = \Lambda \in \mathcal{F}$$
and we conclude $\mathbf{1}_F(X)$ is $\sigma(X)$-measurable.
Proof of $(2)$. It suffices to show $\mathbf{1}_G(Y) \mathbf{1}_G(Z) \perp\!\!\!\perp X$. In particular we must prove
$$\mathbb{P}\left( A \cap B \right) = \mathbb{P}\left( A \right) \mathbb{P}\left( B \right)$$
for all $A \in \sigma\left( \mathbf{1}_G(Y) \mathbf{1}_G(Z) \right)$ and $B \in \sigma(X)$. We can identify the elements of $\sigma\left( \mathbf{1}_G(Y) \mathbf{1}_G(Z) \right)$: take any Borel set $M \in \mathcal{B}_{\mathbb{R}}$ and consider four cases
$$\{0, 1\} \cap M = \varnothing \implies \Big[ \mathbf{1}_G(Y) \mathbf{1}_G(Z) \Big]^{-1}(M) = \varnothing$$
$$\{0, 1\} \cap M = \{0\} \implies \Big[ \mathbf{1}_G(Y) \mathbf{1}_G(Z) \Big]^{-1}(M) = Y^{-1}(G^{\complement}) \cup Z^{-1}(G^\complement)$$
$$\{0, 1\} \cap M = \{1\} \implies \Big[ \mathbf{1}_G(Y) \mathbf{1}_G(Z) \Big]^{-1}(M) = Y^{-1}(G) \cap Z^{-1}(G)$$
$$\{0, 1\} \cap M = \{0, 1\} \implies \Big[ \mathbf{1}_G(Y) \mathbf{1}_G(Z) \Big]^{-1}(M) = \Omega$$
With $A \in \{\varnothing, \Omega\}$ the result is trivial so we are left with two cases to verify. By mutual independence
$$\mathbb{P}\left( Y^{-1}(G) \cap Z^{-1}(G) \cap B \right) = \mathbb{P}\left( Y^{-1}(G) \right) \mathbb{P}\left( Z^{-1}(G) \right) \mathbb{P}\left( B \right) = \mathbb{P}\left( Y^{-1}(G) \cap Z^{-1}(G) \right) \mathbb{P}\left( B \right)$$
To finish observe
$$Y^{-1}(G^{\complement}) \cup Z^{-1}(G^\complement) = \Big[ Y^{-1}(G) \cap Z^{-1}(G) \Big]^\complement$$
and recall that independence of two events is equivalent to independence of their complements:
$$\mathbb{P}( A^\complement \cap B^\complement ) = \mathbb{P}( A^\complement ) \mathbb{P}( B^\complement ) \implies \mathbb{P}\left( A \cap B\right) = \mathbb{P}\left( A \right) \mathbb{P}\left( B \right)$$
Proof of $(3)$. It suffices to show $\mathbf{1}_G(Y) \perp\!\!\!\perp \mathbf{1}_G(Z)$. By $(1)$ we know $\mathbf{1}_G(Y)$ is $\sigma(Y)$-measurable and $\mathbf{1}_G(Z)$ is $\sigma(Z)$-measurable. In particular this means
$$\sigma\left( \mathbf{1}_G(Y) \right) \subseteq \sigma(Y)$$
$$\sigma\left( \mathbf{1}_G(Z) \right) \subseteq \sigma(Z)$$
and independence follows immediately.
Proof of $(4)$. It suffices to show $\mathbf{1}_G(Y) \perp\!\!\!\perp X$. By $(1)$ we know $\mathbf{1}_G(Y)$ is $\sigma(Y)$-measurable. In particular this means
$$\sigma\left( \mathbf{1}_G(Y) \right) \subseteq \sigma(Y)$$
and independence follows immediately.
Best Answer
Find the joint density of $U=X$ and $V=X+Y$ via the transformation rule. Then we have
$$ f_{U,V}(u,v) = 1_{\{0<u<1,\; 0<v-u<1\}}. $$
We can compute the conditional pdf $f_{U|V}(u|v)$ as follows:
$$ f_{U|V}(u|v)=\frac{f_{U,V}(u,v)}{f_V(v)} = \begin{cases}\frac{1}{v}\,1_{\{0<u<1,\; 0<v-u<1\}} & \text{for } v\in (0,1),\\ \frac{1}{2-v}\,1_{\{0<u<1,\; 0<v-u<1\}} & \text{for } v\in (1,2). \end{cases}$$
What is left is to actually calculate $E[U^2 \mid V=v]$. For $v\in (0,1)$,
$$ E[U^2 \mid V=v] = \int u^2 f_{U|V}(u|v)\,du = \frac{1}{v}\int_{\{0<u<1,\; v-1<u<v\}} u^2\, du = \frac{1}{v}\int_0^v u^2\,du = \frac{v^2}{3}, $$
and for $v\in (1,2)$,
$$ E[U^2 \mid V=v] = \frac{1}{2-v}\int_{\{0<u<1,\; v-1<u<v\}} u^2\, du = \frac{1}{2-v}\int_{v-1}^1 u^2\, du = \frac{1}{2-v}\,\frac{u^3}{3}\Big|^{u=1}_{u=v-1} = \frac{v^2-v+1}{3}. $$
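A quick numerical cross-check of both branches, using the same $\varepsilon$-window idea sketched earlier (Python/NumPy assumed; the window width and test points are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
u, w = rng.uniform(size=(2, 5_000_000))
v_all = u + w                          # samples of V = X + Y

for v in (0.5, 1.5):                   # one point in each branch
    near = np.abs(v_all - v) < 0.005
    mc = (u[near] ** 2).mean()         # Monte Carlo E[U^2 | V ~ v]
    exact = v**2 / 3 if v < 1 else (v**2 - v + 1) / 3
    print(v, mc, exact)                # the two estimates should agree
```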