$\newcommand{\si}{\sigma}\newcommand{\ep}{\varepsilon}\newcommand\num{\operatorname{num}}\newcommand\den{\operatorname{den}}\newcommand{\R}{\mathbb R}
\newcommand{\vpi}{\varphi}$The conjecture is not true in general.
The limit depends on $\si$. In particular, let us show that the limit in question is not $0$ but $\infty$ if
\begin{equation*}
\si>2 + \sqrt3; \tag{-2}\label{-2}
\end{equation*}
(also see the heuristics at the end of this answer).
Indeed, let $P:=\mathbb P$, $P_\ep:=\mathbb P_\ep$, $E_\ep:=\mathbb E_{P_\ep}$,
\begin{equation*}
m:=\ln\frac1\ep\to\infty,\quad l:=m+\si^2/2,\quad\mu:=-\frac\si2, \quad r:=\frac m\si,
\end{equation*}
\begin{equation*}
M_t:=\max_{s\in[0,t]}(W_s+\mu s),
\end{equation*}
\begin{equation*}
B_t:=\{M_t\ge r\}.
\end{equation*}
Note that $(X_t)$ is a geometric Brownian motion, so that
\begin{equation*}
X_t=\exp(\si W_t-\si^2 t/2),
\end{equation*}
whence
\begin{equation*}
Y_t:=Y^\ep_t=X_t^{-C(\ep)}=e^{\si l t/2}e^{-l(W_t+\mu t)} \tag{-1}\label{-1}
\end{equation*}
and
\begin{equation*}
A_\ep=\{M_1\ge r\}\supseteq B_t;
\end{equation*}
here and in the sequel, $t\in(0,1)$.
It follows that
\begin{equation*}
E_\ep Y_t\ge\frac\num\den, \tag{0}\label{0}
\end{equation*}
where
\begin{equation*}
\num:=Ee^{-l(W_t+\mu t)}1_{B_t},\quad \den:=P(A_\ep).
\end{equation*}
Formula 1.4.8(1) on p. 256 of the *Handbook of Brownian Motion: Facts and Formulae* (2nd ed.) by Borodin and Salminen can be rewritten as
\begin{equation*}
\begin{gathered}
P(M_t<u,\,W_t+\mu t\in dz) \\
=\vpi\Big(\frac{z-\mu t}{\sqrt t}\Big)\frac{dz}{\sqrt t}
-e^{2\mu u}\vpi\Big(\frac{z-2u-\mu t}{\sqrt t}\Big)\frac{dz}{\sqrt t}
\end{gathered}
\tag{1}\label{1}
\end{equation*}
for $z<u$, where $\vpi$ is the standard normal pdf.
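As a numerical sanity check, \eqref{1} can be integrated in $z$ to give $P(M_t<u,\,W_t+\mu t\le z)=\Phi\big(\frac{z-\mu t}{\sqrt t}\big)-e^{2\mu u}\Phi\big(\frac{z-2u-\mu t}{\sqrt t}\big)$ for $z\le u$, where $\Phi$ is the standard normal cdf. A Monte Carlo sketch (arbitrary parameter values; the discrete-time running maximum carries a bias of order $\sqrt{dt}$):

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo check of the integrated form of (1):
# P(M_t < u, W_t + mu*t <= z) = Phi((z-mu*t)/sqrt(t)) - exp(2*mu*u)*Phi((z-2u-mu*t)/sqrt(t))
rng = np.random.default_rng(0)
mu, t, u, z = -0.5, 1.0, 1.0, 0.5      # arbitrary values with z <= u
n_paths, n_steps = 100_000, 1_000
dt = t / n_steps

w = np.zeros(n_paths)                  # current value of W_s + mu*s
running_max = np.zeros(n_paths)        # discrete approximation of M_s
for _ in range(n_steps):
    w += mu * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    np.maximum(running_max, w, out=running_max)

mc = np.mean((running_max < u) & (w <= z))
exact = norm.cdf((z - mu * t) / np.sqrt(t)) \
    - np.exp(2 * mu * u) * norm.cdf((z - 2 * u - mu * t) / np.sqrt(t))
print(mc, exact)   # should agree up to Monte Carlo / discretization error
```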
Using \eqref{1} (and noting that $M_t\ge W_t+\mu t$), one can find
\begin{equation*}
\begin{gathered}
\num=\int_\R P(M_t\ge r,\,W_t+\mu t\in dz)\,e^{-lz} \\
=
\frac{1}{2} \left(\text{erf}\left(\frac{\si t \left(2 m+\si ^2+\si \right)-2 m}{2 \sqrt{2} \si
\sqrt{t}}\right)+1\right) \\
\times \exp \left(\frac{1}{8} \left(\frac{4 m^2 (\si t-4)}{\si }+4 m (\si +1)
(\si t-2)+(\si +2) \si ^3 t\right)\right) \\
+\frac{1}{2} e^{\frac{1}{8} t \left(2 m+\si ^2\right)
(2 m+\si (\si +2))} \text{erfc}\left(\frac{\si t \left(2 m+\si ^2+\si \right)+2 m}{2
\sqrt{2} \si \sqrt{t}}\right)
\end{gathered}
\end{equation*}
and
\begin{equation*}
\begin{gathered}
\den=P(M_1\ge r) \\
=
1-\frac{1}{2} e^{-m}
\left(\text{erfc}\left(\frac{\frac{\si }{2}-\frac{m}{\si }}{\sqrt{2}}\right)-2\right)-\frac{1}{2}
\text{erfc}\left(-\frac{\frac{m}{\si }+\frac{\si }{2}}{\sqrt{2}}\right).
\end{gathered}
\end{equation*}
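As a sanity check, this closed form for $\den$ agrees with the textbook formula $P(\max_{s\le1}(W_s+\mu s)\ge r)=\Phi(\mu-r)+e^{2\mu r}\Phi(-r-\mu)$, here with $\mu=-\si/2$ and $r=m/\si$ (a sketch; the parameter values are arbitrary):

```python
import numpy as np
from scipy.special import erfc
from scipy.stats import norm

def den_formula(m, s):
    # the erfc expression for den = P(M_1 >= r) displayed above
    return (1 - 0.5 * np.exp(-m) * (erfc((s/2 - m/s) / np.sqrt(2)) - 2)
              - 0.5 * erfc(-(m/s + s/2) / np.sqrt(2)))

def den_textbook(m, s):
    # P(max_{s<=1}(W_s + mu*s) >= r) = Phi(mu - r) + exp(2*mu*r)*Phi(-r - mu),
    # with mu = -s/2 and r = m/s as in the answer
    mu, r = -s/2, m/s
    return norm.cdf(mu - r) + np.exp(2 * mu * r) * norm.cdf(-r - mu)

for m in (1.0, 5.0, 10.0):
    for s in (1.0, 3.0, 4.0):
        assert np.isclose(den_formula(m, s), den_textbook(m, s))
```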
If now $\si>2+\sqrt3$, then the interval $I_\si:=(\frac{4\si-1}{\si^2},\min(1,\frac4\si))$ is nonempty and contained in the interval $(0,1)$; indeed, $\frac{4\si-1}{\si^2}<\frac4\si$ always, while $\frac{4\si-1}{\si^2}<1$ is equivalent to $\si^2-4\si+1>0$, which holds when $\si>2+\sqrt3$. Moreover, for any $\si>2+\sqrt3$ and any $t\in I_\si$, we have $\frac\num\den\to\infty$ (as $m\to\infty$), and hence, by \eqref{0}, $E_\ep Y_t\to\infty$. Thus, by Fatou's lemma, the limit in question is $\infty$. $\quad\Box$
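The claim about $I_\si$ is easy to check numerically (a minimal sketch):

```python
import math

def I_sigma(s):
    # endpoints of I_sigma = ((4s-1)/s^2, min(1, 4/s))
    return (4 * s - 1) / s**2, min(1.0, 4 / s)

crit = 2 + math.sqrt(3)                  # ~ 3.732
for s in (3.0, 3.7):                     # below the threshold: empty
    lo, hi = I_sigma(s)
    assert lo >= hi
for s in (3.8, 5.0, 10.0):               # above: nonempty and inside (0,1)
    lo, hi = I_sigma(s)
    assert 0 < lo < hi <= 1
```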
Let me offer two competing heuristics to explain this result:
Heuristic I: The large-deviation effect: The large-deviation event $A_\ep=\{M_1\ge r\}=\{M_1\ge m/\si\}$ (with $m\to\infty$) implies that $W_t\approx mt/\si$. (In this case, this follows, for instance, from the independence of $W_1$ and the Brownian bridge $(W_t-tW_1)_{t\in[0,1]}$.) So, on the event $A_\ep$ we have $X_t\approx\exp((m-\si^2/2)t)$ and hence
\begin{equation*}
Y_t\approx\exp\Big(-\frac{m^2}\si\,(1+o(1))t\Big), \tag{2}\label{2}
\end{equation*}
so that we may expect $\int_0^1 Y_t\,dt$ to be somewhat small on the event $A_\ep$, on the order of $\si/m^2$. The smaller $\si$ is, the more pronounced this effect should be. I think we will indeed have $E_\ep\int_0^1 Y_t\,dt\to0$ if \eqref{-2} does not hold, but I have not checked all the details here.
Heuristic II: The counterbalancing effect of a re-weighting exponential factor: However, if $\si$ is large enough, then the large-deviation effect of Heuristic I may be overshadowed by the factor $e^{-lW_t}$ in the representation of $Y_t$ in \eqref{-1}. Indeed, this exponential factor can be very large for negative values of $W_t$ and negligible for positive values of $W_t$, since $l\sim m\to\infty$. So, even though negative values of $W_t$ are somewhat suppressed by the large-deviation condition $M_1\ge m/\si$, this suppression may be counterbalanced by the re-weighting exponential factor $e^{-lW_t}$, which greatly "favors" negative values of $W_t$. This counterbalancing effect will be more successful when the large-deviation effect is less strong, that is, when the spread/diffusion coefficient $\si$ of the Brownian motion is large enough. In this case,
the conditional expectation of $Y_t$ given $A_\ep$ may resemble much more
the unconditional expectation of $Y_t$, which is
\begin{equation*}
e^{l^2t/(2+o(1))}=e^{m^2t/(2+o(1))},
\end{equation*}
which is very, very large (as $m\to\infty$).
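Indeed, by \eqref{-1} and the Gaussian moment generating function $Ee^{-lW_t}=e^{l^2t/2}$ (recalling that $\mu=-\si/2$ and $l=m+\si^2/2$),
\begin{equation*}
EY_t=e^{\si lt/2}\,e^{-l\mu t}\,Ee^{-lW_t}
=e^{\si lt/2}\,e^{l\si t/2}\,e^{l^2t/2}
=e^{l\si t+l^2t/2}=e^{m^2t/(2+o(1))}.
\end{equation*}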
Heuristic II is absent in the previous setting, where we do not have a very influential re-weighting exponential factor such as the factor $e^{-lW_t}$ considered above.
Best Answer
Let $\tau = \inf\{ t>0 : W_t = 1 \}$. The conjecture is true and the essence of the proof outlined below appears to be the following peculiar property of the hitting time $\tau$: $$ \lim_{\epsilon \searrow 0} P_{\epsilon}[\tau > \epsilon-\epsilon^{3/2}] = 1 \;. $$ (This can be computed directly using the fact that the distribution of $\tau$ is inverse gamma with parameters $1/2$ and $1/2$.) In order to leverage this property, one must carefully split up $e_t:= X_t - Y_{t/\epsilon}$ as outlined below.
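The inverse-gamma fact can be checked against the reflection-principle formula $P[\tau\le s]=2\Phi(-1/\sqrt s)$ (a sketch using SciPy; the identification of inverse-gamma$(1/2,1/2)$ with the Lévy$(0,1)$ law is standard):

```python
import numpy as np
from scipy.stats import invgamma, levy, norm

# tau = first hitting time of level 1 by a standard Brownian motion.
# Reflection principle: P[tau <= s] = 2 * Phi(-1/sqrt(s)).
# Claim: tau ~ inverse-gamma(1/2, scale 1/2), i.e. the Levy(0,1) law.
s_grid = np.linspace(0.05, 5.0, 50)
ig = invgamma(0.5, scale=0.5)
assert np.allclose(ig.cdf(s_grid), 2 * norm.cdf(-1 / np.sqrt(s_grid)))
assert np.allclose(ig.pdf(s_grid), levy().pdf(s_grid))
```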
Proof. The proof shows that for any $T \in [0, \epsilon]$ $$ E_{\epsilon} [ \sup_{0 \le t \le T} |e_t|^2 ] \le C_1(\epsilon) + C_2 (\epsilon) \int_0^T E_{\epsilon} [ \sup_{0 \le r \le s} |e_r|^2 ] ds $$ where $C_1(\epsilon)$ and $C_2 (\epsilon)$ are non-negative, $\lim_{\epsilon \searrow 0} C_1(\epsilon) = 0$, and $C_2(\epsilon)\,\epsilon = O(1)$ as $\epsilon \searrow 0$. By Grönwall's inequality, $$ E_{\epsilon} [ \sup_{0 \le t \le \epsilon} |e_t|^2 ] \le C_1(\epsilon) \exp(C_2 (\epsilon) \epsilon) \;, $$ and then passing to the limit gives the required result. The remaining details follow.
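The Grönwall step can be illustrated on a discrete analogue: the extremal solution of $f(T)=C_1+C_2\int_0^T f(s)\,ds$, built with a left-endpoint Euler scheme (which underestimates the integral of an increasing integrand), never exceeds $C_1 e^{C_2 T}$. The constants below are arbitrary:

```python
import numpy as np

# Discrete analogue of the Gronwall step: the extremal solution of
# f(T) = C1 + C2 * int_0^T f(s) ds stays below C1 * exp(C2 * T).
C1, C2, T, n = 0.3, 5.0, 1.0, 10_000   # arbitrary constants
dt = T / n
f = np.empty(n + 1)
f[0] = C1
integral = 0.0
for k in range(n):
    integral += f[k] * dt              # left-endpoint sum: an underestimate
    f[k + 1] = C1 + C2 * integral
t = np.linspace(0.0, T, n + 1)
assert np.all(f <= C1 * np.exp(C2 * t) + 1e-9)
```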
By Itô's formula, \begin{align*} & |e_t|^2 = \mathrm{I} + \mathrm{II} + \mathrm{III} \quad \text{where} \\ & \mathrm{I}:= \frac{2}{\epsilon} \int_0^t e_s (\sigma(X_s) - \sigma(Y_{s/\epsilon})) ds \;, \\ & \mathrm{II}:= \frac{2}{\epsilon} \int_0^t e_s \sigma(X_s) (\epsilon dW_s - ds) \;, \\ & \mathrm{III}:= \int_0^t \sigma(X_s)^2 ds \;. \end{align*}
Estimate for $\mathrm{I}$.
This term exclusively contributes to $C_2(\epsilon)$. Since $\sigma$ is $L$-Lipschitz for some $L>0$ $$ I \le \frac{2 L}{\epsilon} \int_0^t |e_s|^2 ds $$ and thus $$ \sup_{0 \le t \le \epsilon} I \le \frac{2 L}{\epsilon} \int_0^{\epsilon} |e_s|^2 ds \le \frac{2 L}{\epsilon} \int_0^{\epsilon} \sup_{0 \le r \le s} |e_r|^2 ds \;. $$ Thus, $C_2(\epsilon) = 2 L / \epsilon$.
Estimate for $\mathrm{II}$.
This term contributes to $C_1(\epsilon)$, and here is where we leverage the aforementioned peculiar property of $\tau$.
\begin{align*} & \lim_{\epsilon \searrow 0} E_{\epsilon} [ \sup_{0 \le t \le \epsilon} \left| \mathrm{II} \right| ] = \lim_{\epsilon \searrow 0} E_{\epsilon} [ \sup_{0 \le t \le \epsilon} \left| \mathrm{II} \right| \mathbf{1}_{ \{ \tau > \epsilon - \epsilon^{3/2} \} } ] \\ & \quad = \lim_{\epsilon \searrow 0} E [ \sup_{0 \le t \le \epsilon} \left| 2 \int_0^t e_s \sigma(X_s) dW_s \right| ] = 0 \end{align*} Here we took three steps, which are explained in detail below.
In the first step, we used Cauchy-Schwarz to show that $$ \left( E_{\epsilon} \sup_{0 \le t \le \epsilon} |\mathrm{II}| \mathbf{1}_{\{\tau < \epsilon - \epsilon^{3/2} \} } \right)^2 \le \underbrace{E[\sup_{0 \le t \le \epsilon} |\mathrm{II}|^2 ]}_{\to O(1)} \, \underbrace{P_{\epsilon}[ \tau < \epsilon - \epsilon^{3/2}]}_{\to 0} $$
In the second step, we used a natural splitting and the triangle inequality to write, \begin{align*} & E_{\epsilon} \sup_{0 \le t \le \epsilon} |\mathrm{II}| \mathbf{1}_{\{\tau > \epsilon - \epsilon^{3/2} \} } \le \mathrm{II}_a + \mathrm{II}_b \quad \text{where} \\ & \mathrm{II}_a := E_{\epsilon} \sup_{0 \le t \le \epsilon} \frac{2}{\epsilon} \left| \int_0^{t \wedge \tau} e_s \sigma(X_s) (\epsilon dW_s - ds) \right| \mathbf{1}_{\{\tau > \epsilon - \epsilon^{3/2} \} }\\ & \mathrm{II}_b := E_{\epsilon} \sup_{0 \le t \le \epsilon} \frac{2}{\epsilon} \left| \int_{t \wedge \tau}^t e_s \sigma(X_s) (\epsilon dW_s - ds) \right| \mathbf{1}_{\{\tau > \epsilon - \epsilon^{3/2} \} } \;. \end{align*} To estimate these terms, there are two cases to consider:
In other words, conditioned on the event $(\tau < \epsilon)$, the law of $\epsilon W_s - s$ is equal to the law of a standard Brownian bridge.
In the third and last step, we used Doob's martingale inequality, Itô isometry, and (standard) a priori bounds on $X_t$ and $Y_{t/\epsilon}$ over $(0,\epsilon)$. Since the estimate of this term is almost identical to the estimate of $\mathrm{III}$ given below, the details are suppressed.
Estimate for $\mathrm{III}$.
This term also contributes to $C_1(\epsilon)$. Noting that $\sigma$ is $L$-Lipschitz, \begin{align*} E_{\epsilon} [ \sup_{0 \le t \le \epsilon} \mathrm{III} ] &= E_{\epsilon} \Big[ \int_0^{\epsilon} \sigma(X_s)^2 \, ds \Big] \\ &\le 2 \epsilon \sigma(0)^2 + 2 L^2 E_{\epsilon} \Big[ \int_0^{\epsilon} |X_s|^2 \, ds \Big] \\ &\le 2 \epsilon ( \sigma(0)^2 + L^2 |x_0|^2 ) + 2 L^2 \epsilon \, E_{\epsilon} \Big[ \int_0^{\epsilon} \sigma(X_s)^2 \, ds \Big] \\ & \quad + 2 L^2 \epsilon \, E_{\epsilon} \Big[ \int_0^{\epsilon} X_s \sigma(X_s) \, dW_s \Big] \end{align*} and as long as $2 L^2 \epsilon \le 1/2$, it follows that $$ E_{\epsilon} [ \sup_{0 \le t \le \epsilon} \mathrm{III} ] \le 4 \epsilon ( \sigma(0)^2 + L^2 |x_0|^2 ) + 4 L^2 \epsilon \, E_{\epsilon} \Big[ \int_0^{\epsilon} X_s \sigma(X_s) \, dW_s \Big] \;. $$ The last term in this expression can be treated in a similar way as the last step in the estimate for $\mathrm{II}$, namely Doob's martingale inequality, Cauchy-Schwarz, Itô isometry, and (standard) a priori bounds on $X_t$ over $(0,\epsilon)$. In particular, \begin{align*} \left( E_{\epsilon} [ \sup_{0 \le t \le \epsilon} \left| \int_0^t X_s \sigma(X_s) d W_s \right| ] \right)^2 &\le E \sup_{0 \le t \le \epsilon} \left| \int_0^t X_s \sigma(X_s) dW_s \right|^2\\ &\le 4 E \left| \int_0^{\epsilon} X_s \sigma(X_s) dW_s \right|^2 \\ &\le 4 E \int_0^{\epsilon} X_s^2 \sigma(X_s)^2 ds \\ &\le 4 \tilde{C}_2 (1+ x_0^4) e^{\tilde{C}_1 \epsilon} \epsilon \end{align*} where in turn we used Cauchy-Schwarz, Doob's martingale inequality with $p=2$, Itô's isometry, and then an a priori bound on the second/fourth moment of $X_t$ over $(0, \epsilon)$.
$\Box$
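The Doob/Itô-isometry step used for $\mathrm{II}$ and $\mathrm{III}$ can be illustrated in isolation: for the martingale $M_t=\int_0^t W_s\,dW_s$, Doob's $L^2$ inequality gives $E\sup_{t\le1}M_t^2\le 4\,EM_1^2$. A Monte Carlo sketch (a toy integrand, not the $X_s\sigma(X_s)$ from the proof):

```python
import numpy as np

# Doob's L^2 inequality for the martingale M_t = int_0^t W_s dW_s on [0,1]:
# E[sup_{t<=1} M_t^2] <= 4 * E[M_1^2].
rng = np.random.default_rng(1)
n_paths, n_steps = 10_000, 500
dt = 1.0 / n_steps
dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
W = np.cumsum(dW, axis=1)
# non-anticipating (left-endpoint) stochastic sums
integrand = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])
M = np.cumsum(integrand * dW, axis=1)
lhs = np.mean(np.max(M**2, axis=1))
rhs = 4 * np.mean(M[:, -1]**2)
print(lhs, rhs)   # lhs should fall below rhs
```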