An almost complete answer. First of all, the $\lVert \alpha\rVert_2$-based bound mentioned in the question can indeed be shown to be tight for many "simple" $\alpha$'s, such as the balanced case, or $\alpha$ uniform on a subset of $m$ coordinates. However, it is not tight in general, and the right answer is captured by the $K$-functional between $\ell_1$ and $\ell_2$.
Converting the problem to that of a weighted sum $Y\stackrel{\rm def}{=}\sum_{k=1}^n \alpha_k Y_k$ of Rademacher random variables $Y_1,\dots, Y_n$ (where $Y_k = 1-2X_k$), we have
$$
\mathbb{P}\{ X \leq \varepsilon \}
= \mathbb{P}\{ Y \geq 1-2\varepsilon \}.
$$
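(As a quick sanity check, this conversion can be verified by exhaustive enumeration on a toy instance; the weights below are an arbitrary choice with $\sum_k \alpha_k = 1$, matching the question's normalization.)

```python
from itertools import product
from fractions import Fraction

# Hypothetical small instance: X = sum_k alpha_k X_k with fair
# Bernoulli X_k and sum(alpha) = 1, as in the question's normalization.
alpha = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]
eps = Fraction(1, 4)

n = len(alpha)
# Enumerate all 2^n outcomes; each has probability 2^-n.
lhs = sum(1 for xs in product([0, 1], repeat=n)
          if sum(a * x for a, x in zip(alpha, xs)) <= eps)
# Y_k = 1 - 2 X_k is a Rademacher sign; Y = sum_k alpha_k Y_k = 1 - 2X.
rhs = sum(1 for xs in product([0, 1], repeat=n)
          if sum(a * (1 - 2 * x) for a, x in zip(alpha, xs)) >= 1 - 2 * eps)

print(lhs == rhs)  # the two events coincide outcome by outcome
```

Both counts agree outcome by outcome, since $Y = 1-2X$ pointwise whenever $\sum_k \alpha_k = 1$.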
In Montgomery-Smith [1], the following is shown:
Theorem. For $c\stackrel{\rm def}{=} \frac{1}{4\ln 12}\simeq \frac{1}{10}$, we have
$$\begin{align}
\mathbb{P}\{ Y > K_{1,2}(\alpha,t) \} &\leq e^{-\frac{t^2}{2}}, \tag{1} \\ \mathbb{P}\{ Y > cK_{1,2}(\alpha,t) \} &\geq ce^{-\frac{t^2}{c}} \tag{2}
\end{align}$$
where $K_{1,2} (\alpha,t) \stackrel{\rm def}{=} \inf\{\lVert \alpha'\rVert_1+t\lVert \alpha''\rVert_2 \ \colon\ \alpha' \in \ell_1,\ \alpha'' \in \ell_2,\ \alpha'+\alpha'' = \alpha\}$.
Moreover, the following bound holds for $K_{1,2}(\alpha,t)$ ([1], citing Theorem 4.2 of [2]): assuming without loss of generality that $\alpha_1 \geq \alpha_2 \geq \dots\geq \alpha_n \geq 0$,
$$
\gamma\left(\sum_{k=1}^{\lfloor t^2\rfloor} \alpha_k + t\left(\sum_{k=\lfloor t^2\rfloor+1}^n \alpha_k^2\right)^{\frac{1}{2}}\right) \leq K_{1,2}(\alpha,t) \leq \sum_{k=1}^{\lfloor t^2\rfloor} \alpha_k + t\left(\sum_{k=\lfloor t^2\rfloor+1}^n \alpha_k^2\right)^{\frac{1}{2}} \tag{3}
$$
for some absolute constant $\gamma \in (0,1)$.
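As a numerical illustration (not from [1] or [2], just a sketch), one can compare the explicit expression in (3) with a numerical upper bound on $K_{1,2}(\alpha,t)$. The soft-thresholding family below contains the true optimizer by a standard subgradient argument, but the check only relies on the fact that every decomposition upper-bounds the infimum; the factor $2$ in the last assertion is specific to this example, not the absolute constant $\gamma$.

```python
import math

def holmstedt(alpha, t):
    """The explicit expression in (3): sum of the floor(t^2) largest
    entries plus t times the l2 norm of the remaining ones."""
    a = sorted(map(abs, alpha), reverse=True)
    m = int(t * t)
    return sum(a[:m]) + t * math.sqrt(sum(x * x for x in a[m:]))

def k_upper(alpha, t, grid=5000):
    """Numerical upper bound on K_{1,2}(alpha, t): minimize
    ||a'||_1 + t ||a''||_2 over soft-thresholding splits a' = (|a|-lam)_+
    (which contain the optimum by a subgradient argument) and over the
    hard 'top-m' splits that realize the Holmstedt expression."""
    a = sorted(map(abs, alpha), reverse=True)
    best = min(sum(a[:m]) + t * math.sqrt(sum(x * x for x in a[m:]))
               for m in range(len(a) + 1))
    for i in range(grid + 1):
        lam = a[0] * i / grid
        l1 = sum(max(x - lam, 0.0) for x in a)
        l2 = math.sqrt(sum(min(x, lam) ** 2 for x in a))
        best = min(best, l1 + t * l2)
    return best

alpha = [1.0 / k for k in range(1, 21)]   # sample decreasing weights
for t in (1.0, 2.0, 3.5):
    expr, up = holmstedt(alpha, t), k_upper(alpha, t)
    assert up <= expr + 1e-9   # K_{1,2} <= Holmstedt expression
    assert expr <= 2.0 * up    # ...and within a small factor for this alpha
```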
This gives the following corollary, which can be found in [1]:
Corollary. With $c$ as above, we have that for all $\alpha\in\ell_2$ (automatic in our case, as $\alpha$ has finite support) and $0 < t \leq c\frac{\lVert \alpha\rVert_2^2}{\lVert \alpha\rVert_\infty}$,
$$\begin{align}
ce^{-\frac{t^2}{c^3\lVert \alpha\rVert_2^2}} \leq \mathbb{P}\{ Y >t \} \leq e^{-\frac{t^2}{2\lVert \alpha\rVert_2^2}}, \tag{4}
\end{align}$$
where the upper bound holds for all $t>0$.
Since the range of interest is $t\in(0,1]$, the above shows that the $e^{-\Theta\left(\frac{t^2}{\lVert \alpha\rVert_2^2}\right)}$ dependence is tight as long as $\lVert \alpha\rVert_2^2$ and $\lVert \alpha\rVert_\infty$ are within constant factors, which happens to be the case for $\alpha$ roughly balanced or uniform on a subset of coordinates, for instance.
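For instance, in the balanced case $\alpha_k = 1/n$ the tail of $Y$ is an exact binomial computation, and the upper bound in (4) (here just Hoeffding's inequality) can be checked directly:

```python
import math
from fractions import Fraction

# Balanced weights alpha_k = 1/n: Y = S/n where S is a sum of n fair signs.
n = 30
alpha2_sq = Fraction(1, n)          # ||alpha||_2^2 = 1/n

def tail(t):
    """Exact P(Y > t) = P(S > t*n), with S = 2k - n for k heads out of n."""
    return sum(Fraction(math.comb(n, k), 2 ** n)
               for k in range(n + 1) if 2 * k - n > t * n)

for t in (0.2, 0.4, 0.6, 0.8):
    # Upper bound in (4): P(Y > t) <= exp(-t^2 / (2 ||alpha||_2^2))
    assert float(tail(t)) <= math.exp(-t * t / (2 * float(alpha2_sq)))
```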
Now, for a counterexample, one can consider (a variant of) the "truncated random harmonic series," where we set, for $1\leq k\leq n$,
$$
\alpha_k \stackrel{\rm def}{=} \frac{1}{k H_n} \operatorname*{\sim}_{n\to \infty} \frac{1}{k\ln n}.
$$
Then, using techniques similar to those of [1], one can show that
$$
\mathbb{P}\{ Y > t \} \leq e^{-\Theta(n^t)}
$$
for $t\in(0,1)$. However, $\lVert \alpha\rVert_2^2 \operatorname*{\sim}_{n\to\infty} \frac{\pi^2}{6\ln^2 n}$, so (4) would only yield
$$
\mathbb{P}\{ Y > t \} \leq e^{-\Theta( t^2 \ln^2 n )}.
$$
Note that (sanity check) in this case, $\lVert \alpha\rVert_\infty = \frac{1}{H_n}$, so the lower bound of (4) does not apply: $\frac{\lVert \alpha\rVert_2^2}{\lVert \alpha\rVert_\infty} {\displaystyle\operatorname*{\sim}_{n\to\infty}} \frac{1}{\ln n} = o(1)$.
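Numerically, these asymptotics are easy to confirm (the values of $n$ below are arbitrary, and the tolerances are loose to absorb lower-order terms):

```python
import math

def harmonic_alpha(n):
    """The weights alpha_k = 1/(k H_n) of the truncated harmonic example."""
    H = sum(1.0 / k for k in range(1, n + 1))
    return [1.0 / (k * H) for k in range(1, n + 1)]

for n in (10 ** 3, 10 ** 5):
    a = harmonic_alpha(n)
    l2sq = sum(x * x for x in a)
    linf = max(a)                   # equals 1 / H_n
    # ||alpha||_2^2 ~ pi^2 / (6 ln^2 n): the ratio should approach 1
    assert abs(l2sq / (math.pi ** 2 / (6 * math.log(n) ** 2)) - 1) < 0.2
    # the corollary's condition fails: ||alpha||_2^2 / ||alpha||_inf = o(1)
    assert l2sq / linf < 2.0 / math.log(n)
```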
Edit: based on results from [3] (adapting and generalizing the proof of Theorem 2.2), one can get the following refinement of (2) above:
Theorem. Fix any $c>0$. Then, for any $\alpha\in\ell_1+\ell_2$, we have that for all $t\geq 1$,
$$
\mathbb{P}\left\{ Y \geq \frac{1}{1+c}K_{1,2}(\alpha,t) \right\} \geq e^{-\left( \frac{2}{c}\ln\frac{\sqrt{6}(1+c)}{c}\right) (t+c)^2 }.
$$
In particular,
$$
\mathbb{P}\left\{ Y \geq \frac{1}{2}K_{1,2}(\alpha,t) \right\} \geq e^{-\left( \ln 24 \right) (t+1)^2 } \geq e^{-\left( 4\ln 24 \right)t^2 }.
$$
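(Sanity check of the constants: taking $c=1$ in the theorem gives the exponent $2\ln(2\sqrt6) = \ln 24$, and the final inequality uses $(t+1)^2 \le 4t^2$ for $t\ge1$.)

```python
import math

# Exponent constant in the refined lower bound, as a function of c
const = lambda c: (2.0 / c) * math.log(math.sqrt(6) * (1 + c) / c)

# At c = 1 the constant is 2 ln(2 sqrt(6)) = ln 24
assert abs(const(1.0) - math.log(24)) < 1e-12

# For t >= 1, (t + 1)^2 <= 4 t^2, which gives the last displayed inequality
for t in (1.0, 1.5, 2.0, 10.0):
    assert (math.exp(-math.log(24) * (t + 1) ** 2)
            >= math.exp(-4 * math.log(24) * t ** 2))
```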
[1] Montgomery-Smith, S.J. (1990). The distribution of Rademacher sums. Proc. Amer. Math. Soc. 109(2):517–522.
[2] Holmstedt, T. (1970). Interpolation of quasi-normed spaces. Math. Scand. 26:177–199.
[3] Astashkin, S.V. (2010). Rademacher functions in symmetric spaces. Journal of Mathematical Sciences 169(6):725–886.
For each real $k>0$,
\begin{equation}E\psi_\infty(|X|/k)=\infty\cdot P(|X|>k)+P(|X|=k)
=\begin{cases}
\infty & \text{if } P(|X|>k)>0,\\
P(|X|=k)\le1 & \text{if } P(|X|>k)=0.
\end{cases}
\end{equation}
So, indeed, $\|X\|_{\psi_\infty}<\infty$ iff $X$ is essentially bounded. Moreover, $\|X\|_{\psi_\infty}=\operatorname{ess\,sup}|X|$.
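(To illustrate with a concrete discrete r.v.: below $X$ is uniform on $\{1,2,3\}$, so its essential supremum is $3$; the convention $\psi_\infty(1)=1$, with $\psi_\infty(u)=0$ for $u<1$ and $\infty$ for $u>1$, matches the displayed computation.)

```python
import math

INF = math.inf

def psi_inf(u):
    """Orlicz function used above: 0 for u < 1, 1 at u = 1, +infinity
    for u > 1 (the convention matching the displayed computation)."""
    if u < 1:
        return 0.0
    if u == 1:
        return 1.0
    return INF

# Discrete r.v. X uniform on {1, 2, 3}; ess sup |X| = 3
values, probs = [1.0, 2.0, 3.0], [1 / 3, 1 / 3, 1 / 3]

def expect(t):
    """E psi_inf(|X| / t) for this discrete X."""
    return sum(p * psi_inf(abs(x) / t) for x, p in zip(values, probs))

assert expect(3.0) <= 1     # E psi_inf(|X|/3) = P(|X| = 3) = 1/3
assert expect(2.9) == INF   # any t below the ess sup gives an infinite moment
# hence ||X||_{psi_inf} = inf{t > 0 : E psi_inf(|X|/t) <= 1} = 3 = ess sup |X|
```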
Generally, for any non-constant nondecreasing convex function $F\colon[0,\infty)\to[0,\infty]$ such that $F(0)\le1$, the formula
\begin{equation}\|X\|_F:=\inf\{t>0\colon EF(|X|/t)\le1\}
\end{equation}
defines a norm on the linear space, say $L_F$, of random variables (r.v.'s) $X$ on a probability space $\mathcal P$ with $\|X\|_F<\infty$. The proof of this is the same as the one in the case when $F$ is not allowed to take the value $\infty$. (If, in addition, it is assumed that $F(0+)<1$, then all bounded r.v.'s on $\mathcal P$ will be in $L_F$.)
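(For what it's worth, here is a sketch of how $\|X\|_F$ can be computed numerically for a discrete r.v.: since $t\mapsto EF(|X|/t)$ is nonincreasing, bisection applies. The choice $F(u)=u^2$, for which $\|X\|_F$ is exactly the $L_2$ norm, serves only as a sanity check and is not part of the argument.)

```python
def luxemburg_norm(F, values, probs, lo=1e-12, hi=1e6, iters=200):
    """Numerical Luxemburg norm ||X||_F = inf{t > 0 : E F(|X|/t) <= 1}
    for a discrete r.v., by bisection (E F(|X|/t) is nonincreasing in t)."""
    def ok(t):
        return sum(p * F(abs(x) / t) for x, p in zip(values, probs)) <= 1
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if ok(mid) else (mid, hi)
    return hi

# Sanity check: for F(u) = u^2 the condition E F(|X|/t) <= 1 reads
# E X^2 <= t^2, so ||X||_F is exactly the L2 norm sqrt(E X^2).
values, probs = [1.0, -2.0, 3.0], [0.5, 0.25, 0.25]
l2 = sum(p * x * x for x, p in zip(values, probs)) ** 0.5
assert abs(luxemburg_norm(lambda u: u * u, values, probs) - l2) < 1e-6
```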
Best Answer
In general, the answer is no. Moreover, the answer is no even if \begin{equation} \phi(t)=t\ln(1+t). \tag{1} \end{equation}
Indeed, suppose that $P(Z_i=0)=1-2p$ and $P(Z_i=b)=p=P(Z_i=-b)$ for all $i$, where \begin{equation*} p:=\frac1{2\phi(b)}, \end{equation*} $\phi$ is as given by (1), and $b$ is a large enough positive real number so that $p\in(0,1/2)$.
Then for all $i$ we have $EZ_i=0$ and $E\phi(|Z_i|)=1$, so that $\|Z_i\|_\phi\le1$. On the other hand, for all real $c>0$ and all natural $n\ge2$ \begin{equation*} \begin{aligned} &E\phi\Big(\Big|\frac1n\sum_{i=1}^n Z_i\Big|/c\Big) \\ &\ge\sum_{i=1}^n \phi\Big(\frac b{cn}\Big)P(|Z_i|=b,\ Z_j=0\ \forall j\ne i) \\ &=n \phi\Big(\frac b{cn}\Big)2p(1-2p)^{n-1} \\ &=\frac{2pb}c\,\ln\Big(1+\frac b{cn}\Big)(1-2p)^{n-1}\to\frac1{2c}>1 \end{aligned} \end{equation*} as $n\to\infty$, if $b=n^2$ and $c\in(0,1/2)$. So, for all large enough $n$ we have $E\phi\big(\big|\frac1n\sum_{i=1}^n Z_i\big|/c\big)>1$ and hence $\|\frac1n\sum_{i=1}^n Z_i\|_\phi\ge c$ and hence \begin{equation*} \Big\|\frac1n\sum_{i=1}^n Z_i\Big\|_\phi\not\to0 \end{equation*} as $n\to\infty$.
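(The displayed lower bound is easy to evaluate numerically; the choices $b=n^2$ and $c=0.4\in(0,1/2)$ below follow the argument above.)

```python
import math

def phi(t):
    return t * math.log(1 + t)

def lower_bound(n, c):
    """The quantity (2pb/c) * ln(1 + b/(cn)) * (1 - 2p)^(n-1) from the
    display, with b = n^2 and p = 1/(2 phi(b)); it tends to 1/(2c)."""
    b = float(n) ** 2
    p = 1.0 / (2.0 * phi(b))
    return (2 * p * b / c) * math.log(1 + b / (c * n)) * (1 - 2 * p) ** (n - 1)

c = 0.4
# For large n the bound exceeds 1, so E phi(|mean of Z_i| / c) > 1 and
# hence ||mean of Z_i||_phi >= c: no convergence to 0 in the phi-norm.
assert lower_bound(10 ** 6, c) > 1.0
assert abs(lower_bound(10 ** 8, c) - 1 / (2 * c)) < 0.15
```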
More generally, the answer will remain no if $\phi(t)=t \ell(t)$, where $\ell$ is any function such that $\ell(t)$ is slowly varying as $t\to\infty$. Yet more generally, the answer will remain no if $\phi(t)=t L(t)$, where $L$ is any function such that $\sup\limits_{K\in(0,\infty)}\limsup\limits_{t\to\infty}\dfrac{L(Kt)}{L(t)}<\infty$.