This is not true. Indeed, suppose that $X_k=X_{s;k}=k+sZ_k$, where $s\downarrow0$ and the $Z_k$'s are any iid random variables (r.v.'s).
To obtain a contradiction, suppose that, for the random Borel measure $\mu_s$ over $\mathbb R$ defined by $\mu_s(B):=\sum_{k\in\mathbb Z}1(X_{s;k}\in B)$, the r.v. $\mu_s(B)$ has the Poisson distribution with parameter $\lambda(s)|B|$ for some $\lambda(s)>0$ and all Borel sets $B$, where $|B|$ denotes the Lebesgue measure of $B$.
Note that
\begin{equation}
\mu_s((-1/2,1/2))\to1 \tag{1}
\end{equation}
in probability (see details on (1) below); in fact, the proof below shows that the convergence in (1) holds in $L^1$ as well. Therefore and because the r.v. $\mu_s((-1/2,1/2))$ has the Poisson distribution with parameter $\lambda(s)$,
necessarily $\lambda(s)=E\mu_s((-1/2,1/2))\to1$. Hence $\mu_s((-1/2,3/2))$, being Poisson with parameter $2\lambda(s)\to2$, converges in distribution to a Poisson r.v. with parameter $2$ and thus cannot converge in probability to the constant $2$. However, similarly to (1) we have $\mu_s((-1/2,3/2))\to2$ in probability, a contradiction.
So, the random measure $\mu_s$ cannot be Poisson for all $s>0$.
Proof of (1): Note that $\mu_s(B)=\sum_{k\in\mathbb Z}1(Z_k\in\frac{B-k}s)$ and hence
\begin{equation}
1-\mu_s((-1/2,1/2))=s_1-s_2,
\end{equation}
where
\begin{equation}
s_1:=1-1\Big(Z_0\in\Big(\frac{-1/2}s,\frac{1/2}s\Big)\Big),
\end{equation}
and
\begin{equation}
s_2:=\sum_{k\in\mathbb Z\setminus\{0\}}1\Big(Z_k\in\Big(\frac{-1/2-k}s,\frac{1/2-k}s\Big)\Big).
\end{equation}
Next,
\begin{equation}
Es_1=1-P\Big(Z_0\in\Big(\frac{-1/2}s,\frac{1/2}s\Big)\Big)\to0
\end{equation}
and
\begin{equation}
Es_2=\sum_{k\in\mathbb Z\setminus\{0\}}P\Big(Z_0\in\Big(\frac{-1/2-k}s,\frac{1/2-k}s\Big)\Big)\le Es_1.
\end{equation}
Therefore and because $s_1,s_2\ge0$, we have
\begin{equation}
E|\mu_s((-1/2,1/2))-1|\le Es_1+Es_2\to0.
\end{equation}
So, by Markov's inequality, (1) follows.
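As a quick numerical illustration (not part of the argument), here is a minimal simulation sketch; the standard normal choice for the $Z_k$'s, the lattice truncation $|k|\le50$, and all numeric parameters are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mu_s_counts(s, a, b, K=50, n_rep=20_000):
    # Empirical samples of mu_s((a, b)) = #{k : k + s*Z_k in (a, b)},
    # with the lattice truncated to |k| <= K (for s <= 1 and Gaussian Z_k,
    # points with larger |k| land in (a, b) with negligible probability).
    k = np.arange(-K, K + 1)
    Z = rng.standard_normal((n_rep, k.size))  # assumption: Z_k iid N(0, 1)
    X = k + s * Z
    return ((X > a) & (X < b)).sum(axis=1)

for s in [1.0, 0.1, 0.01]:
    c1 = mu_s_counts(s, -0.5, 0.5)
    c2 = mu_s_counts(s, -0.5, 1.5)
    # Both probabilities should approach 1 as s decreases to 0.
    print(f"s={s}: P(mu_s((-1/2,1/2))=1) ~ {(c1 == 1).mean():.3f}, "
          f"P(mu_s((-1/2,3/2))=2) ~ {(c2 == 2).mean():.3f}")
```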
$\newcommand{\R}{\mathbb R}\newcommand{\Z}{\mathbb Z}\newcommand{\ep}{\varepsilon}\newcommand{\de}{\delta}$Let $\psi_j:=0$ for $j=-1,-2,\dots$. Then
\begin{equation*}
X_t=\sum_{j\in\Z}X_{t,j}
\end{equation*}
for $t\in\Z$, where
\begin{equation*}
X_{t,j}:=\psi_{t-j}\ep_j.
\end{equation*}
Let
\begin{equation*}
B:=\sqrt{\sum_{j\in\Z} \psi_j^2},\quad m:=\max_{j\ge0}|\psi_j|=\max_{j\in\Z}|\psi_j|.
\end{equation*}
Suppose that $B>0$, and let $B$ and $m$ vary in any manner such that
\begin{equation*}
m/B\to0. \tag{2}\label{2}
\end{equation*}
Let us show that then $X_t/B$ converges in distribution to a standard normal random variable, for each $t\in\Z$; recall that the $\ep_j$'s are iid with $E\ep_0=0$ and $E\ep_0^2=1$, so that $EX_{t,j}=0$ and $\frac1{B^2}\sum_{j\in\Z}EX_{t,j}^2=1$.
For each real $\de>0$,
\begin{equation*}
\begin{aligned}
L&:=\frac1{B^2}\sum_{j\in\Z}EX_{t,j}^2\,1(|X_{t,j}|\ge\de B) \\
&=\frac1{B^2}\sum_{j\in\Z}E(\psi_{t-j}\ep_j)^2\,1(|\psi_{t-j}\ep_j|\ge\de B) \\
&\le\frac1{B^2}\sum_{j\in\Z}\psi_{t-j}^2E\ep_j^2\,1(|\ep_j|\ge\de B/m) \\
&=\frac1{B^2}\sum_{j\in\Z}\psi_{t-j}^2E\ep_0^2\,1(|\ep_0|\ge\de B/m) \\
&=E\ep_0^2\,1(|\ep_0|\ge\de B/m)\to0.
\end{aligned}
\end{equation*}
Here the last convergence holds by dominated convergence, because \eqref{2} yields $\de B/m\to\infty$ and $E\ep_0^2<\infty$. Hence, using also that $EX_{t,j}=0$ and the Cauchy--Schwarz inequality,
\begin{equation*}
\frac1{B^2}\sum_{j\in\Z}EX_{t,j}^2\,1(|X_{t,j}|<\de B)=1-L\to1,
\end{equation*}
\begin{equation*}
\begin{aligned}
&\frac1{B^2}\sum_{j\in\Z}(EX_{t,j}\,1(|X_{t,j}|<\de B))^2 \\
&=\frac1{B^2}\sum_{j\in\Z}(EX_{t,j}\,1(|X_{t,j}|\ge\de B))^2
\le L\to0,
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
&\Big|\frac1B\sum_{j\in\Z}EX_{t,j}\,1(|X_{t,j}|<\de B)\Big| \\
&=\Big|\frac1B\sum_{j\in\Z}EX_{t,j}\,1(|X_{t,j}|\ge\de B)\Big| \\
&\le\frac1B\sum_{j\in\Z}E|X_{t,j}|\,1(|X_{t,j}|\ge\de B)
\le \frac L\de\to0,
\end{aligned}
\end{equation*}
\begin{equation*}
\sum_{j\in\Z}P(|X_{t,j}|\ge\de B)\le \frac L{\de^2}\to0.
\end{equation*}
So, by Theorem 18 in Chapter IV, $X_t/B$ converges in distribution to a standard normal random variable, for each $t\in\Z$.
Thus, under condition \eqref{2}, all the one-dimensional distributions of the process $(X_t)$ are asymptotically normal.
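As a quick sanity check (again, not part of the proof), here is a minimal simulation sketch; the coefficient choice $\psi_j=1$ for $0\le j<n$ (so that $m/B=1/\sqrt n\to0$), the uniform innovations, and the values of $n$ are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_X_over_B(n, n_rep=20_000):
    # psi_j = 1 for 0 <= j < n and 0 otherwise, so B = sqrt(n), m = 1,
    # and condition (2) holds since m/B = 1/sqrt(n) -> 0.
    # The eps_j are iid uniform on (-sqrt(3), sqrt(3)): mean 0, variance 1 (assumed).
    eps = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(n_rep, n))
    return eps.sum(axis=1) / np.sqrt(n)  # = X_t / B

for n in [2, 10, 100]:
    x = sample_X_over_B(n)
    # Compare two empirical CDF values with the standard normal ones.
    print(f"n={n:3d}: P(X_t/B <= 1) ~ {(x <= 1).mean():.3f} (Phi(1) = 0.841), "
          f"P(X_t/B <= -1.96) ~ {(x <= -1.96).mean():.3f} (Phi(-1.96) = 0.025)")
```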
All the finite-dimensional distributions of the process $(X_t)$ (that is, all the joint distributions of $(X_{t_1},\dots,X_{t_p})$ for integers $t_1<\cdots<t_p$) can be treated similarly. This is done by writing
\begin{equation*}
\sum_{i=1}^p c_i X_{t_i}=\sum_{j\in\Z}Y_j
\end{equation*}
for any real $c_1,\dots,c_p$, where
\begin{equation*}
Y_j:=\phi_j\ep_j,\quad\phi_j:=\sum_{i=1}^p c_i \psi_{t_i-j},
\end{equation*}
so that $\sum_{j\in\Z}\phi_j^2<\infty$ and $\max_{j\in\Z}|\phi_j|\le m\sum_{i=1}^p |c_i|$.
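Both stated properties of the $\phi_j$'s follow from the triangle and Cauchy--Schwarz inequalities:
\begin{equation*}
\max_{j\in\Z}|\phi_j|\le\max_{j\in\Z}\sum_{i=1}^p |c_i|\,|\psi_{t_i-j}|\le m\sum_{i=1}^p|c_i|,
\qquad
\sum_{j\in\Z}\phi_j^2\le p\sum_{i=1}^p c_i^2\sum_{j\in\Z}\psi_{t_i-j}^2=pB^2\sum_{i=1}^p c_i^2<\infty.
\end{equation*}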
Best Answer
$\newcommand\ep\varepsilon\newcommand\si\sigma\newcommand\N{\mathbb N}$Your counterexample is correct. Indeed, if $$X_t=\sum_{j=0}^\infty a_j \ep_{t-j} \tag{3}\label{3}$$ for $t\in\N$, $\sum_{j=0}^\infty a_j^2=1$, and the $\ep_t$'s are iid zero-mean random variables with variance $\si^2\in(0,\infty)$, then for $u\in\N$ $$EX_tX_{t+u}=\si^2\sum_{j=0}^\infty a_ja_{j+u}\to0$$ as $u\to\infty$, by the Cauchy--Schwarz inequality.
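In detail, Cauchy--Schwarz bounds the covariance sum by the tail of a convergent series:
\begin{equation*}
\Big|\sum_{j=0}^\infty a_ja_{j+u}\Big|
\le\Big(\sum_{j=0}^\infty a_j^2\Big)^{1/2}\Big(\sum_{j=0}^\infty a_{j+u}^2\Big)^{1/2}
=\Big(\sum_{k=u}^\infty a_k^2\Big)^{1/2}\to0
\end{equation*}
as $u\to\infty$, because $\sum_{k=0}^\infty a_k^2=1<\infty$.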
This contradicts the condition that $X_t=Y$ for all $t$, because then we would have $EX_tX_{t+u}=EX_1^2=\si^2>0$ for all $u\in\N$.
Generally (see e.g. Theorem 2, p. 263), a Gaussian process $(X_t)$ is of the form \eqref{3} if and only if $(X_t)$ is a zero-mean stationary process with $EX_t^2<\infty$ whose spectral measure is absolutely continuous and whose spectral density $f$ satisfies the condition $$\int_{-\pi}^\pi\ln f(t)\,dt>-\infty$$ (so that $f$ does not get too close to $0$ on any set of significant Lebesgue measure).
(In your counterexample, the spectral measure is degenerate: a multiple of the Dirac measure supported on the singleton set $\{0\}$, and hence not absolutely continuous.)
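For concreteness, the autocovariance computation behind this remark is as follows (using that $X_t=Y$ for all $t$ with $EY=0$ and $EY^2=\si^2$, as in the counterexample):
\begin{equation*}
EX_tX_{t+u}=EY^2=\si^2\quad\text{for all }u,\qquad
\si^2=\int_{-\pi}^\pi e^{iu\lambda}\,(\si^2\delta_0)(d\lambda),
\end{equation*}
so that the spectral measure of $(X_t)$ is indeed $\si^2\delta_0$.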