Solving the Lagrange equations, we find that the maximum entropy distribution on $\mathbb{Z}$ with mean $0$ and variance $1$ is determined by the exponent $a$ satisfying
$$
\sum_{k\in\mathbb{Z}}(k^2-1)e^{-ak^2}=0
$$
which gives $a\doteq0.4999998943842821\approx\frac12$. We also need the normalizing constant $c$ satisfying
$$
c\sum_{k\in\mathbb{Z}}e^{-ak^2}=1
$$
which gives $c\doteq0.3989422361322933\approx0.3989422804014327=\frac1{\sqrt{2\pi}}$.
Thus, the maximum entropy distribution on the integers with mean $0$ and variance $1$ is
$$
p_k=c\,e^{-ak^2}
$$
where $a$ and $c$ are given above. These values are extremely close to those of the Gaussian density, which is the maximum entropy continuous distribution under the same constraints.
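The two constants can be checked numerically. The sketch below (the truncation $|k|\leqslant 50$ is an assumption, but the tail is astronomically small there) solves the variance condition for $a$ with SciPy's `brentq` root finder and then normalizes to get $c$:

```python
import numpy as np
from scipy.optimize import brentq

ks = np.arange(-50, 51)

def variance_condition(a):
    # sum_k (k^2 - 1) e^{-a k^2} = 0 encodes Var = 1 (mean 0 holds by symmetry)
    return np.sum((ks**2 - 1) * np.exp(-a * ks**2))

a = brentq(variance_condition, 0.1, 1.0)
c = 1.0 / np.sum(np.exp(-a * ks**2))  # normalization: c * sum_k e^{-a k^2} = 1

print(a)  # ≈ 0.4999998943842821, just below 1/2
print(c)  # ≈ 0.3989422361322933, just below 1/sqrt(2*pi)
```

Note that both constants land slightly *below* their Gaussian counterparts, consistent with the Poisson summation argument that follows.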
Although the function derived above is very close to the Gaussian distribution restricted to $\mathbb{Z}$, $\frac1{\sqrt{2\pi}}e^{-n^2/2}$ is not a probability measure on $\mathbb{Z}$. In fact, the Poisson Summation Formula says that
$$
\begin{align}
\frac1{\sqrt{2\pi}}\sum_{n\in\mathbb{Z}}e^{-n^2/2}
&=1+2\sum_{n=1}^\infty e^{-2\pi^2n^2}\\
&\gt1
\end{align}
$$
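Both sides of the identity are easy to evaluate numerically; the sketch below (truncating the sums at $|n|\leqslant 100$, which is far more than enough) confirms that the Gaussian weights sum to slightly more than $1$:

```python
import numpy as np

ns = np.arange(-100, 101)
lhs = np.sum(np.exp(-ns**2 / 2)) / np.sqrt(2 * np.pi)

ms = np.arange(1, 101)
rhs = 1 + 2 * np.sum(np.exp(-2 * np.pi**2 * ms**2))

print(lhs - 1)        # ≈ 5.3e-9: positive, so the weights sum to just over 1
print(abs(lhs - rhs)) # the two sides of the identity agree to machine precision
```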
I believe the second paper you cited (by Harremoës) is actually the answer you're looking for. The Poisson distribution describes the number of occurrences of an event in a fixed interval, under the assumption that occurrences are independent. In particular, the constraint that the events be independent means that not every discrete distribution is a valid candidate for describing this system, and it motivates restricting attention to sums of independent Bernoulli variables. Harremoës then shows that if you further constrain the expected value (i.e., $\lambda$), the maximum entropy distribution in this class is the Poisson distribution.
So, the Poisson distribution is the maximum entropy distribution given constraints of counting independent events and having a known expected value.
That said, you can also easily reverse-engineer a (contrived) constraint for which the Poisson distribution would be the maximum entropy distribution.
Let our unknown constraint be $\mathbb{E}[f(k)] = c$. Maximizing the entropy subject to this constraint, normalization, and the mean being $\lambda$ amounts to minimizing the Lagrangian
$\sum_k p(k) \ln p(k) - \alpha \left( \sum_k p(k) - 1\right) - \beta\left(\sum_k k p(k) - \lambda\right) - \gamma \left( \sum_k p(k)f(k) - c \right)$,
where $\alpha$, $\beta$, and $\gamma$ are Lagrange multipliers. Setting the derivative with respect to $p(k)$ to zero yields
$\ln p(k) = -1 + \alpha + \beta k + \gamma f(k).$
We already know the Poisson distribution has the form $p(k) = e^{-\lambda}\lambda^k/k!$, i.e., $\ln p(k) = -\lambda + k \ln(\lambda) - \ln(k!)$. Therefore, we can guess that $f(k)$ has the functional form $\ln(k!)$.
So, the Poisson distribution maximizes entropy when $p$ has mean $\lambda$ and $\mathbb{E}(\ln k!) = $[some particular value depending on $\lambda$].
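As a sanity check (a sketch; the values $\lambda = 3.5$ and the truncation at $k \leqslant 59$ are assumptions), a distribution of the exponential-family form $p(k) \propto \exp(\beta k - \gamma \ln k!)$ with $\gamma = 1$ and $\beta = \ln\lambda$ recovers the Poisson pmf exactly once normalized:

```python
import numpy as np
from scipy.stats import poisson
from scipy.special import gammaln  # gammaln(k + 1) = ln(k!)

lam = 3.5
ks = np.arange(0, 60)

log_weights = ks * np.log(lam) - gammaln(ks + 1)  # beta*k - gamma*ln(k!)
weights = np.exp(log_weights)
p = weights / weights.sum()  # normalization plays the role of alpha

# Compare against the Poisson pmf: they agree to machine precision
print(np.max(np.abs(p - poisson.pmf(ks, lam))))
```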
This approach may not be very satisfying, since it's not clear why we would want a distribution with a specified expectation value of $\ln k!$. The Johnson paper you cited is (in my opinion) similarly unsatisfying, since it essentially proves that the Poisson distribution is the maximal entropy distribution among distributions which are "more log-convex than the Poisson distribution".
Best Answer
$\def\deq{\stackrel{\mathrm{d}}{=}}\def\aseq{\stackrel{\mathrm{a.s.}}{=}}$For any positive $X$ satisfying $P(X \leqslant a) = P\left( X \geqslant \dfrac{1}{a} \right)$ for all $a > 0$, since$$ P\left( \frac{1}{X} \leqslant a \right) = P\left( X \geqslant \dfrac{1}{a} \right) = P(X \leqslant a), \quad \forall a > 0 $$ then $\dfrac{1}{X} \deq X$ and$$ E(X) = \frac{1}{2} (E(X) + E(X)) = \frac{1}{2} \left( E(X) + E\left( \frac{1}{X} \right) \right) = \frac{1}{2} E\left( X + \frac{1}{X} \right) \geqslant 1, $$ by AM–GM ($x + \dfrac{1}{x} \geqslant 2$ for $x > 0$), where the equality holds iff $X \aseq 1$. Thus the only distribution satisfying the symmetry condition with mean $1$ is $\delta(x - 1)$, which renders the whole problem degenerate.
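A concrete instance of the inequality (a sketch; the lognormal example is my own choice): $X = e^Z$ with $Z$ standard normal satisfies $P(X \leqslant a) = P\left(X \geqslant \frac1a\right)$ for all $a > 0$ by the symmetry of $Z$, and indeed $E\left[\frac12\left(X + \frac1X\right)\right] = e^{1/2} > 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)
x = np.exp(z)  # lognormal: 1/X has the same distribution as X

mean = np.mean((x + 1.0 / x) / 2)
print(mean)  # Monte Carlo estimate of e^{1/2} ≈ 1.6487, comfortably above 1
```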