Solving the Lagrange equations, we find that the maximum entropy distribution with mean $0$ and variance $1$ requires the exponent $a$ to satisfy
$$
\sum_{k\in\mathbb{Z}}(k^2-1)e^{-ak^2}=0
$$
which gives $a\doteq0.4999998943842821\sim\frac12$. We then need the normalizing constant $c$ such that
$$
c\sum_{k\in\mathbb{Z}}e^{-ak^2}=1
$$
which is $c\doteq0.3989422361322933\sim0.3989422804014327=\frac1{\sqrt{2\pi}}$.
Thus, the maximum entropy distribution on the integers with mean $0$ and variance $1$ is
$$
p_k=c\,e^{-ak^2}
$$
where $a$ and $c$ are given above. These values make $p_k$ extremely close to the Gaussian density, which is the maximum entropy continuous distribution under the same constraints.
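If you want to reproduce these constants, here is a minimal numerical sketch (my addition, assuming Python with NumPy and SciPy; the truncation range and root bracket are arbitrary but more than sufficient):

```python
# Solve sum_k (k^2 - 1) e^{-a k^2} = 0 for a, then normalize to get c.
import numpy as np
from scipy.optimize import brentq

k = np.arange(-50, 51)  # |k| <= 50 is far more than enough; the tails underflow to 0

def variance_condition(a):
    # Zero exactly when p_k = c*exp(-a*k^2) has variance 1 (the mean is 0 by symmetry)
    return np.sum((k**2 - 1) * np.exp(-a * k**2))

a = brentq(variance_condition, 0.4, 0.6)   # the sign changes on [0.4, 0.6]
c = 1.0 / np.sum(np.exp(-a * k**2))        # normalizing constant

print(a)  # ~0.4999998943...
print(c)  # ~0.3989422361..., close to 1/sqrt(2*pi) ~ 0.3989422804...
```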
Although the function derived above is very close to the Gaussian distribution restricted to $\mathbb{Z}$, $\frac1{\sqrt{2\pi}}e^{-n^2/2}$ is not a probability measure on $\mathbb{Z}$. In fact, the Poisson Summation Formula says that
$$
\begin{align}
\frac1{\sqrt{2\pi}}\sum_{n\in\mathbb{Z}}e^{-n^2/2}
&=1+2\sum_{n=1}^\infty e^{-2\pi^2n^2}\\
&\gt1
\end{align}
$$
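As a quick numerical sanity check of this identity (my addition, not part of the argument; it assumes Python with NumPy and truncates both sums, which is harmless since the terms decay extremely fast):

```python
import numpy as np

n = np.arange(-50, 51)
lhs = np.sum(np.exp(-n**2 / 2)) / np.sqrt(2 * np.pi)

m = np.arange(1, 10)
rhs = 1 + 2 * np.sum(np.exp(-2 * np.pi**2 * m**2))

print(lhs, rhs)   # both ~1.0000000053, i.e. slightly greater than 1
print(lhs - 1)    # ~5.3e-9, dominated by the n = 1 term 2*exp(-2*pi^2)
```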
I believe the second paper you cited (by Harremoës) is actually the answer you're looking for. The Poisson distribution describes the number of occurrences of an event in a fixed interval, under the assumption that occurrences are independent. In particular, the constraint that the events be independent means that not every discrete distribution is a valid candidate for describing this system, and it motivates restricting attention to sums of independent Bernoulli variables. Harremoës then shows that if you further constrain the expected value (i.e., $\lambda$), the maximum entropy distribution in that class is the Poisson distribution.
So, the Poisson distribution is the maximum entropy distribution given constraints of counting independent events and having a known expected value.
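To put a number on this (my own illustration, not from the Harremoës paper; it assumes Python with SciPy and uses $\mathrm{Binomial}(n,\lambda/n)$ as a convenient sum of $n$ independent Bernoulli variables with mean $\lambda$):

```python
from scipy.stats import binom, poisson

lam = 3.0
print(poisson(lam).entropy())              # entropy of Poisson(3), in nats
for n in (5, 10, 100, 1000):
    # Each Binomial(n, lam/n) is a sum of n independent Bernoulli(lam/n) variables
    # with mean lam; its entropy stays below the Poisson entropy and approaches it
    # as n grows, consistent with the maximum entropy characterization.
    print(n, binom(n, lam / n).entropy())
```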
That said, you can also easily reverse-engineer a (contrived) constraint for which the Poisson distribution would be the maximum entropy distribution.
Let our unknown constraint be $\mathbb{E}[f(k)] = c$. Maximizing the entropy subject to this constraint, along with the mean being $\lambda$, amounts to minimizing the Lagrangian
$\sum_k p(k) \ln p(k) - \alpha \left( \sum_k p(k) - 1\right) - \beta\left(\sum_k k p(k) - \lambda\right) - \gamma \left( \sum_k p(k)f(k) - c \right)$,
where $\alpha$, $\beta$, and $\gamma$ are Lagrange multipliers. Taking the derivative with respect to $p(k)$ and setting it to zero yields
$\ln p(k) = -1 + \alpha + \beta k + \gamma f(k)$.
We already know the Poisson distribution has the form $p(k) = e^{-\lambda}\lambda^k/k!$, or $\ln(p(k)) = -\lambda + k \ln(\lambda) - \ln(k!)$. Therefore, we can guess that $f(k)$ has the functional form $\ln(k!)$.
So, the Poisson distribution maximizes entropy when $p$ has mean $\lambda$ and $\mathbb{E}[\ln k!]$ is fixed at the particular value it takes under the Poisson distribution with that mean; a small numerical check of this is sketched below.
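Here is a small numerical experiment supporting this (my own construction, assuming Python with NumPy/SciPy; the truncation $K$, the optimizer, and its tolerance are arbitrary choices, so expect agreement only up to a small numerical error):

```python
# Maximize entropy on {0,...,K} subject to a fixed mean and a fixed E[ln k!]
# (both taken from the Poisson itself), and compare with the Poisson pmf.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln
from scipy.stats import poisson

lam, K = 2.0, 25
k = np.arange(K + 1)
target = poisson(lam).pmf(k)
target = target / target.sum()          # renormalized truncated Poisson pmf
log_fact = gammaln(k + 1.0)             # ln(k!)

def neg_entropy(p):
    p = np.clip(p, 1e-300, None)        # avoid log(0)
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},                    # normalization
    {"type": "eq", "fun": lambda p: p @ k - target @ k},               # same mean
    {"type": "eq", "fun": lambda p: p @ log_fact - target @ log_fact}, # same E[ln k!]
]
res = minimize(neg_entropy, x0=np.full(K + 1, 1.0 / (K + 1)),
               method="SLSQP", bounds=[(0.0, 1.0)] * (K + 1),
               constraints=constraints)

print(np.max(np.abs(res.x - target)))   # should be small (optimizer tolerance)
```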
This approach may not be very satisfying, since it's not clear why we would want a distribution with a specified expectation value of $\ln k!$. The Johnson paper you cited is (in my opinion) similarly unsatisfying, since it essentially proves that the Poisson distribution is the maximal entropy distribution among distributions which are "more log-convex than the Poisson distribution".
Best Answer
As stochasticboy321 notes in the comments, there is no lower bound: you can obtain a differential entropy as low as you wish (towards $-\infty$) by choosing a random variable that is almost constant, i.e. a (continuous) density close to a Dirac delta $f_X(x)=\delta(x-\mu)$, where $\mu$ is the desired mean.
For a concrete example, take a log-normal distribution with parameters $(M,S)$. Its mean is $\mu = \exp(M + S^2/2)$ and its entropy is $h = \log( S\, e^M \sqrt{2 \pi e})$. Then, by taking $S>0$ arbitrarily small, we can find a suitable $M$ (bounded, near $\log(\mu)$) that gives the desired mean, while the entropy tends to $-\infty$.
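A quick numerical sketch of this limit (my addition, assuming Python with NumPy; it just evaluates the two formulas quoted above):

```python
import numpy as np

mu = 5.0                                  # desired mean, fixed throughout
for S in (1.0, 0.1, 1e-2, 1e-4, 1e-8):
    M = np.log(mu) - S**2 / 2             # chosen so that exp(M + S^2/2) = mu
    h = np.log(S) + M + 0.5 * np.log(2 * np.pi * np.e)   # log(S e^M sqrt(2 pi e))
    print(S, h)                           # M stays near log(mu); h decreases without bound
```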