Method 0: The lazy statistician.
Note that for $y \neq 0$ we have $f(y) = (1-\pi) p_y$, where $p_y$ is the probability that a Poisson random variable takes the value $y$. Since the term corresponding to $y = 0$ contributes nothing to the expected value, our knowledge of the Poisson moments and the linearity of expectation immediately tell us that
$$
\mu = (1-\pi) \lambda
$$
and
$$
\mathbb E Y^2 = (1-\pi) (\lambda^2 + \lambda) \> .
$$
A little algebra and the identity $\mathrm{Var}(Y) = \mathbb E Y^2 - \mu^2$ yield the result.
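Explicitly, since $\mu = (1-\pi)\lambda$,
$$
\mathrm{Var}(Y) = (1-\pi)(\lambda^2 + \lambda) - (1-\pi)^2\lambda^2 = (1-\pi)\lambda + \pi(1-\pi)\lambda^2 = \mu + \frac{\pi}{1-\pi}\mu^2 \>.
$$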
Method 1: A probabilistic argument.
It's often helpful to have a simple probabilistic model for how a distribution arises. Let $Z \sim \mathrm{Ber}(1-\pi)$ and $Y \sim \mathrm{Poi}(\lambda)$ be independent random variables. Define
$$
X = Z \cdot Y \>.
$$
Then it is easy to see that $X$ has the desired distribution $f$. To check this, note that $\mathbb P(X = 0) = \mathbb P(Z=0) + \mathbb P(Z=1, Y=0) = \pi + (1-\pi) e^{-\lambda}$ by independence. Similarly, $\mathbb P(X = k) = \mathbb P(Z=1, Y=k) = (1-\pi) p_k$ for $k \neq 0$.
From this, the rest is easy, since by the independence of $Z$ and $Y$,
$$
\mu = \mathbb E X = \mathbb E Z Y = (\mathbb E Z) (\mathbb E Y) = (1-\pi)\lambda \>,
$$
and, since $Z$ takes values in $\{0,1\}$ so that $Z^2 = Z$,
$$
\mathrm{Var}(X) = \mathbb E X^2 - \mu^2 = (\mathbb E Z)(\mathbb E Y^2) - \mu^2 = (1-\pi)(\lambda^2 + \lambda) - \mu^2 = \mu + \frac{\pi}{1-\pi}\mu^2 \> .
$$
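As a sanity check (my own addition, with arbitrarily chosen parameter values), this construction is easy to simulate:

```python
import numpy as np

rng = np.random.default_rng(0)
pi_, lam, n = 0.3, 2.5, 1_000_000  # example parameters, chosen arbitrarily

# X = Z * Y with Z ~ Ber(1 - pi) and Y ~ Poi(lambda), independent
Z = rng.binomial(1, 1 - pi_, size=n)
Y = rng.poisson(lam, size=n)
X = Z * Y

mu = (1 - pi_) * lam
print(X.mean(), mu)                                  # empirical vs. exact mean
print(X.var(ddof=1), mu + pi_ / (1 - pi_) * mu**2)   # empirical vs. exact variance
```

The empirical moments should agree with the formulas above to a few decimal places.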
Method 2: Direct calculation.
The mean is easily obtained by the slight trick of pulling one factor of $\lambda$ out and reindexing the sum.
$$
\mu = \sum_{k=1}^\infty (1-\pi) k e^{-\lambda} \frac{\lambda^k}{k!} = (1-\pi) \lambda e^{-\lambda} \sum_{j=0}^\infty \frac{\lambda^j}{j!} = (1-\pi) \lambda \> .
$$
A similar trick works for the second moment:
$$
\mathbb E X^2 = (1-\pi) \sum_{k=1}^\infty k^2 e^{-\lambda} \frac{\lambda^k}{k!} = (1-\pi)\lambda e^{-\lambda} \sum_{j=0}^\infty (j+1) \frac{\lambda^j}{j!} = (1-\pi)(\lambda^2 + \lambda) \>,
$$
from which point we can proceed with the algebra as in Method 0.
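Both series are easy to confirm numerically by truncation (a quick sketch of mine, with arbitrary parameter values):

```python
import math

lam, pi_ = 2.5, 0.3   # example parameters, chosen arbitrarily
N = 200               # truncation point; the Poisson tail beyond this is negligible

mean = sum((1 - pi_) * k * math.exp(-lam) * lam**k / math.factorial(k)
           for k in range(1, N))
second = sum((1 - pi_) * k**2 * math.exp(-lam) * lam**k / math.factorial(k)
             for k in range(1, N))

print(mean, (1 - pi_) * lam)                  # both ~ 1.75
print(second, (1 - pi_) * (lam**2 + lam))     # both ~ 6.125
```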
Addendum: This details a couple of tricks used in the calculations above.
First recall that $\sum_{k=0}^\infty \frac{\lambda^k}{k!} = e^\lambda$.
Second, note that
$$
\sum_{k=0}^\infty k \frac{\lambda^k}{k!} = \sum_{k=1}^\infty k \frac{\lambda^k}{k!} = \sum_{k=1}^\infty \frac{\lambda^k}{(k-1)!} = \sum_{k=1}^\infty \frac{\lambda \cdot \lambda^{k-1}}{(k-1)!} = \lambda \sum_{j=0}^\infty \frac{\lambda^j}{j!} = \lambda e^{\lambda} \>,
$$
where the substitution $j = k-1$ was made in the second-to-last step.
In general, for the Poisson, it is easy to calculate the factorial moments $\mathbb E X^{(n)} = \mathbb E X(X-1)(X-2)\cdots(X-n+1)$ since
$$
e^\lambda \mathbb E X^{(n)} = \sum_{k=n}^\infty k(k-1)\cdots(k-n+1) \frac{\lambda^k}{k!} = \sum_{k=n}^\infty \frac{\lambda^n \lambda^{k-n}}{(k-n)!} = \lambda^n \sum_{j=0}^\infty \frac{\lambda^j}{j!} = \lambda^n e^\lambda \>,
$$
so $\mathbb E X^{(n)} = \lambda^n$. We get to "skip" to the $n$th index for the start of the sum in the first equality because for any $0 \leq k < n$ the product $k(k-1)\cdots(k-n+1)$ contains a zero factor and hence vanishes.
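The same truncation idea verifies the factorial-moment identity (again a sketch of mine; the parameter choices are arbitrary):

```python
import math

lam, n = 2.5, 3   # check E X^(3) = lam^3 for X ~ Poi(2.5)
N = 200           # truncation point for the Poisson series

# E X(X-1)...(X-n+1) computed directly from the pmf; the k < n terms vanish
fact_moment = sum(math.prod(range(k - n + 1, k + 1)) * math.exp(-lam) * lam**k / math.factorial(k)
                  for k in range(N))
print(fact_moment, lam**n)   # both ~ 15.625
```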
Let the first (two-point) random variable be $A$ and the second be $B$. Let $q$ be the probability that $B$ takes on the value $0$. Instead of $r$ as in your problem statement, we equivalently let the probability that $B$ equals $z$ be $0.3-q$. Equating the means, we have $$0.3x+0.7(6000) = (0.3-q)z+0.7(6000),$$
so $$x = z \left( 1 - \frac{q}{0.3} \right).$$
If the variances are identical, then the second moments about the origin are the same as well. This gives us
$$E[A^2] = 0.3x^2 + 0.7(6000)^2 = (0.3-q)z^2 + 0.7(6000)^2.$$
Plugging in our expression for $x$, this leads to
$$0.3 \left[z \left( 1 - \frac{q}{0.3} \right) \right]^2 = 0.3z^2 - qz^2.$$
Expanding the left-hand side gives $0.3z^2 - 2qz^2 + \frac{q^2 z^2}{0.3}$, so the equation reduces to $q\left(\frac{q}{0.3} - 1\right) = 0$. Solving this for $q$ gives the two solutions $q=0$ or $q=0.3$, both of which imply that $A$ and $B$ have the same two-point distribution.
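For what it's worth, a symbolic check (my own, not part of the original argument) confirms these are the only roots:

```python
from sympy import Eq, Rational, solve, symbols

q, z = symbols('q z')
x = z * (1 - q / Rational(3, 10))   # x in terms of z and q, from matching the means

# matching the second moments; the 0.7 * 6000**2 terms cancel from both sides
print(solve(Eq(Rational(3, 10) * x**2, Rational(3, 10) * z**2 - q * z**2), q))
# [0, 3/10]
```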
So your problem as posed does not have a solution under your definition of an acceptable answer.
I'll assume you only want samples consisting of non-negative integers (otherwise it's obviously not Poisson). Indeed, without those restrictions it's pretty trivial, though largely meaningless, because the variance-to-mean ratio changes as you change scale. (It makes more sense with counts, though.)
I'll also assume you want the $n-1$ form of the sample variance (the unbiased form).
Some examples:
These took me a few minutes to construct.
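If you want to check a candidate sample yourself, a short helper suffices (my own sketch; `dispersion` is a name I made up):

```python
import numpy as np

def dispersion(sample):
    """Ratio of the unbiased (n-1) sample variance to the sample mean."""
    x = np.asarray(sample)
    return x.var(ddof=1) / x.mean()

print(dispersion([0, 1, 2]))   # 1.0: unbiased variance and mean are both 1
```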
It doesn't really have much significance other than giving a lack of evidence against the Poisson (at least on the basis of the ratio of variance to mean). In no way does it tell you that the data is Poisson.
Edit: For example, here's a sample that is pretty obviously not Poisson (in the sense that the chance that you could end up with a sample like that from a Poisson is really small):
For starters, all the values are even!
Here's another that's pretty clearly not Poisson:
Edit: Here's a biggish sample that's not so plainly inconsistent with Poisson:
... though it's a bit too kurtotic to really be very consistent with a Poisson.
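To quantify that last remark (my own addition): for a $\mathrm{Poi}(\lambda)$ variable the excess kurtosis is $1/\lambda$, so one rough diagnostic is to compare the sample excess kurtosis against $1/\bar{x}$:

```python
import numpy as np
from scipy.stats import kurtosis

def excess_kurtosis_gap(sample):
    """Sample excess kurtosis minus the Poisson benchmark 1/mean."""
    x = np.asarray(sample, dtype=float)
    return kurtosis(x) - 1 / x.mean()   # values near 0 are consistent with Poisson
```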