To add a slightly different angle to PhotonicBoom's sound answer: the link between the two entities - discrete sum and integral - is the concept of measure, not of limit. You can think of your sum as a Lebesgue integral if you choose a discrete measure for the real line with the measure's "anchors" at a countable set of "allowed values". Discrete and continuous measures are highly analogous insofar as they both have all the "real McCoy" properties of measures: non-negativity, a null empty set, and countable ($\sigma$-) additivity. Ultimately, though, the two are as different as $\aleph_0$ and $2^{\aleph_0}$, as dramatically illustrated by Cantor's diagonal ("slash") argument: a quantum observable which can in principle yield any real number in an interval as a measurement and one which can only have discrete values as measurements are very different beasts.
In quantum mechanics, or at least all the QM I've seen (see footnote), one makes the assumption of a separable (equivalently, second countable) Hilbert space for the state space. This means, in effect, that there exists a countable orthonormal basis for the space of states: for example, the quantum harmonic oscillator's state can be expressed as a superposition of the countable set of energy eigenstates. So in "normal QM", there is always a co-ordinate transformation which will turn an integral completeness relation into a discrete one, although, at the same time, you are changing the observable whose eigenstates span the state space.
Footnote: Separability is part of the Wightman axioms. But sometimes the quantum field theorists drop even this assumption, although I understand that they still assume a separable subspace containing physical fields embedded in a non-separable state space of potential fields.
So let's take a step back, because your coherent states are not normalized as I would normalize them.
Coherent states
The coherent states come from their response to the bosonic annihilator, $$\hat b |x , y\rangle = (x + i y) |x,y\rangle.$$From this one can derive that any particular one's representation among the number states must satisfy, $$\hat b~\sum_n c_n |n\rangle = \sum_n c_n \sqrt{n} |n-1\rangle=(x + i y) \sum_n c_n |n\rangle,$$giving the recursive relation that $c_n = \frac{x+iy}{\sqrt n}~c_{n-1}.$ Starting from $c_0$ we then find indeed the relation that $$|x,y\rangle = c_0~\sum_n \frac{(x + i y)^n}{\sqrt{n!}} |n\rangle.$$The remaining $c_0$ with the proper normalization gives $$\langle x,y|x,y\rangle = 1 = |c_0|^2 \sum_n \frac{(x-iy)^n(x+iy)^n}{n!} = |c_0|^2 \exp\big(x^2 + y^2\big).$$Choosing these to all have the same complex phase for their vacuum component finally yields,$$|x, y\rangle = \exp\left(-\frac12(x^2 + y^2)\right)\sum_n \frac{(x + i y)^n}{\sqrt{n!}}~|n\rangle.$$
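As a quick sanity check, here is a minimal numerical sketch (NumPy assumed; the cutoff $N$ and the value of $x + iy$ are arbitrary choices) that builds the truncated coherent state in the number basis and verifies both the normalization and the eigenvalue relation under $\hat b$:

```python
import numpy as np
from math import factorial

N = 60                    # number-basis cutoff: keep |0>, ..., |N-1>
alpha = 0.7 + 0.3j        # x + i y, an arbitrary choice

# c_n = exp(-|alpha|^2/2) * alpha^n / sqrt(n!)
c = np.array([alpha**n / np.sqrt(float(factorial(n))) for n in range(N)],
             dtype=complex)
c *= np.exp(-abs(alpha)**2 / 2)

# annihilator in the number basis: <m|b|n> = sqrt(n) delta_{m,n-1}
b = np.diag(np.sqrt(np.arange(1, N)), k=1)

print(np.linalg.norm(c))                   # ~ 1 (normalized)
print(np.linalg.norm(b @ c - alpha * c))   # ~ 0 (eigenstate of b)
```

The residual errors come only from the truncation at $N$, and are negligible for $|\alpha|$ this small.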
So the question is, why does your expression have a leading $\pi^{-1/2}$ in it? That's because they resolve the identity in a somewhat weird way. What does that mean?
Resolving the identity
Suppose you have an expression for some average $\langle A \rangle.$ QM is very clear that, for a system in the quantum state $|\psi\rangle$, this expression may be written as $\langle \psi|\hat A|\psi\rangle.$
But using the fact that $1 = \sum_n |n\rangle\langle n|,$ for example, we can insert these sums ad-hoc into that expression to find that in fact this expectation value also reads, $$\langle A \rangle = \sum_{mn} \langle\psi|m\rangle\langle m|\hat A|n\rangle\langle n|\psi\rangle = \sum_{mn} \psi^*_m~A_{mn}~\psi_n.$$ So that is the value of resolving the identity; it means that you can define this matrix $A_{mn}$ which fully specifies the action of $\hat A$ on the Hilbert space, recovering every single expectation value from the matrix.
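In a finite-dimensional toy model this is easy to check directly. In the sketch below (the dimension, the Hermitian operator, the state, and the orthonormal basis are all arbitrary random choices, purely for illustration), inserting the resolution of the identity for an orthonormal basis turns $\langle\psi|\hat A|\psi\rangle$ into $\sum_{mn}\psi^*_m A_{mn}\psi_n$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# a random Hermitian observable and a random normalized state
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (A + A.conj().T) / 2
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

# orthonormal basis {|n>}: columns of a unitary from a QR decomposition
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

direct = psi.conj() @ A @ psi        # <psi|A|psi>
A_mn = U.conj().T @ A @ U            # matrix elements <m|A|n>
psi_n = U.conj().T @ psi             # components <n|psi>
via_matrix = psi_n.conj() @ A_mn @ psi_n

print(abs(direct - via_matrix))      # ~ 0
```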
Well we see something very similar when we look at the operator, $$\hat Q = \int_{-\infty}^\infty dx~\int_{-\infty}^\infty dy~|x,y\rangle\langle x, y| = \sum_{mn} \iint dx~dy~e^{-x^2-y^2}\frac{(x-iy)^m(x+iy)^n}{\sqrt{m!n!}} |m\rangle\langle n|.$$
At this point it is useful to shift to polar coordinates where $x + i y = r e^{i\theta},$ yielding $$\hat Q = \sum_{mn}\int_{0}^\infty dr~\int_0^{2\pi} r~d\theta~e^{-r^2}~\frac{r^{m+n} e^{i(n-m)\theta}}{\sqrt{m!n!}} |m\rangle\langle n|.$$ Note that the integral over $\theta$ runs over a sinusoid for one or more full periods and therefore vanishes if $m\ne n$; it is $2\pi$ if $m = n$, so we
must get:$$\hat Q = \pi\sum_{n}\int_{0}^\infty dr~2r~e^{-r^2}~\frac{r^{2n} }{n!} |n\rangle\langle n|.$$Substituting $u=r^2, du=2r~dr$ we find that this is:$$\hat Q = \pi\sum_{n}\frac1{n!}~|n\rangle\langle n|~\int_{0}^\infty du~e^{-u}~u^n.$$If you've never seen the gamma function before, the integral on the right hand side is $n!$ and in fact it is the canonical way to extend the factorial function to non-integers to find e.g. that $(-1/2)! = \sqrt{\pi},$ though of course we only need the integers here. After cancelling that through we find out that in fact, $$\hat Q = \pi~\hat 1,$$ or in other words we recover this property of resolving the identity even though not all of these functions are orthogonal, because the way that they're non-orthogonal just comes down to a constant multiplicative factor. We can therefore state unequivocally, $$1 = \iint dx~dy~\frac1\pi~|x,y\rangle\langle x,y|.$$ Your expression absorbs a $1/\sqrt{\pi}$ term into each of these kets, and writes $\pi^{-1/2} |x, y\rangle = |\alpha\rangle$ (where $\alpha = x + i y$) for short, both of which help in writing these expansions. One then finds similarly to the above expression with $A_{mn}$, that $$\langle A \rangle = \iint d^2\alpha~d^2\beta~\psi^*(\alpha)~A(\alpha,\beta)~\psi(\beta).$$The only cost to this notation is that we then have to express the above integrals with the more clumsy $\int d^2\alpha$ which is short for something like $d\alpha_x~d\alpha_y$ where $\alpha = \alpha_x + i \alpha_y.$
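The same result can be checked numerically without the polar-coordinate trick: since each integrand is a polynomial times the Gaussian weight $e^{-x^2-y^2}$, two-dimensional Gauss-Hermite quadrature evaluates $\langle m|\hat Q|n\rangle$ essentially exactly. A sketch (the cutoffs and node count are arbitrary choices):

```python
import numpy as np
from math import factorial

# Gauss-Hermite nodes/weights approximate integrals of e^{-x^2} f(x)
nodes, weights = np.polynomial.hermite.hermgauss(40)
M = 6                                  # check m, n = 0, ..., 5

X, Y = np.meshgrid(nodes, nodes)
W = np.outer(weights, weights)         # 2D product quadrature weights
a = X + 1j * Y                         # alpha = x + i y at the nodes

# Q_mn = ∬ dx dy e^{-x^2-y^2} (x-iy)^m (x+iy)^n / sqrt(m! n!)
Q = np.empty((M, M), dtype=complex)
for m in range(M):
    for n in range(M):
        f = np.conj(a)**m * a**n / np.sqrt(float(factorial(m) * factorial(n)))
        Q[m, n] = np.sum(W * f)

print(np.max(np.abs(Q - np.pi * np.eye(M))))   # ~ 0: Q = pi * identity
```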
Best Answer
A Hilbert space $\cal H$ is complete, which means that every Cauchy sequence of vectors admits a limit in the space itself.
Under this hypothesis there exist Hilbert bases, also known as complete orthonormal systems of vectors in $\cal H$. A set of vectors $\{\psi_i\}_{i\in I}\subset \cal H$ is called an orthonormal system if $\langle \psi_i |\psi_j \rangle = \delta_{ij}$. It is also said to be complete if a certain set of equivalent conditions holds. One of them is $$\langle \psi | \phi \rangle = \sum_{i\in I}\langle \psi| \psi_i\rangle \langle \psi_i| \phi \rangle\quad \forall \psi, \phi \in \cal H\tag{1}\:.$$ (This sum is absolutely convergent and must be suitably interpreted if $I$ is not countable, but I will not enter into these details here.) Since $\psi,\phi$ are arbitrary, (1) is often written $$I = \sum_{i\in I}| \psi_i\rangle \langle \psi_i|\tag{2}\:.$$
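For a concrete finite-dimensional illustration (where the convergence subtleties disappear), one can generate an orthonormal system in $\mathbb{C}^d$ and verify both (1) and (2) directly; the dimension, seed, and vectors below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10
# orthonormal system {psi_i}: columns of a unitary from a QR decomposition
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
phi = rng.normal(size=d) + 1j * rng.normal(size=d)

# form (1): <psi|phi> = sum_i <psi|psi_i><psi_i|phi>
lhs = psi.conj() @ phi
rhs = sum((psi.conj() @ U[:, i]) * (U[:, i].conj() @ phi) for i in range(d))

# form (2): sum_i |psi_i><psi_i| = I
P = sum(np.outer(U[:, i], U[:, i].conj()) for i in range(d))

print(abs(lhs - rhs))                  # ~ 0
print(np.max(np.abs(P - np.eye(d))))   # ~ 0
```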