A Hilbert space $\cal H$ is complete, meaning that every Cauchy sequence of vectors admits a limit in the space itself.
Under this hypothesis there exist Hilbert bases, also known as complete orthonormal systems of vectors in $\cal H$. A set of vectors $\{\psi_i\}_{i\in I}\subset \cal H$ is called an orthonormal system if $\langle \psi_i |\psi_j \rangle = \delta_{ij}$ for all $i,j\in I$. It is said to be complete if a certain set of equivalent conditions holds. One of them is
$$\langle \psi | \phi \rangle = \sum_{i\in I}\langle \psi| \psi_i\rangle \langle \psi_i| \phi \rangle\quad \forall \psi, \phi \in \cal H\tag{1}\:.$$
(This sum is absolutely convergent; if $I$ is not countable it must be interpreted suitably, but I will not enter into these details here.)
Since $\psi,\phi$ are arbitrary, (1) is often written
$$I = \sum_{i\in I}| \psi_i\rangle \langle \psi_i|\tag{2}\:.$$
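As a small numerical sketch of (2) (my own illustration, using NumPy and assuming a finite-dimensional space $\mathbb{C}^3$ for simplicity), the projectors onto any orthonormal basis sum to the identity:

```python
import numpy as np

# Build a random orthonormal basis of C^3 via QR decomposition
# of a random complex matrix (Q has orthonormal columns).
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(M)
basis = [Q[:, i] for i in range(3)]

# The sum of projectors |psi_i><psi_i| reproduces the identity, eq. (2).
P = sum(np.outer(v, v.conj()) for v in basis)
assert np.allclose(P, np.eye(3))
```

The same check works in any finite dimension; in infinite dimensions the sum is understood in the sense described above.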
The discovery and study of coherent states represent one aspect of one of the biggest problems physicists have faced since the birth and the subsequent development of quantum mechanics, supported by excellent experimental results: the search for a correspondence between the new theory, conceived for the analysis of microscopic systems, and classical physics, still fully valid for the description of the macroscopic world.
The history of coherent states begins immediately after the advent of quantum mechanics: their introduction at the conceptual level dates back to an article published in 1926, in which Schrödinger reports the existence of a class of states of the harmonic oscillator that show, in a certain sense, behavior analogous to that of a classical oscillator: for these states the mean energy corresponds to the classical value, and the averages of position and momentum have oscillatory forms in a constant phase relation.
Returning to Schrödinger's article, the "almost classical" states he identified present, in addition to the characteristics already mentioned, an important aspect: being represented by Gaussian wave packets that do not change shape in time, they guarantee the minimization of the product of the uncertainties in position and momentum, that is, the condition closest to the possibility, allowed by classical physics, of measuring those quantities simultaneously with arbitrary precision.
So, starting from the following relations:
\begin{equation}
a^{\dagger}|n\rangle=\sqrt{n+1}|n+1\rangle \quad a|n\rangle=\sqrt{n}|n-1\rangle
\end{equation}
it is noted that, by virtue of the orthonormality of the stationary states, the diagonal matrix elements of the position and momentum operators vanish in the energy representation, which means that the expectation values of position and momentum on any stationary state are zero at every instant.
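This vanishing of the diagonal matrix elements can be checked directly; here is a sketch (my own, using NumPy matrices on a truncated Fock space, with the assumption $\hbar = m = \omega = 1$):

```python
import numpy as np

N = 8  # truncated Fock-space dimension (illustrative choice)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # a|n> = sqrt(n)|n-1>
adag = a.conj().T                           # a^dagger|n> = sqrt(n+1)|n+1>

# Position and momentum in units with hbar = m = omega = 1:
x = (a + adag) / np.sqrt(2)
p = (a - adag) / (1j * np.sqrt(2))

# Diagonal matrix elements <n|x|n> and <n|p|n> vanish, as claimed,
# because x and p only connect |n> to |n-1> and |n+1>.
assert np.allclose(np.diag(x), 0)
assert np.allclose(np.diag(p), 0)
```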
The stationary states just analyzed are characterized by position probability distributions that are constant in time; the expectation values of position and momentum are zero at all times. This is a fundamental difference from the states of the classical oscillator, for which, once the energy is fixed (as long as it is nonzero), the observables position and momentum evolve in time as sinusoidal functions and are always in phase quadrature with each other. Moreover, if one calculates the uncertainties in position and momentum for a stationary state with $n$ photons, one obtains the uncertainty product $\Delta x\,\Delta p=(n+1/2)\hbar$. Only for the ground state $n=0$ does this product attain its minimum value $\hbar/2$, which represents the maximum similarity with classical mechanics.
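The uncertainty product for Fock states can be verified numerically; a sketch (my own, again on a truncated Fock space with $\hbar = m = \omega = 1$, so the claim reads $\Delta x\,\Delta p = n + 1/2$):

```python
import numpy as np

N = 12  # truncation; we only test n well below it to avoid edge effects
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
x = (a + a.conj().T) / np.sqrt(2)
p = (a - a.conj().T) / (1j * np.sqrt(2))

for n in range(6):
    e = np.zeros(N); e[n] = 1.0            # the Fock state |n>
    # <x> = <p> = 0 on |n>, so the variances are just <x^2>, <p^2>.
    var_x = np.vdot(e, x @ x @ e).real - np.vdot(e, x @ e).real**2
    var_p = np.vdot(e, p @ p @ e).real - np.vdot(e, p @ e).real**2
    assert np.isclose(np.sqrt(var_x * var_p), n + 0.5)
```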
A state that is as similar as possible to the classical case must therefore have the following characteristics:
1): The time evolution of the position and momentum expectation values must be of a simple periodic type, with a constant phase relation between position and momentum.
2): The wave function must be as narrow as possible around the mean value of the position, so that the probability distribution in position may tend, by varying appropriate parameters, to a Dirac delta function;
3): The product of the uncertainties in position and momentum must be minimal.
So one can see this "classical" behavior, admitted by these particular states, as an intrinsic property of the QHO. Furthermore, I should say that the harmonic oscillator energy eigenstates also, in this sense, behave like oscillators.
Maybe this answer is a bit too long but I hope it can help you.
Best Answer
So let's take a step back, because your coherent states are not normalized as I would normalize them.
Coherent states
The coherent states come from their response to the bosonic annihilator, $$\hat b |x , y\rangle = (x + i y) |x,y\rangle.$$From this one can derive that any particular one's representation among the number states must satisfy, $$\hat b~\sum_n c_n |n\rangle = \sum_n c_n \sqrt{n} |n-1\rangle=(x + i y) \sum_n c_n |n\rangle,$$giving the recursive relation $c_n = \frac{x+iy}{\sqrt n}~c_{n-1}.$ Starting from $c_0$ we then find indeed the relation that $$|x,y\rangle = c_0~\sum_n \frac{(x + i y)^n}{\sqrt{n!}} |n\rangle.$$The remaining $c_0$ with the proper normalization gives $$\langle x,y|x,y\rangle = 1 = |c_0|^2 \sum_n \frac{(x-iy)^n(x+iy)^n}{n!} = |c_0|^2 \exp\big(x^2 + y^2\big).$$Choosing them all to have the same complex phase for their vacuum component finally yields $$|x, y\rangle = \exp\left(-\frac12(x^2 + y^2)\right)\sum_n \frac{(x + i y)^n}{\sqrt{n!}}~|n\rangle.$$
So the question is, why does your expression have a leading $\pi^{-1/2}$ in it? That's because they resolve the identity in a somewhat weird way. What does that mean?
Resolving the identity
Suppose you have an expression for some average $\langle A \rangle.$ QM is very clear that this expression may be written based on its quantum state $|\psi\rangle$ as $\langle \psi|\hat A|\psi\rangle.$
But using the fact that $1 = \sum_n |n\rangle\langle n|,$ for example, we can insert these sums ad-hoc into that expression to find that in fact this expectation value also reads, $$\langle A \rangle = \sum_{mn} \langle\psi|m\rangle\langle m|\hat A|n\rangle\langle n|\psi\rangle = \sum_{mn} \psi^*_m~A_{mn}~\psi_n.$$ So that is the value of resolving the identity; it means that you can define this matrix $A_{mn}$ which fully specifies the action of $\hat A$ on the Hilbert space, recovering every single expectation value from the matrix.
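A minimal numerical sketch of this point (my own illustration, with a randomly chosen Hermitian $\hat A$ and state on $\mathbb{C}^5$):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = A + A.conj().T                    # make it a Hermitian observable
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)            # normalized state |psi>

# <psi|A|psi> computed directly...
direct = np.vdot(psi, A @ psi)
# ...and via the resolved-identity form: sum_{mn} psi*_m A_{mn} psi_n.
via_matrix = sum(psi[m].conj() * A[m, n] * psi[n]
                 for m in range(N) for n in range(N))
assert np.isclose(direct, via_matrix)
```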
Well we see something very similar when we look at the operator, $$\hat Q = \int_{-\infty}^\infty dx~\int_{-\infty}^\infty dy~|x,y\rangle\langle x, y| = \sum_{mn} \iint dx~dy~e^{-x^2-y^2}\frac{(x-iy)^m(x+iy)^n}{\sqrt{m!n!}} |m\rangle\langle n|.$$ At this point it is useful to shift to polar coordinates where $x + i y = r e^{i\theta},$ yielding $$\hat Q = \sum_{mn}\int_{0}^\infty dr~\int_0^{2\pi} r~d\theta~e^{-r^2}~\frac{r^{m+n} e^{i(n-m)\theta}}{\sqrt{m!n!}} |m\rangle\langle n|.$$ Note that the integral over $\theta$ runs a sinusoid over one or more full periods and therefore vanishes if $m\ne n$; it is $2\pi$ if $m = n$, so we must get $$\hat Q = \pi\sum_{n}\int_{0}^\infty dr~2r~e^{-r^2}~\frac{r^{2n} }{n!} |n\rangle\langle n|.$$ Substituting $u=r^2, du=2r~dr$ we find that this is $$\hat Q = \pi\sum_{n}\frac1{n!}~|n\rangle\langle n|~\int_{0}^\infty du~e^{-u}~u^n.$$ If you've never seen the gamma function before, the integral on the right-hand side is $n!$, and in fact it is the canonical way to extend the factorial function to non-integers, giving e.g. $(-1/2)! = \sqrt{\pi},$ though of course we only need the integers here. After cancelling that through we find that in fact $$\hat Q = \pi\,\hat 1,$$ or in other words we recover the property of resolving the identity even though not all of these states are orthogonal, because the way they fail to be orthogonal just comes down to a constant multiplicative factor. We can therefore state unequivocally, $$1 = \iint dx~dy~\frac1\pi~|x,y\rangle\langle x,y|.$$ Your expression absorbs a $1/\sqrt{\pi}$ term into each of these kets, and writes $\pi^{-1/2} |x, y\rangle = |\alpha\rangle$ (where $\alpha = x + i y$) for short, both of which help in writing these expansions.
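The integral above can also be checked by brute force; a sketch of my own, evaluating the matrix elements $Q_{mn}$ on a plain grid (the grid size and cutoff are arbitrary illustrative choices):

```python
import numpy as np
from math import factorial

# Q_{mn} = integral of e^{-x^2-y^2} (x-iy)^m (x+iy)^n / sqrt(m! n!) dx dy,
# which should come out as pi * delta_{mn}.
xs = np.linspace(-6, 6, 601)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
alpha = X + 1j * Y
w = np.exp(-X**2 - Y**2)          # Gaussian weight from the normalization

N = 4
Q = np.empty((N, N), dtype=complex)
for m in range(N):
    for n in range(N):
        integrand = w * alpha.conj()**m * alpha**n / np.sqrt(factorial(m) * factorial(n))
        Q[m, n] = integrand.sum() * dx * dx   # Riemann-sum approximation

assert np.allclose(Q, np.pi * np.eye(N), atol=1e-6)
```

The off-diagonal elements vanish by the angular integration argued above, and each diagonal element carries the same factor of $\pi$.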
One then finds similarly to the above expression with $A_{mn}$, that $$\langle A \rangle = \iint d^2\alpha~d^2\beta~\psi^*(\alpha)~A(\alpha,\beta)~\psi(\beta).$$The only cost to this notation is that we then have to express the above integrals with the more clumsy $\int d^2\alpha$ which is short for something like $d\alpha_x~d\alpha_y$ where $\alpha = \alpha_x + i \alpha_y.$