The existing answers list some important situations where Poisson summation plays a role, the application to proving the functional equation of $\theta$ and hence of $\zeta$ being my personal favourite. My best answer to Tim's question as he actually asked it might be: why not have it in mind whenever you have a discrete sum that you are having trouble estimating, especially if you fancy your chances of understanding the Fourier transform of the summands. You'll end up with a different sum, which may be much easier to understand, and you may even be able to approximate your original sum by an integral (the term $\hat{f}(0)$ in the Poisson summation formula).
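As a concrete sanity check, here is Poisson summation for a Gaussian in a few lines of Python (a sketch only; the Gaussian and the parameter $t = 0.1$ are my arbitrary choices, not anything from the discussion above):

```python
import math

# Poisson summation: sum_n f(n) = sum_k fhat(k).
# Sketch for the Gaussian f(x) = exp(-pi * t * x^2), whose Fourier
# transform (with the convention fhat(k) = integral of f(x) e^{-2 pi i k x} dx)
# is fhat(k) = t^{-1/2} * exp(-pi * k^2 / t).  The value t = 0.1 is an
# arbitrary choice, just to make the two sides look different.
t = 0.1

lhs = sum(math.exp(-math.pi * t * n * n) for n in range(-200, 201))
rhs = sum(math.exp(-math.pi * k * k / t) / math.sqrt(t) for k in range(-200, 201))

print(lhs, rhs)   # the two sums agree to (essentially) machine precision
```

Note that the $k = 0$ term on the right, $1/\sqrt{t} = \int f$, already captures almost all of the left-hand sum when $t$ is small: exactly the "approximate the sum by an integral" effect mentioned above.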
To explain a little more with an example, there's a whole theory concerned with the estimation of exponential sums $\sum_{n \leq N} e^{2\pi i \phi(n)}$. There are two processes, called A and B, that can be used to turn a sum like this into something you might be better positioned to understand. Process A is basically Weyl/van der Corput differencing (Cauchy–Schwarz), and Process B is essentially Poisson summation. It is not at all straightforward to assemble a theory of how these processes interact and how they are best combined to estimate a given sum; in fact this is in general something of an art. Montgomery's Ten Lectures book contains a nice exposition, and there's a whole LMS lecture note volume by Graham and Kolesnik if you want more details.
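To see the raw object of this theory, one can just compute such a sum numerically (a sketch; the phase $\phi(n) = \sqrt{2}\,n^2$ and the cutoff $N = 10000$ are arbitrary choices of mine, picked to exhibit the expected square-root cancellation):

```python
import cmath
import math

# S(N) = sum_{n <= N} e^{2 pi i phi(n)}, the basic object of the theory.
def S(N, phi):
    return sum(cmath.exp(2j * math.pi * phi(n)) for n in range(1, N + 1))

# For a "generic" phase such as phi(n) = sqrt(2) * n^2 (an arbitrary
# choice here), one expects square-root cancellation: |S(N)| of rough
# size sqrt(N), far below the trivial bound |S(N)| <= N.
N = 10000
s = S(N, lambda n: math.sqrt(2) * n * n)
print(abs(s), math.sqrt(N))   # compare |S(N)| with sqrt(N)
```

With the constant phase $\phi \equiv 0$ there is no cancellation at all and $|S(N)| = N$; the whole game is to quantify how much better one can do for genuinely oscillating phases.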
I want to share a perhaps slightly obscure paper of Robert and Sargos (Three-dimensional exponential sums with monomials, Journal für die reine und angewandte Mathematik (Crelle) 591), in which they use Poisson summation, in the form of Process B mentioned above, arbitrarily many times to establish the following rather simple-to-state result: the number of quadruples $x_1,x_2,x_3,x_4$ in $[X, 2X)$ with
$$|1/x_1 + 1/x_2 - 1/x_3 - 1/x_4| \leq 1/X^3$$
is $X^{2 + o(1)}$. In other words, the quantities $1/a + 1/b$ tend to avoid one another to pretty much the same extent as random numbers of the same size. Very, very roughly speaking (I don't really understand the argument in depth), the proof involves looking at exponential sums $\sum_x e^{2\pi i m/x}$, and it is these that are transformed repeatedly using Poisson summation followed by other modifications (it being fairly pointless to try to apply Poisson summation twice in succession).
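For what it's worth, the count is easy to probe by brute force at tiny scales (illustration only; $X = 20$ is an arbitrary choice of mine, and the theorem is of course an asymptotic statement):

```python
# Brute-force count of quadruples x1, x2, x3, x4 in [X, 2X) with
# |1/x1 + 1/x2 - 1/x3 - 1/x4| <= 1/X^3, at a tiny (arbitrary) scale.
X = 20
eps = 1.0 / X ** 3

count = 0
rng = range(X, 2 * X)
for x1 in rng:
    for x2 in rng:
        for x3 in rng:
            for x4 in rng:
                if abs(1 / x1 + 1 / x2 - 1 / x3 - 1 / x4) <= eps:
                    count += 1

# The "diagonal" solutions {x1, x2} = {x3, x4} alone already contribute
# 2*X**2 - X quadruples; the theorem says that, asymptotically, these
# diagonal solutions are essentially everything.
print(count)
```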
[Some next-day edits in response to comments] As counterpoint to other viewpoints, one can say that Mellin inversion is "simply" Fourier inversion in other coordinates. Depending on one's temperament, this "other coordinates" thing ranges from irrelevancy to substance... The question about moral imperatives for Fourier inversion is addressed a bit below.
[Added: the exponential map $x \rightarrow e^x$ gives an isomorphism from the topological group of additive reals to the multiplicative group of positive reals. Thus, the harmonic analysis on the two is necessarily "the same", even if the formulaic aspects look different. The occasional treatment of values (and derivatives) at $0$ for functions on the positive reals, as in "Laplace transforms", is a relative detail, which certainly has a corresponding discussion for Fourier transforms.]
The specific riff in Perron's identity in analytic number theory amounts to (if one tolerates a change of coordinates) guessing/discerning an $L^1$ function on the line whose Fourier transform is (in what function space?!) the characteristic function of a half-line.
Since the characteristic function of a half-line is not in $L^2$, and does not go to $0$ at infinity, there are bound to be analytical issues... but these are technical, not conceptual.
[Added: the Fourier transform families $x^{\alpha-1}e^{-x}\cdot \chi_{x>0}$ and $(1+ix)^{-\alpha}$, (up to constants) where $\chi$ is the characteristic function, when translated to multiplicative coordinates, give one family approaching the desired "cut-off" effect of the Perron integral. There are other useful families, as well.]
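If it helps, the first pair is easy to check numerically (a sketch, not a proof; the normalization $\hat{f}(\xi) = \int f(x)\, e^{-i\xi x}\,dx$ and the sample values $\alpha = 2$, $\xi = 1.5$ are my own choices):

```python
import cmath
import math

# Check numerically that, with the convention
#   fhat(xi) = integral of f(x) e^{-i xi x} dx,
# the function f(x) = x^{alpha-1} e^{-x} on x > 0 has
#   fhat(xi) = Gamma(alpha) / (1 + i*xi)^alpha.
alpha = 2.0   # arbitrary sample value
xi = 1.5      # arbitrary sample value

# Midpoint rule on [0, 50]; the e^{-x} decay makes the tail negligible.
dx = 1e-3
integral = sum(
    ((k + 0.5) * dx) ** (alpha - 1)
    * cmath.exp(-(1 + 1j * xi) * (k + 0.5) * dx)
    * dx
    for k in range(50000)
)

closed_form = math.gamma(alpha) / (1 + 1j * xi) ** alpha
print(abs(integral - closed_form))   # prints a small number (the quadrature error)
```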
To my taste, the delicacies/failures/technicalities of not-quite-easily-legal aspects of Fourier transforms are mostly crushed by simple ideas about Sobolev spaces and Schwartz' distributions... tho' these do not change the underlying realities. They only relieve us of some of the burden of misguided fussiness of some self-appointed guardians of a misunderstanding of the Cauchy-Weierstrass tradition.
[Added: surely such remarks will strike some readers as inappropriate poesy... but it is easy to be more blunt, if desired. Namely, in various common contexts there is a pointless, disproportionate emphasis on "rigor". Often, elementary analysis is the whipping-boy for this impulse, but also one can see elementary number theory made senselessly difficult in a similar fashion. Supposedly, the audience is being made aware of a "need/imperative" for care about delicate details. However, in practice, one can find oneself in the role of the Dilbertian "Mordac the Preventer (of information services)" [see wiki] proving things like the intermediate value theorem to calculus students: it is obviously true, first, or else one's meaning of "continuous" or "real numbers" needs adjustment; nevertheless, the traditional story is that this intuition must be delegitimized, and then a highly stylized substitute put in its place. What was the gain? Yes, something foundational, but time has passed, and we have only barely recovered, at some expense, what was obviously true at the outset.
On another hand, Bochner's irritation with "distributions theory" was that it was already clear to him that things worked like this, and he could already answer all the questions about generalized functions... so why be impressed with Schwartz' "mechanizing" it? For me, the answer is that Schwartz arranged a situation so that "any idiot" could use generalized functions, whereas previously it was an "art". Yes, sorta took the fun out of it... but maybe practical needs over-rule preservation of secret-society clubbiness?]
Why should there be Fourier inversion? (for example...) Well, we can say we want such a thing, because it diagonalizes the operator $d/dx$ on the line (and more complicated things can be said in more complicated situations).
Among other things, this renders "engineering math" possible... That is, one can understand and justify the almost-too-good-to-be-true ideas that seem "necessary" in applied situations... where I can't help but add "like modern number theory". :)
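A discrete cartoon of that diagonalization (sketch only: on a periodic grid rather than on the line, with $N = 256$ and the test function $\sin 3x$ my arbitrary choices):

```python
import numpy as np

# On a periodic grid, differentiation becomes multiplication by
# (i * frequency) on the Fourier side: the DFT "diagonalizes" d/dx.
N = 256
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = np.sin(3 * x)                  # test function; exact derivative is 3*cos(3x)

k = np.fft.fftfreq(N, d=1.0 / N)   # integer frequencies 0, 1, ..., -2, -1
df = np.fft.ifft(1j * k * np.fft.fft(f)).real

print(np.max(np.abs(df - 3 * np.cos(3 * x))))   # ~ machine precision
```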
[Added: being somewhat an auto-didact, I was not aware until relatively late that "proof" was absolutely sacrosanct. To the point of fetishism? In fact, we do seem to collectively value insightful conjecture and not-quite-justifiable heuristics, and interesting unresolved ideas offer more chances for engagement than do settled, ironclad, finished discussions. For that matter, the moments that one intuits "the truth", and then begins looking for reasons, are arguably more memorable, more fun, than the moments at which one has dotted i's and crossed t's in the proof of a not-particularly-interesting lemma whose truth was fairly obvious all along. More ominous is the point that sometimes we can see that something is true and works despite being unable to "justify" it. Heaviside's work is an instance. Transatlantic telegraph worked fine despite...]
In other words: spectral decomposition and synthesis. Who couldn't love it?!
[Added: and what recourse do we have other than to hope that reasonable operators are diagonalizable, etc.? Serre and Grothendieck (and Weil) knew for years that the Lefschetz fixed-point theorem should have an incarnation that would express zeta functions of varieties in terms of cohomology, before being able to make sense of this. Ngo (with Loeser, Cluckers, et al.)'s proof of the fundamental lemma in the number field case via model-theoretic transfer from the function field case is not something I'd want to have to "justify" to negativists!]
Best Answer
It is a special case of the trace formula. Both sides are the trace of the same operator.
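To spell out one standard way to see this (a sketch, for, say, Schwartz $f$, with the normalization $\hat{f}(\xi) = \int_{\mathbb{R}} f(x) e^{-2\pi i \xi x}\,dx$): let $T_f$ act on $L^2(\mathbb{R}/\mathbb{Z})$ by $(T_f \varphi)(x) = \int_{\mathbb{R}} f(x-y)\varphi(y)\,dy$. Unfolding the integral shows that $T_f$ has kernel
$$K(x,y) = \sum_{n \in \mathbb{Z}} f(x - y + n),$$
so its trace is $\int_0^1 K(x,x)\,dx = \sum_{n} f(n)$. On the other hand, the characters $e^{2\pi i k x}$ are eigenfunctions of $T_f$ with eigenvalues $\hat{f}(k)$, so the trace is also $\sum_{k} \hat{f}(k)$. Equating the two computations is exactly Poisson summation.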