Both the Euler-Maclaurin formula and Poisson summation are elementary instances of deeper ideas, but at their core those underlying ideas are very different.
The Euler-Maclaurin formula can be thought of as coming from repeated integration by parts, regarding the initial sum as a Riemann-Stieltjes integral
$$ \sum_{a < n \le b} f(n) = \int_a^b f(t) \, d \lfloor t \rfloor, $$
where the constants of integration in the antiderivatives appearing at each step are chosen to minimize the resulting error term (this choice is what produces the periodized Bernoulli polynomials). This turns out to be the same point of view as estimating the error in the composite trapezoidal rule for approximating an integral.
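To make this concrete, a single integration by parts (for integers $a < b$, writing $\lfloor t \rfloor = t - \{t\}$ and choosing the constant of integration so that the periodic factor $\{t\} - \tfrac{1}{2}$ has mean zero) already gives the first-order formula
$$ \sum_{a < n \le b} f(n) = \int_a^b f(t)\,dt + \frac{f(b)-f(a)}{2} + \int_a^b f'(t)\left(\{t\}-\tfrac{1}{2}\right)dt, $$
and iterating produces the higher Bernoulli terms.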
At its core, Euler-Maclaurin concerns estimating the difference between the discrete sum and the continuous integral of a nice function $f(t)$. There are generalizations to higher dimensions, and in particular to functions on polytopes. See this MO question for more on this connection, but the theme remains the same: one estimates a discrete sum (which is often quite hard to understand, but easy to actually compute) by an integral (whose properties are often much easier to understand, but which can be hard to actually compute).
Many of the successes of the Euler-Maclaurin formula in elementary and analytic number theory come from high-accuracy estimates of discrete sums of extremely well-behaved functions, such as
$$ \sum \log n, \quad \sum \frac{1}{n}.$$
In these cases, the smoothness properties of the function $f$ are more easily read off from the integral.
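As a quick illustration (a minimal numerical sketch of my own, with my own function name), the Euler-Maclaurin expansion of $\sum_{n\le N}\log n$ is Stirling's series, and even a low-order truncation is extremely accurate:

```python
import math

def log_factorial_em(N):
    """Euler-Maclaurin (Stirling) approximation to sum_{n=1}^{N} log n,
    truncated after the 1/(12N) term; the truncation error is O(1/N^3)."""
    return (N * math.log(N) - N + 0.5 * math.log(N)
            + 0.5 * math.log(2 * math.pi) + 1 / (12 * N))

N = 100
direct = sum(math.log(n) for n in range(1, N + 1))  # the discrete sum
print(abs(direct - log_factorial_em(N)))  # ~ 1/(360 N^3), about 2.8e-9
```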
Conversely, Euler-Maclaurin formulas are used in standard numerical analysis methods to give high accuracy estimates of integrals, since discrete sums with well-understood error terms are actually computable. Note that when approximating an integral $\int_a^b f(x) \, dx$ this way, the error is expressed in terms of high derivatives of $f$, so this still leverages the smoothness of $f$, but in a different way.
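Here is a minimal sketch (my own illustration, not a standard library routine) of using the leading Euler-Maclaurin term to correct the composite trapezoidal rule, upgrading its $O(h^2)$ error to $O(h^4)$ for smooth integrands:

```python
import math

def corrected_trapezoid(f, df, a, b, n):
    """Composite trapezoidal rule plus the leading Euler-Maclaurin
    correction -(h^2/12)(f'(b) - f'(a)); error drops from O(h^2) to O(h^4)."""
    h = (b - a) / n
    T = h * (0.5 * f(a) + sum(f(a + k * h) for k in range(1, n)) + 0.5 * f(b))
    return T - h * h / 12 * (df(b) - df(a))

# Example: the integral of e^x on [0, 1] is e - 1.
exact = math.e - 1
print(abs(corrected_trapezoid(math.exp, math.exp, 0.0, 1.0, 16) - exact))
```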
Poisson summation is very different in nature: it concerns Fourier series, or, stated more fundamentally, the Fourier transform. The underlying big picture concerns the structure of $\mathbb{R}$ and $\mathbb{Z}$, and how $\mathbb{Z}$ sits inside $\mathbb{R}$ as a discrete subgroup.
One can study Poisson summation for $\mathbb{Z}^n$ sitting within $\mathbb{R}^n$, or for more general group quotients. Carrying this to one extreme leads to the Selberg Trace Formula (in fact, one can directly view ordinary Poisson summation as a "trivial" trace formula; this point of view is taken in the introductory chapter of Bump's Automorphic Forms and Representations as a way of motivating the eventual discussion of trace formulas). From this point of view, it is clear that Poisson summation is fundamentally a consequence of representation-theoretic information.
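The classical example, which also connects to the theta function appearing in the answer below, is Poisson summation applied to a Gaussian: $\sum_{n\in\mathbb{Z}} e^{-\pi n^2 t} = t^{-1/2}\sum_{n\in\mathbb{Z}} e^{-\pi n^2/t}$ for $t > 0$. A minimal numerical check (my own sketch, with my own helper name):

```python
import math

def theta(t, terms=50):
    """Truncation of sum over n in Z of exp(-pi n^2 t), for t > 0."""
    return 1 + 2 * sum(math.exp(-math.pi * n * n * t) for n in range(1, terms))

t = 0.7
print(abs(theta(t) - theta(1 / t) / math.sqrt(t)))  # ~ 0 (machine precision)
```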
Although over $\mathbb{R}/\mathbb{Z}$ this looks superficially similar to the integrals in Euler-Maclaurin summation, I would say that the underlying ideas are substantively different.
Best Answer
I provide here a brief outline of how Ramanujan proved the functional equation in question.
Let us then start with a real variable $x\in(0,1)$ and define the variables $q, y, z$ depending on $x$ as follows: \begin{align} z&={}_2F_1\left(\frac{1}{2},\frac{1}{2};1;x\right)\tag{1}\\ y&=\pi\cdot\dfrac{{}_2F_1\left(\dfrac{1}{2},\dfrac{1}{2};1;1-x\right)} {{}_2F_1\left(\dfrac{1}{2},\dfrac{1}{2};1;x\right)}\tag{2}\\ q&=e^{-y} \tag{3} \end{align} where $${}_2F_1(a,b;c;x)=1+\frac{a\cdot b} {c} \cdot\frac{x} {1!}+\frac{a(a+1)\cdot b(b+1)}{c(c+1)}\cdot\frac{x^2}{2!}+\dots$$ is the Gauss hypergeometric function. You can find a more detailed explanation of the notation for hypergeometric functions in this answer of mine.
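For readers who want to experiment, these quantities are easy to compute numerically. Here is a minimal sketch using mpmath's `hyp2f1` (the helper name is mine):

```python
from mpmath import mp, hyp2f1, pi, exp

mp.dps = 30  # work with 30 significant digits

def ramanujan_vars(x):
    """Compute z, y, q of equations (1)-(3) for 0 < x < 1."""
    z = hyp2f1(0.5, 0.5, 1, x)
    y = pi * hyp2f1(0.5, 0.5, 1, 1 - x) / z
    return z, y, exp(-y)

z, y, q = ramanujan_vars(mp.mpf(1) / 2)
print(y)  # at x = 1/2, symmetry forces y = pi, so q = e^{-pi}
```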
The relation between $x$ and $z$ can also be written in terms of elliptic integrals as $$z=\frac{2}{\pi}\int_0^{\pi/2}\frac{dt}{\sqrt{1-x\sin^2t}}\tag{4}$$ Using these definitions Ramanujan proved that $$\lim_{x\to 0^+}\frac{q}{x}=\frac{1}{16}\tag{5}$$ Using this result and some properties of hypergeometric functions, he inverted the relation between $q$ and $x$ to obtain $$x=1-\frac{\phi^4(-q)}{\phi^4(q)},\quad z=\phi^2(q)\tag{6}$$ where $$\phi(q) =\sum_{n\in\mathbb{Z}} q^{n^2}\tag{7}$$ is a theta function (in Ramanujan's notation; Jacobi denoted the same function by $\vartheta_3(q)$). This theta function is related to the function $\psi$ mentioned in the question by $$\phi(q) =1+2\psi(y/\pi)\tag{8}$$

Denote the functional dependence of $q$ on $x$ by $q=F(x)$. Then $\log F(x) =-y$ and $$\log F(x) \cdot \log F(1-x)=\pi^2\tag{9}$$ Using this equation we get $\log F(1-x)=-\pi^2/y$, or $$F(1-x)=e^{-\pi^2/y}\tag{10}$$

Observe also that the second equation in $(6)$ can be written as $$\phi^2(F(x)) ={}_2F_1\left(\frac{1}{2},\frac{1}{2};1;x\right)\tag{11}$$ so that we also have $$\phi^2(F(1-x))={}_2F_1\left(\frac{1}{2},\frac{1}{2};1;1-x\right)\tag{12}$$

Now let $\alpha, \beta$ be positive real numbers with $\alpha\beta=\pi$, and let $y=\alpha^2$ so that $\pi^2/y=\beta^2$. Dividing $(12)$ by $(11)$ we get $$\frac{\phi^2(F(1-x))}{\phi^2(F(x))}=\frac{y}{\pi}=\frac{\alpha}{\beta}$$ or $$\alpha \phi^2(e^{-\alpha^2})=\beta\phi^2(e^{-\beta^2})$$ Taking square roots gives the desired identity from the question.
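None of this replaces the actual proofs, but the identities above are easy to test numerically. A minimal mpmath sketch checking $(5)$ and the final identity (the function names `F` and `phi` are mine, chosen to match the notation above):

```python
from mpmath import mp, hyp2f1, pi, exp, sqrt

mp.dps = 30

def F(x):
    """q = F(x) from definitions (1)-(3)."""
    y = pi * hyp2f1(0.5, 0.5, 1, 1 - x) / hyp2f1(0.5, 0.5, 1, x)
    return exp(-y)

def phi(q, terms=50):
    """Truncation of phi(q) = sum_{n in Z} q^{n^2} from (7)."""
    return 1 + 2 * sum(q**(n * n) for n in range(1, terms))

print(F(mp.mpf('1e-8')) / mp.mpf('1e-8'))  # (5): tends to 1/16 = 0.0625
alpha = mp.mpf('1.3'); beta = pi / alpha   # so that alpha * beta = pi
print(sqrt(alpha) * phi(exp(-alpha**2)) - sqrt(beta) * phi(exp(-beta**2)))  # ~ 0
```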
The crux of Ramanujan's approach lies in the proofs of identities $(5),(6)$ using the definitions $(1),(2),(3),(7)$ and identity $(4)$. The proofs are given with all relevant details in my blog posts: post 1, post 2.