There exist a variety of options for this task, but let me stress first that this is an extremely complicated and difficult issue that is still the subject of current research, because analytical continuation is an ill-posed problem!
1) The 'analytical' analytical continuation can be performed when the function $f(\mathrm i\omega)$ under consideration is a rational function of $\mathrm i\omega$. So $$f(\mathrm i\omega)=\frac{1}{\mathrm i\omega}$$ can be continued to the complex plane via $\mathrm i\omega \rightarrow z\in\mathbb C$, while $$f(\mathrm i\omega)=\frac{e^{\mathrm i\omega\beta}}{\mathrm i\omega}$$ is not a rational function of $\mathrm i\omega$, and making the replacement here is a mistake. Instead, one needs to evaluate the exponential first and find $e^{\mathrm i\omega\beta}=\pm 1$, depending on the statistics.
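As a quick numerical sanity check (a toy script; the value of $\beta$ is arbitrary), one can confirm that the exponential evaluates to $-1$ on fermionic and $+1$ on bosonic Matsubara frequencies:

```python
import cmath

beta = 2.0  # inverse temperature (arbitrary choice)

# fermionic Matsubara frequencies omega_n = (2n+1)*pi/beta: e^{i omega_n beta} = -1
for n in range(3):
    w = (2 * n + 1) * cmath.pi / beta
    print(cmath.exp(1j * w * beta))   # ~ -1 up to rounding

# bosonic Matsubara frequencies omega_n = 2n*pi/beta: e^{i omega_n beta} = +1
for n in range(3):
    w = 2 * n * cmath.pi / beta
    print(cmath.exp(1j * w * beta))   # ~ +1 up to rounding
```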
2) Directly inferred from this replacement rule is the expansion of a function in a finite Laurent series
$$f(z)=\sum^{m_2}_{n=m_1} a_n z^n,\;\; m_1,m_2\in\mathbb Z$$
where the coefficients may be calculated from the numerical values known at $m_2-m_1+1$ Matsubara energies.
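A minimal sketch of this procedure in Python (the three-term toy function and its coefficients are invented for illustration): sample a short Laurent series at $m_2-m_1+1$ Matsubara points and recover the coefficients by solving the resulting linear system.

```python
import numpy as np

# toy function on the Matsubara axis: f(z) = 2/z + 0.5 + 0.3*z  (m1 = -1, m2 = 1)
m1, m2 = -1, 1
true = {-1: 2.0, 0: 0.5, 1: 0.3}

beta = 5.0
# m2 - m1 + 1 = 3 fermionic Matsubara points suffice for 3 coefficients
zs = 1j * (2 * np.arange(m2 - m1 + 1) + 1) * np.pi / beta
fvals = sum(c * zs ** p for p, c in true.items())

# linear system: the columns are the powers z^n for n = m1..m2
A = np.column_stack([zs ** p for p in range(m1, m2 + 1)])
coeffs = np.linalg.solve(A, fvals)
print(np.round(coeffs.real, 6))   # ~ [2.0, 0.5, 0.3]
```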
3) One of the oldest methods for numerical analytical continuation is the Padé approximation. The function in question is expanded in a continued fraction
$$f(z)=b_0+\frac{a_1z}{1-\frac{a_2z}{1-\frac{a_3z}{1-...}}}.$$
The coefficients can be computed from a Padé table; see http://en.wikipedia.org/wiki/Pad%C3%A9_table
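Below is a minimal sketch of this idea in Python, using Thiele's reciprocal-difference recursion (a standard way to obtain the continued-fraction coefficients); the sampled two-pole Green's function is a made-up test case:

```python
import numpy as np

def pade_coefficients(zs, fs):
    """Thiele continued-fraction coefficients from samples fs at points zs."""
    N = len(zs)
    g = np.array(fs, dtype=complex)
    a = np.empty(N, dtype=complex)
    a[0] = g[0]
    for p in range(1, N):
        # reciprocal-difference recursion; only entries i >= p are needed
        g[p:] = (a[p - 1] - g[p:]) / ((zs[p:] - zs[p - 1]) * g[p:])
        a[p] = g[p]
    return a

def pade_eval(z, zs, a):
    """Evaluate a[0] / (1 + a[1](z-zs[0]) / (1 + a[2](z-zs[1]) / ...))."""
    acc = 0.0
    for p in range(len(a) - 1, 0, -1):
        acc = a[p] * (z - zs[p - 1]) / (1 + acc)
    return a[0] / (1 + acc)

# toy two-pole Green's function sampled at four fermionic Matsubara points
beta = 10.0
zs = 1j * (2 * np.arange(4) + 1) * np.pi / beta
fs = 0.5 / (zs - 1.0) + 0.5 / (zs + 1.0)
a = pade_coefficients(zs, fs)

w = 0.3 + 0.01j  # a point just above the real axis
print(pade_eval(w, zs, a), 0.5 / (w - 1.0) + 0.5 / (w + 1.0))  # these agree
```

Since the toy function is itself rational, the four-point Padé interpolant reproduces it exactly on the real axis; for noisy Monte Carlo data the result is far less stable.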
Method 1) is exact but, beyond almost trivial calculations, of little practical value. Methods 2) and 3) suffer from cutoff effects due to the limited number of available Matsubara points, at which the function values may additionally carry numerical errors, as is the case for data from Quantum Monte Carlo calculations. In fact, analytical continuation is very sensitive to cutoff and noise effects. This is where physical considerations have to be taken into account.
To tackle the cutoff, one can approximate the tail (the large-$\mathrm i\omega$, or respectively large-$z$, expansion) of the function with an analytical form that can often be computed exactly from the many-body problem or from general physical requirements; e.g., the one-particle Green's function of a fermionic system always has the form $\frac{1}{z}+\frac{a_1}{z^2}+\dots$. The tail can be used to compute an arbitrary number of expansion coefficients, but keep in mind that the interesting low-energy spectrum of your system is strongly influenced by the small Matsubara energies and less so by the tail, so one gains little to nothing from computing a large number of coefficients from the tail.
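As a toy illustration of such a tail fit (the pole position $0.4$ and the index window are invented for the example), one can estimate $a_1$ from the large-frequency behaviour of $G(z)-1/z$:

```python
import numpy as np

# toy Green's function with known tail: G(z) = 1/(z - 0.4) = 1/z + 0.4/z^2 + ...
beta = 10.0
n = np.arange(100, 200)              # large Matsubara indices: the tail region
z = 1j * (2 * n + 1) * np.pi / beta
G = 1.0 / (z - 0.4)

# estimate a1 in G(z) ~ 1/z + a1/z^2 by averaging (G - 1/z) * z^2 over the tail
a1 = np.mean((G - 1.0 / z) * z ** 2)
print(a1.real)   # close to 0.4
```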
The treatment of statistical noise is even more delicate than the cutoff, and it is the reason why many people try to avoid calculating on the Matsubara axis altogether.
4) A prominent method for noisy data is the maximum entropy method; you can read more about it here: http://arxiv.org/pdf/1001.4351v1.pdf, where you will also find references to alternative techniques.
There is a version of the FDT in the Keldysh formalism, which is a general framework for treating non-equilibrium systems in the so-called steady-state limit. In this approach, one assumes that the system is initially non-interacting and characterised by some equilibrium distribution function $n(\omega)$, which depends on the particle's statistics. Interactions are then switched on adiabatically at later times. Using the Green's function formalism, one can work out a relation between the differently contour-ordered Green's functions on the Keldysh contour:
https://en.wikipedia.org/wiki/Keldysh_formalism
One can see that on the Keldysh contour, four different contour orderings occur. The four contour-ordered Green's functions are not independent of each other; they are related by a causality constraint. This allows one to "rotate" the components of the Green's function into the usual advanced and retarded components $G^{A},G^{R}$, plus another component termed the Keldysh Green's function $G^{K}$. In the special case where the system is in thermodynamic equilibrium (and thus there is time-translation invariance), these three components are not independent of each other: they are related by an FDT relation of the form:
\begin{eqnarray}
G^{K}(\omega)=\left(1+2\eta\, n(\omega)\right)\left(G^{R}(\omega)-G^{A}(\omega)\right),
\end{eqnarray}
where $\eta=+1$ for bosons and $\eta=-1$ for fermions, and $n(\omega)$ is correspondingly either the Bose-Einstein or the Fermi-Dirac distribution function. These components of the Green's function appear in the solution of Dyson's equation:
\begin{eqnarray}
G^{-1}(\omega)=G^{-1}_0(\omega)-\Sigma(\omega),
\end{eqnarray}
where $G_0(\omega)$ is the non-interacting Green's function and $\Sigma(\omega)$ is the self-energy. Dyson's equation is completely general for bosons or fermions, and the FDT equation above, which is valid only in equilibrium, likewise holds independently of the particle's statistics.
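The statistics factor in the FDT above reduces to the familiar $\tanh$/$\coth$ form, $1-2n_F(\omega)=\tanh(\beta\omega/2)$ and $1+2n_B(\omega)=\coth(\beta\omega/2)$, which a short script can confirm (the values of $\beta$ and $\omega$ here are arbitrary):

```python
import math

beta, w = 2.5, 0.7
nF = 1.0 / (math.exp(beta * w) + 1.0)   # Fermi-Dirac distribution
nB = 1.0 / (math.exp(beta * w) - 1.0)   # Bose-Einstein distribution

# fermions (eta = -1): 1 - 2 n_F(w) = tanh(beta*w/2)
print(1 - 2 * nF, math.tanh(beta * w / 2))
# bosons (eta = +1): 1 + 2 n_B(w) = coth(beta*w/2)
print(1 + 2 * nB, 1 / math.tanh(beta * w / 2))
```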
Best Answer
To your first point, "are they one-body or many-body ones?": I am not quite sure what you mean. This quantity involves all of the many-body eigenstates, but it is a one-body quantity in the sense that it only accounts for processes of adding or removing one particle.
From the definition of the retarded Green's functions we have
$$ G^{r} = -i\Theta(t-t')\frac{1}{Z}\text{Tr}\Big[e^{-\beta(\hat{H}-\mu \hat{N})}\{\hat{\psi}(xt),\hat{\psi}^{\dagger}(x't') \} \Big] $$ where $\{\cdot,\cdot\}$ is the anti-commutator and $ Z = \text{Tr}\exp(-\beta(\hat{H}-\mu\hat{N}))$, and the trace is over all the many-body eigenstates. The advanced is defined analogously: $$ G^{a} = i\Theta(t'-t)\frac{1}{Z}\text{Tr}\Big[e^{-\beta(\hat{H}-\mu \hat{N})}\{\hat{\psi}(xt),\hat{\psi}^{\dagger}(x't') \} \Big] $$
Assuming a time-independent $\hat{H}$, we Fourier transform $t-t' \to \omega$; from the Lehmann representation we then have: $$ G^{r}(\omega) - G^{a}(\omega) = 2i\ \text{Im}G^{r}(\omega) $$
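This identity follows because $G^{a}(\omega)=\left(G^{r}(\omega)\right)^{*}$ on the frequency axis; a one-line numerical check with a toy single-pole $G^{r}$ (pole position and broadening invented):

```python
# single-pole retarded Green's function; in frequency space G^a = (G^r)^*
w, eps, gamma = 0.3, 0.1, 0.05
Gr = 1.0 / (w - eps + 1j * gamma)
Ga = Gr.conjugate()
print(Gr - Ga, 2j * Gr.imag)   # the two expressions agree
```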
Substituting this relation back into your expression gives,
$$ G^{<}(\omega) = - 2if(\omega)\text{Im}G^{r}(\omega), $$ which shows a much more immediate analogy to the standard fluctuation-dissipation relation: an analogue that connects a correlation function to the imaginary part of a response function. I am unaware of a deeper physical intuition for this expression, so if someone else has a better sense, please feel free to correct my answer.
A great source is this chapter submitted to the Autumn School on Correlated Electrons: https://arxiv.org/abs/1907.11302