I don't know if the following is what you are looking for, but: to give the division a meaning, what you can do is look for functions $\phi \in \mathcal E^0(\mathbb R)$ (that is, continuous functions $\mathbb R \to \mathbb R$) that satisfy $\phi \delta = \delta$. Since for any $\psi \in \mathcal D(\mathbb R)$ we have $$(\phi \delta)(\psi) =\delta(\phi\psi) = \phi(0)\psi(0) = \phi(0)\delta(\psi), $$
that is $\phi \delta = \phi(0)\delta$, so we have $$\phi \delta = \delta \iff \phi(0) = 1. $$
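A quick numerical sanity check of this identity, approximating $\delta$ by a narrow Gaussian $\delta_\varepsilon$ (the particular $\phi$ and $\psi$ below are arbitrary illustrative choices, not from the argument above):

```python
import numpy as np

# Pairing (phi*delta)(psi) with delta replaced by a narrow Gaussian
# delta_eps; as eps -> 0 the result tends to phi(0)*psi(0).
def delta_eps(x, eps):
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

phi = lambda x: np.cos(x) + x      # phi(0) = 1, so phi*delta should act like delta
psi = lambda x: np.exp(-x**2)      # smooth, rapidly decaying test function

x = np.linspace(-1, 1, 200001)
dx = x[1] - x[0]
pairing = np.sum(phi(x) * psi(x) * delta_eps(x, 1e-3)) * dx
print(pairing)                     # close to phi(0)*psi(0) = 1
```

Since $\phi(0)=1$ here, the pairing is (numerically) indistinguishable from $\delta(\psi)=\psi(0)$.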
The basic reason for this is that $G(\mathbf{r},\mathbf{r_0})$ is locally integrable on the whole of $\mathbb{R}^n$, $n\geq 3$, i.e. $G\in L^1_{loc}(\mathbb{R}^n)$. To use the terminology of V.S. Vladimirov [1, § 1.6 p. 15], $G$ is a regular generalized function, so by linearity and by the theorem on passage to the limit under the integral sign, it is represented exactly as
$$
\langle G,\varphi \rangle= \int_{\mathbb{R}^n} G(\mathbf{r},\mathbf{r_0})\varphi(\mathbf{r}) d\mathbf{r} \quad \forall \varphi \in C^\infty_0(\mathbb{R}^n)
$$
It is therefore a somewhat natural choice to use the structure of this distribution to prove that
$$
\langle\nabla^2 G,\varphi \rangle= \langle G,\nabla^2\!\varphi \rangle= \int_{\mathbb{R}^n} G(\mathbf{r},\mathbf{r_0}) \nabla^2\!\varphi(\mathbf{r}) d\mathbf{r}=\varphi(\mathbf{r_0}) \quad \forall \varphi \in C^\infty_0(\mathbb{R}^n)
$$
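This identity can be checked symbolically in a simple special case. The excerpt does not fix the explicit form of $G$, so the following assumes the Newtonian potential $G(\mathbf{r},\mathbf{0}) = -1/(4\pi|\mathbf{r}|)$ in $\mathbb{R}^3$ with $\mathbf{r_0}=\mathbf{0}$ and the sign convention $\nabla^2 G = \delta$. For a radial test function $\nabla^2\varphi = \varphi'' + (2/r)\varphi'$ and $d\mathbf{r} = 4\pi r^2\,dr$; a Gaussian stands in for a compactly supported $\varphi$, since it decays fast enough for the integral to converge:

```python
import sympy as sp

# Check  ∫ G ∇²φ dr = φ(0)  in R^3 for radial φ, with the (assumed)
# Newtonian potential G = -1/(4π r) and r0 = 0.
r = sp.symbols('r', positive=True)
phi = sp.exp(-r**2)                       # radial test function, phi(0) = 1
G = -1 / (4 * sp.pi * r)
lap_phi = sp.diff(phi, r, 2) + (2 / r) * sp.diff(phi, r)   # radial Laplacian
val = sp.integrate(G * lap_phi * 4 * sp.pi * r**2, (r, 0, sp.oo))
print(sp.simplify(val))                   # 1, i.e. phi(0)
```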
The method of proof of this equality to which user258521 alludes is simply the above formula in disguise. To see this, consider any domain $\Omega\subset \mathbb{R}^n$ for which a form of Gauss's theorem holds, and a mollifier $\varphi_\varepsilon \in C^\infty_0(B_\varepsilon)$, i.e. an infinitely smooth function of compact support contained inside the ball $B_\varepsilon$ of radius $\varepsilon>0$ centered at $\mathbf{0}\in \mathbb{R}^n$, such that $\varphi_\varepsilon(\mathbf{r})\to \delta(\mathbf{r})$ in $\mathscr{D}^\prime$ as $\varepsilon\to 0$. Then
$$
\varphi_\varepsilon\ast\chi_\Omega (\mathbf{r})= \int_{\mathbb{R}^n} \varphi_\varepsilon(\mathbf{r}-\mathbf{s})\chi_\Omega(\mathbf{s}) d\mathbf{s} \in C^\infty_0(\Omega+B_\varepsilon) \subset C^\infty_0(\mathbb{R}^n)
$$
where
- $\chi_\Omega:\mathbb{R}^n\to\{0,1\}$ is the indicator or characteristic function of the domain $\Omega$
- $\Omega+B_\varepsilon$ is the Minkowski sum of $\Omega$ and $B_\varepsilon$, i.e. the set $\{\mathbf{s}+\mathbf{t} : \mathbf{s}\in\Omega,\ \mathbf{t}\in B_\varepsilon\}$.
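A one-dimensional sketch of this construction (with the hypothetical choice $\Omega = [0,1]$ and a standard $C^\infty$ bump as mollifier) shows that $\varphi_\varepsilon\ast\chi_\Omega$ equals $1$ well inside $\Omega$, vanishes outside $\Omega+B_\varepsilon$, and interpolates smoothly in between:

```python
import numpy as np

# Mollify the indicator of Omega = [0, 1] with a C^inf bump of radius eps:
# the convolution is smooth, equals 1 well inside Omega, and is supported
# in Omega + B_eps = [-eps, 1 + eps].
eps = 0.05
x = np.linspace(-0.5, 1.5, 4001)
dx = x[1] - x[0]

t = np.arange(-2 * eps, 2 * eps + dx / 2, dx)   # symmetric kernel grid
kernel = np.zeros_like(t)
m = np.abs(t) < eps
kernel[m] = np.exp(-1 / (1 - (t[m] / eps)**2))  # standard bump on (-eps, eps)
kernel /= kernel.sum() * dx                     # normalize to unit total mass

chi = ((x >= 0) & (x <= 1)).astype(float)       # indicator of Omega
conv = np.convolve(chi, kernel, mode='same') * dx

print(conv[np.argmin(np.abs(x - 0.5))])         # ≈ 1 (well inside Omega)
print(conv[np.argmin(np.abs(x + 0.3))])         # ≈ 0 (outside Omega + B_eps)
```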
By using the above representation for $\langle G,\varphi_\varepsilon\ast\chi_\Omega \rangle$ and letting $\varepsilon\to 0$ we get
$$
\langle G,\varphi_\varepsilon\ast\chi_\Omega \rangle \to \int_\Omega G(\mathbf{r},\mathbf{r_0}) d\mathbf{r},
$$
and consequently, applying Gauss's theorem,
$$
\langle \nabla^2 G,\varphi_\varepsilon\ast\chi_\Omega \rangle =
\langle G,\nabla^2(\varphi_\varepsilon\ast\chi_\Omega) \rangle\to
\begin{cases}
1 & \mathbf{r_0}\in\Omega\\
0 & \mathbf{r_0}\notin\Omega
\end{cases}
$$
- Note that this kind of proof works only for domains $\Omega$ for which some form of Gauss's theorem holds.
- A direct distribution-theoretic proof, like the one offered by Vladimirov [1, §2.3.8 pp. 33-35], does not need such a hypothesis.
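For the ball $\Omega = B_R(\mathbf{r_0})$ the Gauss-theorem computation can be done in closed form. Assuming again the Newtonian potential $G = -1/(4\pi r)$ in $\mathbb{R}^3$ (an assumption, as before, since the excerpt leaves $G$ implicit), the outward flux of $\nabla G$ through $\partial B_R$ equals $1$ for every radius, which is the $\mathbf{r_0}\in\Omega$ case of the dichotomy above:

```python
import sympy as sp

# Flux of grad G through the sphere ∂B_R around r0, with the (assumed)
# Newtonian potential G = -1/(4π r): radial derivative times surface area.
r, R = sp.symbols('r R', positive=True)
G = -1 / (4 * sp.pi * r)
flux = sp.diff(G, r).subs(r, R) * 4 * sp.pi * R**2
print(sp.simplify(flux))      # 1, independently of R
```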
[1] Vladimirov, V. S. (2002), Methods of the theory of generalized functions, Analytical Methods and Special Functions, 6, London–New York: Taylor & Francis, pp. XII+353, ISBN 0-415-27356-0, MR 2012831, Zbl 1078.46029.
First, if $f_1$ and $f_2$ are solutions to $Tf=g,$ where $T$ is some linear operator and $g$ is given, then $f_1-f_2$ is a solution to $Tf=0.$ Therefore we will study $x f(x)=0.$ It can easily be shown in distribution theory that $x\delta(x)=0,$ $x^2\delta(x)=0,$ and $x^2\delta'(x)=0,$ but since you're studying Fourier transforms, I will give an explanation using Fourier transforms:
Take the equation $x f(x) = 0$ and apply the Fourier transform to both sides. You get $i\hat{f}'(\xi) = 0.$ This is a differential equation with solutions $\hat{f}(\xi) = C,$ where $C$ is a constant. Taking the inverse Fourier transform gives us $f(x) = C\delta(x).$
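The conclusion can be verified directly in the sense of distributions: pairing $x\,\delta(x)$ with a test function $\psi$ gives $\langle x\delta,\psi\rangle = 0\cdot\psi(0)=0$. A small symbolic check (the concrete test function below is an arbitrary choice):

```python
import sympy as sp

# Pair x*delta(x) with a test function: the result is 0*psi(0) = 0,
# so f = C*delta indeed solves x*f = 0 in the distributional sense.
x = sp.symbols('x', real=True)
psi = sp.exp(-x**2)
pairing = sp.integrate(x * sp.DiracDelta(x) * psi, (x, -sp.oo, sp.oo))
print(pairing)      # 0
```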
Likewise, $x^2 f(x) = 0$ transforms to $-\hat{f}''(\xi)=0$ with solutions $\hat{f}(\xi) = A\xi+B,$ i.e. $f(x) = -iA\delta'(x)+B\delta(x).$
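The same kind of check works here, using the defining pairings $\langle\delta',g\rangle=-g'(0)$ and $\langle\delta,g\rangle=g(0)$ with $g=x^2\psi$ (again with an arbitrary concrete test function):

```python
import sympy as sp

# Pair x^2 * (-i*A*delta' + B*delta) with a test function psi, via
# <delta', g> = -g'(0) and <delta, g> = g(0) with g = x^2 * psi.
# Both contributions vanish, since g(0) = 0 and g'(0) = 0.
x = sp.symbols('x', real=True)
psi = sp.exp(-x**2)
g = x**2 * psi
pair_delta_prime = -sp.diff(g, x).subs(x, 0)   # <x^2 delta', psi>
pair_delta = g.subs(x, 0)                      # <x^2 delta,  psi>
print(pair_delta_prime, pair_delta)            # 0 0
```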