[Math] Dirac delta integral

measure-theory

Let $(X,\mathcal{A},\mu)$ be a measure space, and let $A \in \mathcal{A}$ be such that $\mu(A) = 0$. Define $h\colon X \to [-\infty,\infty]$ by $h(x) = +\infty$ if $x \in A$ and $h(x) = 0$ otherwise. It's easy to see that $\int h \, d\mu = 0$: for example, take the sequence $f_n = n \chi_A$, which is increasing, consists of simple, measurable, non-negative functions, and converges pointwise to $h$, so $\int h \, d\mu = \lim_n \int f_n \, d\mu = 0$. But our $h$ is not so different from the Dirac delta, is it? If we take $(X,\mathcal{A},\mu) = (\mathbf{R},\mathcal{B},\lambda)$ (the real line with the Borel $\sigma$-algebra and Lebesgue measure) and define $h$ using $A = \{0\}$, then we get the Dirac delta. But, according to Wikipedia, the integral of the Dirac delta (over $\mathbf{R}$) is $1$, contrary to our result with $h$.
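To spell out the computation behind that claim (via the monotone convergence theorem, which applies since the $f_n$ increase to $h$):

$$\int_X h \, d\mu = \lim_{n \to \infty} \int_X f_n \, d\mu = \lim_{n \to \infty} n \, \mu(A) = \lim_{n \to \infty} n \cdot 0 = 0.$$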

If someone could explain what's going on and what I'm misunderstanding, I would be grateful. Thanks.

Best Answer

The Dirac delta is not a function, not even a function that takes the value $\infty$ at one point. In particular, it is not the limit (in any reasonable topology that I can think of) of the functions $f_n$ that take the value $n$ at $0$ and the value $0$ everywhere else.

If one wants to approximate delta by genuine functions, one needs to use functions whose integrals converge to $1$, and the convergence of the functions to delta will be in the sense of distributions, not pointwise convergence.
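For instance, one standard choice of approximating sequence (the symbols $\phi_n$ and $\psi$ are introduced here purely for illustration; many other choices work) is

$$\phi_n(x) = n \, \chi_{[-\frac{1}{2n},\,\frac{1}{2n}]}(x),$$

which satisfies $\int_{\mathbf{R}} \phi_n \, d\lambda = n \cdot \tfrac{1}{n} = 1$ for every $n$, while for every continuous test function $\psi$

$$\int_{\mathbf{R}} \phi_n \, \psi \, d\lambda = n \int_{-1/(2n)}^{1/(2n)} \psi(x) \, dx \longrightarrow \psi(0).$$

Pointwise, these $\phi_n$ converge to the same $h$ as in the question (they blow up at $0$ and vanish everywhere else), but it is the constraint $\int \phi_n \, d\lambda = 1$, not the pointwise limit, that makes them converge to the delta in the sense of distributions: the functionals $\psi \mapsto \int \phi_n \psi \, d\lambda$ converge to $\psi \mapsto \psi(0)$.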