Why is the weak convergence of probability measures on $\mathbb{R}$ with respect to bounded continuous test functions $C^0_b(\mathbb{R})$ metrizable by the bounded Lipschitz metric $$d(\mu, \nu) = \sup_{f \in \text{Lip}(\mathbb{R})} \Big | \int_{\mathbb{R}} f \, d \nu - \int_{\mathbb{R}} f \, d \mu \Big |$$ where $$\text{Lip}(\mathbb{R}) = \Big \{ f \in C_b(\mathbb{R}) : \sup_x |f(x) | \leq 1, \ \sup_{x \neq y} \frac{| f(x) - f(y) |}{|x-y|} \leq 1 \Big \}?$$ For those who would like a reference, this is invoked in the proof of the truncated version of Wigner's semicircle law in Anderson–Guionnet–Zeitouni's $\textit{An Introduction to Random Matrices}$ and is cited in the appendix as part of Theorem C.8, though no proof is given there. If anyone could help me with this fact, I'd greatly appreciate it!
Analysis – Metrizability of Weak Convergence by the Bounded Lipschitz Metric
analysis, functional-analysis, measure-theory, metric-spaces
Related Solutions
The answer is only partially YES. However, $\mathcal{M}^+(\mathbb{R})$ obviously cannot be a vector space due to the positivity constraint, so this rules out both questions as currently written. What is true, though, is that the metric space $(\mathcal{M}^+(\mathbb{R}),d_{BL})$ is complete and metrizes weak convergence. I will only prove completeness rigorously; see my final remark for how to obtain the "metrization". The proof below actually works in any dimension, and also in any domain $\Omega\subset \mathbb R^d$.
Let $\{\mu_n\}$ be a sequence of positive measures, let me denote the mass $m_n:=\mu_n(\mathbb R)\geq 0$, and assume that the sequence is Cauchy $$ d_{BL}(\mu_p,\mu_q)\to 0 \qquad \mbox{as }p,q\to\infty. $$
As a first step, it is easy to see that $\{m_n\}$ is a (real) Cauchy sequence: indeed, testing $ f \equiv 1$ in the definition of $d_{BL}$, we get $$ |m_p-m_q|=\left|\int 1 \, d\mu_p -\int 1 \, d\mu_q \right|\leq d_{BL}(\mu_p,\mu_q). $$ Since the real line is complete, there is $m\geq 0$ such that $m_n\to m$. If $m=0$ then it is immediate that, for any $f\in \mathcal C_b$, we have $\left|\int f \, d \mu_n\right|\leq \|f\|_\infty m_n\to 0$, which proves that $\mu_n\to 0$ weakly (narrowly).
If $m>0$ then we can assume that $m/2\leq m_n\leq 2m$ for $n$ large enough, and the renormalized sequence $\tilde \mu_n:=\frac {\mu_n}{m_n}\in\mathcal P(\mathbb R)$ is well-defined. I claim that $\{\tilde\mu_n\}$ is $d_{BL}$-Cauchy as well. Indeed, for $p,q$ large enough, the triangle inequality gives \begin{multline*} \left| \int f d\tilde\mu_p -\int f d\tilde\mu_q\right| = \left| \int f \frac{1}{m_p}d\mu_p -\int f \frac{1}{m_q}d\mu_q \right| \\ \leq \frac 1m \left| \int f d\mu_p -\int f d\mu_q \right| \\ + \left|\left(\frac 1{m_p}-\frac 1m \right)\int f d\mu_p\right| + \left|\left(\frac 1{m_q}-\frac 1m \right)\int f d\mu_q\right| \\ \leq \frac 1m \left| \int f d\mu_p -\int f d\mu_q \right| \\ + \left|\frac 1{m_p}-\frac 1m \right| \|f\|_\infty 2m + \left|\frac 1{m_q}-\frac 1m \right| \|f\|_\infty 2m. \end{multline*} Taking the supremum over $f$ with $\|f\|_\infty\leq 1$ and $\mathrm{Lip}(f)\leq 1$ gives $$ d_{BL}(\tilde\mu_p,\tilde\mu_q)\leq \frac 1md_{BL}(\mu_p,\mu_q) + 2m\left|\frac 1{m_p}-\frac 1m \right| +2m\left|\frac 1{m_q}-\frac 1m \right|, $$ which proves the claim.
Since $(\mathcal P(\mathbb R),d_{BL})$ is complete there is a probability measure $\tilde \mu\in \mathcal P(\mathbb R)$ such that $d_{BL}(\tilde\mu_n,\tilde \mu)\to 0$. Because we already proved that $m_n\to m$, it is then easy to check that $ \mu_n=m_n\tilde\mu_n$ converges (in the bounded Lipschitz distance) to the limit $\mu:=m\tilde\mu$. Indeed for fixed $f$ \begin{multline*} \left|\int f d\mu_n- \int f d\mu \right| =\left|m_n\int f d\tilde\mu_n- m\int f d\tilde\mu \right| \\ \leq |m_n-m|\cdot \left|\int f d\tilde \mu_n\right| + m\left|\int f d\tilde\mu_n-\int f d\tilde\mu \right| \\ \leq |m_n-m|\cdot\|f\|_\infty+ m\left|\int f d\tilde\mu_n-\int f d\tilde\mu \right|. \end{multline*} Taking one last time the supremum over $f$'s gives $d_{BL}(\mu_n,\mu)\leq |m_n-m| + md_{BL}(\tilde\mu_n,\tilde\mu)\to 0$ and the proof is complete.
Final remark: following the same lines it is easy to see that $d_{BL}$ does indeed metrize weak convergence. The strategy of proof is identical: show that the masses converge, use this to suitably renormalize $\tilde\mu_n:=\frac{1}{m_n}\mu_n$, and exploit the fact that the statement is already known for probability measures. (The case of vanishing mass $m_n\to 0$ must be treated separately.)
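For finitely supported measures the distance $d_{BL}$ can actually be computed, since the supremum over test functions reduces to a finite-dimensional linear program in the values $f_i=f(x_i)$ at the support points. Below is a minimal numerical sketch of this (the function `dbl` and the use of `scipy.optimize.linprog` are my own illustration, not part of the answer above):

```python
import numpy as np
from scipy.optimize import linprog

def dbl(x, mu, nu):
    """Bounded Lipschitz distance between the finitely supported measures
    sum_i mu[i] delta_{x[i]} and sum_i nu[i] delta_{x[i]}.

    Solves the LP:  maximize sum_i f_i (nu_i - mu_i)
    subject to |f_i| <= 1 and |f_i - f_j| <= |x_i - x_j| for all i, j."""
    x = np.asarray(x, float)
    w = np.asarray(nu, float) - np.asarray(mu, float)
    n = len(x)
    rows, rhs = [], []
    for i in range(n):
        for j in range(i + 1, n):
            r = np.zeros(n)
            r[i], r[j] = 1.0, -1.0
            rows += [r, -r]                      # f_i - f_j <= |x_i - x_j|, and reversed
            rhs += [abs(x[i] - x[j])] * 2
    res = linprog(-w,                            # maximize w.f  <=>  minimize -w.f
                  A_ub=np.array(rows) if rows else None,
                  b_ub=np.array(rhs) if rhs else None,
                  bounds=[(-1.0, 1.0)] * n)      # the constraint |f_i| <= 1
    return -res.fun
```

For two Dirac masses $\delta_0,\delta_t$ this gives $d_{BL}=\min(t,2)$ (the Lipschitz constraint for small $t$, the uniform bound $|f|\le 1$ for large $t$), and the single-atom case $\mu=2\delta_0$, $\nu=\delta_0$ recovers the mass bound $|m_p-m_q|\le d_{BL}$ from the first step of the proof. On the real line the pairwise Lipschitz constraints could be reduced to consecutive support points after sorting; they are kept in full here for transparency.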
Let $$ h(x)=\left\{\begin{array}{lll}\mathrm{e}^{-1/x^2} & \text{if} & x>0,\\ 0 & \text{if} & x\le 0.\end{array}\right. $$ Then $h\in C^\infty(\mathbb R)$. Set $$ j(\boldsymbol{x})=c\,h\big(1-\|\boldsymbol{x}\|^2\big), $$ where $\boldsymbol{x}\in\mathbb R^n$ and $c>0$ is chosen so that $\int_{\mathbb R^n}j(\boldsymbol{x})\,d\boldsymbol{x}=1$. Clearly $j\ge 0$, $j\in C^\infty(\mathbb R^n)$, and $\mathrm{supp}\,j\subset B(0,1)$, the unit ball.
Next define $j_\varepsilon(\boldsymbol{x})=\varepsilon^{-n}j(\varepsilon^{-1}\boldsymbol{x})$, so that $\int_{\mathbb R^n}j_\varepsilon(\boldsymbol{x})\,d\boldsymbol{x}=1$, and for a $1$-Lipschitz function $f$ set $$ f_\varepsilon=f*j_\varepsilon, $$ i.e., $$ f_\varepsilon(\boldsymbol{x})=\int_{\mathbb R^n} f(\boldsymbol{y})\,j_\varepsilon(\boldsymbol{x}-\boldsymbol{y})\,d\boldsymbol{y}=\int_{\mathbb R^n} f(\boldsymbol{x}-\boldsymbol{y})\,j_\varepsilon(\boldsymbol{y})\,d\boldsymbol{y}= \frac{1}{\varepsilon^n}\int_{B(0,\varepsilon)} f(\boldsymbol{x}-\boldsymbol{y})\,j(\boldsymbol{y}/\varepsilon)\,d\boldsymbol{y}=\int_{B(0,1)} f(\boldsymbol{x}-\varepsilon\boldsymbol{y})\,j(\boldsymbol{y})\,d\boldsymbol{y}. $$ Clearly $f_\varepsilon\in C^\infty(\mathbb R^n)$. Next $$ f_\varepsilon(\boldsymbol{x}_1)-f_\varepsilon(\boldsymbol{x}_2)= \int_{\mathbb R^n} \big(f(\boldsymbol{x}_1-\boldsymbol{y})-f(\boldsymbol{x}_2-\boldsymbol{y})\big)\,j_\varepsilon(\boldsymbol{y})\,d\boldsymbol{y} $$ and hence $$ \lvert\,f_\varepsilon(\boldsymbol{x}_1)-f_\varepsilon(\boldsymbol{x}_2)\rvert\le \int_{\mathbb R^n} \lvert\, f(\boldsymbol{x}_1-\boldsymbol{y})-f(\boldsymbol{x}_2-\boldsymbol{y})\rvert\,j_\varepsilon(\boldsymbol{y})\,d\boldsymbol{y}\le\|\boldsymbol{x}_1-\boldsymbol{x}_2\|\int_{\mathbb R^n}j_\varepsilon(\boldsymbol{y})\,d\boldsymbol{y}=\|\boldsymbol{x}_1-\boldsymbol{x}_2\|, $$ so $f_\varepsilon$ is $1$-Lipschitz as well. Finally $$ \lvert\,f_\varepsilon(\boldsymbol{x})-f(\boldsymbol{x})\rvert\le \int_{B(0,1)} \big\lvert\,f(\boldsymbol{x}-\varepsilon\boldsymbol{y})-f(\boldsymbol{x})\,\big\rvert\,j(\boldsymbol{y})\,d\boldsymbol{y}\le \varepsilon\int_{B(0,1)}\|\boldsymbol{y}\|\,j(\boldsymbol{y})\,d\boldsymbol{y}\le \varepsilon. $$
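The two estimates above can be sanity-checked numerically in one dimension. This is only an illustrative sketch of my own (the choices $f(x)=|x|$, $\varepsilon=0.1$, and the grid sizes are arbitrary, and the integrals are approximated by the trapezoid rule):

```python
import numpy as np

trapezoid = getattr(np, "trapezoid", None) or np.trapz  # NumPy 2.x renamed trapz

def h(s):
    """The C-infinity cutoff h(s) = exp(-1/s^2) for s > 0, and 0 otherwise."""
    out = np.zeros_like(s)
    pos = s > 0
    out[pos] = np.exp(-1.0 / s[pos] ** 2)
    return out

# Bump kernel j(y) = c * h(1 - y^2), supported in [-1, 1], normalized to integral 1.
y = np.linspace(-1.0, 1.0, 4001)
j = h(1.0 - y ** 2)
j /= trapezoid(j, y)

# Mollify the 1-Lipschitz function f(x) = |x|:  f_eps(x) = int f(x - eps*y) j(y) dy.
f = np.abs
eps = 0.1
xs = np.linspace(-1.0, 1.0, 401)
f_eps = np.array([trapezoid(f(x - eps * y) * j, y) for x in xs])

sup_err = np.max(np.abs(f_eps - f(xs)))             # should be <= eps
lip = np.max(np.abs(np.diff(f_eps)) / np.diff(xs))  # should be <= 1 (up to quadrature error)
```

As expected, the uniform error stays below $\varepsilon$ (it is largest near the kink at $0$, where the smoothing actually does something, and vanishes for $|x|\ge\varepsilon$), and the discrete Lipschitz constant of $f_\varepsilon$ does not exceed $1$.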
Best Answer
There is a proof in Section 8.3 of Bogachev's Measure Theory.