Here's one way to do it without getting your hands too dirty. It's not constructive.
Let ${\cal A}$ be the union of a countable set dense in $C_0({\mathbb R}^d)$ and the indicators of all balls of integer radius centered at the origin (the indicators are not necessary, but will be helpful later).
Let $\{X_1,X_2,\dots \}$ be an IID $\mu$-distributed sequence.
Let $\mu_n = \frac{1}{n} \sum_{j\le n} \delta_{X_j}.$
Then $\mu_n$ is a random probability measure (the empirical measure of the first $n$ samples). Observe that for any bounded measurable function $f$,
$$\int f d\mu_n= \frac {1}{n} \sum_{j\le n } f(X_j).$$
Then by the strong law of large numbers, almost surely,
$$ \lim_{n\to\infty} \int f d\mu_n = \int f d\mu ,~\forall f \in {\cal A}\quad (*).$$
(Here is where we use the countability of ${\cal A}$: each $f\in{\cal A}$ contributes one null set, and the union of countably many null sets is still null.)
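The LLN step can be checked numerically. A minimal sketch, assuming for illustration that $\mu$ is the standard normal on ${\mathbb R}$ and that $f(x)=e^{-x^2}$ stands in for an element of ${\cal A}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative choices: mu = N(0,1) on R, f a C_0 function standing in
# for an element of the countable family A
def f(x):
    return np.exp(-x ** 2)

n = 200_000
X = rng.standard_normal(n)      # X_1, ..., X_n i.i.d. ~ mu
empirical = f(X).mean()         # = int f d mu_n
exact = 1 / np.sqrt(3)          # E[e^{-X^2}] = 1/sqrt(3) for X ~ N(0,1)
print(abs(empirical - exact))   # small; shrinks like n^{-1/2}
```

The gap between the empirical and exact integrals is of Monte Carlo order $n^{-1/2}$, consistent with $(*)$.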
Let $\{\bar \mu_n\}$ be a realization of $\{\mu_n\}$ (equivalently, of $\{X_1,X_2,\dots\}$) for which $(*)$ holds. It remains to show that $\bar \mu_n$ converges weakly to $\mu$.
Let $f\in C_b({\mathbb R}^d)$. By a standard procedure, for every $M$ there exists $f_M\in C_0({\mathbb R}^d)$ such that $f_M$ coincides with $f$ on $\{x:|x|\le M\}$ and $|f_M |\le |f|$. Fix $\epsilon>0$ and choose an integer $M$ such that $\mu(\{x:|x|>M\})<\epsilon$.
There exists a continuous $g_M\in {\cal A}$ such that $\|f_M-g_M\|_\infty<\epsilon$, since the continuous part of ${\cal A}$ is dense in $C_0({\mathbb R}^d)$.
We then have
$$ \int f d \bar \mu_n = \int g_M d \bar \mu_n + \int (f_M -g_M) d \bar \mu_n +
\int (f- f_M) d \bar \mu_n.$$
The first integral on the RHS converges to $\int g_M d\mu$ by $(*)$. The second is bounded in absolute value by $\epsilon$, since $\|f_M-g_M\|_\infty<\epsilon$ and $\bar\mu_n$ is a probability measure. The third is bounded in absolute value by $2\|f\|\,\bar\mu_n (\{x:|x|>M\})$, which (recall the indicator functions in ${\cal A}$) converges to $2\|f\|\,\mu(\{x:|x|>M\})<2\|f\|\epsilon$.
Therefore
$$ \limsup_{n\to\infty} |\int f d\bar \mu_n - \int g_M d\mu| \le \epsilon (1+2\|f\|).$$
But
$$\begin{align*} \left|\int g_M d\mu - \int f d \mu\right| &\le \int |g_M -f_M| d \mu + \int |f_M -f| d\mu\\
& \le \epsilon + 2\|f\|\,\mu (\{x:|x|>M\})\le \epsilon (1+2\|f\|).\end{align*}$$
Combining the last two displays,
$$ \limsup_{n\to\infty} \left|\int f d\bar \mu_n - \int f d\mu\right| \le 2\epsilon (1+2\|f\|),$$
and since $\epsilon>0$ was arbitrary, $\int f d\bar\mu_n \to \int f d\mu$ for every $f\in C_b({\mathbb R}^d)$, i.e. $\bar\mu_n$ converges weakly to $\mu$.
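The whole argument can be illustrated numerically (a sketch only; $\mu = N(0,1)$, $f=\cos \in C_b\setminus C_0$, and the cutoff $M=3$ are illustrative choices):

```python
import math

import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal(200_000)    # i.i.d. sample from mu = N(0,1)

# f in C_b(R) but NOT in C_0(R), so the truncation step is really needed
empirical = np.cos(X).mean()
exact = math.exp(-0.5)              # E[cos X] = e^{-1/2} for X ~ N(0,1)

# the tail-mass step: mu_n({|x| > M}) -> mu({|x| > M})
M = 3
tail_empirical = (np.abs(X) > M).mean()
tail_exact = 1 - math.erf(M / math.sqrt(2))   # P(|X| > M) for X ~ N(0,1)

print(abs(empirical - exact), abs(tail_empirical - tail_exact))
```

Both gaps are small, matching the two ingredients of the proof: convergence of $\int f\,d\bar\mu_n$ for $f\in C_b$, and convergence of the tail mass controlled by the indicators in ${\cal A}$.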
$\newcommand{\norm}[1]{\left\lVert#1\right\rVert}$
Let $(X, \mathcal{A})$ be a measurable space and denote by $\mathcal{M}$ the space of (finite) complex measures on $(X, \mathcal{A})$, endowed with the variation norm. Let $B$ be the closed unit ball in $\mathcal{M}$. I will show the following:
For any $x \in X$ and $a \in \mathbb{C}$ with $|a| =1$, the measure $a \delta_x$ is an extreme point of $B$ (where $\delta_x$ is the Dirac measure centered at $x$).
Proof:
Let $\mu := a \delta_x$ for some $|a| =1$. The idea of the proof is that the variation measure $| \mu |$ is equal to $\delta_x$, so that $| \mu |$ is an extreme point of the convex set $\mathcal{P}$ of probability measures on $(X, \mathcal{A})$.
Suppose that $\mu = s \nu_1 + (1-s) \nu_2$, where $\nu_1, \nu_2 \in B$ and
$0 <s <1$. We want to prove that $\nu_1 = \nu_2 = \mu$.
First, we can assume that $\norm{ \nu_1 } = \norm{ \nu_2 } = 1$ (so that $| \nu_1 |$ and $| \nu_2 |$ are probability measures): if, say, $\norm{\nu_1} < 1$, we would have $\norm{ \mu } = \norm{ s \nu_1 + (1-s) \nu_2 } \leq s \norm{\nu_1} +(1-s) \norm{ \nu_2} < s + (1-s) = 1$, contradicting $\norm{\mu} = |a| = 1$.
We have by definition of $\mu$ that:
$$
\delta_x = | \mu | = | s \nu_1 +(1-s) \nu_2 | \leq s |\nu_1| +(1-s) |\nu_2|.
$$
The measure $\nu := s | \nu_1 | + (1-s) |\nu_2|$ is a probability measure, and the inequality $\delta_x \leq \nu$ forces $\nu = \delta_x$. Indeed, if $A$ is measurable and contains $x$, we must have
$$
1 = \delta_x (A) \leq \nu(A) \leq 1,
$$
so $\nu(A)=1$. On the other hand, if $x$ is not in $A$, then $\nu(A) = \nu(X) - \nu(X \setminus A) = 1 - 1 = 0$. This means that $\nu = \delta_x$.
Therefore we have $\delta_x = s | \nu_1 | + (1-s) | \nu_2 |$. Since $| \nu_1 |$ and $| \nu_2 |$ are probabilities and $\delta_x$ is an extreme point of $\mathcal{P}$, we must have $| \nu_1 | = | \nu_2 | = \delta_x$.
The equality $| \nu_1 | = | \nu_2 | = \delta_x$ implies that $\nu_1$ and $\nu_2$ are multiples of $\delta_x$. Indeed, if $A$ is a measurable set that does not contain $x$, then
$$
| \nu_i (A) | \leq |\nu_i|(A) = \delta_x (A) = 0,
$$
so that $\nu_i(A) =0$. On the other hand, if $A$ contains $x$ then $\nu_i(A) = \nu_i(X) - \nu_i(X \setminus A ) = \nu_i(X)$. Therefore we have $\nu_i = b_i \delta_x$, where $b_i = \nu_i(X)$. Furthermore, we have $| \nu_i | = |b_i \delta_x | = |b_i| \delta_x$, which implies that $|b_i|=1$.
Our assumption that $\mu = s \nu_1 + (1-s) \nu_2$ thus gives
$$
a \delta_x = (s b_1 + (1-s) b_2) \delta_x,
$$
and therefore $a = s b_1 +(1-s) b_2$, where $0< s <1$ and $a$, $b_1$ and $b_2$ are complex numbers of modulus $1$. Since the points of modulus $1$ are exactly the extreme points of the closed unit disk in $\mathbb{C}$, this is only possible if $a = b_1 = b_2$, and we conclude that $\nu_1 = \nu_2 = a \delta _x$. $\square$
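The last step rests on the strict convexity of the unit disk; a quick numerical sanity check (the particular complex numbers and the weight $s$ are illustrative values only):

```python
def modulus_of_average(b1, b2, s):
    """|s*b1 + (1-s)*b2| for complex b1, b2 and a weight 0 < s < 1."""
    return abs(s * b1 + (1 - s) * b2)

# two DISTINCT unit complex numbers average strictly inside the disk ...
inside = modulus_of_average(1, 1j, 0.5)      # = sqrt(1/2) < 1
# ... so a = s*b1 + (1-s)*b2 with |a| = |b1| = |b2| = 1 forces b1 = b2 (= a)
on_circle = modulus_of_average(1j, 1j, 0.5)  # modulus stays 1 only if b1 == b2
print(inside, on_circle)
```

Replacing $(1, 1j)$ by any other distinct pair of unit complex numbers gives the same strict inequality, which is exactly why $a = b_1 = b_2$ is forced.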
Best Answer
This might not satisfy you, but it is covered by a result of Dubins, cited in my answer to a previous MSE question. You have a compact convex set $\mathcal P$ of probability measures whose set of extreme points is the set of point masses. Your $\mathcal P(\rho)$ is the intersection of $\mathcal P$ with a codimension-$1$ affine subspace, namely the set of measures $\nu$ cut out by the single equation $\int_0^1 td\nu(t)=\rho$. (This equation is affine in $\nu$.) Dubins's result then tells us that the extreme points of $\mathcal P(\rho)$ are convex combinations of at most $1+1=2$ extreme points of $\mathcal P$, that is, of pairs of point masses.
I wrote "might not satisfy you" because this post only tells you that the fact you ask about has been noticed before but does not explain why the result is true. Maybe you can examine your proof and see how it squares with Dubins's?
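To make the description of these extreme points concrete (a sketch with illustrative values $t_1, t_2, \rho$, not part of Dubins's statement): a candidate extreme point of $\mathcal P(\rho)$ supported on two points $t_1 < \rho < t_2$ has its weight pinned down by the mean constraint:

```python
def two_point_weight(t1, t2, rho):
    """Weight w on t1 (and 1 - w on t2) making the mean of
    w*delta_{t1} + (1-w)*delta_{t2} equal to rho; needs t1 < rho < t2."""
    return (t2 - rho) / (t2 - t1)

# illustrative values: support {0.2, 0.8}, prescribed mean rho = 0.5
w = two_point_weight(0.2, 0.8, 0.5)
print(w, w * 0.2 + (1 - w) * 0.8)  # the mean constraint holds by construction
```

Once the two support points are fixed, there is no freedom left in the weight, which is why such measures sit at the boundary of $\mathcal P(\rho)$.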