Let us denote our probability space by $(\Omega,\mathcal{F},P)$ and let $X_1,X_2,\ldots,X_n$ be a sequence of i.i.d. random variables defined on $\Omega$.
You're correct that $\{X_i\leq x\}$ is shorthand notation for $\{\omega\in\Omega\mid X_i(\omega)\leq x\}$, which is a subset of $\Omega$ that belongs to $\mathcal{F}$ (since $X_i$ is a random variable). Furthermore, $I(X_i\leq x)$ is the indicator function of the set $\{X_i\leq x\}\subseteq\Omega$ and by definition it is a function defined on $\Omega$ (in fact it is a random variable, since the set belongs to $\mathcal{F}$):
$$
\begin{align}
I(X_i\leq x)(\omega)&=
\begin{cases}
1,\quad \text{if }\omega\in \{X_i\leq x\},\\
0,\quad \text{otherwise}.
\end{cases}
\\
&=
\begin{cases}
1,\quad\text{if }X_i(\omega)\leq x,\\
0,\quad\text{otherwise}.
\end{cases}
\end{align}
$$
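As a minimal numerical sketch of this (the distribution $X_i \sim N(0,1)$ is an assumption for illustration, not part of the question): many simulated draws stand in for evaluating $X_i$ at many different outcomes $\omega$, and the indicator's sample average approximates $E[I(X_i\leq x)]=P(X_i\leq x)=F(x)$.

```python
import math
import numpy as np

# Assumed example: X_i ~ N(0,1); each simulated draw plays the role of X_i(omega)
# for a different outcome omega.
rng = np.random.default_rng(0)
x = 0.5
draws = rng.standard_normal(100_000)    # realizations X_i(omega)

indicator = (draws <= x).astype(int)    # I(X_i <= x)(omega): takes only the values 0 and 1

# E[I(X_i <= x)] = P(X_i <= x) = F(x), so the sample average of the indicator
# should be close to the standard normal CDF at x.
F_x = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
print(indicator.mean(), F_x)
```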
Therefore, $\frac1n \sum_{i=1}^n I(X_i\leq x)$ is also a random variable for each fixed $n$.
A sample in this context just denotes a sequence of i.i.d. random variables $X_1,\ldots,X_n$. An outcome of this sample corresponds to a fixed $\omega$: $X_1(\omega),\ldots,X_n(\omega)$ is an outcome, or observation, of the sample $X_1,\ldots,X_n$.
The empirical distribution function $F_n(x)=\frac1n \sum_{i=1}^n I(X_i\leq x)$ is indeed a random variable, and we can evaluate it in the following way:
$$
(F_n(x))(\omega)=\frac1n\sum_{i=1}^n I(X_i(\omega)\leq x),
$$
i.e. for a fixed outcome $\omega\in\Omega$, $(F_n(x))(\omega)$ is the fraction of the observations $X_1(\omega),X_2(\omega),\ldots,X_n(\omega)$ that are less than or equal to $x$.
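Concretely (with a hypothetical vector of observed values standing in for $X_1(\omega),\ldots,X_n(\omega)$), evaluating $F_n(x)$ at a fixed $\omega$ is just computing an average:

```python
import numpy as np

def ecdf_at(observations, x):
    """Evaluate (F_n(x))(omega) for one realized sample `observations`."""
    observations = np.asarray(observations)
    return np.mean(observations <= x)   # (1/n) * sum_i I(X_i(omega) <= x)

# Hypothetical realized sample X_1(omega), ..., X_5(omega):
obs = [2.0, 0.3, 1.1, 4.2, 0.9]
print(ecdf_at(obs, 1.0))                # 2 of the 5 observations are <= 1.0, so 0.4
```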
Now suppose we have an infinite sequence of i.i.d. variables $X_1,X_2,\ldots$. Then by the strong law of large numbers, for every fixed $x$ the random variables $F_1(x),F_2(x),F_3(x),\ldots$ converge almost surely to the value $F(x)$ of the true CDF $F$:
$$
F_n(x)\to F(x)\;\;\text{almost surely as } n\to\infty.
$$
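This pointwise convergence can be seen numerically. The sketch below assumes $X_i \sim N(0,1)$ and $x=0$ (so that $F(x)=1/2$ exactly); for growing $n$, one realization of $F_n(x)$ drifts toward $F(x)$.

```python
import numpy as np

# Assumed example: X_i ~ N(0,1), x = 0, so the true CDF value is F(0) = 0.5.
rng = np.random.default_rng(1)
x = 0.0
F_x = 0.5

for n in (10, 1_000, 100_000):
    sample = rng.standard_normal(n)     # one realization X_1(omega), ..., X_n(omega)
    F_n_x = np.mean(sample <= x)        # F_n(x)(omega)
    print(n, F_n_x, abs(F_n_x - F_x))   # the gap shrinks as n grows
```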
It's not correct.
The empirical measure isn't a measure on the sample space $\Omega$, it's a (random) measure on $\mathbb{R}$. Notationally, I think most people reserve letters like $P, P_n$, etc, for measures on $\Omega$, using letters like $\mu, \nu$ for measures on other spaces.
So I'd call your empirical measure $\mu_n$ and then write its mean as
$$\int_{\mathbb{R}} x\,\mu_n(dx) = \frac{1}{n} \sum_{i=1}^n X_i.$$
Note that the left-hand side denotes the integral over $\mathbb{R}$, with respect to the measure $\mu_n$, of the identity function $f : \mathbb{R} \to \mathbb{R}$ given by $f(x) = x$. The lower-case $x$ is intentional and not a typo.
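The identity $\int_{\mathbb{R}} x\,\mu_n(dx) = \frac1n\sum_i X_i$ can be checked directly: $\mu_n$ puts mass $1/n$ on each observed point, so integrating the identity function against it is just the sample average. The data below is a hypothetical realization, chosen only to verify the identity numerically.

```python
import numpy as np

# Hypothetical observed values X_1(omega), ..., X_4(omega):
xs = np.array([1.0, 2.0, 2.0, 5.0])

# mu_n assigns mass (count / n) to each distinct observed value, so
# int x mu_n(dx) = sum over support points of (value * mass).
values, counts = np.unique(xs, return_counts=True)
integral = np.sum(values * counts / len(xs))

print(integral, xs.mean())              # both equal (1 + 2 + 2 + 5) / 4 = 2.5
```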
For each $\omega$, $P_n(\omega)=\frac1n \sum\limits_{i=1}^n \delta_{X_i(\omega)}$ is a measure on the state space $X$. More specifically,
$$ P_n(\omega)(A)=\frac1n \sum_{i=1}^n\delta_{X_i(\omega)}(A)=\frac{\#\{1\leq i\leq n\mid X_i(\omega)\in A\}}{n} $$
for every measurable $A\subseteq X$.

If $f:X\to\mathbb{R}$ is measurable, then $P_n f:\Omega\to\mathbb{R}$ is simply the integral of $f$ with respect to $P_n$, defined $\omega$-by-$\omega$. In other words, for fixed $\omega$,
$$ P_n f(\omega):=\int_X f\,\mathrm dP_n(\omega)=\frac1n\sum_{i=1}^n\int_X f\,\mathrm d\delta_{X_i(\omega)}=\frac 1n\sum_{i=1}^n f(X_i(\omega)). $$
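The last display says that integrating $f$ against the empirical measure at a fixed $\omega$ is just averaging $f$ over the realized sample. A small sketch (the function $f$ and the data are hypothetical choices, picked so the result is easy to check by hand):

```python
import numpy as np

def empirical_integral(f, observations):
    """P_n f(omega) = (1/n) * sum_i f(X_i(omega)) for one realized sample."""
    return np.mean([f(v) for v in observations])

obs = [0.0, 1.0, 2.0]                   # X_1(omega), X_2(omega), X_3(omega)
f = lambda t: t ** 2

print(empirical_integral(f, obs))       # (0 + 1 + 4) / 3 = 5/3
```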