Let us denote our probability space by $(\Omega,\mathcal{F},P)$ and let $X_1,X_2,\ldots,X_n$ be a sequence of i.i.d. random variables defined on $\Omega$.
You're correct that $\{X_i\leq x\}$ is shorthand notation for $\{\omega\in\Omega\mid X_i(\omega)\leq x\}$, which is a subset of $\Omega$ that belongs to $\mathcal{F}$ (since $X_i$ is a random variable). Furthermore, $I(X_i\leq x)$ is the indicator function of the set $\{X_i\leq x\}\subseteq\Omega$, and by definition it is a function defined on $\Omega$ (in fact it is a random variable, since the set belongs to $\mathcal{F}$):
$$
\begin{align}
I(X_i\leq x)(\omega)&=
\begin{cases}
1,\quad \text{if }\omega\in \{X_i\leq x\},\\
0,\quad \text{otherwise}
\end{cases}
\\
&=
\begin{cases}
1,\quad\text{if }X_i(\omega)\leq x,\\
0,\quad\text{otherwise}.
\end{cases}
\end{align}
$$
Therefore, $\frac1n \sum_{i=1}^n I(X_i\leq x)$ is also a random variable for each fixed $n$.
A sample in this connection just denotes a sequence of i.i.d. random variables $X_1,\ldots,X_n$. An outcome of this sample corresponds to a fixed $\omega$, and $X_1(\omega),\ldots,X_n(\omega)$ would be an outcome or observation of the sample $X_1,\ldots,X_n$.
The empirical distribution function $F_n(x)=\frac1n \sum_{i=1}^n I(X_i\leq x)$ is indeed a random variable, and we can evaluate it in the following way:
$$
(F_n(x))(\omega)=\frac1n\sum_{i=1}^n I(X_i(\omega)\leq x),
$$
i.e. for a fixed outcome $\omega\in\Omega$, $(F_n(x))(\omega)$ is the number of observations less than or equal to $x$, divided by $n$, based on the outcome $X_1(\omega),X_2(\omega),\ldots,X_n(\omega)$.
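A minimal numerical sketch of this evaluation, assuming (hypothetically) that the $X_i$ are Uniform$(0,1)$: drawing one outcome $\omega$ amounts to drawing one list of observations $X_1(\omega),\ldots,X_n(\omega)$, and $(F_n(x))(\omega)$ is then just a counting exercise.

```python
import random

# Hypothetical realization: one outcome omega gives one list of
# observations X_1(omega), ..., X_n(omega), here drawn from Uniform(0, 1).
random.seed(0)
n = 10
sample = [random.random() for _ in range(n)]

def F_n(x, sample):
    """Empirical CDF at x: fraction of observations that are <= x."""
    return sum(1 for xi in sample if xi <= x) / len(sample)

print(F_n(0.5, sample))  # a multiple of 1/n between 0 and 1
```

For a different $\omega$ (a different seed), the same function $F_n(\cdot)$ returns different values, which is exactly the sense in which $F_n(x)$ is a random variable.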
Now suppose we have an infinite sample of i.i.d. variables $X_1,X_2,\ldots$. Then by the strong law of large numbers, for every fixed $x$ the random variables $F_1(x), F_2(x), F_3(x),\ldots$ converge almost surely to the true CDF value $F(x)$:
$$
F_n(x)\to F(x)\;\;\text{almost surely as } n\to\infty.
$$
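This convergence can be watched numerically. A sketch under the hypothetical assumption $X_i\sim$ Uniform$(0,1)$, whose true CDF is $F(x)=x$ on $[0,1]$: fix $x$ and let $n$ grow.

```python
import random

# Assumed model (not from the original post): X_i ~ Uniform(0, 1),
# so the true CDF is F(x) = x for x in [0, 1]. Fix x = 0.3.
random.seed(1)
x = 0.3
for n in (10, 100, 10_000):
    sample = [random.random() for _ in range(n)]
    F_n_x = sum(1 for xi in sample if xi <= x) / n
    print(n, F_n_x)  # drifts toward F(x) = 0.3 as n grows
```

Note that the law of large numbers applies here because, for fixed $x$, the indicators $I(X_i\leq x)$ are themselves i.i.d. with mean $P(X_1\leq x)=F(x)$.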
It's not correct.
The empirical measure isn't a measure on the sample space $\Omega$; it's a (random) measure on $\mathbb{R}$. Notationally, I think most people reserve letters like $P$, $P_n$, etc., for measures on $\Omega$, using letters like $\mu$, $\nu$ for measures on other spaces.
So I'd call your empirical measure $\mu_n$ and then write its mean as
$$\int_{\mathbb{R}} x\,\mu_n(dx) = \frac{1}{n} \sum_{i=1}^n X_i.$$
Note that the left-hand side denotes the integral over $\mathbb{R}$, with respect to the measure $\mu_n$, of the identity function $f : \mathbb{R} \to \mathbb{R}$ given by $f(x) = x$. The lower-case $x$ is intentional and not a typo.
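That identity can be checked directly: since $\mu_n$ places mass $\frac1n$ at each observed point, integrating the identity function $f(x)=x$ against $\mu_n$ collapses to the sample average. A sketch with a hypothetical Gaussian sample:

```python
import random

# Hypothetical realization X_1, ..., X_n (standard normal draws).
random.seed(2)
sample = [random.gauss(0.0, 1.0) for _ in range(5)]

# mu_n puts mass 1/n at each observation, so the integral of f(x) = x
# against mu_n is a finite sum: sum over the atoms of (value * mass).
n = len(sample)
integral = sum(x * (1 / n) for x in sample)
sample_mean = sum(sample) / n
print(integral, sample_mean)  # equal up to floating-point rounding
```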
Best Answer
I think for any realization of the random variables $X_{1},\ldots,X_{n}$, the empirical distribution $F_{n}(x)=\frac{1}{n}\sum_{i=1}^{n}I_{\{X_{i}\le x\}}$ is just a discrete distribution function concentrated on these $n$ values, attaching weight $\frac{1}{n}$ to each of them. Thus the integral $\int g(x)\,dF_{n}(x)$ is the same as the expected value of $g$ with respect to a discrete distribution: $$\int g(x)\,dF_{n}(x)=\frac{1}{n}\sum_{i=1}^{n}g(X_{i}).$$ I hope this is what you are looking for.
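The general identity can be sketched the same way: integrating any $g$ against the discrete measure that $F_n$ induces is just averaging $g$ over the observations. All names below (the sample, the choice $g(x)=x^2$) are illustrative assumptions, not part of the original answer.

```python
import random

# Hypothetical realization X_1, ..., X_n (exponential draws).
random.seed(3)
sample = [random.expovariate(1.0) for _ in range(8)]

def integral_dFn(g, sample):
    """Integral of g with respect to F_n: the expectation of g under the
    discrete distribution putting weight 1/n on each observation."""
    return sum(g(xi) for xi in sample) / len(sample)

# Example with g(x) = x**2: the integral reduces to the average of the
# squared observations, matching (1/n) * sum g(X_i).
lhs = integral_dFn(lambda t: t ** 2, sample)
rhs = sum(xi ** 2 for xi in sample) / len(sample)
print(lhs, rhs)
```

Taking $g(x)=I(x\leq t)$ for a fixed $t$ recovers $F_n(t)$ itself, which is a useful sanity check on the formula.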