Let us denote our probability space by $(\Omega,\mathcal{F},P)$ and let $X_1,X_2,\ldots,X_n$ be a sequence of i.i.d. random variables defined on $\Omega$.
You're correct that $\{X_i\leq x\}$ is shorthand for $\{\omega\in\Omega\mid X_i(\omega)\leq x\}$, a subset of $\Omega$ that belongs to $\mathcal{F}$ (since $X_i$ is a random variable). Furthermore, $I(X_i\leq x)$ is the indicator function of the set $\{X_i\leq x\}\subseteq\Omega$, and by definition it is a function defined on $\Omega$ (in fact a random variable, since the set belongs to $\mathcal{F}$):
$$
\begin{align}
I(X_i\leq x)(\omega)&=
\begin{cases}
1, &\text{if }\omega\in \{X_i\leq x\},\\
0, &\text{otherwise}
\end{cases}
\\
&=
\begin{cases}
1, &\text{if }X_i(\omega)\leq x,\\
0, &\text{otherwise}.
\end{cases}
\end{align}
$$
Therefore, $\frac1n \sum_{i=1}^n I(X_i\leq x)$ is also a random variable for each fixed $n$.
A sample in this connection just denotes a sequence of i.i.d. random variables $X_1,\ldots,X_n$. An outcome of this sample corresponds to a fixed $\omega$, and $X_1(\omega),\ldots,X_n(\omega)$ would be an outcome or observation of the sample $X_1,\ldots,X_n$.
The empirical distribution function $F_n(x)=\frac1n \sum_{i=1}^n I(X_i\leq x)$ is indeed a random variable, and we can evaluate it in the following way:
$$
(F_n(x))(\omega)=\frac1n\sum_{i=1}^n I(X_i(\omega)\leq x),
$$
i.e. for a fixed outcome $\omega\in\Omega$, $(F_n(x))(\omega)$ is the number of observations among $X_1(\omega),X_2(\omega),\ldots,X_n(\omega)$ that are less than or equal to $x$, divided by $n$.
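As a concrete illustration, here is a minimal Python sketch of this evaluation (the sample values and the evaluation point are made up for the example): for a fixed outcome, evaluating $F_n(x)$ just means counting observed values that do not exceed $x$.

```python
def ecdf_at(observations, x):
    """Empirical CDF of the observed values at the point x:
    the fraction of observations that are <= x."""
    n = len(observations)
    return sum(1 for obs in observations if obs <= x) / n

# A fixed omega gives concrete numbers X_1(omega), ..., X_5(omega):
sample = [0.3, 1.7, 0.9, 2.4, 0.5]
print(ecdf_at(sample, 1.0))  # 3 of 5 observations are <= 1.0, so 0.6
```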
Now suppose we have an infinite sequence of i.i.d. variables $X_1,X_2,\ldots$. For fixed $x$, the indicators $I(X_1\leq x),I(X_2\leq x),\ldots$ are i.i.d. with mean $E[I(X_i\leq x)]=P(X_i\leq x)=F(x)$, so by the strong law of large numbers the random variables $F_1(x),F_2(x),F_3(x),\ldots$ converge almost surely to the true CDF $F$ evaluated at $x$:
$$
F_n(x)\to F(x)\;\;\text{almost surely as } n\to\infty.
$$
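This convergence can be watched numerically. The following is a hedged Python sketch (standard library only; the choice of $\text{Exp}(1)$ and the evaluation point $x=1$ are arbitrary for the example): for $X_i \sim \text{Exp}(1)$ we know $F(x)=1-e^{-x}$, and $F_n(x)$ settles down to it as $n$ grows.

```python
import math
import random

random.seed(0)

x = 1.0
true_F = 1 - math.exp(-x)  # CDF of Exp(1) evaluated at x

draws = []
for n in [10, 100, 10_000]:
    # Extend the same i.i.d. sequence rather than resampling,
    # so F_10, F_100, F_10000 come from one infinite sample.
    while len(draws) < n:
        draws.append(random.expovariate(1.0))
    Fn = sum(1 for d in draws if d <= x) / n
    print(n, round(Fn, 4), "| F(x) =", round(true_F, 4))
```

The printed values of $F_n(x)$ wander toward $F(x)\approx 0.6321$ as $n$ increases, which is the almost-sure convergence above seen along one outcome $\omega$.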
Best Answer
Sometimes one says that a histogram based on a large sample size gives a good idea about the shape of the population density function. (But information is lost in binning, and a modern 'density estimator' usually works better.)
In somewhat the same way an empirical cumulative distribution function (ECDF) of a large sample is a good estimator of the population CDF.
The following R program samples 3000 observations from $\mathsf{Gamma}(5,1)$ to illustrate @Clement C's comment. The figure below shows the histogram (at left) along with the known population density (dotted) and a density estimator. At right, the population CDF (thin light green) is superimposed on the ECDF (heavy black) of the sample. A larger sample would fit even better, but then the population and sample curves might lie too close together to distinguish.
If you have access to R, you can try other population distributions and sample sizes. The same program as above, except with a sample of size $n=100$, was used to produce the figure below. Roughly speaking, the ECDF gives a better estimate of the CDF than a histogram gives of the PDF. A 'nonparametric bootstrap' procedure uses the sample ECDF in place of the unknown population CDF.
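For readers without R, here is a rough Python analogue of the experiment (my sketch, not the original program): it draws 3000 observations from $\mathsf{Gamma}(5,1)$ and, instead of plotting, compares the sample ECDF with the exact CDF at a few points. For integer shape $k=5$ and rate $1$ the CDF has the closed form $F(x)=1-e^{-x}\sum_{j=0}^{4} x^j/j!$.

```python
import math
import random

random.seed(1)

def gamma5_cdf(x):
    """CDF of Gamma(shape=5, rate=1); closed form for integer shape."""
    return 1 - math.exp(-x) * sum(x**j / math.factorial(j) for j in range(5))

# 3000 i.i.d. draws; gammavariate takes (shape, scale), and scale = 1/rate = 1 here.
sample = [random.gammavariate(5.0, 1.0) for _ in range(3000)]

for x in [2.0, 5.0, 8.0]:
    ecdf = sum(1 for s in sample if s <= x) / len(sample)
    print(f"x={x}: ECDF={ecdf:.3f}  CDF={gamma5_cdf(x):.3f}")
```

With 3000 observations the two columns agree to roughly two decimal places at each point, which is the numerical counterpart of the close fit between the ECDF and CDF curves in the figure.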