A bit of research told me that none of the above is true; see, for example, Dudley's book "Real Analysis and Probability" (Section 11.4). The empirical measure on a probability space $(\Omega,\mathcal F,\mu)$ is defined via a sequence of iid random variables $X_i$ defined on $\Omega^{\mathbb N}$, by the mapping
$$A\mapsto \frac{1}{n}\sum_{i=1}^n\delta_{X_i(\omega)}(A)\text{ for }\omega\in \Omega^{\mathbb N}.$$
The approximation property says that for almost all $\omega\in \Omega^{\mathbb N}$, the empirical measure converges weakly (or weak-$*$, depending on your mood ;-) ) to $\mu$.
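A quick Monte Carlo sketch of this convergence: the choices of $\mu$ (standard normal) and of the test set $A=(-\infty,1]$ below are illustrative assumptions, not part of the original statement. The empirical measure of $A$ is just the fraction of samples landing in $A$, and it should be close to $\mu(A)$ for large $n$.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Draw n iid samples X_1, ..., X_n from mu (here: standard normal, an
# assumption for illustration) and compare the empirical measure of the
# set A = (-inf, t] with mu(A), i.e. the empirical CDF with the true CDF.
n = 100_000
samples = rng.standard_normal(n)

t = 1.0
empirical = np.mean(samples <= t)          # (1/n) * sum_i delta_{X_i}((-inf, t])
true_value = 0.5 * (1 + erf(t / sqrt(2)))  # Phi(1), the true mu(A)

print(abs(empirical - true_value))  # small, and -> 0 almost surely as n grows
```

Checking sets of the form $(-\infty,t]$ for all $t$ is exactly the Glivenko–Cantelli picture of the same convergence.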
By definition,
$$
\textrm{Var}(Y\mid X):=\mathbb E\bigl[\bigl(Y-\mathbb E(Y\mid X)\bigr)^2\mid X\bigr]=\mathbb E(Y^2\mid X)-\mathbb E(Y\mid X)^2.
$$
Thus, the conditional variance is a random variable, in the same way that the conditional expectation $\mathbb E(Y\mid X)$ is. Conceptually, the variance is the "same type of object" as the expectation, in this regard.
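To see the conditional variance as a random variable concretely, here is a minimal sketch. The model below is an illustrative assumption: $X$ is uniform on $\{0,1\}$, and given $X=x$ the noise scale depends on $x$, so $\textrm{Var}(Y\mid X)$ takes a different value on each event $\{X=x\}$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumption: X uniform on {0, 1}, and Y = X + sigma_X * noise,
# where the noise scale depends on X. Then Var(Y | X) is the random variable
# omega -> sigma_{X(omega)}^2, taking value 1 on {X = 0} and 4 on {X = 1}.
n = 200_000
X = rng.integers(0, 2, size=n)
sigma = np.where(X == 0, 1.0, 2.0)   # conditional std dev depends on X
Y = X + sigma * rng.standard_normal(n)

# Estimate E(Y^2 | X = x) - E(Y | X = x)^2 for each value x of X.
cond_vars = {}
for x in (0, 1):
    Yx = Y[X == x]
    cond_vars[x] = np.mean(Yx**2) - np.mean(Yx)**2

print(cond_vars)  # approximately {0: 1.0, 1: 4.0}
```

The dictionary plays the role of the function $x\mapsto\textrm{Var}(Y\mid X=x)$; composing it with $X$ gives the random variable $\textrm{Var}(Y\mid X)$ itself.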
Now, one may also consider an event $A\subseteq \Omega$ (the sample space) and ask what $\textrm{Var}(Y\mid A)$ is. It follows the same pattern as the conditional expectation, namely we define
$$
\textrm{Var}(Y\mid A):=\mathbb E\bigl[\bigl(Y-\mathbb E(Y\mid A)\bigr)^2\mid A\bigr]=\mathbb E(Y^2\mid A)-\mathbb E(Y\mid A)^2.
$$
By definition, $$\mathbb E(Y\mid A):=\frac{\mathbb E(Y\cdot 1_A)}{\mathbb E(1_A)},$$
where $1_A$ denotes the indicator of the set $A$: a random variable taking the value $1$ on $A$ and $0$ off $A$. Note also that $\mathbb E(1_A)=\mathbb P(A)$; I wrote it that way in the denominator of the formula for consistency with the numerator.
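The indicator formula makes $\textrm{Var}(Y\mid A)$ easy to estimate from samples. A minimal sketch, assuming for illustration that $Y$ is standard normal and $A=\{Y>0\}$ (so the true answer is the variance of a half-normal, $1-2/\pi$):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative assumptions: Y standard normal, A = {Y > 0}.
# Var(Y | A) = E(Y^2 | A) - E(Y | A)^2, with E(Z | A) = E(Z * 1_A) / E(1_A).
n = 500_000
Y = rng.standard_normal(n)
ind = (Y > 0).astype(float)            # the indicator 1_A

p_A = ind.mean()                       # E(1_A) = P(A), approx 1/2
e_Y_given_A = (Y * ind).mean() / p_A
e_Y2_given_A = (Y**2 * ind).mean() / p_A
var_given_A = e_Y2_given_A - e_Y_given_A**2

# A standard normal truncated to (0, inf) has mean sqrt(2/pi) and
# variance 1 - 2/pi, roughly 0.3634.
print(var_given_A)
```

Note that $\textrm{Var}(Y\mid A)$ here is a single number attached to the event $A$, unlike $\textrm{Var}(Y\mid X)$, which was a random variable.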
Per the discussion below, there was an even more basic question that I should clarify. A random variable is a function from the sample space $\Omega$ to the real numbers: it assigns a real number to each element $\omega\in \Omega$. When we condition on an event, by contrast, we obtain a set function on $\Omega$, that is, a function that assigns values to subsets of $\Omega$ rather than to individual elements of $\Omega$. To be even more precise, it is a partially defined set function: not every subset is assigned a value, since the conditional variance is defined only on those subsets that are measurable and have positive measure.
To compare and contrast the two types of mathematical objects: conditional variance with respect to a random variable is a function from $\Omega$ to $\mathbb R$, whereas conditional variance with respect to an event is a partially defined function from $P(\Omega)$ (the power set of $\Omega$) to $\mathbb R$.
Best Answer
Let's say that the result of an experiment is an n-tuple of real numbers. When we accept 1. as a model of our experiment, we have a probability space $\Omega$ and a random variable $$ X: \Omega \to \mathbb{R}^n $$ The outcome of an experiment corresponds to an $\omega \in \Omega$ and therefore to an n-tuple $(X_1(\omega), \dots, X_n(\omega))$. This model allows us to ask whether the elements of this n-tuple are independent and, if not, what their joint distribution is.
If we accept 2. as a model, we have a probability space $\Omega$ and a tuple of random variables $$ X_i: \Omega \to \mathbb{R} $$ so that the n-tuple is a random variable on the product probability space $\Omega^n$. In this case, the independence of the elements of the tuple is built into the model. So if the elements of the tuple are supposed to be independent anyway, the choice between the two models does not matter.
Note that in the first case we can set $X_i = X_j$, either strictly or modulo a null set; in this case we will have a tuple of identically distributed random variables. Choice no.1 does not necessarily imply that the elements of the n-tuple are different random variables (either strictly different or different modulo a null set).
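The distinction above can be sketched numerically. Setting $X_2 = X_1$ (the same map on $\Omega$) gives an identically distributed pair that is maximally dependent, while two independent copies (the product-space picture of model 2) are uncorrelated; the standard normal distribution below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# X_2 = X_1 (same function on Omega): identically distributed but fully
# dependent. An independent copy with the same distribution corresponds to
# the product-space model, where independence is built in.
n = 100_000
X1 = rng.standard_normal(n)
X2_same = X1                        # the SAME random variable
X2_indep = rng.standard_normal(n)   # an independent, identically distributed copy

corr_same = np.corrcoef(X1, X2_same)[0, 1]
corr_indep = np.corrcoef(X1, X2_indep)[0, 1]

print(corr_same)   # 1.0: identical random variables are fully dependent
print(corr_indep)  # approximately 0: independent copies
```

Both pairs have the same marginal distributions, which is exactly why "identically distributed" says nothing about independence.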