A random sample is a collection of variables $X_1,\ldots,X_n$ that are independent and identically distributed. My question: is a random sample made up of different random variables, or is it made up of realizations of the same random variable, that is, different values that the function takes at different elements of the sample space? For example, if the random variable is the weight of university students in the United States, then under the second reading the random sample would consist of the values obtained by applying that function to a few students from the sample space.
Random sample: random variables or realizations of the same random variable?
probability, probability-theory
Related Solutions
You may think of $(x_1,\ldots,x_n)$ as a realization of $n$ independent copies of $X$. Basically, there is a probability space $(\Omega,\mathcal{F},\mathsf{P})$ in the background such that $(x_1,\ldots,x_n)=(X_1(\omega),\ldots,X_n(\omega))$ for some $\omega\in\Omega$, which is chosen randomly according to $\mathsf{P}$. Thus, the statement "independent and identically distributed realizations of the same random variable" does not quite make sense. However, $(x_1,\ldots,x_n)$ is sometimes referred to as a random sample from a particular distribution (e.g. $F_X$).
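For concreteness, here is a minimal simulation sketch in Python/NumPy (the normal weight distribution with its mean and standard deviation is an assumed stand-in for $F_X$, not anything specified above): each run of the sampler corresponds to drawing a new $\omega$, which yields a new realization vector $(x_1, \ldots, x_n)$ of the same $n$ independent copies.

```python
import numpy as np

# Assumed stand-in for F_X: student weights ~ Normal(70, 10) (kilograms).
rng = np.random.default_rng(seed=0)
n = 5

# One draw of omega gives one realization (X_1(omega), ..., X_n(omega)).
sample_1 = rng.normal(loc=70, scale=10, size=n)
# A different omega gives a different realization of the same n copies.
sample_2 = rng.normal(loc=70, scale=10, size=n)

print(sample_1)
print(sample_2)
```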
Motivation
Fix probability space $(\Omega, \mathcal F, \mathbb P)$.
Let's see how the definition of independence works in practice. Let $f, g: \Omega \to \mathbb R$ be measurable functions (random variables). Denote $\mathcal A = \sigma(f) = \{ f^{-1}(E) \mid E\in \operatorname{Borel}(\mathbb R)\}$ and $\mathcal B = \sigma(g)$. The $\sigma$-algebras $\mathcal A$ and $\mathcal B$ are independent if $$ \forall A \in \mathcal A,\ B \in \mathcal B \quad \mathbb P (A\cap B) = \mathbb P(A)\mathbb P(B). $$ But elements of $\mathcal A$ are of the form $f^{-1}(E)$ for some Borel set $E$; by convention, we write them as $\{f \in E \}$. So our definition of independent variables is $$ \forall E_1, E_2 \in \operatorname{Borel}(\mathbb R) \quad \mathbb P (f\in E_1, g\in E_2) = \mathbb P(f\in E_1)\mathbb P(g\in E_2), $$ where again we use some notational conventions (omitting $\{$, $\}$ and replacing $\cap$ with a comma).
This shows what the definition is saying: any event we could come up with regarding $f$ is independent of any event regarding $g$. This is a fairly strong condition, which comes in handy in many situations.
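As a sanity check of the product formula, here is a small Monte Carlo sketch (Python/NumPy; the two independent uniform variables and the specific sets $E_1 = [0, 0.3]$, $E_2 = [0.5, 1]$ are my own arbitrary choices, not part of the answer above):

```python
import numpy as np

# Monte Carlo check of P(f in E1, g in E2) = P(f in E1) P(g in E2)
# for two independent U([0,1]) variables and the arbitrary Borel sets
# E1 = [0, 0.3] and E2 = [0.5, 1].
rng = np.random.default_rng(seed=1)
N = 1_000_000
f = rng.uniform(size=N)
g = rng.uniform(size=N)

in_E1 = f <= 0.3
in_E2 = g >= 0.5

print(np.mean(in_E1 & in_E2))           # P(f in E1, g in E2) ~ 0.15
print(np.mean(in_E1) * np.mean(in_E2))  # P(f in E1) P(g in E2) ~ 0.15
```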
Examples
1. Independent but not identically distributed.
A very silly example would be $f(\omega) = 1$ and $g(\omega) = 0$ for all $\omega \in \Omega$. These functions are measurable in any probability space, and any two constant variables are independent (because $\sigma(f) = \sigma(g) = \{\Omega, \emptyset\}$). They obviously have different distributions.
Something more useful: take $\Omega = \{0, 1, 2, 3\}$ with the classical (uniform) probability, and $$ f(x) = \begin{cases} 0 & \text{when } x \leq 1\\ 1 & \text{when } x > 1 \end{cases}, \quad g(x) = (x \bmod 2) + 7. $$ Here again the distributions are quite similar (only shifted by $7$), but not the same.
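Since $\Omega$ is finite here, independence can be verified by brute-force enumeration. A sketch of that check (the helper `prob` is mine, not part of the example):

```python
omega = [0, 1, 2, 3]  # classical probability: each outcome has mass 1/4
f = {w: 0 if w <= 1 else 1 for w in omega}
g = {w: (w % 2) + 7 for w in omega}

def prob(event):
    """Probability of {w : event(w)} under the uniform measure."""
    return sum(1 for w in omega if event(w)) / len(omega)

# Enumerate all value pairs to confirm P(f = a, g = b) = P(f = a) P(g = b).
for a in (0, 1):
    for b in (7, 8):
        joint = prob(lambda w: f[w] == a and g[w] == b)
        assert joint == prob(lambda w: f[w] == a) * prob(lambda w: g[w] == b)

print("independent, but ranges {0, 1} vs {7, 8} give different distributions")
```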
Product space
In general, given two random variables defined on different probability spaces, we can "produce a probability space in which they are independent". Given $(\Omega_1, \mathcal F_1, \mathbb P_1)$, $(\Omega_2, \mathcal F_2, \mathbb P_2)$, $f:\Omega_1 \to \mathbb R$ and $g:\Omega_2 \to \mathbb R$, we define $\tilde f, \tilde g$ on the product space $\Omega_1 \times \Omega_2$ (equipped with the product measure $\mathbb P_1 \otimes \mathbb P_2$) as $$ \tilde f(\omega_1, \omega_2) = f(\omega_1), \quad \tilde g(\omega_1, \omega_2) = g(\omega_2). $$ It is quite easy to show that $\tilde f$ has the same distribution as $f$, and similarly for $g$. On top of that, $\tilde f$ and $\tilde g$ are independent. So you could take any two variables in some model (with different distributions or not) and build a model in which they are independent.
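Here is a hedged sketch of the construction on two small finite spaces (the spaces, measures, and variables below are my own toy choices), checking that $\tilde f$ keeps the distribution of $f$ and that $\tilde f, \tilde g$ come out independent:

```python
# Toy spaces: Omega_1 = {0, 1} with P_1, Omega_2 = {0, 1} with P_2.
P1 = {0: 0.5, 1: 0.5}
P2 = {0: 0.25, 1: 0.75}
f = lambda w1: w1        # f : Omega_1 -> R
g = lambda w2: 10 * w2   # g : Omega_2 -> R

# Product measure on Omega_1 x Omega_2: P((w1, w2)) = P_1(w1) * P_2(w2).
P = {(w1, w2): p1 * p2 for w1, p1 in P1.items() for w2, p2 in P2.items()}
f_tilde = lambda w: f(w[0])
g_tilde = lambda w: g(w[1])

def prob(event):
    """Probability of {w : event(w)} under the product measure."""
    return sum(p for w, p in P.items() if event(w))

for a in (0, 1):
    # tilde f has the same distribution as f ...
    assert abs(prob(lambda w: f_tilde(w) == a) - P1[a]) < 1e-12
    for b in (0, 10):
        # ... and tilde f, tilde g are independent.
        joint = prob(lambda w: f_tilde(w) == a and g_tilde(w) == b)
        marg = prob(lambda w: f_tilde(w) == a) * prob(lambda w: g_tilde(w) == b)
        assert abs(joint - marg) < 1e-12

print("marginals preserved and independence holds on the product space")
```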
2. Identically distributed but not independent.
Take any symmetrically distributed variable $f$ (i.e., $-f \overset{D}{=} f$) and define $g = -f$. Then $f$ and $g$ are not independent (unless $f$ is almost surely constant), but since the distribution of $f$ is symmetric, we have $g \overset{D}{=} f$.
A down-to-earth version of this: take $f(x) = 2x-1$ on $\Omega = [0, 1]$ with Lebesgue measure as the probability, so $f$ has the uniform distribution $U([-1, 1])$. Then $g(x) = 1 - 2x$ has the same uniform distribution. Knowing the value of $f$ immediately gives the value of $g$, so they are not independent (you can check that the generated $\sigma$-algebras are equal, whereas for independent $f, g$ every event in $\sigma(f) \cap \sigma(g)$ must have probability $0$ or $1$).
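A quick numerical illustration of this pair (Python/NumPy sketch; the seed and sample size are arbitrary): the empirical quantiles of $f$ and $g$ match, yet the pair flagrantly violates the product formula.

```python
import numpy as np

# f = 2U - 1 and g = 1 - 2U = -f, with U ~ U([0, 1]).
rng = np.random.default_rng(seed=2)
u = rng.uniform(size=1_000_000)
f = 2 * u - 1
g = 1 - 2 * u

# Identically distributed: matching empirical quantiles (both ~ U([-1, 1])).
qs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(f, qs))
print(np.quantile(g, qs))

# Not independent: take E1 = E2 = (0, 1].  Since g = -f,
# P(f > 0, g > 0) = 0, while P(f > 0) P(g > 0) ~ 1/4.
print(np.mean((f > 0) & (g > 0)))       # 0.0
print(np.mean(f > 0) * np.mean(g > 0))  # ~ 0.25
```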
Best Answer
It is possible to set up our model either way. Personally, I like to think of a random variable as a measurement resulting from an experiment. In your example, I would consider $n$ different (independent) experiments, each consisting of measuring the weight of a random university student, and associate each random variable with a single experiment. So the random sample will be made up of different random variables.