What's the difference between product measure and independence in probability?
$$
(\mu_1\times\mu_2)(B_1\times B_2)= \mu_1(B_1)\mu_2(B_2)
$$
$$
P(A_m\cap A_k)= P(A_m)P(A_k)
$$
Is the product measure itself a form of independence?
Events $B_1,B_2,\ldots,B_n$ are mutually independent provided $$ P[\cap_{k=1}^n B_k^*]=\prod_{k=1}^n P[B^*_k] $$ for all possible choices of each $B_k^*$ as either $B_k$ or $B_k^c$. This entails $2^n$ identities that must hold.
Notice that if $B_1,B_2,B_3$ are independent in this sense, then
$$\begin{align*}P[B_1\cap B_2] &=P[B_1\cap B_2\cap B_3]+P[B_1\cap B_2\cap B_3^c]\\ &=P[B_1]P[B_2]P[B_3]+P[B_1]P[B_2]P[B_3^c]\\ &=P[B_1]P[B_2]\left\{P[B_3]+ P[B_3^c]\right\}\\ &=P[B_1]P[B_2]\end{align*}$$
etc., so that $B_1$ and $B_2$ are independent, yielding the subsequence property you mention.
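The $2^n$ identities above can be checked by brute force on a small finite model. Here is a minimal sketch in Python for $n=3$ independent biased coin flips; the bias values are hypothetical, chosen only for illustration:

```python
from itertools import product

# Hypothetical example: three independent biased coin flips, with
# B_i = {flip i comes up heads}. Mutual independence requires 2^3 = 8
# identities, one per choice of B_k^* as B_k or its complement.
p = [0.3, 0.5, 0.7]  # assumed P[flip i is heads], for illustration only

Omega = set(product([0, 1], repeat=3))  # outcomes (c1, c2, c3)

def prob(event):
    """P[event] under the independent-flips model."""
    total = 0.0
    for w in event:
        q = 1.0
        for pi, c in zip(p, w):
            q *= pi if c == 1 else 1 - pi
        total += q
    return total

B = [{w for w in Omega if w[i] == 1} for i in range(3)]

num_checked = 0
for stars in product([True, False], repeat=3):   # B_k^* = B_k or B_k^c
    chosen = [B[k] if s else Omega - B[k] for k, s in enumerate(stars)]
    lhs = prob(set.intersection(*chosen))
    rhs = 1.0
    for e in chosen:
        rhs *= prob(e)
    assert abs(lhs - rhs) < 1e-12
    num_checked += 1
```

All eight identities hold here because the model is a product measure by construction; for events that are not mutually independent, at least one of the assertions would fail.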
Also, events $B_1,B_2,\ldots,B_n$ are mutually independent in the above sense if and only if the $\sigma$-algebras $\mathcal B_1,\ldots,\mathcal B_n$ are independent, where $\mathcal B_k=\{\emptyset,\Omega,B_k,B_k^c\}$.
Let $(\Omega_1, F_1, P_1)$ and $(\Omega_2, F_2, P_2)$ be two probability spaces. That is, $\Omega_1$ and $\Omega_2$ are nonempty sets, $F_1$ is a sigma-algebra on $\Omega_1$, $F_2$ is a sigma-algebra on $\Omega_2$, and $P_1$ and $P_2$ are functions \begin{align*} P_1: F_1 \rightarrow\mathbb{R}\\ P_2:F_2 \rightarrow \mathbb{R} \end{align*} that satisfy the 3 probability axioms with respect to $(\Omega_1, F_1)$ and $(\Omega_2, F_2)$, respectively. Let \begin{align} X_1:\Omega_1 \rightarrow\mathbb{R}\\ X_2:\Omega_2 \rightarrow\mathbb{R} \end{align} be functions such that $X_1$ is measurable with respect to $(\Omega_1, F_1)$ and $X_2$ is measurable with respect to $(\Omega_2, F_2)$.
Define $$\Omega = \Omega_1 \times \Omega_2 = \{(\omega_1, \omega_2) : \omega_1 \in \Omega_1, \omega_2 \in \Omega_2\}$$ Also define $F$ as the smallest sigma-algebra on $\Omega$ that contains all sets of the form $A_1 \times A_2$ such that $A_1 \in F_1$, $A_2 \in F_2$. (Note 1: Here we define $\phi \times A_2=A_1\times \phi=\phi$. Note 2: $F \neq F_1 \times F_2$, see example below).
Recall that $\Omega =\Omega_1 \times \Omega_2$. Does there exist a function $P:F\rightarrow\mathbb{R}$ that satisfies $$P[A_1 \times A_2] = P_1[A_1]P_2[A_2] \quad \forall A_1 \in F_1, \forall A_2 \in F_2 \quad (*)$$ and that also satisfies the three axioms of probability with respect to $(\Omega, F)$?
This is a deep question, and the answer is not obvious. Fortunately, the answer is "yes," and the function is unique. This follows from the Hahn-Kolmogorov extension theorem: https://en.wikipedia.org/wiki/Product_measure
Once we have such a function $P:F\rightarrow\mathbb{R}$, we have a legitimate new probability space $(\Omega, F, P)$. We can define new functions $X_1^{new}:\Omega\rightarrow\mathbb{R}$ and $X_2^{new}:\Omega\rightarrow\mathbb{R}$ by \begin{align} X_1^{new}(\omega_1, \omega_2) &= X_1(\omega_1) \quad \forall (\omega_1, \omega_2) \in \Omega \\ X_2^{new}(\omega_1, \omega_2) &= X_2(\omega_2)\quad \forall (\omega_1, \omega_2) \in \Omega \end{align} It can be shown that $X_1^{new}$ and $X_2^{new}$ are both measurable with respect to $(\Omega, F, P)$. Thus, they can be called random variables with respect to $(\Omega, F, P)$.
We can prove that $X_1^{new}$ and $X_2^{new}$ are independent: Fix $x_1, x_2 \in \mathbb{R}$. Define \begin{align} A_1 &= \{\omega_1 \in \Omega_1 : X_1(\omega_1) \leq x_1\}\\ A_2 &=\{\omega_2 \in \Omega_2 : X_2(\omega_2) \leq x_2\} \end{align} Then \begin{align} &P[X_1^{new} \leq x_1, X_2^{new}\leq x_2] \\ &=P\left[\{\omega \in \Omega: X_1^{new}(\omega) \leq x_1\}\cap \{\omega \in \Omega: X_2^{new}(\omega) \leq x_2\}\right]\\ &= P\left[\{(\omega_1, \omega_2)\in \Omega : X_1(\omega_1)\leq x_1, X_2(\omega_2) \leq x_2\} \right] \\ &= P\left[ A_1 \times A_2 \right]\\ &\overset{(a)}{=} P_1[A_1]P_2[A_2]\\ &\overset{(b)}{=} \left(P_1[A_1]P_2[\Omega_2]\right)\left( P_1[\Omega_1]P_2[A_2]\right)\\ &\overset{(c)}{=} P[A_1 \times \Omega_2]P[\Omega_1 \times A_2]\\ &=P[X_1^{new} \leq x_1]P[X_2^{new}\leq x_2] \end{align} where (a) and (c) hold by the property (*) of the $P$ function; (b) holds because $P_1[\Omega_1]=1$ and $P_2[\Omega_2]=1$. This holds for all $x_1,x_2 \in \mathbb{R}$. Thus, $X_1^{new}$ and $X_2^{new}$ are independent.
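On finite spaces the extension is elementary: the product measure is determined by $P[\{(\omega_1,\omega_2)\}] = P_1[\{\omega_1\}]P_2[\{\omega_2\}]$. A minimal sketch, with hypothetical point masses chosen only for illustration, verifying both property $(*)$ and the independence of the coordinate maps $X_1^{new}, X_2^{new}$:

```python
from itertools import combinations

# Hypothetical finite probability spaces; F_i is the full power set.
P1 = {1: 0.2, 2: 0.3, 3: 0.5}    # points of Omega_1 and their probabilities
P2 = {10: 0.6, 20: 0.4}          # points of Omega_2 and their probabilities
Omega1, Omega2 = set(P1), set(P2)

Omega = {(w1, w2) for w1 in Omega1 for w2 in Omega2}

def P(event):
    """The product measure: determined by its values on singletons."""
    return sum(P1[w1] * P2[w2] for (w1, w2) in event)

def powerset(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Property (*): P[A1 x A2] = P1[A1] P2[A2] for all A1 in F1, A2 in F2.
for A1 in powerset(Omega1):
    for A2 in powerset(Omega2):
        rect = {(w1, w2) for w1 in A1 for w2 in A2}
        assert abs(P(rect)
                   - sum(P1[w] for w in A1) * sum(P2[w] for w in A2)) < 1e-12

# X1_new(w1, w2) = w1 and X2_new(w1, w2) = w2: their joint CDF factorizes.
for x1 in [0, 1, 2, 3]:
    for x2 in [5, 10, 20]:
        joint = P({w for w in Omega if w[0] <= x1 and w[1] <= x2})
        marg1 = P({w for w in Omega if w[0] <= x1})
        marg2 = P({w for w in Omega if w[1] <= x2})
        assert abs(joint - marg1 * marg2) < 1e-12
```

This is exactly the chain of equalities (a)-(c) above, executed numerically on a toy example rather than proved in general.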
Define \begin{align} \Omega_1 &= \{1,2,3\}\\ \Omega_2 &= \{a,b,c\} \\ \Omega &= \Omega_1 \times \Omega_2 \end{align} Define $F_1$ and $F_2$ as the power sets of $\Omega_1$ and $\Omega_2$, respectively \begin{align} F_1 &= \{\phi, \{1\}, \{2\}, \{3\}, \{1,2\}, \{1,3\}, \{2,3\}, \{1,2,3\}\}\\ F_2 &= \{\phi, \{a\}, \{b\}, \{c\}, \{a,b\}, \{a,c\}, \{b,c\}, \{a,b,c\}\} \end{align} It can be shown that $F$ is the power set of $\Omega$. Thus
$|F_1 \times F_2| = 8^2 = 64$.
$|\Omega| = 3^2 = 9$.
$|F| = 2^9 = 512$.
So $F$ has more elements than $F_1 \times F_2$. The structure of the set $F_1 \times F_2$ is also different from that of $F$:
Elements of $F_1 \times F_2$ include $(\phi, \{a\})$ and $(\phi, \{b\})$ and $(\{1\}, \{a\})$ and $(\{2\}, \{b\})$.
Elements of $F$ include $\phi$ and $\{(1,a), (2,b)\}$.
The set $F$ is sometimes denoted $F_1 \otimes F_2$. It is quite different from the Cartesian product $F_1 \times F_2$, whose elements are ordered pairs of sets rather than subsets of $\Omega$, and so also different from anything one might write as $\sigma(F_1 \times F_2)$ read literally.
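The cardinality claims in this example can be confirmed directly. A minimal sketch in Python, using the same $\Omega_1 = \{1,2,3\}$ and $\Omega_2 = \{a,b,c\}$ (on a finite space with power-set factors, the generated $\sigma$-algebra $F$ is the full power set of $\Omega$):

```python
from itertools import combinations, product

Omega1 = {1, 2, 3}
Omega2 = {'a', 'b', 'c'}

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

F1 = powerset(Omega1)                       # 2^3 = 8 subsets of Omega_1
F2 = powerset(Omega2)                       # 2^3 = 8 subsets of Omega_2
Omega = set(product(Omega1, Omega2))        # 3 * 3 = 9 ordered pairs
F = powerset(Omega)                         # here F is the power set of Omega

# F1 x F2: ordered pairs of sets -- NOT subsets of Omega.
F1xF2 = {(A1, A2) for A1 in F1 for A2 in F2}
```

Note the structural difference visible in the code: an element of `F1xF2` is a pair like `(frozenset({1}), frozenset({'a'}))`, while an element of `F` is a set of pairs like `{(1,'a'), (2,'b')}`.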
As noted in my comments on the question above, we usually do not concern ourselves with this deep extension theory.
If we have a probability experiment that involves random variables $Y$ and $Z$, we implicitly assume there is a single probability space $(\Omega, F, P)$ and $Y:\Omega\rightarrow\mathbb{R}$ and $Z:\Omega\rightarrow\mathbb{R}$ are measurable functions on this space. Thus, for all $y,z \in \mathbb{R}$ we know that $\{Y \leq y\} \in F$ and $\{Z \leq z\} \in F$. Since $F$ is a sigma-algebra, this implies that $\{Y \leq y\}\cap \{Z \leq z\} \in F$ (for all $y, z\in \mathbb{R}$).
The random variables $Y:\Omega\rightarrow\mathbb{R}$ and $Z:\Omega\rightarrow\mathbb{R}$ are defined to be independent if $$ P[Y \leq y, Z\leq z] = P[Y\leq y]P[Z\leq z] \quad \forall y, z \in \mathbb{R}$$
Notice that the definition of independent requires $\{Y \leq y\} \cap \{Z \leq z\} \in F$ for all $y, z \in \mathbb{R}$, which of course requires $Y$ and $Z$ to be defined on the same space.
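To see the definition in action on a single common space, here is a minimal sketch: a fair die (a hypothetical example), with $Z = Y$, the extreme case of dependence. Both variables are measurable on the same space, so the joint probability is defined, but the CDF does not factorize:

```python
# One common probability space: a fair six-sided die, P uniform on {1,...,6}.
Omega = range(1, 7)

def P(event):
    return sum(1 for w in Omega if w in event) / 6

Y = lambda w: w
Z = lambda w: w   # Z equals Y: maximally dependent

# Check the independence definition at y = z = 3.
joint = P({w for w in Omega if Y(w) <= 3 and Z(w) <= 3})
prod = P({w for w in Omega if Y(w) <= 3}) * P({w for w in Omega if Z(w) <= 3})
assert joint != prod   # 1/2 on the left, 1/4 on the right: dependent
```

A single $(y, z)$ pair where factorization fails is enough to conclude dependence, whereas independence requires the identity for all $y, z \in \mathbb{R}$.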
Best Answer
There is an equivalence in the following sense.
If $J$ is finite then: $\{X_j:j\in J\}$ is independent (i.e., for any $J'\subset J$ and sets $B_j\in\mathscr{S_j}$, $j\in J'$, $$\mathbb{P}\Big[\bigcap_{j\in J'}\{X_j\in B_j\}\Big]=\prod_{j\in J'}\mathbb{P}[X_j\in B_j]\tag{1}\label{one}$$ ) if and only if the (joint) law of $X=\{X_j:j\in J\}$ is the product measure $\bigotimes_{j\in J}\mu_j$.
If $J$ is an arbitrary set (finite or infinite), and each space $(S_j,\mathscr{S}_j)$ is nice (for instance a separable metric space with its Borel $\sigma$--algebra, such as $\mathbb{R}$), then: if $\{X_j:j\in J\}$ is independent (i.e. $\eqref{one}$ holds for any finite collection $J'\subset J$), then the law of $X=\{X_j:j\in J\}$ is the product measure $\bigotimes_{j\in J}\mu_j$.
The proof of the second statement follows from Kolmogorov's extension theorem. The "nice" condition is needed in order to appeal to measure-theoretic results that guarantee the existence of a unique measure on an arbitrary product of probability spaces satisfying a projection (consistency) property.
Leo Breiman's book *Probability* is an excellent source where the details are explained in a very elegant way.