Originally we have two different spaces
Let $(\Omega_1, F_1, P_1)$ and $(\Omega_2, F_2, P_2)$ be two probability spaces. That is, $\Omega_1$ and $\Omega_2$ are nonempty sets, $F_1$ is a sigma-algebra on $\Omega_1$, $F_2$ is a sigma-algebra on $\Omega_2$, and $P_1$ and $P_2$ are functions
\begin{align*}
P_1: F_1 \rightarrow\mathbb{R}\\
P_2:F_2 \rightarrow \mathbb{R}
\end{align*}
that satisfy the 3 probability axioms with respect to $(\Omega_1, F_1)$ and $(\Omega_2, F_2)$, respectively. Let
\begin{align}
X_1:\Omega_1 \rightarrow\mathbb{R}\\
X_2:\Omega_2 \rightarrow\mathbb{R}
\end{align}
be functions such that $X_1$ is measurable with respect to $(\Omega_1, F_1)$ and $X_2$ is measurable with respect to $(\Omega_2, F_2)$.
Defining a single new space
Define
$$\Omega = \Omega_1 \times \Omega_2 = \{(\omega_1, \omega_2) : \omega_1 \in \Omega_1, \omega_2 \in \Omega_2\}$$
Also define $F$ as the smallest sigma-algebra on $\Omega$ that contains all sets of the form $A_1 \times A_2$ such that $A_1 \in F_1$, $A_2 \in F_2$. (Note 1: Here we define $\phi \times A_2=A_1\times \phi=\phi$. Note 2: $F \neq F_1 \times F_2$, see example below).
Fundamental question
Recall that $\Omega =\Omega_1 \times \Omega_2$. Does there exist a function $P:F\rightarrow\mathbb{R}$
that satisfies
$$P[A_1 \times A_2] = P_1[A_1]P_2[A_2] \quad \forall A_1 \in F_1, \forall A_2 \in F_2 \quad (*)$$
and that also satisfies the three axioms of probability with respect to $(\Omega, F)$?
This is a deep question, and the answer is not obvious. Fortunately, the answer is "yes." Further, such a function $P$ is unique. This follows from the Hahn-Kolmogorov extension theorem; see:
https://en.wikipedia.org/wiki/Product_measure
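For finite sample spaces no extension theory is needed: the product measure can be written down pointwise. The following Python sketch (with illustrative probabilities, not taken from the question) builds $P$ from $P_1$ and $P_2$ and checks property (*) on a rectangle:

```python
from itertools import product

# Two small finite probability spaces (illustrative numbers)
P1 = {1: 0.2, 2: 0.3, 3: 0.5}          # P_1 on Omega_1 = {1, 2, 3}
P2 = {'a': 0.6, 'b': 0.1, 'c': 0.3}    # P_2 on Omega_2 = {a, b, c}

# Product measure on Omega = Omega_1 x Omega_2, defined pointwise:
# P[{(w1, w2)}] = P_1[{w1}] * P_2[{w2}]
P = {(w1, w2): P1[w1] * P2[w2] for w1, w2 in product(P1, P2)}

def prob(event):
    """P[E] for an event E given as a set of outcomes (w1, w2)."""
    return sum(P[w] for w in event)

# Check property (*): P[A1 x A2] = P_1[A1] * P_2[A2] on a rectangle
A1, A2 = {1, 3}, {'a', 'c'}
rect = {(w1, w2) for w1 in A1 for w2 in A2}
assert abs(prob(rect) - 0.7 * 0.9) < 1e-12

# Check total mass (the axiom P[Omega] = 1)
assert abs(sum(P.values()) - 1.0) < 1e-12
```

In the finite case additivity forces this pointwise definition, which is why uniqueness is easy here; the Hahn-Kolmogorov theorem is what makes the same conclusion hold for general spaces.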
Consequence of "yes"
Once we have such a function $P:F\rightarrow\mathbb{R}$, we have a legitimate new probability space $(\Omega, F, P)$. We can define new functions $X_1^{new}:\Omega\rightarrow\mathbb{R}$ and $X_2^{new}:\Omega\rightarrow\mathbb{R}$ by
\begin{align}
X_1^{new}(\omega_1, \omega_2) &= X_1(\omega_1) \quad \forall (\omega_1, \omega_2) \in \Omega \\
X_2^{new}(\omega_1, \omega_2) &= X_2(\omega_2)\quad \forall (\omega_1, \omega_2) \in \Omega
\end{align}
It can be shown that $X_1^{new}$ and $X_2^{new}$ are both measurable with respect to $(\Omega, F)$. Thus, they can be called random variables on the probability space $(\Omega, F, P)$.
We can prove that $X_1^{new}$ and $X_2^{new}$ are independent: Fix $x_1, x_2 \in \mathbb{R}$. Define
\begin{align}
A_1 &= \{\omega_1 \in \Omega_1 : X_1(\omega_1) \leq x_1\}\\
A_2 &=\{\omega_2 \in \Omega_2 : X_2(\omega_2) \leq x_2\}
\end{align}
Then
\begin{align}
&P[X_1^{new} \leq x_1, X_2^{new}\leq x_2] \\
&=P\left[\{\omega \in \Omega: X_1^{new}(\omega) \leq x_1\}\cap \{\omega \in \Omega: X_2^{new}(\omega) \leq x_2\}\right]\\
&= P\left[\{(\omega_1, \omega_2)\in \Omega : X_1(\omega_1)\leq x_1, X_2(\omega_2) \leq x_2\} \right] \\
&= P\left[ A_1 \times A_2 \right]\\
&\overset{(a)}{=} P_1[A_1]P_2[A_2]\\
&\overset{(b)}{=} \left(P_1[A_1]P_2[\Omega_2]\right)\left( P_1[\Omega_1]P_2[A_2]\right)\\
&\overset{(c)}{=} P[A_1 \times \Omega_2]P[\Omega_1 \times A_2]\\
&=P[X_1^{new} \leq x_1]P[X_2^{new}\leq x_2]
\end{align}
where (a) and (c) hold by the property (*) of the $P$ function; (b) holds because $P_1[\Omega_1]=1$ and $P_2[\Omega_2]=1$. This holds for all $x_1,x_2 \in \mathbb{R}$. Thus, $X_1^{new}$ and $X_2^{new}$ are independent.
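The chain of equalities above can be checked numerically on a finite example. This Python sketch (illustrative values, not from the question) lifts $X_1$ and $X_2$ to the product space and verifies that the joint CDF factors for every $(x_1, x_2)$ on a grid:

```python
from itertools import product

# Finite spaces, measures, and random variables (illustrative)
P1 = {1: 0.25, 2: 0.5, 3: 0.25}
P2 = {'a': 0.4, 'b': 0.6}
X1 = {1: -1.0, 2: 0.0, 3: 2.0}   # X_1 : Omega_1 -> R
X2 = {'a': 0.5, 'b': 1.5}        # X_2 : Omega_2 -> R

# Product measure P on Omega = Omega_1 x Omega_2
P = {(w1, w2): P1[w1] * P2[w2] for w1, w2 in product(P1, P2)}

def joint_cdf(x1, x2):
    """P[X_1^new <= x1, X_2^new <= x2] on the product space."""
    return sum(p for (w1, w2), p in P.items()
               if X1[w1] <= x1 and X2[w2] <= x2)

def cdf1(x1):
    return sum(p for w1, p in P1.items() if X1[w1] <= x1)

def cdf2(x2):
    return sum(p for w2, p in P2.items() if X2[w2] <= x2)

# The joint CDF factors at every grid point, as the proof shows
for x1 in (-2.0, -1.0, 0.0, 2.0):
    for x2 in (0.0, 0.5, 1.5):
        assert abs(joint_cdf(x1, x2) - cdf1(x1) * cdf2(x2)) < 1e-12
```

This is only a sanity check on one finite example; the displayed proof is what establishes the factorization in general.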
Example to show $F\neq F_1 \times F_2$.
Define
\begin{align}
\Omega_1 &= \{1,2,3\}\\
\Omega_2 &= \{a,b,c\} \\
\Omega &= \Omega_1 \times \Omega_2
\end{align}
Define $F_1$ and $F_2$ as the power sets of $\Omega_1$ and $\Omega_2$, respectively:
\begin{align}
F_1 &= \{\phi, \{1\}, \{2\}, \{3\}, \{1,2\}, \{1,3\}, \{2,3\}, \{1,2,3\}\}\\
F_2 &= \{\phi, \{a\}, \{b\}, \{c\}, \{a,b\}, \{a,c\}, \{b,c\}, \{a,b,c\}\}
\end{align}
It can be shown that $F$ is the power set of $\Omega$. Since $\Omega$ has $9$ elements, $F$ has $2^9 = 512$ elements, while $F_1 \times F_2$ has only $8 \times 8 = 64$ elements. The structure of the set $F_1 \times F_2$ is also different from that of $F$:
Elements of $F_1 \times F_2$ include $(\phi, \{a\})$ and $(\phi, \{b\})$ and $(\{1\}, \{a\})$ and $(\{2\}, \{b\})$.
Elements of $F$ include $\phi$ and $\{(1,a), (2,b)\}$.
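The counts for this example can be verified by brute force. The Python sketch below forms the rectangles $A_1 \times A_2$ and closes them under complement and pairwise union until stable, recovering the full power set of $\Omega$ (the closure strategy is mine, not from the answer):

```python
from itertools import combinations, product

Omega1 = {1, 2, 3}
Omega2 = {'a', 'b', 'c'}
Omega = frozenset(product(Omega1, Omega2))   # 9 outcomes

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

F1, F2 = powerset(Omega1), powerset(Omega2)   # 8 elements each

# Rectangles A1 x A2 -- the generators of F (as subsets of Omega)
rects = {frozenset(product(A1, A2)) for A1 in F1 for A2 in F2}

# Close under complement and pairwise union until nothing new appears;
# on a finite set this yields the generated sigma-algebra F
F = set(rects)
changed = True
while changed:
    changed = False
    for A in list(F):
        if Omega - A not in F:
            F.add(Omega - A); changed = True
    for A, B in combinations(list(F), 2):
        if A | B not in F:
            F.add(A | B); changed = True

print(len(F1) * len(F2))   # |F1 x F2| = 64 ordered pairs of sets
print(len(F))              # |F| = 512 = 2^9, the full power set of Omega
```

Note the type mismatch the example highlights: the 64 elements of $F_1 \times F_2$ are pairs of sets, while the 512 elements of $F$ are subsets of $\Omega$.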
Caveat 1
The sigma-algebra $F$ is sometimes called $F_1 \otimes F_2$. This is quite different from the Cartesian product $F_1 \times F_2$, which is a set of ordered pairs of sets rather than a collection of subsets of $\Omega$, and also different from $\sigma(F_1 \times F_2)$.
Caveat 2
As noted in my comments on the question above, we usually do not concern ourselves with this extension theory.
If we have a probability experiment that involves random variables $Y$ and $Z$, we implicitly assume there is a single probability space $(\Omega, F, P)$ and $Y:\Omega\rightarrow\mathbb{R}$ and $Z:\Omega\rightarrow\mathbb{R}$ are measurable functions on this space.
Thus, for all $y,z \in \mathbb{R}$ we know that $\{Y \leq y\} \in F$ and $\{Z \leq z\} \in F$. Since $F$ is a sigma-algebra, this implies that $\{Y \leq y\}\cap \{Z \leq z\} \in F$ (for all $y, z\in \mathbb{R}$).
The random variables $Y:\Omega\rightarrow\mathbb{R}$ and $Z:\Omega\rightarrow\mathbb{R}$ are defined to be independent if
$$ P[Y \leq y, Z\leq z] = P[Y\leq y]P[Z\leq z] \quad \forall y, z \in \mathbb{R}$$
Notice that the definition of independent requires $\{Y \leq y\} \cap \{Z \leq z\} \in F$ for all $y, z \in \mathbb{R}$, which of course requires $Y$ and $Z$ to be defined on the same space.
Best Answer
I think you are a bit confused.
If you want to create two independent copies of a random variable with a given distribution $F,$ do this: the distribution function $F$ on $\mathbf{R}$ induces a measure $\mu_F$ on the Borel sets of $\mathbf{R}.$ To get a pair of independent random variables, each with distribution $F,$ consider the probability space $(\mathbf{R}^2, \mathscr{B}_{\mathbf{R}^2}, \mu_F \otimes \mu_F)$ and define the random variables by setting $X(\omega) = x$ and $Y(\omega) = y$ where $\omega = (x, y).$ By construction, $X$ and $Y$ are independent and both have distribution $F.$ Mutatis mutandis, you can do the same with two probability spaces $(\mathrm{X}, \mathscr{X}, \mu)$ and $(\mathrm{Y}, \mathscr{Y}, \nu)$: consider the product space $\mathrm{Z} = \mathrm{X} \times \mathrm{Y},$ $\mathscr{Z} = \mathscr{X} \otimes \mathscr{Y},$ $\rho = \mu \otimes \nu,$ and the random object $T(x, y) = (x, y),$ which is a $\mathrm{Z}$-valued random object with independent coordinates whose marginal laws are $\mu$ and $\nu,$ respectively.
If you start with two random variables $X$ and $Y$ defined on respective probability spaces $(\Omega_X, \mathscr{F}_X, \mathbf{P}_X)$ and $(\Omega_Y, \mathscr{F}_Y, \mathbf{P}_Y),$ then you can construct the product space $$ (\Omega, \mathscr{F}, \mathbf{P}) = (\Omega_X \times \Omega_Y, \mathscr{F}_X \otimes \mathscr{F}_Y, \mathbf{P}_X \otimes \mathbf{P}_Y) $$ and on it define the random vector $Z(\omega) = Z(\omega_x, \omega_y) = (X(\omega_x), Y(\omega_y)) \in \mathbf{R}^2.$ Consider the projections $\pi_1, \pi_2 : \mathbf{R}^2 \to \mathbf{R}$ given by $\pi_1(x,y) = x$ and $\pi_2(x,y) = y.$ In this construction, $\pi_1(Z)$ and $\pi_2(Z)$ are independent random variables defined on $\Omega$ whose laws are the distributions of $X$ and $Y$ (the pushforwards of $\mathbf{P}_X$ and $\mathbf{P}_Y$). These are your $\bar{X}$ and $\bar{Y}.$
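The first construction has a direct computational counterpart: sampling a point uniformly from the unit square is sampling from a product measure, and pushing each coordinate through the quantile (inverse CDF) function of $F$ yields two independent copies. A minimal Python sketch, assuming $F$ is specified through its quantile function (the names here are my own):

```python
import math
import random

def independent_pair(quantile, rng=random):
    """Draw (X, Y) i.i.d. with distribution F, given the quantile
    (inverse CDF) function of F.  A point (u, v) drawn uniformly from
    the unit square is a draw from the product measure on [0, 1]^2;
    applying the quantile function coordinatewise gives a pair with
    law mu_F (x) mu_F, i.e. independent copies with distribution F."""
    u, v = rng.random(), rng.random()
    return quantile(u), quantile(v)

# Example: F = Exp(1), whose quantile function is -log(1 - u)
exp_quantile = lambda u: -math.log(1.0 - u)
random.seed(0)
x, y = independent_pair(exp_quantile)
```

This is the inverse-transform method applied coordinatewise on the product space $([0,1]^2, \mathscr{B}, \lambda \otimes \lambda)$; it works for any distribution whose quantile function is available.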
Does every pair of independent random variables come from a product space? No. Consider the uniform distribution on $[0, 1].$ A random number with this distribution can be expanded as $\sum\limits_{k = 1}^\infty b_k 2^{-k}$ where each $b_k$ is either $0$ or $1,$ and the expansion is made unique by disallowing tails of contiguous $1$s (that is, no expansion in which the digits are all $1$ from some index onward). The random bits $(b_k)$ are independent Bernoulli with parameter $\dfrac{1}{2},$ yet they are not defined on a product space. However, in general it is a good idea to think of independent random vectors as defined on product spaces.
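The binary-digit example can be probed empirically. This simulation (a sanity check, not a proof) extracts the first few bits of uniform draws and checks that each bit is roughly Bernoulli$(1/2)$ and that a pair of bits behaves independently:

```python
import random

def bits(u, k):
    """First k binary digits of u in [0, 1): u = sum_j b_j 2^{-j}."""
    out = []
    for _ in range(k):
        u *= 2
        b = int(u)      # next binary digit of u
        out.append(b)
        u -= b
    return out

random.seed(1)
n = 100_000
draws = [bits(random.random(), 3) for _ in range(n)]

# Each bit is (empirically) Bernoulli(1/2) ...
for j in range(3):
    freq = sum(d[j] for d in draws) / n
    assert abs(freq - 0.5) < 0.01

# ... and a pair of bits factors: P[b_1 = 1, b_2 = 1] ~ 1/2 * 1/2
joint = sum(1 for d in draws if d[0] == 1 and d[1] == 1) / n
assert abs(joint - 0.25) < 0.01
```

All the bits live on the single space $([0,1], \mathscr{B}_{[0,1]}, \lambda),$ which is not a product space, illustrating the point above.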