Originally we have two different spaces
Let $(\Omega_1, F_1, P_1)$ and $(\Omega_2, F_2, P_2)$ be two probability spaces. That is, $\Omega_1$ and $\Omega_2$ are nonempty sets, $F_1$ is a sigma-algebra on $\Omega_1$, $F_2$ is a sigma-algebra on $\Omega_2$, and $P_1$ and $P_2$ are functions
\begin{align*}
P_1: F_1 \rightarrow\mathbb{R}\\
P_2:F_2 \rightarrow \mathbb{R}
\end{align*}
that satisfy the three probability axioms with respect to $(\Omega_1, F_1)$ and $(\Omega_2, F_2)$, respectively. Let
\begin{align}
X_1:\Omega_1 \rightarrow\mathbb{R}\\
X_2:\Omega_2 \rightarrow\mathbb{R}
\end{align}
be functions such that $X_1$ is measurable with respect to $(\Omega_1, F_1)$ and $X_2$ is measurable with respect to $(\Omega_2, F_2)$.
Defining a single new space
Define
$$\Omega = \Omega_1 \times \Omega_2 = \{(\omega_1, \omega_2) : \omega_1 \in \Omega_1, \omega_2 \in \Omega_2\}$$
Also define $F$ as the smallest sigma-algebra on $\Omega$ that contains all sets of the form $A_1 \times A_2$ such that $A_1 \in F_1$, $A_2 \in F_2$. (Note 1: Here we define $\emptyset \times A_2=A_1\times \emptyset=\emptyset$. Note 2: $F \neq F_1 \times F_2$; see the example below.)
Fundamental question
Recall that $\Omega =\Omega_1 \times \Omega_2$. Does there exist a function $P:F\rightarrow\mathbb{R}$
that satisfies
$$P[A_1 \times A_2] = P_1[A_1]P_2[A_2] \quad \forall A_1 \in F_1, \forall A_2 \in F_2 \quad (*)$$
and that also satisfies the three axioms of probability with respect to $(\Omega, F)$?
This is a deep and hard question; the answer is not obvious. Fortunately, the answer is "yes," and moreover the function $P$ is unique. This follows from the Hahn-Kolmogorov extension theorem; see the product-measure construction:
https://en.wikipedia.org/wiki/Product_measure
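On finite spaces the product measure can be written down explicitly, which makes the extension concrete. A minimal sketch (the particular spaces and probabilities below are illustrative assumptions, not from the text):

```python
from itertools import product

# Two finite probability spaces (illustrative choices):
# a biased coin and a fair three-outcome spinner.
P1 = {"H": 0.7, "T": 0.3}            # P_1 on Omega_1 = {H, T}
P2 = {"x": 1/3, "y": 1/3, "z": 1/3}  # P_2 on Omega_2 = {x, y, z}

# Product measure on Omega = Omega_1 x Omega_2, defined on singletons;
# on a finite space this extends to all of F = 2^Omega by additivity.
P = {(w1, w2): P1[w1] * P2[w2] for w1, w2 in product(P1, P2)}

def prob(event):
    """P[E] for any E subset of Omega, by (finite) additivity."""
    return sum(P[w] for w in event)

# Verify the defining property (*): P[A1 x A2] = P1[A1] * P2[A2]
A1, A2 = {"H"}, {"x", "y"}
rect = {(w1, w2) for w1 in A1 for w2 in A2}
assert abs(prob(rect) - 0.7 * (2/3)) < 1e-12
assert abs(sum(P.values()) - 1.0) < 1e-12   # axiom: P[Omega] = 1
```

The hard part of the theorem is that this recipe still produces a well-defined, countably additive $P$ when $\Omega_1$ and $\Omega_2$ are uncountable; the finite case only illustrates property (*).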
Consequence of "yes"
Once we have such a function $P:F\rightarrow\mathbb{R}$, we have a legitimate new probability space $(\Omega, F, P)$. We can define new functions $X_1^{new}:\Omega\rightarrow\mathbb{R}$ and $X_2^{new}:\Omega\rightarrow\mathbb{R}$ by
\begin{align}
X_1^{new}(\omega_1, \omega_2) &= X_1(\omega_1) \quad \forall (\omega_1, \omega_2) \in \Omega \\
X_2^{new}(\omega_1, \omega_2) &= X_2(\omega_2)\quad \forall (\omega_1, \omega_2) \in \Omega
\end{align}
It can be shown that $X_1^{new}$ and $X_2^{new}$ are both measurable with respect to $(\Omega, F)$. Thus, they can be called random variables on the probability space $(\Omega, F, P)$.
We can prove that $X_1^{new}$ and $X_2^{new}$ are independent: Fix $x_1, x_2 \in \mathbb{R}$. Define
\begin{align}
A_1 &= \{\omega_1 \in \Omega_1 : X_1(\omega_1) \leq x_1\}\\
A_2 &=\{\omega_2 \in \Omega_2 : X_2(\omega_2) \leq x_2\}
\end{align}
Then
\begin{align}
&P[X_1^{new} \leq x_1, X_2^{new}\leq x_2] \\
&=P\left[\{\omega \in \Omega: X_1^{new}(\omega) \leq x_1\}\cap \{\omega \in \Omega: X_2^{new}(\omega) \leq x_2\}\right]\\
&= P\left[\{(\omega_1, \omega_2)\in \Omega : X_1(\omega_1)\leq x_1, X_2(\omega_2) \leq x_2\} \right] \\
&= P\left[ A_1 \times A_2 \right]\\
&\overset{(a)}{=} P_1[A_1]P_2[A_2]\\
&\overset{(b)}{=} \left(P_1[A_1]P_2[\Omega_2]\right)\left( P_1[\Omega_1]P_2[A_2]\right)\\
&\overset{(c)}{=} P[A_1 \times \Omega_2]P[\Omega_1 \times A_2]\\
&=P[X_1^{new} \leq x_1]P[X_2^{new}\leq x_2]
\end{align}
where (a) and (c) hold by the property (*) of the $P$ function; (b) holds because $P_1[\Omega_1]=1$ and $P_2[\Omega_2]=1$. This holds for all $x_1,x_2 \in \mathbb{R}$. Thus, $X_1^{new}$ and $X_2^{new}$ are independent.
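The factorization just proved can be checked numerically on a small product space; the spaces and the values taken by $X_1$ and $X_2$ below are made-up illustrations:

```python
from itertools import product

# Finite illustration of the independence proof above.
# X1 on Omega_1 is the identity; X2 on Omega_2 is the identity.
P1 = {1: 0.2, 2: 0.5, 3: 0.3}   # Omega_1 = {1, 2, 3}
P2 = {10: 0.6, 20: 0.4}         # Omega_2 = {10, 20}

# Product measure P on Omega = Omega_1 x Omega_2, as in (*).
P = {(w1, w2): P1[w1] * P2[w2] for w1, w2 in product(P1, P2)}

def prob(pred):
    """P[{w in Omega : pred(w)}]."""
    return sum(p for w, p in P.items() if pred(w))

# Check P[X1_new <= x1, X2_new <= x2] = P[X1_new <= x1] P[X2_new <= x2]
for x1 in [0, 1, 2, 3]:
    for x2 in [5, 10, 20]:
        joint = prob(lambda w: w[0] <= x1 and w[1] <= x2)
        marg = prob(lambda w: w[0] <= x1) * prob(lambda w: w[1] <= x2)
        assert abs(joint - marg) < 1e-12
```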
Example to show $F\neq F_1 \times F_2$.
Define
\begin{align}
\Omega_1 &= \{1,2,3\}\\
\Omega_2 &= \{a,b,c\} \\
\Omega &= \Omega_1 \times \Omega_2
\end{align}
Define $F_1$ and $F_2$ as the power sets of $\Omega_1$ and $\Omega_2$, respectively:
\begin{align}
F_1 &= \{\emptyset, \{1\}, \{2\}, \{3\}, \{1,2\}, \{1,3\}, \{2,3\}, \{1,2,3\}\}\\
F_2 &= \{\emptyset, \{a\}, \{b\}, \{c\}, \{a,b\}, \{a,c\}, \{b,c\}, \{a,b,c\}\}
\end{align}
It can be shown that $F$ is the power set of $\Omega$. Thus $F$ has $2^9 = 512$ elements, while $F_1 \times F_2$ has only $8 \times 8 = 64$ elements. The structure of the set $F_1 \times F_2$ is also different from that of $F$:
Elements of $F_1 \times F_2$ include $(\emptyset, \{a\})$ and $(\emptyset, \{b\})$ and $(\{1\}, \{a\})$ and $(\{2\}, \{b\})$.
Elements of $F$ include $\emptyset$ and $\{(1,a), (2,b)\}$.
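A quick enumeration confirms the counts for this example:

```python
from itertools import combinations, product

Omega1, Omega2 = {1, 2, 3}, {"a", "b", "c"}
Omega = set(product(Omega1, Omega2))     # 9 outcomes (1,a), (1,b), ...

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

F1, F2 = powerset(Omega1), powerset(Omega2)   # 8 subsets each
F = powerset(Omega)                           # 2^9 = 512 subsets of Omega
F1xF2 = list(product(F1, F2))                 # 64 *pairs* of subsets

assert len(F1) == len(F2) == 8
assert len(F) == 512
assert len(F1xF2) == 64
```

Note that the elements of `F1xF2` are pairs of sets while the elements of `F` are subsets of $\Omega$, matching the structural difference described above.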
Caveat 1
The sigma-algebra $F$ is sometimes denoted $F_1 \otimes F_2$. This is quite different from the Cartesian product $F_1 \times F_2$: the elements of $F_1 \times F_2$ are pairs of sets, not subsets of $\Omega$, so strictly speaking $\sigma(F_1 \times F_2)$ is not even well-defined. When that notation appears, it is shorthand for the sigma-algebra generated by the rectangles $\{A_1 \times A_2 : A_1 \in F_1, A_2 \in F_2\}$, which is exactly $F$.
Caveat 2
As in my comments on the question above, we usually do not concern ourselves with this deep extension theory.
If we have a probability experiment that involves random variables $Y$ and $Z$, we implicitly assume there is a single probability space $(\Omega, F, P)$ and $Y:\Omega\rightarrow\mathbb{R}$ and $Z:\Omega\rightarrow\mathbb{R}$ are measurable functions on this space.
Thus, for all $y,z \in \mathbb{R}$ we know that $\{Y \leq y\} \in F$ and $\{Z \leq z\} \in F$. Since $F$ is a sigma-algebra, this implies that $\{Y \leq y\}\cap \{Z \leq z\} \in F$ (for all $y, z\in \mathbb{R}$).
The random variables $Y:\Omega\rightarrow\mathbb{R}$ and $Z:\Omega\rightarrow\mathbb{R}$ are defined to be independent if
$$ P[Y \leq y, Z\leq z] = P[Y\leq y]P[Z\leq z] \quad \forall y, z \in \mathbb{R}$$
Notice that the definition of independent requires $\{Y \leq y\} \cap \{Z \leq z\} \in F$ for all $y, z \in \mathbb{R}$, which of course requires $Y$ and $Z$ to be defined on the same space.
By definition, a random variable maps into $\mathbb{R}$, so that is where $Z(\omega)$ lands. And yes, when you look at the sum of two random variables, you look at a new variable whose domain is the product of the original sample spaces. Measurability is defined in a broad way, relating subsets of the image to subsets of the domain. There is no need at all to re-establish well-definedness; you simply need to prove measurability of $Z$, which you can do as in this post:
Proving that the sum of two measurable functions is measurable.
Edit:
In general measurability is defined like this:
https://en.wikipedia.org/wiki/Measurable_function
In the case of random variables, the definition is that $X$ needs to be measurable w.r.t. $(\mathbb{R},\mathfrak{B})$, where $\mathfrak{B}$ is the Borel $\sigma$-algebra. This means that for any $A \in \mathfrak{B}$ we have
$X^{-1}(A) \in \mathfrak{F}$
where $\mathfrak{F}$ is the $\sigma$-algebra of your probability space $(\Omega, \mathfrak{F}, P)$. A prime reason why we demand this property for random variables is that it allows us to use a pushforward measure on $\mathbb{R}$:
https://en.wikipedia.org/wiki/Pushforward_measure
That means we can assign probabilities to random events in $\mathbb{R}$ (e.g. the sum of two throws of a die) using the probability measure we already have on $\Omega$.
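The two-dice example can be computed directly as a pushforward; a small sketch, using exact arithmetic via `fractions`:

```python
from fractions import Fraction
from itertools import product

# Pushforward of the uniform measure on Omega = {1..6}^2 under
# Z(w1, w2) = w1 + w2: the distribution of the sum of two die throws.
Omega = list(product(range(1, 7), repeat=2))   # 36 equally likely outcomes
p = Fraction(1, 36)                            # P[{w}] for each w in Omega

pushforward = {}                               # P_Z({s}) = P[Z^{-1}({s})]
for w1, w2 in Omega:
    s = w1 + w2
    pushforward[s] = pushforward.get(s, Fraction(0)) + p

assert pushforward[7] == Fraction(6, 36)   # six outcomes sum to 7
assert sum(pushforward.values()) == 1      # pushforward is a probability measure
```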
Best Answer
Sure, it's possible. That $X$ and $Y$ have the same distribution means that their respective probability distributions (probability measures) $P_X:=P_1\circ X^{-1}$ and $P_Y:=P_2\circ Y^{-1}$ on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ are the same. To prove it, you have to show that $$ P_X(B)=P_1(X\in B)=P_2(Y\in B)=P_Y(B),\quad B\in\mathcal{B}(\mathbb{R}), $$ where $\mathcal{B}(\mathbb{R})$ is the Borel sigma-algebra on $\mathbb{R}$. Since $\mathcal{B}(\mathbb{R})$ is generated by the sets of the form $(-\infty,a]$ for $a\in\mathbb{R}$, it is actually enough to show that $$ P_X((-\infty,a])=P_1(X\leq a)=P_2(Y\leq a)=P_Y((-\infty,a]),\quad a\in\mathbb{R}. $$ Note that this corresponds to checking that their respective cumulative distribution functions $F_X$ and $F_Y$ are the same.
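Here is a toy instance of this CDF check; the two spaces (a fair coin and a fair die) are illustrative assumptions, chosen so the variables live on different spaces yet share a distribution:

```python
from fractions import Fraction

# X = indicator of heads on a fair coin (space 1);
# Y = indicator of an even roll on a fair die (space 2).
P1 = {"H": Fraction(1, 2), "T": Fraction(1, 2)}
X = {"H": 1, "T": 0}

P2 = {k: Fraction(1, 6) for k in range(1, 7)}
Y = {k: 1 if k % 2 == 0 else 0 for k in range(1, 7)}

def cdf(P, V, a):
    """F_V(a) = P[V <= a], computed on V's own probability space."""
    return sum(p for w, p in P.items() if V[w] <= a)

# The CDFs agree at every point, so P_X = P_Y even though Omega_1 != Omega_2.
for a in [-1, 0, 0.5, 1, 2]:
    assert cdf(P1, X, a) == cdf(P2, Y, a)
```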
Other methods also apply: $X$ and $Y$ have the same distribution, i.e. $P_X=P_Y$, if any of the following holds:
their characteristic functions agree, i.e. ${\rm E}[e^{itX}]={\rm E}[e^{itY}]$, $t\in\mathbb{R}$,
their moment-generating functions agree, i.e. ${\rm E}[e^{tX}]={\rm E}[e^{tY}]$ for all $t$ in some open interval around $0$ on which both are finite,
their densities agree, i.e. $f_X(t)=f_Y(t)$, $t\in\mathbb{R}$, provided that both $X$ and $Y$ are absolutely continuous,
their probability mass functions agree, i.e. $p_X(t)=p_Y(t)$, $t\in\mathbb{R}$, provided that both $X$ and $Y$ are discrete variables,
their moments agree, i.e. ${\rm E}[X^n]={\rm E}[Y^n]$ for all $n\in\mathbb{N}$, under additional assumptions (the moment problem is not always determinate).