If I didn't make a mistake, such a random variable must always exist.
Let $[-\infty,+\infty]$ be the set of real numbers together with the two infinities $\pm\infty$, equipped with the order topology and the corresponding Borel $\sigma$-algebra. Throughout this proof, random variables will be allowed to take values in $[-\infty,+\infty]$.
Terminology: Suppose that for each $n\ge 1$, $\pi_n$ is a partition of some fixed set $V$. We say that $\pi_n$ is an "increasingly fine" sequence iff $\pi_{n+1}$ is finer than $\pi_n$ for all $n$.
Recall the proposed proposition: Suppose $(\Omega,\Sigma,\mathbb{P})$ is a probability space, $X_n$ is a sequence of random variables, the cumulative distribution function of $X_n$ is $F_n$, and $F$ is some cumulative distribution function such that $F_n \to F$. (That is, at each point of continuity $x$ of $F$, $\lim_{n\to \infty} F_n(x) = F(x)$.)
Claim. There is a random variable $X$ on the same probability space $(\Omega,\Sigma,\mathbb{P})$ with cumulative distribution function $F$.
Proof
For each integer $n\ge 1$, let $\mathcal{I}_n$ be the finite partition of $[-\infty,+\infty]$ consisting of
the set $[-\infty,-n)$, the set $[n,+\infty]$, and the set $\left[i/2^n,(i+1)/2^n\right)$ for each integer $i\in [- 2^n n , 2^n n - 1]$. Loosely speaking, this is just a sequence of increasingly fine finite partitions that becomes arbitrarily fine at any fixed real number.
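To make the construction concrete, here is a minimal Python sketch (purely illustrative; the function and variable names are my own) that enumerates the pieces of $\mathcal{I}_n$:

```python
from fractions import Fraction

def dyadic_partition(n):
    """Return the partition I_n of [-inf, +inf] as a list of intervals.

    The first piece is [-inf, -n), the last is [n, +inf], and in between
    are the half-open dyadic intervals [i/2^n, (i+1)/2^n) covering [-n, n).
    """
    pieces = [(float("-inf"), -n)]
    for i in range(-(2 ** n) * n, (2 ** n) * n):
        pieces.append((Fraction(i, 2 ** n), Fraction(i + 1, 2 ** n)))
    pieces.append((n, float("inf")))
    return pieces

# I_1 has 6 pieces: [-inf,-1), [-1,-1/2), [-1/2,0), [0,1/2), [1/2,1), [1,+inf]
print(len(dyadic_partition(1)))  # 6
```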
For each integer $n \ge 1$, let $\mathcal{P}_n$ be the finite partition of $\Omega$ consisting of sets of the form $X_1^{-1}(I_1)\cap \ldots \cap X_n^{-1}(I_n)$ where $I_1\in\mathcal{I}_1,\ldots,I_n\in\mathcal{I}_n$. Observe that $\mathcal{P}_n$ is a sequence of increasingly fine partitions of $\Omega$, each $\mathcal{P}_n$ is finite, and any $P\in\mathcal{P}_n$ is measurable.
We will consider the "limit" $\mathcal{P}$ of the sequence $\mathcal{P}_n$, which is the coarsest partition of $\Omega$ that is finer than any $\mathcal{P}_n$. The family $\mathcal{P}$ consists of all non-empty sets of the form
$$
(1) \ \ \ \bigcap_{n\ge 1} P_n
$$
for some sequence $P_n$ where $P_n\in\mathcal{P}_n$ for all $n$. That is,
$$
\mathcal{P} = \left\{ \bigcap_{n\ge 1} P_n : P_n\in\mathcal{P}_n \mbox{ for all } n \right\}\setminus\{\emptyset\}.
$$
Now let $\mathcal{A}$ consist of sets in $\mathcal{P}$ with positive measure:
$$
\mathcal{A} = \{P\in\mathcal{P}: \mathbb{P}(P) > 0\}.
$$
Loosely speaking, the sets in $\mathcal{A}$ are "atoms": nowhere in this proof will we encounter a nonempty proper subset of an atom. There can be only countably many atoms (for each $m\ge 1$, at most $m$ of them can have measure greater than $1/m$, since the total measure is $1$), so we will write
$$
\mathcal{A} = \{A_1, A_2,\ldots\}.
$$
Next, define $A$ and $S$ as
$$
A = \bigcup_{n\ge 1} A_n
$$
and
$$
S = \Omega \setminus A.
$$
$A$ and $S$ are measurable. Loosely speaking, a random variable on $(\Omega,\Sigma,\mathbb{P})$ can have a continuous distribution on $S$, while $A$ will be responsible for the jumps in its cdf.
For each $n\ge 1$, construct the partition $\mathcal{S}_n$ of $S$ defined by
$$
\mathcal{S}_n = \{ S \cap P : P\in \mathcal{P}_n \}.
$$
Observe $\mathcal{S}_n$ is a sequence of increasingly fine finite partitions of $S$. Also, for any sequence $S_n$ with $S_n\in\mathcal{S}_n$,
$$
(*) \ \ \ \ \mathbb{P}\left(\bigcap_{n\ge 1} S_n \right) = 0.
$$
This is a key fact used below. To prove it, note
$$
\mathbb{P}\left(\bigcap_{n\ge 1} S_n \right) = \mathbb{P}\left(S\cap \bigcap_{n\ge 1} P_n \right)
$$
for some sequence $P_n\in\mathcal{P}_n$. If $\mathbb{P}\left(\bigcap_{n\ge 1} P_n\right) = 0$, then $(*)$ obviously holds. If $\bigcap_{n\ge 1} P_n$ has positive probability, then it must be $A_k$ for some $k$, by definition. But $S \cap A_k=\emptyset$, so $(*)$ again holds.
Now that some basic constructions and notation are in place, let's start proving things!
Lemma 1. For any $A_k$, there is an $\alpha_k\in [-\infty,+\infty]$ and a subsequence $X_{m_1},X_{m_2},\ldots$ of the sequence $X_1,X_2,\ldots$ such that
$$
\lim_{n\to \infty} X_{m_n}(\omega) = \alpha_k \mbox{ for all } \omega\in A_k.
$$
Proof. Fix $A_k$ and choose a single $x_n\in X_n(A_k)$ for each $n$. Since $[-\infty,+\infty]$ is compact, the sequence $x_1,x_2,\ldots$ must have a limit point $\alpha_k$ in $[-\infty,+\infty]$. Thus, there is a subsequence $x_{m_1}, x_{m_2},\ldots$ such that
$$
(2)\ \ \lim_{i\to\infty} x_{m_i} = \alpha_k.
$$
Recall from $(1)$ that there are sets $I_1\in\mathcal{I}_1,I_2\in\mathcal{I}_2,\ldots$ such that
$$
A_k = \bigcap_{n\ge 1} X_n^{-1}(I_n),
$$
so $x_n\in X_n(A_k)\subset I_n$ for all $n$. Suppose $\alpha_k$ is real and fix some $\omega\in A_k$. By $(2)$ and the construction of the sets in $\mathcal{I}_n$, there has to be an $N$ such that $I_{m_n}$ is an interval of length $2^{-m_n}$ for all $n>N$. Thus,
$$
|X_{m_n}(\omega) - x_{m_n}| \le 2^{-m_n}
$$
for $n$ big enough. In light of this fact and (2), we have $X_{m_n}(\omega) \to \alpha_k$.
A similar argument shows the same limit holds if $\alpha_k\in\{\pm\infty\}$.
Lemma 2. There are constants $\alpha_k\in[-\infty,+\infty]$ for $k\ge 1$ and a subsequence $X_{m_1},X_{m_2},\ldots$ of $X_1,X_2,\ldots$ such that for all $k$,
$$
\lim_{n\to\infty} X_{m_n}(\omega) = \alpha_k \mbox{ for all } \omega\in A_k.
$$
Proof. Use the previous lemma and a diagonalization argument. First choose a subsequence $X_{11}, X_{12},\ldots$ of $X_1,X_2,\ldots$ such that $X_{1n}(\omega)\to\alpha_1$ for some $\alpha_1$ and all $\omega\in A_1$. Then choose a subsequence $X_{11}, X_{22}, X_{23}, \ldots$ of $X_{11},X_{12},\ldots$ such that
$X_{2n}(\omega)\to\alpha_2$ for some $\alpha_2$ and all $\omega\in A_2$. At stage $k$, choose a subsequence $X_{11}, X_{22},\ldots, X_{kk},X_{k,k+1},\ldots$ such that $X_{kn}(\omega)\to \alpha_k$ for all $\omega\in A_k$. The subsequence $X_{11},X_{22},X_{33},\ldots$ is well-defined and has the desired property.
Important. From now on, we will assume without loss of generality that for all $k$,
$$
(3) \ \ \ \ \ \lim_{n\to\infty} X_n(\omega) = \alpha_k \mbox{ for all } \omega\in A_k.
$$
We will now begin to construct a random variable $X$ with cumulative distribution function $F$. To do so, first define the following functions:
$$
(4) \ \ \ F_A(t) = \sum_{n\ge 1,\ \ \alpha_n \le t} \mathbb{P}(A_n) \\
(5) \ \ \ F_S(t) = F(t) - F_A(t)
$$
for all $t\in[-\infty,+\infty]$. We will construct random variables $X_A$ and $X_S$ such that
$$
(6) \ \ \ \ \mathbb{P}(A \cap \{X_A \le t\}) = F_A(t) \\
(7) \ \ \ \ \mathbb{P}(S \cap \{X_S \le t\}) = F_S(t).
$$
At that point, the proof will be complete, since we can let $X = 1_A X_A + 1_S X_S$. The cumulative distribution
function of $X$ will then be $F_A + F_S = F$.
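As a concrete illustration of this decomposition (an example of my own, not part of the argument): if $F$ is the cdf of the mixture that puts mass $\tfrac12$ at $0$ and spreads mass $\tfrac12$ uniformly over $[0,1]$, i.e.
$$
F(t) = \tfrac12\,\mathbf{1}_{[0,+\infty)}(t) + \tfrac12\min(\max(t,0),1),
$$
then $F_A(t) = \tfrac12\,\mathbf{1}_{[0,+\infty)}(t)$ (a single atom with $\alpha_1 = 0$ and $\mathbb{P}(A_1)=\tfrac12$) and $F_S(t) = \tfrac12\min(\max(t,0),1)$ is the continuous remainder.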
Let $X_A$ be a random variable with
$$
X_A(\omega) = \alpha_k \mbox{ for all } k \mbox{ and for all } \omega\in A_k.
$$
The value of $X_A$ on $S$ is unimportant; one can set $X_A = 0$ on $S$, say, to ensure $X_A$ is measurable.
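To make the definition concrete, here is a minimal Python sketch (purely illustrative, written for finitely many atoms; in the proof the list of atoms may be countably infinite):

```python
def make_X_A(atoms, alphas):
    """Build X_A from disjoint atoms A_1, A_2, ... and values alpha_1, alpha_2, ...

    `atoms` is a list of sets and `alphas` the corresponding list of values;
    X_A equals alpha_k on A_k and 0 on S.
    """
    def X_A(omega):
        for A_k, alpha_k in zip(atoms, alphas):
            if omega in A_k:
                return alpha_k
        return 0.0  # omega is in S; the value here is unimportant
    return X_A

# Toy usage: two atoms inside Omega = {0, 1, 2, 3}
X_A = make_X_A([{0}, {1, 2}], [-1.0, 5.0])
print([X_A(w) for w in range(4)])  # [-1.0, 5.0, 5.0, 0.0]
```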
By construction of $X_A$, condition $(6)$ holds. Now consider $X_S$.
Lemma 3. $F_S$ is increasing, right continuous, $F_S(-\infty) \ge 0$, and $F_S(\infty) = \mathbb{P}(S)$.
Proof
By definition of $F_S$ and $F_A$, $F_S(-\infty) = 0$ and $F_S(+\infty) = \mathbb{P}(S)$. Consider $F(t)$. Since $X_n$ converges in distribution to the cumulative distribution function $F$, we can write
$$
F(t) = \lim_{\epsilon\downarrow 0}\liminf_n \mathbb{P}(X_n \le t+\epsilon).
$$
(Briefly: at every continuity point $t+\epsilon$ of $F$ we have $\liminf_n \mathbb{P}(X_n \le t+\epsilon) = F(t+\epsilon)$; continuity points are dense and $F$ is right continuous, so letting $\epsilon\downarrow 0$ through continuity points gives $F(t)$.) We can then write
$$
F(t) = \lim_{\epsilon\downarrow 0}\liminf_n \mathbb{P}(A \cap \{X_n\le t+\epsilon\}) +
\lim_{\epsilon\downarrow 0}\liminf_n \mathbb{P}(S \cap \{X_n\le t+\epsilon\}).
$$
By $(3)$ and $(4)$, the first term is exactly $F_A(t)$: on each $A_k$ we have $X_n\to\alpha_k$, so by dominated convergence $\mathbb{P}(A \cap \{X_n\le t+\epsilon\})$ converges for every $\epsilon$ with $t+\epsilon\notin\{\alpha_1,\alpha_2,\ldots\}$, and its limit tends to $\sum_{k:\,\alpha_k\le t}\mathbb{P}(A_k) = F_A(t)$ as $\epsilon\downarrow 0$. (Convergence of the first sequence is also what justifies splitting the $\liminf$ above.) Thus,
$$
F_S(t) = \lim_{\epsilon\downarrow 0}\liminf_n \mathbb{P}(S \cap \{X_n\le t+\epsilon\}).
$$
This function is obviously increasing and right continuous.
I will use the following lemma, which can be proved by a direct (if somewhat tedious) construction; the details are a little too long to include here. If you need more info, let me know.
Lemma 4. Suppose $U\subset\Omega$ is measurable and $\pi_1,\pi_2,\ldots$ is a sequence of increasingly fine finite partitions of $U$. Suppose for each $\epsilon>0$, there is an $n$ such that $\mathbb{P}(B)<\epsilon$ for all $B\in\pi_n$. Suppose $F:[-\infty,+\infty]\to[0,1]$ is increasing, right continuous, and $F(+\infty)=\mathbb{P}(U)$. Then there is a random variable $X$ on $(\Omega,\Sigma,\mathbb{P})$ such that $\mathbb{P}(U \cap \{X\le t\}) = F(t)$ for all $t\in[-\infty,+\infty]$.
Now apply Lemma 4 to construct $X_S$:
Lemma 5. There is a random variable $X_S$ such that $(7)$ holds, i.e., $\mathbb{P}(S \cap \{X_S \le t\}) = F_S(t)$.
Proof. We will apply Lemma 4 with the set $S$, the sequence of partitions $\mathcal{S}_n$, and the function $F_S$. By Lemma 3, the conditions on $F_S$ required by Lemma 4 are met. We already observed that $\mathcal{S}_n$ is a sequence of increasingly fine finite partitions. It only remains to prove that for any $\epsilon>0$, there is an $n$ such that $\mathbb{P}(B)<\epsilon$ for all $B\in\mathcal{S}_n$.
Fix $\epsilon>0$. For each $n$ and each $B\in\mathcal{S}_n$, let $c(B)$ denote the number of elements in the set
$$
\{B' : B'\in\mathcal{S}_{n+p} \mbox{ for some } p\ge 0,\ \ B' \subset B,\ \ \mathbb{P}(B')>\epsilon\}.
$$
Informally, $c(B)$ is the number of elements "below" $B$ in the sequence of partitions $\mathcal{S}_n$ with probability greater than $\epsilon$. Observe $c(B)=0$ if and only if $\mathbb{P}(B)\le\epsilon$. Also, for any set $B\in\mathcal{S}_n$, if $\mathbb{P}(B) > \epsilon$, then
$$
(8)\ \ \ c(B) = 1 + \sum_{B'\in\mathcal{S}_{n+1},\ \ B'\subset B} c(B').
$$
Fix some $B_0\in\mathcal{S}_n$ and suppose $c(B_0)=+\infty$. By $(8)$, there is a $B_1\in\mathcal{S}_{n+1}$ with $B_1\subset B_0$ and $c(B_1)=+\infty$. Continuing this process, we see there is a sequence $B_0 \supset B_1 \supset B_2 \supset\cdots$ such that $B_p\in\mathcal{S}_{n+p}$ and $\mathbb{P}(B_p) > \epsilon$ for all $p\ge 0$. But then, by continuity from above,
$$
\mathbb{P}\left(\bigcap_{p\ge 0} B_{p}\right) \ge \epsilon > 0.
$$
This contradicts $(*)$. Thus, $c(B)$ is finite for all $B\in \mathcal{S}_n$. By $(8)$, whenever $c(B)>0$, each $B'\in\mathcal{S}_{n+1}$ with $B'\subset B$ satisfies $c(B')\le c(B)-1$; since $\mathcal{S}_1$ is finite, after finitely many refinements all counts reach $0$. That is, there is an $n$ with $c(B)=0$, hence $\mathbb{P}(B)\le\epsilon$, for all $B\in\mathcal{S}_n$. (Applying this with $\epsilon/2$ in place of $\epsilon$ gives the strict inequality required above.) This proves the result.
Originally we have two different spaces
Let $(\Omega_1, F_1, P_1)$ and $(\Omega_2, F_2, P_2)$ be two probability spaces. That is, $\Omega_1$ and $\Omega_2$ are nonempty sets, $F_1$ is a sigma-algebra on $\Omega_1$, $F_2$ is a sigma-algebra on $\Omega_2$, and $P_1$ and $P_2$ are functions
\begin{align*}
P_1: F_1 \rightarrow\mathbb{R}\\
P_2:F_2 \rightarrow \mathbb{R}
\end{align*}
that satisfy the 3 probability axioms with respect to $(\Omega_1, F_1)$ and $(\Omega_2, F_2)$, respectively. Let
\begin{align}
X_1:\Omega_1 \rightarrow\mathbb{R}\\
X_2:\Omega_2 \rightarrow\mathbb{R}
\end{align}
be functions such that $X_1$ is measurable with respect to $(\Omega_1, F_1)$ and $X_2$ is measurable with respect to $(\Omega_2, F_2)$.
Defining a single new space
Define
$$\Omega = \Omega_1 \times \Omega_2 = \{(\omega_1, \omega_2) : \omega_1 \in \Omega_1, \omega_2 \in \Omega_2\}$$
Also define $F$ as the smallest sigma-algebra on $\Omega$ that contains all sets of the form $A_1 \times A_2$ such that $A_1 \in F_1$, $A_2 \in F_2$. (Note 1: Here we define $\emptyset \times A_2=A_1\times \emptyset=\emptyset$. Note 2: $F \neq F_1 \times F_2$; see the example below.)
Fundamental question
Recall that $\Omega =\Omega_1 \times \Omega_2$. Does there exist a function $P:F\rightarrow\mathbb{R}$
that satisfies
$$P[A_1 \times A_2] = P_1[A_1]P_2[A_2] \quad \forall A_1 \in F_1, \forall A_2 \in F_2 \quad (*)$$
and that also satisfies the three axioms of probability with respect to $(\Omega, F)$?
This is a deep and hard question, and the answer is not obvious. Fortunately, the answer is "yes." Further, the function $P$ is unique. This is due to the Hahn-Kolmogorov theorem:
https://en.wikipedia.org/wiki/Product_measure
Consequence of "yes"
Once we have such a function $P:F\rightarrow\mathbb{R}$, we have a legitimate new probability space $(\Omega, F, P)$. We can define new functions $X_1^{new}:\Omega\rightarrow\mathbb{R}$ and $X_2^{new}:\Omega\rightarrow\mathbb{R}$ by
\begin{align}
X_1^{new}(\omega_1, \omega_2) &= X_1(\omega_1) \quad \forall (\omega_1, \omega_2) \in \Omega \\
X_2^{new}(\omega_1, \omega_2) &= X_2(\omega_2)\quad \forall (\omega_1, \omega_2) \in \Omega
\end{align}
It can be shown that $X_1^{new}$ and $X_2^{new}$ are both measurable with respect to $(\Omega, F, P)$; for example, for any $x\in\mathbb{R}$, $\{X_1^{new} \leq x\} = \{X_1 \leq x\} \times \Omega_2 \in F$. Thus, they can be called random variables with respect to $(\Omega, F, P)$.
We can prove that $X_1^{new}$ and $X_2^{new}$ are independent: Fix $x_1, x_2 \in \mathbb{R}$. Define
\begin{align}
A_1 &= \{\omega_1 \in \Omega_1 : X_1(\omega_1) \leq x_1\}\\
A_2 &=\{\omega_2 \in \Omega_2 : X_2(\omega_2) \leq x_2\}
\end{align}
Then
\begin{align}
&P[X_1^{new} \leq x_1, X_2^{new}\leq x_2] \\
&=P\left[\{\omega \in \Omega: X_1^{new}(\omega) \leq x_1\}\cap \{\omega \in \Omega: X_2^{new}(\omega) \leq x_2\}\right]\\
&= P\left[\{(\omega_1, \omega_2)\in \Omega : X_1(\omega_1)\leq x_1, X_2(\omega_2) \leq x_2\} \right] \\
&= P\left[ A_1 \times A_2 \right]\\
&\overset{(a)}{=} P_1[A_1]P_2[A_2]\\
&\overset{(b)}{=} \left(P_1[A_1]P_2[\Omega_2]\right)\left( P_1[\Omega_1]P_2[A_2]\right)\\
&\overset{(c)}{=} P[A_1 \times \Omega_2]P[\Omega_1 \times A_2]\\
&=P[X_1^{new} \leq x_1]P[X_2^{new}\leq x_2]
\end{align}
where (a) and (c) hold by the property (*) of the $P$ function; (b) holds because $P_1[\Omega_1]=1$ and $P_2[\Omega_2]=1$. This holds for all $x_1,x_2 \in \mathbb{R}$. Thus, $X_1^{new}$ and $X_2^{new}$ are independent.
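For small finite spaces, the whole construction (and the computation above) can be checked mechanically. Here is a minimal Python sketch; the particular sample spaces and weights are my own illustrative choices:

```python
import itertools

# Two small finite probability spaces (illustrative choices).
P1 = {"H": 0.3, "T": 0.7}           # Omega_1 = {H, T}
P2 = {1: 0.2, 2: 0.5, 3: 0.3}       # Omega_2 = {1, 2, 3}

# Product measure on Omega = Omega_1 x Omega_2: P[{(w1, w2)}] = P1[{w1}] * P2[{w2}].
P = {(w1, w2): p1 * p2 for w1, p1 in P1.items() for w2, p2 in P2.items()}

def prob(event):
    """P of an event, i.e. a set of outcomes (w1, w2)."""
    return sum(P[w] for w in event)

def subsets(xs):
    """All subsets of the keys of xs, as tuples."""
    return itertools.chain.from_iterable(
        itertools.combinations(xs, r) for r in range(len(xs) + 1))

# Check the product rule P[A1 x A2] = P1[A1] * P2[A2] on every rectangle.
for A1 in subsets(P1):
    for A2 in subsets(P2):
        lhs = prob({(w1, w2) for w1 in A1 for w2 in A2})
        rhs = sum(P1[w] for w in A1) * sum(P2[w] for w in A2)
        assert abs(lhs - rhs) < 1e-12
print("product rule holds on all rectangles")
```

Independence of $X_1^{new}$ and $X_2^{new}$ then follows exactly as in the displayed computation, since events of the form $\{X_1^{new}\le x_1\}\cap\{X_2^{new}\le x_2\}$ are rectangles.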
Example to show $F\neq F_1 \times F_2$.
Define
\begin{align}
\Omega_1 &= \{1,2,3\}\\
\Omega_2 &= \{a,b,c\} \\
\Omega &= \Omega_1 \times \Omega_2
\end{align}
Define $F_1$ and $F_2$ as the power sets of $\Omega_1$ and $\Omega_2$, respectively:
\begin{align}
F_1 &= \{\emptyset, \{1\}, \{2\}, \{3\}, \{1,2\}, \{1,3\}, \{2,3\}, \{1,2,3\}\}\\
F_2 &= \{\emptyset, \{a\}, \{b\}, \{c\}, \{a,b\}, \{a,c\}, \{b,c\}, \{a,b,c\}\}
\end{align}
It can be shown that $F$ is the power set of $\Omega$, so $F$ has $2^9 = 512$ elements, while $F_1 \times F_2$ has only $8 \cdot 8 = 64$. The structure of the set $F_1 \times F_2$ is also different from that of $F$:
Elements of $F_1 \times F_2$ include $(\emptyset, \{a\})$ and $(\emptyset, \{b\})$ and $(\{1\}, \{a\})$ and $(\{2\}, \{b\})$.
Elements of $F$ include $\emptyset$ and $\{(1,a), (2,b)\}$.
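A quick count (a minimal Python sketch) makes the size difference explicit:

```python
Omega1 = {1, 2, 3}
Omega2 = {"a", "b", "c"}

# F is the power set of Omega1 x Omega2, a 9-element set:
print(2 ** (len(Omega1) * len(Omega2)))      # 512 elements in F

# F1 x F2 is a set of *pairs* (A1, A2) of subsets:
print(2 ** len(Omega1) * 2 ** len(Omega2))   # 64 elements in F1 x F2
```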
Caveat 1
The set $F$ is sometimes called $F_1 \otimes F_2$. This is quite different from the Cartesian product $F_1 \times F_2$, whose elements are pairs of sets rather than subsets of $\Omega$. (If the notation $F_1 \times F_2$ is instead used for the collection of measurable rectangles $\{A_1 \times A_2 : A_1\in F_1, A_2\in F_2\}$, then $F$ is precisely $\sigma(F_1 \times F_2)$.)
Caveat 2
As in my above comments on the question, usually we do not concern ourselves with this deep extension theory.
If we have a probability experiment that involves random variables $Y$ and $Z$, we implicitly assume there is a single probability space $(\Omega, F, P)$ and $Y:\Omega\rightarrow\mathbb{R}$ and $Z:\Omega\rightarrow\mathbb{R}$ are measurable functions on this space.
Thus, for all $y,z \in \mathbb{R}$ we know that $\{Y \leq y\} \in F$ and $\{Z \leq z\} \in F$. Since $F$ is a sigma-algebra, this implies that $\{Y \leq y\}\cap \{Z \leq z\} \in F$ (for all $y, z\in \mathbb{R}$).
The random variables $Y:\Omega\rightarrow\mathbb{R}$ and $Z:\Omega\rightarrow\mathbb{R}$ are defined to be independent if
$$ P[Y \leq y, Z\leq z] = P[Y\leq y]P[Z\leq z] \quad \forall y, z \in \mathbb{R}$$
Notice that the definition of independent requires $\{Y \leq y\} \cap \{Z \leq z\} \in F$ for all $y, z \in \mathbb{R}$, which of course requires $Y$ and $Z$ to be defined on the same space.
Best Answer
I cannot comment yet, so I'm posting this as an answer.$\def\ci{\perp\!\!\!\perp}$
This is probably not what you were asking, but I think it's interesting and relevant enough to post.
It is well known that one can construct arbitrary distributions from uniform random variables. Furthermore, given a $\mathcal U[0,1]$ variable, it is possible to produce from it an i.i.d. sequence of such variables, which can then be used to obtain more general distributions. We can always extend a space to obtain such variables by$$\hat{\Omega}=\Omega\times[0,1]\text{, }\hat{\mathscr{A}}=\mathscr{A}\otimes\mathscr{B}\text{, }\hat{P}=P\otimes\lambda, $$ in which case $\vartheta(\omega,t):= t$ is $\mathcal{U}[0,1]$ and $\vartheta\ci \mathscr{A}$.
For more details see Kallenberg - Foundations of Modern Probability (2002), in particular the discussion before Theorem 6.10 (transfer).
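As a minimal sketch of the first claim, here is the inverse-cdf (quantile) transform in Python; the exponential target and the names are my own illustrative choices:

```python
import math
import random

def exp_quantile(u, rate=1.0):
    """Inverse cdf of Exponential(rate): F^{-1}(u) = -log(1 - u) / rate."""
    return -math.log1p(-u) / rate

# If U ~ Uniform[0,1], then exp_quantile(U) has the Exponential(rate) law.
samples = [exp_quantile(random.random()) for _ in range(100_000)]
print(sum(samples) / len(samples))  # sample mean, close to 1/rate = 1.0
```

The same recipe works for any cdf $F$, using the generalized inverse $F^{-1}(u)=\inf\{x : F(x)\ge u\}$.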