A simple example is $X$ uniformly distributed on the usual Cantor set, in other words, $$X=\sum_{n\geqslant1}\frac{Y_n}{3^n},$$ for some i.i.d. sequence $(Y_n)$ with uniform distribution on $\{0,2\}$.
Other examples are based on binary expansions, say, $$X=\sum_{n\geqslant1}\frac{Z_n}{2^n},$$ for some i.i.d. sequence $(Z_n)$ with any nondegenerate distribution on $\{0,1\}$ except the uniform one.
These distributions have no density with respect to Lebesgue measure, not even on part of the line, since $P(X\in C)=1$ for some Borel set $C$ with zero Lebesgue measure. They have no atom either, in the sense that $P(X=x)=0$ for every $x$; such distributions are called singular continuous.
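To make the first example concrete, here is a small Python sketch that draws approximate samples from the Cantor distribution by truncating the ternary series (the number of digits and sample size are my own choices for illustration):

```python
import random

def sample_cantor(n_digits=30, rng=random):
    """Approximate a draw of X = sum_{n>=1} Y_n / 3^n with the Y_n
    i.i.d. uniform on {0, 2}.  Truncating the series at n_digits
    introduces an error of at most 3**(-n_digits)."""
    return sum(rng.choice((0, 2)) / 3**n for n in range(1, n_digits + 1))

samples = [sample_cantor() for _ in range(1000)]
# Every sample lies in [0, 1], and its first ternary digit is never 1,
# reflecting the middle-thirds construction of the Cantor set.
assert all(0.0 <= x <= 1.0 for x in samples)
assert all(int(3 * x) != 1 for x in samples)
```

Despite producing values densely scattered in $[0,1]$, every draw lands (up to truncation) in the Cantor set, which has Lebesgue measure zero.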
If you are studying elementary probability theory, allow me to reformulate your question as "how can I represent a random variable $X$ with a given CDF $F_X$ in terms of a uniform random variable $U$ on $(0,1)$?" The answer to that is the quantile function: you define
$$G_X(p)=\inf \{ x : F_X(x) \geq p \}$$
and then define $X$ to be $G_X(U)$.
Note that if $F_X$ is invertible then $G_X=F_X^{-1}$, otherwise this is "the right generalization". One can see this by looking at the discrete case: if $P(X=x)=p$ then $P(G_X(U)=x)=p$. This is because a jump of height $p$ in $F_X$ corresponds to a flat region of length $p$ in $G_X$, and the uniform distribution on $(0,1)$ assigns each interval a probability equal to its length.
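The discrete case is easy to check numerically. Here is a short Python sketch of $G_X(U)$ for a hypothetical three-point distribution (the atoms, probabilities, seed, and sample size are my own choices for illustration):

```python
import bisect
import random

# A hypothetical discrete distribution: P(X=1)=0.2, P(X=2)=0.5, P(X=3)=0.3.
xs = [1, 2, 3]
cum = [0.2, 0.7, 1.0]  # F_X evaluated at the atoms

def quantile(p):
    """G_X(p) = inf{x : F_X(x) >= p}, found by locating the first
    atom whose cumulative probability reaches p."""
    return xs[bisect.bisect_left(cum, p)]

rng = random.Random(0)
draws = [quantile(rng.random()) for _ in range(100_000)]
freq = draws.count(2) / len(draws)
# freq should be close to P(X=2) = 0.5: the jump of height 0.5 in F_X
# becomes a flat stretch of length 0.5 in G_X.
```

The empirical frequency of each atom matches its probability, exactly as the flat-region argument predicts.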
The natural question is now "what's a uniform random variable on $(0,1)$?" Well, it has $F_U(x)=\begin{cases} 0 & x<0 \\ x & x \in [0,1] \\ 1 & x>1 \end{cases}$. But otherwise such a thing is a black box from the elementary point of view.
If you are studying measure-theoretic probability theory then the answer is a bit more explicit. A random variable with CDF $F_X$ is given by $G_X : \Omega \to \mathbb{R}$ where $G_X$ is the quantile function as defined before, $\Omega=(0,1)$, $\mathcal{F}$ is the Borel $\sigma$-algebra on $(0,1)$, and $\mathbb{P}$ is the Lebesgue measure. Note that on this space the identity function is a uniform random variable on $(0,1)$, so this is really the same construction as the one described above.
In any case these constructions can be generalized to finitely many random variables by looking at the uniform distribution on $(0,1)^n$ instead of $(0,1)$.
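A point uniform on $(0,1)^n$ has independent uniform coordinates, so applying a quantile function coordinatewise produces independent random variables with the prescribed CDFs. A minimal Python sketch for $n=2$, using two exponential distributions (the rates, seed, and sample size are my own choices for illustration):

```python
import math
import random

rng = random.Random(1)

def exp_quantile(p, rate):
    """Quantile of Exponential(rate): inverse of F(x) = 1 - exp(-rate*x)."""
    return -math.log1p(-p) / rate

# (U1, U2) uniform on (0,1)^2 has independent uniform coordinates,
# so the pair below is an independent draw from the two distributions.
pairs = [(exp_quantile(rng.random(), 1.0), exp_quantile(rng.random(), 2.0))
         for _ in range(100_000)]
mean1 = sum(x for x, _ in pairs) / len(pairs)
mean2 = sum(y for _, y in pairs) / len(pairs)
# The sample means should be close to 1/1.0 and 1/2.0 respectively.
```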
This is known as the probability integral transform. Write $Y=F_X(X)$. If $F_X$ is continuous and strictly increasing, then for $0<t<1$ we have \begin{align} F_Y(t) &= \mathbb P(Y\leqslant t)\\ &= \mathbb P(F_X(X)\leqslant t)\\ &= \mathbb P(X\leqslant F_X^{-1}(t))\\ &= F_X(F_X^{-1}(t))\\ &= t, \end{align} so that $Y$ is uniformly distributed over $(0,1)$. When $F_X$ is not strictly increasing, the map $F_X^{-1}$ is not a true inverse and must instead be taken to be the quantile function $F_X^{-1}(t) = \inf\{x:F_X(x)\geqslant t\}$; the conclusion still holds as long as $F_X$ is continuous, but it fails when $X$ has atoms, since then $F_X(X)$ skips the corresponding intervals and cannot be uniform.
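The transform is easy to check empirically for a continuous $X$. A small Python sketch with $X$ exponential (the distribution, seed, and sample size are my own choices for illustration):

```python
import math
import random

rng = random.Random(42)

# X ~ Exponential(1), so F_X(x) = 1 - exp(-x) is continuous and strictly
# increasing; then Y = F_X(X) should be uniform on (0, 1).
ys = [1.0 - math.exp(-rng.expovariate(1.0)) for _ in range(100_000)]
mean_y = sum(ys) / len(ys)
below_quarter = sum(y < 0.25 for y in ys) / len(ys)
# For a uniform Y: the sample mean should be near 0.5
# and the fraction of draws below 0.25 should be near 0.25.
```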