This classification of random variables is based on the Lebesgue decomposition theorem for measures (applied here to the law of the random variable).
More precisely, a random variable $X$ taking values in $\mathbb{R}^d$ is singular if its law is singular continuous, that is, it satisfies $P(X=a)=0$ for every $a\in\mathbb{R}^d$, yet is orthogonal to the Lebesgue measure $\lambda$ on $\mathbb{R}^d$: there exists a set $A\subset\mathbb{R}^d$ with $P(X\in A)=1$ and $\lambda(A)=0$.
A classical example of a singular random variable on $\mathbb{R}$ is indeed the one whose distribution function is the Cantor function (the "devil's staircase"). On $\mathbb{R}^2$, simply take $(X,0)$, where $X$ is any random variable on $\mathbb{R}$ satisfying $P(X=a)=0$ for every $a\in\mathbb{R}$: such a vector has no atoms, yet it is supported on the line $\{y=0\}$, which has two-dimensional Lebesgue measure zero.
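As an illustrative sketch (my own addition, not part of the original answer), one can approximately sample from the Cantor distribution by choosing each base-3 digit uniformly from $\{0,2\}$; every sample then lands in the Cantor set, a set of Lebesgue measure zero, even though no single point carries positive mass:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_cantor(size, depth=40):
    """Draw (truncated) samples from the Cantor distribution:
    X = sum_i d_i / 3**i with each digit d_i uniform on {0, 2}."""
    digits = 2 * rng.integers(0, 2, size=(size, depth))  # each digit is 0 or 2
    powers = 3.0 ** -np.arange(1, depth + 1)             # 1/3, 1/9, 1/27, ...
    return digits @ powers

x = sample_cantor(10_000)
# Every sample avoids the removed middle third (1/3, 2/3) of [0, 1].
print(np.all((x <= 1/3) | (x >= 2/3)))
```

The same digit-wise argument applied to later digits shows the samples avoid every removed middle third, i.e. they lie in the Cantor set.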
The answer is no in general.
In the language of characteristic functions, you have $\phi_Z(t)=\phi_X^2(t)=\phi_Y^2(t)$. Taking square roots gives $\phi_Y(t)=\pm\phi_X(t)$ for every $t$; in particular, $\phi_X(t)=\phi_Y(t)$ for $t$ in a neighborhood of $0$ (since $\phi(0)=1$ for any characteristic function, and characteristic functions are uniformly continuous).
Thus, the claim is true if you require $\phi_X$ and $\phi_Y$ to be analytic: they would necessarily agree on an interval, and therefore everywhere.
Otherwise it can fail: one can have $|\phi_X(t)|=|\phi_Y(t)|$ for all $t$, while $\phi_X(t)\neq\phi_Y(t)$ for some $t$. A classic example is the following:
Let $P(X=x)=\frac{2}{\pi^2(2k-1)^2}$ whenever $x=\pm(2k-1)\pi$ for $k=1,2,\dots$, let $P(X=0)=1/2$, and $P(X=x)=0$ otherwise.
Let $P(Y=y)=\frac{4}{\pi^2(2k-1)^2}$ whenever $y=\pm(2k-1)\pi/2$ for $k=1,2,\dots$, and $P(Y=y)=0$ otherwise.
Both of these give rise to "triangle-wave" characteristic functions, extended periodically outside the stated interval:
$$\phi_X(t)=1-|t|,\quad t\in[-1,1],\ \text{extended with period } 2,$$
$$\phi_Y(t)=1-|t|,\quad t\in[-2,2],\ \text{extended with period } 4.$$
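A quick numerical sanity check (a sketch I am adding, not part of the original answer) evaluates truncated versions of these two cosine series and confirms that $\phi_X$ and $\phi_Y$ agree on $[-1,1]$ but differ in sign outside it, with equal absolute values:

```python
import numpy as np

K = 20_000                          # number of atom pairs kept in each truncated series
m = 2 * np.arange(1, K + 1) - 1     # odd integers 1, 3, 5, ...

def phi_X(t):
    # Atoms of mass 2/(pi^2 m^2) at +/- m*pi, plus an atom of mass 1/2 at 0.
    return 0.5 + (4 / np.pi**2) * np.sum(np.cos(m * np.pi * t) / m**2)

def phi_Y(t):
    # Atoms of mass 4/(pi^2 m^2) at +/- m*pi/2.
    return (8 / np.pi**2) * np.sum(np.cos(m * np.pi * t / 2) / m**2)

print(phi_X(0.5), phi_Y(0.5))   # both close to 1 - 0.5 = 0.5 (they agree on [-1,1])
print(phi_X(1.5), phi_Y(1.5))   # close to 0.5 and -0.5: equal moduli, opposite signs
```

The truncation error of each series is bounded by the tail of $\sum_k (2k-1)^{-2}$, so with $K=20{,}000$ terms the values are accurate to roughly $10^{-5}$.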
On $[-1,1]$ the two functions coincide, while on $[1,2]$ (and, by periodicity, on alternating intervals) $\phi_Y=-\phi_X$, so $|\phi_X|=|\phi_Y|$ everywhere even though $\phi_X\neq\phi_Y$.
Best Answer
Let $Y_i\sim U(0,1)$ be i.i.d. and consider the product
$$ \mathcal{Y}=\prod_{i=1}^n Y_i. $$
Rewriting it as an exponential,
$$ \mathcal{Y} = \exp(\ln \mathcal{Y}) = \exp\left(\sum_{i=1}^n \ln Y_i\right)=\exp(\mathfrak{X}), \qquad \text{where } \mathfrak{X}=\sum_{i=1}^n X_i \text{ and } X_i=\ln Y_i. $$
Each $X_i=\ln Y_i$ has mean $\mu=\mathbb{E}[X_i]=-1$ and variance $\sigma^2=\mathbb{V}[X_i]=1$ (indeed $-\ln Y_i\sim\mathrm{Exp}(1)$). The central limit theorem says
$$ \frac{\mathfrak{X}-n\mu}{\sigma\sqrt{n}} \,\xrightarrow[n\rightarrow\infty]{\;d\;}\, \mathcal{N}(0,1), $$
which means that for large enough $n$, $\mathfrak{X}$ is approximately normally distributed with mean $\mathbb{E}[\mathfrak{X}]=n\mu$ and standard deviation $\sqrt{\mathbb{V}[\mathfrak{X}]}=\sigma\sqrt{n}$. Hence $\mathcal{Y}=e^\mathfrak{X}$ is approximately lognormal for large $n$.
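The lognormal approximation above is easy to check by simulation. Here is a small sketch (the sample sizes and variable names are my own choices, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 50, 100_000                  # n factors per product, N simulated products

Y = rng.random((N, n))              # each row is (Y_1, ..., Y_n) with Y_i ~ U(0,1)
frak_X = np.log(Y).sum(axis=1)      # X = sum_i ln(Y_i); the product is exp(X)

# For U(0,1), ln(Y_i) has mean mu = -1 and variance sigma^2 = 1,
# so X should be approximately N(-n, n).
print(frak_X.mean(), frak_X.var())  # close to -50 and 50
```

Working with $\ln\mathcal{Y}$ rather than $\mathcal{Y}$ itself also avoids numerical underflow, since a product of 50 uniforms is typically around $e^{-50}$.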
See also this question for more on the exact distribution of $X_i$.