It depends on your model and point of view. For a given time series with timespan $T$, you can consider that you observe $T$ realizations of a single random variable $X$, or that you observe one realization of a stochastic process, i.e. one path among many possible ones. If the process is independent and identically distributed, the two views coincide.
It is not clear whether $Y_1,Y_2,\ldots,Y_n$ represents the $n$ variables of a single random process, and thus one time series, or $n$ time series, each represented by a single random variable $Y_i$, in which case your data form an $n \times T$ matrix: one time series of $T$ realizations for each $Y_i$.
As long as you are consistent, it is up to you to choose your model.
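As a minimal illustration of the two data layouts described above (Gaussian draws are used purely for concreteness; nothing here depends on the distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 200, 5

# View 1: a single series of length T, modelled as T i.i.d. draws of one variable X
one_series = rng.normal(size=T)

# View 2: n series, one per variable Y_i, giving an n x T data matrix
data = rng.normal(size=(n, T))

print(one_series.shape)  # (200,)
print(data.shape)        # (5, 200)
```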
Independence of random variables $U$ and $V$ implies the distribution of $U$ is the same regardless of what value $V$ might have.
In some cases, checking independence requires working out the joint distribution of $(U,V)$. But to demonstrate a lack of independence, it suffices to find enough values of $V$ for which the conditional distribution of $U$ differs.
("Enough" means there has to be nonzero probability of achieving values of $V$ where the conditional distribution of $U$ varies.)
In this case, algebra tells us that
$$Y^2 = V(X^2 + a),$$
whence
$$\frac{1}{U} = \frac{Y^2+a}{X^2} = \frac{V(X^2+a)+a}{X^2} = V + a \frac{V+1}{X^2}.\tag{1}$$
With the Rayleigh distribution, $X^2$ has positive probability density for all $X^2 \gt 0.$ As $X^2$ ranges through all positive numbers, the right hand side of $(1)$ ranges over the interval $(V, \infty)$ when $a(V+1)\gt 0$, over the interval $(-\infty,V)$ when $a(V+1) \lt 0$, and otherwise is fixed at $V$. This immediately implies that the range of values of $U$ that have some chance of happening depends on $V$, and that we cannot get rid of this problem by eliminating a set of $V$ having just zero probability.
Because the range of possible values of $U$ differs with $V$, the conditional probability distribution of $U$ clearly varies with $V$, too. Therefore $U$ and $V$ are not independent.
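This conclusion is easy to check by simulation. The excerpt does not state the definitions of $U$ and $V$, so the sketch below assumes, consistently with the algebra in $(1)$, that $X$ and $Y$ are independent Rayleigh variables with $U = X^2/(Y^2+a)$ and $V = Y^2/(X^2+a)$, and takes $a = 1 > 0$; these are assumptions, not part of the original question.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
a = 1.0  # assumed positive constant

# Assumed setup: X and Y are independent Rayleigh variables
X = rng.rayleigh(scale=1.0, size=n)
Y = rng.rayleigh(scale=1.0, size=n)

U = X**2 / (Y**2 + a)
V = Y**2 / (X**2 + a)

# Identity (1): 1/U = V + a(V+1)/X^2 > V when a > 0, hence U < 1/V always.
assert np.all(U < 1.0 / V)

# The same event has different conditional probabilities for different V:
p_low  = np.mean(U[V < 0.5] > 0.5)  # positive
p_high = np.mean(U[V > 2.0] > 0.5)  # exactly 0, since V > 2 forces U < 1/2
print(p_low, p_high)
```

The second probability is exactly zero by the support argument, while the first is positive, so the conditional distribution of $U$ does depend on $V$.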
The "other man" can be refuted by considering a simplified version of his assertion with just one variable, say $X$. We may "independently" construct many random variables from $X$, such as $U=2X$ and $V=4X$, but I hope it is obvious that the resulting variables are not themselves independent. In this example, the relation $V=2U$ exhibits the dependence explicitly. The same argument applies to multivariate random variables, for the same reasons.
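A two-line simulation makes this concrete (the Normal distribution for $X$ is an arbitrary choice; any distribution gives the same conclusion):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=10_000)

# Two variables "independently" constructed from the same X:
U = 2 * X
V = 4 * X

# They are functionally related (V = 2U), hence completely dependent:
assert np.allclose(V, 2 * U)
print(np.corrcoef(U, V)[0, 1])  # correlation is 1
```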
Finally, there are some special cases where sets of variables constructed from the same "core" of independent variables are independent. The best-known (and arguably most important) example consists of an orthogonal transformation of independent and identically distributed Normal variables: the resulting variables are still independent and identically distributed.
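A quick sanity check of this special case (a rotation is the simplest orthogonal transformation; the angle below is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
Z = rng.normal(size=(n, 2))  # rows: pairs of independent standard Normals

theta = 0.7  # arbitrary rotation angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # orthogonal matrix
W = Z @ Q.T  # apply the orthogonal transformation to each pair

# The rotated coordinates are again independent N(0,1): sample correlation ~ 0
corr = np.corrcoef(W[:, 0], W[:, 1])[0, 1]
print(corr)
```

Zero sample correlation alone does not prove independence in general, but for jointly Normal variables uncorrelatedness and independence coincide.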
Best Answer
When modelling a sample $(x_1,\ldots,x_n)$ as an i.i.d. sample from a given distribution $F$, the correct approach is to view the sample as one realisation of $n$ independent random variables $(X_1,\ldots,X_n)$, each distributed according to $F$:
$$(x_1,\ldots,x_n)=(X_1,\ldots,X_n)(\omega)\qquad\omega\in\Omega$$
The notion of "$n$ realizations of a single random variable" is a shortcut that is not well-defined, because independence cannot be expressed with only one random variable.