The simplest form of a white noise process is one whose observations are uncorrelated. We can check this by applying a portmanteau test such as the Ljung-Box or Box-Pierce test. The series might instead be Gaussian white noise, where the observations are uncorrelated and also normally distributed, and hence independent. We can test this with a normality test together with a portmanteau test. As far as I know there is a third case, where the observations are uncorrelated and independent without being normally distributed. In that case, how can we test whether the observations are independent? Is there a statistical test for this?
Solved – Testing normality and independence of time series residuals
time series, white noise
Related Solutions
As far as I know "portmanteau test" is synonymous with "omnibus test". Either term gets used in two cases:
(1) When the null hypothesis specifies values for a vector of parameters that are thought of as being on an equal footing, & the alternative is that at least one parameter value is different from that specified by the null. So the null for the ANOVA F-test is that all treatment effects are zero; for the Ljung-Box test, that all autocorrelations up to a given lag are zero; &c.
(2) When a test has decent power against a wide range of alternative hypotheses: contrasted with a "directional test" with high power against a narrow range of alternatives, but low power against others. This is typically in the context of goodness of fit.
Don't get your hopes up for more exact definitions—after all, it doesn't really matter what you call a test.
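For concreteness, here is a minimal sketch of a Ljung-Box portmanteau test in Python, assuming statsmodels is available; the simulated series and the lag choice of 10 are placeholders, not recommendations:

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(42)
x = rng.standard_normal(500)  # placeholder series; substitute your residuals

# Null: all autocorrelations up to the given lag are zero.
# boxpierce=True also reports the Box-Pierce variant of the statistic.
result = acorr_ljungbox(x, lags=[10], boxpierce=True)
print(result)  # large p-values -> no evidence against "uncorrelated"
```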
My interpretation of the (slightly paraphrased) statement that the OP is reading, viz.
"If $\{Y_n\}$ is a sequence of (serially) uncorrelated random variables, then $\{Y_n\}$ is not necessarily a sequence of independent random variables, because (emphasis added) they are not necessarily normally distributed"
is that the author is asserting (quite correctly) that uncorrelated random variables need not be independent, but the reason given for this failure of uncorrelatedness to imply independence is that we cannot assert that the uncorrelated random variables are normally distributed. If the author is not implying that uncorrelated random variables that are normally distributed are independent random variables, then he is certainly begging the reader to jump to that (false) conclusion by musing in the subjunctive mood: "Gol dang it, if only those pesky $Y_n$'s were normally distributed in addition to being uncorrelated, then we could take those uncorrelated (and normal) $Y_n$'s to be independent random variables and avoid a lot of headaches." However, Moderator @whuber has stated (in a comment on the main question) that he does not interpret that sentence that way, and that the statement quoted by the OP is perfectly accurate.
In my opinion, the second sentence quoted by the OP,
If in addition to being serially uncorrelated, the $\{Y_n\}$ are serially independent, then we say $\{Y_n\}$ is independent white noise.
is also incorrect. Independent random variables are always uncorrelated, so it is unnecessary to start with uncorrelated random variables and then impose the additional constraint that they are independent. Furthermore, if by serially independent it is meant that for all $n\neq m$, $Y_m$ and $Y_n$ are independent random variables (that is, only pairwise independence is required), then I disagree vehemently with the assertion that $\{Y_n\}$ is a white noise process. For a random process to be called a white noise process, the random variables need to be mutually independent, not just pairwise independent, and most people, upon encountering the phrase white noise process, are likely to assume that the random variables constituting it are also zero-mean random variables with common finite variance $\sigma^2$. This property of the $Y_n$'s is nowhere mentioned in the paragraph fragment quoted by the OP.
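To make the pairwise-versus-mutual distinction concrete, here is a small simulation of the classic example (my own illustration, not from the quoted text): with $X_1, X_2$ independent fair $\pm 1$ coin flips and $X_3 = X_1 X_2$, every pair of variables is independent, yet the three are not mutually independent, since $X_3$ is completely determined by the other two.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x1 = rng.choice([-1, 1], size=n)
x2 = rng.choice([-1, 1], size=n)
x3 = x1 * x2  # determined by (x1, x2), so not mutually independent

# Pairwise: each pair looks independent (joint prob ~ product of marginals).
for a, b, name in [(x1, x2, "x1,x2"), (x1, x3, "x1,x3"), (x2, x3, "x2,x3")]:
    p_joint = np.mean((a == 1) & (b == 1))
    p_prod = np.mean(a == 1) * np.mean(b == 1)
    print(name, round(p_joint, 3), round(p_prod, 3))  # both ~0.25

# Mutual independence fails: P(all three = 1) is ~0.25, not ~0.125.
print(np.mean((x1 == 1) & (x2 == 1) & (x3 == 1)),
      np.mean(x1 == 1) * np.mean(x2 == 1) * np.mean(x3 == 1))
```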
Finally, turning to the OP's complaint
I cannot understand the link between the normal distribution and being serially independent
I say that it is a red herring. Uncorrelated (marginally) normal random variables are not necessarily independent random variables, while uncorrelated jointly normal random variables are always independent random variables.
It is not true that if $\{Y_n\}$ is a sequence of normally distributed random variables that happen to be uncorrelated, then the random variables are independent.
A standard counterexample begins with $X \sim N(0,1)$ and an independent random variable $Z$ that takes on values $+1$ and $-1$ with equal probability. Then $Y = XZ$ is also a standard normal random variable since \begin{align}P\{Y \leq x\} &= P\{X \leq x\mid Z=+1\}P\{Z=+1\} + P\{X \geq -x\mid Z=-1\}P\{Z=-1\}\\ &= \frac 12 \Phi(x) + \frac 12 (1 - \Phi(-x))\\ &= \Phi(x). \end{align} Also, $\quad\operatorname{cov}(X,Y) = E[XY]-E[X]E[Y]= E[X^2Z]=E[X^2]E[Z]=0,$ showing that $X$ and $Y$ are uncorrelated random variables. But $X$ and $Y$, although they are uncorrelated normal random variables, are not independent random variables but instead very much dependent ones, since given that $X=x$, $Y$ takes on the values $\pm x$ with equal probability $\frac 12$.
Now, with $\{Z_n\}$ being a sequence of independent random variables with the same distribution as $Z$ (and all independent of $X$ also), set $Y_n = XZ_n$. It follows from our construction that $Y_n \sim N(0,1)$. But, $$E[Y_nY_{m}] = E[X^2Z_nZ_m] = E[X^2]E[Z_n]E[Z_m] = 0~\text{provided that} ~ n \neq m.$$ Thus, the $\{Y_n\}$ are uncorrelated and normal. But they are not independent because if we know that $Y_m = y$, then we know that $Y_n$ is necessarily either $y$ or $-y$.
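A quick simulation of this construction (a sketch of my own, assuming numpy) shows near-zero sample correlations alongside obvious dependence: every $|Y_n|$ equals $|X|$.

```python
import numpy as np

rng = np.random.default_rng(1)
n_series, n_reps = 5, 100_000

# One draw of X per replication, shared across the whole sequence.
x = rng.standard_normal(n_reps)
z = rng.choice([-1.0, 1.0], size=(n_series, n_reps))
y = x * z  # Y_n = X * Z_n; each row is marginally N(0, 1)

print(np.round(np.corrcoef(y), 2))        # off-diagonal entries near 0
print(np.allclose(np.abs(y), np.abs(x)))  # True: |Y_n| = |X| for every n
```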
Normal random variables need not be jointly normal random variables and it is only in the case of joint normality that one can assert that uncorrelated (jointly) normal random variables are independent.
Best Answer
Notwithstanding IrishStat's comments, you could use a Breusch-Godfrey test, which tests for the absence of serial correlation among the residuals of a regression model.
First, run your regression of interest and keep the residuals. Then run an auxiliary regression of those residuals on all the variables from the original regression plus some number of lagged residuals; the autocorrelation function can guide how many lags to include. Finally, test that the coefficients on the lagged residuals are jointly zero, using either an F test or a version of the Lagrange multiplier test (the test statistic is the number of observations in the auxiliary regression times the $R^2$ from that regression; under the null of no serial correlation it is distributed as $\chi^2_l$, where $l$ is the number of lags).
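In practice you rarely need to run the auxiliary regression by hand. Here is a minimal sketch using the built-in version in statsmodels; the simulated data and the choice of 4 lags are placeholders for your own model:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(7)
n = 300
x = rng.standard_normal(n)
y = 1.0 + 2.0 * x + rng.standard_normal(n)  # placeholder data

# Step 1: the regression of interest.
res = sm.OLS(y, sm.add_constant(x)).fit()

# Steps 2-3: auxiliary regression and joint test, done internally.
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(res, nlags=4)
print(lm_pval, f_pval)  # large p-values -> no evidence of serial correlation
```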