I really think this is a good question and deserves an answer. The link provided is written by a psychologist who is claiming that some home-brew method is a better way of doing time series analysis than Box-Jenkins. I hope that my attempt at an answer will encourage others, who are more knowledgeable about time series, to contribute.
From his introduction, it looks like Darlington is championing the approach of just fitting an AR model by least-squares. That is, if you want to fit the model
$$z_t = \alpha_1 z_{t-1} + \cdots + \alpha_k z_{t-k} + \varepsilon_t$$
to the time series $z_t$, you can just regress the series $z_t$ on its lag-$1$, lag-$2$, and so on up to lag-$k$ versions, using an ordinary multiple regression. This is certainly allowed; in R, it is even an option in the `ar` function (`method = "ols"`). I tested it out, and it tends to give answers similar to R's default (Yule-Walker) method for fitting an AR model.
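To make the idea concrete, here is a minimal sketch of that least-squares approach in Python with NumPy (simulated data and coefficients chosen for illustration; this is not Darlington's or R's actual code). It builds a design matrix of lagged values and solves for the AR coefficients with ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(2) series: z_t = 0.6 z_{t-1} - 0.3 z_{t-2} + eps_t
n = 500
z = np.zeros(n)
for t in range(2, n):
    z[t] = 0.6 * z[t - 1] - 0.3 * z[t - 2] + rng.standard_normal()

# "Regression" fit: regress z_t on its first k lags by ordinary least squares
k = 2
X = np.column_stack([z[k - lag : n - lag] for lag in range(1, k + 1)])
y = z[k:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef[0] estimates alpha_1 (about 0.6), coef[1] estimates alpha_2 (about -0.3)
```

With a series this long, the OLS estimates land close to the true coefficients, which is why the method agrees so well with `ar`'s default fit in practice.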
He also advocates regressing $z_t$ on things like $t$ or powers of $t$ to find trends. Again, this is absolutely fine. Lots of time series books discuss this, for example Shumway-Stoffer and Cowpertwait-Metcalfe. Typically, a time series analysis might proceed along the following lines: you find a trend, remove it, then fit a model to the residuals.
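That detrend-then-model workflow can be sketched in a few lines of NumPy (again a toy simulation, not any book's specific example): estimate the trend by regressing on $t$, subtract it, then fit an AR(1) to the residuals by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
t = np.arange(n, dtype=float)

# Simulated series: linear trend plus AR(1) noise
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.5 * noise[i - 1] + rng.standard_normal()
z = 0.05 * t + noise

# Step 1: estimate the trend by regressing z on t
# (add columns t**2, t**3, ... here if the trend looks curved)
T = np.column_stack([np.ones(n), t])
beta, *_ = np.linalg.lstsq(T, z, rcond=None)
detrended = z - T @ beta

# Step 2: fit an AR(1) model to the detrended residuals by least squares
phi, *_ = np.linalg.lstsq(detrended[:-1, None], detrended[1:], rcond=None)
# beta[1] estimates the trend slope (about 0.05); phi[0] the AR coefficient (about 0.5)
```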
But it seems like he is also advocating over-fitting and then using the reduction in the mean-squared error between the fitted series and the data as evidence that his method is better. For example:
> I feel correlograms are now obsolescent. Their primary purpose was to allow workers to guess which models will fit the data best, but the speed of modern computers (at least in regression if not in time-series model-fitting) allows a worker to simply fit several models and see exactly how each one fits as measured by mean squared error. [The issue of capitalization on chance is not relevant to this choice, since the two methods are equally susceptible to this problem.]
This is not a good idea because the test of a model is supposed to be how well it can forecast, not how well it fits the existing data. In his three examples, he uses "adjusted root mean-squared error" as his criterion for the quality of the fit. Of course, over-fitting a model is going to make an in-sample estimate of error smaller, so his claim that his models are "better" because they have smaller RMSE is wrong.
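The point is easy to demonstrate numerically: for a sequence of nested least-squares models, in-sample RMSE can only go down as terms are added, whether or not the extra terms help forecasting. A toy illustration in NumPy (simulated data and polynomial trend models of increasing degree, chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
t = np.linspace(0.0, 1.0, n)
# Truth is a simple linear trend plus noise
z = 2.0 + 0.5 * t + 0.3 * rng.standard_normal(n)

split = 80  # fit on the first 80 points, "forecast" the last 20
train_rmse, test_rmse = [], []
for degree in range(1, 9):
    coefs = np.polyfit(t[:split], z[:split], degree)
    fit_in = np.polyval(coefs, t[:split])
    fit_out = np.polyval(coefs, t[split:])
    train_rmse.append(float(np.sqrt(np.mean((z[:split] - fit_in) ** 2))))
    test_rmse.append(float(np.sqrt(np.mean((z[split:] - fit_out) ** 2))))

# train_rmse is non-increasing in the degree: adding terms always "improves"
# the in-sample fit, so in-sample RMSE cannot discriminate between models.
# test_rmse, by contrast, is free to get worse as the model overfits.
```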
In a nutshell, since he is using the wrong criterion for assessing how good a model is, he reaches the wrong conclusions about regression vs. ARIMA. I'd wager that, if he had tested the predictive ability of the models instead, ARIMA would have come out on top. Perhaps someone can try it if they have access to the books he mentions here.
[Supplemental: for more on the regression idea, you might want to check out older time series books written before ARIMA became the dominant approach. For example, Kendall, Time-Series, 1973, devotes an entire chapter (Chapter 11) to this method and to comparisons with ARIMA.]
I doubt there are strict, formal definitions that a wide range of data analysts agree on.
In general however, time series connotes a single study unit observed at regular intervals over a very long period of time. A prototypical example would be the annual GDP growth of a country over decades or even more than a hundred years. For an analyst working for a private company, it might be monthly sales revenues over the life of the company. Because there are so many observations, the data are analyzed in great detail, looking for things like seasonality over different periods (e.g., monthly: more sales at the beginning of a month just after people have been paid; yearly: more sales in November and December, when people are shopping for the Christmas season), and possibly regime shifts. Forecasting is often very important, as @StephanKolassa notes.
Longitudinal typically refers to fewer measurements over a larger number of study units. A prototypical example might be a drug trial, where there are hundreds of patients measured at baseline (before treatment), and monthly for the next 3 months. With just 4 observations of each unit in this example, it is not possible to try to detect the kinds of features time series analysts are interested in. On the other hand, with patients presumably randomized into treatment and control arms, causality can be inferred once the non-independence has been addressed. As that suggests, often the non-independence is considered almost a nuisance, rather than the primary feature of interest.
As a signal is by definition a time series, there is significant overlap between the two.
I would expect a book on time-series analysis to be either a mathematical treatment or a business/commercial treatment, while a book on statistical signal processing is likely to make heavy use of mathematics but to focus on signal analysis, classification, noise reduction, and other problems relevant to engineering and applied science.
Statistical signal processing uses the language and techniques of mathematical time-series analysis, but also introduces into the problem domain many concepts and techniques out of electrical engineering: signal to noise, dynamic range, and time/frequency domain transforms.
In my view, time-series analysis is a mathematical field, which then has applications wherever time series tend to crop up. Those fields then develop techniques that are specialised for those problem domains, with a specialised body of knowledge.
As time series arise in business and economics, there is an industry of material on time-series forecasting, trend analysis, etc. Much of this 'commercial' application is absent from the material on statistical signal processing, in part because the nature of the two kinds of time series is very different: signals are continuous in both time and the measured variable (e.g. voltage, intensity), whereas most business time series are taken over a discrete time domain (days, weeks, months, quarters, years).