I'd like to informally try to approach a few of these.
1) Are spectral decompositions useful for modeling/forecasting, or are they typically used only for analysis?
1A) When modeling, I use the spectrum to provide information about the seasonal components of my data. Simplistically, I might consider a model of the form:
$$
x_{t} = m_{t} + \sum_{i=1}^{S} s_{t}^{(i)} + Y_{t}
$$
where $m_{t}$ is a mean function, the $s_{t}^{(i)}$ are $S$ seasonal (sinusoidal) components, and $Y_{t}$ is a zero-mean random process.
I use the spectrum to estimate the amplitudes and phases of the seasonal components, and then fit an ARMA (or ARIMA) model to $Y_{t}$.
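The workflow in 1A can be sketched on synthetic data (the trend slope, frequency, amplitude, and AR(1) noise parameters below are invented for illustration). A periodogram locates the dominant seasonal frequency, and least squares at that frequency recovers its amplitude and phase:

```python
# Hedged sketch: estimate a seasonal sinusoid from the periodogram,
# leaving a residual that could then be handed to an ARMA fitter.
import numpy as np

rng = np.random.default_rng(0)
n = 500
t = np.arange(n)

# Synthetic series: linear trend + one sinusoid (period 50) + AR(1) noise
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(scale=0.5)
x = 0.01 * t + 2.0 * np.cos(2 * np.pi * t / 50 + 0.3) + noise

# Detrend, then locate the dominant frequency in the periodogram
detrended = x - np.polyval(np.polyfit(t, x, 1), t)
freqs = np.fft.rfftfreq(n)
pgram = np.abs(np.fft.rfft(detrended)) ** 2 / n
f_hat = freqs[np.argmax(pgram[1:]) + 1]   # skip f = 0

# Recover amplitude and phase by least squares at the estimated frequency:
# D*cos(2*pi*f*t + phi) = a*cos(2*pi*f*t) + b*sin(2*pi*f*t),
# with a = D*cos(phi), b = -D*sin(phi)
design = np.column_stack([np.cos(2 * np.pi * f_hat * t),
                          np.sin(2 * np.pi * f_hat * t)])
a, b = np.linalg.lstsq(design, detrended, rcond=None)[0]
amplitude = np.hypot(a, b)
phase = np.arctan2(-b, a)
residual = detrended - design @ np.array([a, b])

print(f_hat, amplitude, phase)
```

The `residual` series is what I would then model with an ARMA (e.g. via statsmodels), per the decomposition above.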
2) Are the forecasts from spectral decompositions always some repeated periodic series?
2A) As far as I'm aware, yes. The motivation for the theory makes the assumption that the process of interest is a discrete parameter stochastic process of the form:
$$
X_{t} = \sum_{l=1}^{L} D_{l}\cos(2\pi f_{l}t + \phi_{l})
$$
We then let $L \rightarrow \infty$ in a "nice" way; I believe one would also add a noise term.
This is on page 127 of Spectral Analysis for Physical Applications: Multitaper and Conventional Univariate Techniques by Percival and Walden.
The only non-sinusoidal part is at $f = 0$.
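As a concrete illustration of 2A, a forecast built purely from fitted sinusoids just repeats itself when extended; a minimal sketch with two made-up frequencies:

```python
# Hedged sketch: a pure sum-of-sinusoids forecast is periodic.
# The frequencies, amplitudes, and phases here are invented.
import numpy as np

t = np.arange(2000)
# Two sinusoids with periods 50 and 20
forecast = (1.5 * np.cos(2 * np.pi * t / 50 + 0.4)
            + 0.8 * np.cos(2 * np.pi * t / 20 - 1.1))

# The common period is lcm(50, 20) = 100 samples, so the forecast
# over any 100-sample window repeats the previous one exactly
period = 100
print(np.allclose(forecast[:period], forecast[period:2 * period]))  # prints True
```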
3) Would using a seasonal ARIMA likely outperform (in terms of forecasting) a spectral decomposition, even with an ARIMA model on the residuals of the spectral model? (Assuming data with strong seasonal/periodic trends.)
3A) My intuition is that the ARIMA would not outperform the spectral decomposition, although I have no concrete proof. The reasoning is that a spectral decomposition should give a much better estimate of the frequencies of interest. I'd like to reiterate: I'm not sure, though.
I'm not too sure about 4); again, my intuition is that you would need to recalculate the spectrum using the new data, rather than being able to update the existing spectrum.
Best Answer
One main use is forecasting. I have been feeding my family for over a decade now by forecasting how many units of a specific product a supermarket will sell tomorrow, so the store can order enough stock, but not too much. There is money in this.
Other forecasting use cases are given in publications like the International Journal of Forecasting or Foresight. (Full disclosure: I'm an Associate Editor of Foresight.)
Yes, sometimes the prediction intervals are huge. (I assume you mean PIs, not confidence intervals; there is a difference.) This simply means that the process is hard to forecast, and you need to mitigate. In forecasting supermarket sales, that means carrying a lot of safety stock. In forecasting sea-level rise, it means building higher levees. I would say that a large prediction interval does provide useful information.
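As a rough illustration of the mitigation point (all numbers here are invented): under a normal forecast distribution, a wider prediction interval translates directly into more safety stock for a given service level:

```python
# Hedged sketch: turning forecast uncertainty into a safety-stock decision.
# Mean demand, forecast sd, and service level are made-up numbers.
from statistics import NormalDist

mean_demand = 100.0   # point forecast (units/day)
sd_demand = 30.0      # forecast standard deviation -- a wide interval
service_level = 0.95  # target probability of not stocking out

z = NormalDist().inv_cdf(service_level)   # ~1.645 for 95%
safety_stock = z * sd_demand
order_up_to = mean_demand + safety_stock

print(round(order_up_to, 1))
```

Doubling `sd_demand` doubles the safety stock, which is exactly the sense in which a wide interval is actionable information.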
And for all forecasting use cases, time-series analysis is useful, though forecasting is a larger topic. You can often improve forecasts by accounting for the dependencies in your time series, and understanding those dependencies through analysis goes beyond merely knowing they are there.
Plus, people are interested in time series even if they do not forecast. Econometricians like to detect change points in macroeconomic time series. Or assess the impact of an intervention, such as a change in tax laws, on GDP or something else. You may want to skim through your favorite econometrics journal for more inspiration.