Solved – How to calculate the lag of a prediction of a time series

autocorrelation, correlation, lags, neural networks, time series

I am trying to learn a time series (Mackey-Glass) using a neural net.
In order to see if there has been success in the learning process, I am looking at the correlations between the predicted and real values.

The following table shows these correlations:

[Table: correlations between the predicted values and the real values at various lags]

The correlation between the predicted value and the actual value (`real_lag_0`) is 0.986. Comparing the predicted value with the real value shifted back by one step (`real_lag_1`) yields 0.993.

So the highest correlation occurs at lag 1, not at lag 0 (compare this source).
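Lagged correlations like the ones in the table can be computed with a short sketch (the function name and the arrays `real` and `pred` are illustrative, not from the original post; `pred` and `real` are assumed to be aligned 1-D NumPy arrays):

```python
import numpy as np

def lag_correlations(real, pred, max_lag=5):
    """Correlate predictions with the real series shifted back by 0..max_lag steps.

    Lag k compares pred[t] with real[t - k]; a peak at k > 0 suggests the
    model is echoing past values rather than forecasting future ones.
    """
    corrs = {}
    for k in range(max_lag + 1):
        if k == 0:
            r, p = real, pred
        else:
            r, p = real[:-k], pred[k:]  # align pred[t] with real[t - k]
        corrs[k] = np.corrcoef(r, p)[0, 1]
    return corrs

# Demo with a "predictor" that simply repeats the previous real value:
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(500))  # random-walk stand-in for the series
pred = np.roll(x, 1)[1:]                 # pred[t] = real value one step earlier
real = x[1:]
print(lag_correlations(real, pred))      # correlation peaks at lag 1
```

If the resulting dictionary peaks at a lag greater than 0, the model is behaving like the one described in the question.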

Does a peak at lag 1 mean the neural net is worthless?

How could I improve the model to obtain the highest correlation for actual values?

Best Answer

It looks like your forecasts are better at reproducing lagged values than at predicting the actual values you are trying to forecast.

If these are one-step-ahead forecasts, then the value with lag 1 is the last historical observation. In this case, your forecast is a better estimate of the last observation than of the future value. If you think about it, this is not very surprising. After all, the last observation should carry a lot of information about your future time series, and in contrast to the actual future, it is known.
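This effect can be reproduced with a naive persistence forecast, which predicts each value as the last observation. The sketch below uses a strongly autocorrelated AR(1) series as a hypothetical stand-in for Mackey-Glass (all names and parameters here are illustrative assumptions):

```python
import numpy as np

# Hypothetical stand-in series: AR(1), strongly autocorrelated.
rng = np.random.default_rng(42)
n = 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.95 * x[t - 1] + rng.standard_normal()

# Persistence forecast: predict the next value as the current value.
pred = x[:-1]   # forecasts for times 1..n-1
real = x[1:]    # actual values at times 1..n-1

corr_lag0 = np.corrcoef(pred, real)[0, 1]          # forecast vs. actual target
corr_lag1 = np.corrcoef(pred[1:], x[1:-1])[0, 1]   # forecast vs. last observation

print(corr_lag0, corr_lag1)
```

Here `corr_lag1` is exactly 1, because the persistence forecast *is* the last observation, while `corr_lag0` is lower. A neural net whose correlations peak at lag 1 is behaving much like this baseline, so comparing its accuracy against persistence is a useful sanity check.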

I suspect this is your key question:

How could I improve the model to obtain the highest correlation for actual values?

Unfortunately, we can't tell you that. You will need to dig into your data and understand whether there are any unmodeled drivers you could include. Or conversely, perhaps you have modeled weak drivers, which may make your forecast worse through the bias-variance tradeoff.

This earlier thread may be helpful: How to know that your machine learning problem is hopeless?
