Time Series – Estimating When Random Spikes Cross a Threshold for the First Time

anomaly-detection, garch, monte-carlo, time-series, wavelet

tl;dr Is there a way to estimate when a random spike in a time series would cross a threshold for the first time?

The following is data of my performance in the game Super Hexagon, whose goal is to move a small piece without touching any moving walls for 60 seconds. If you touch a wall, you start over at 0 seconds.

[Line plot of attempt durations over successive attempts]

It is difficult to see in the line plot above, but the data generally shows that my performance floor does not really change while my ceiling gradually gets higher; these high-performance attempts are a small minority, however. Here is a histogram of my attempts.

[Histogram of attempt durations]

Let's say during my play, at about attempt 800, I wanted to know how much longer I need to play in order to beat the game. How could I estimate when one of my performance spikes would go over a certain threshold, in this case 60 seconds?

I imagine this is some kind of ARCH model, but I'm having trouble figuring out which one. I've also been told by a professor that a wavelet might be helpful for this problem, but no one has explained to me specifically how to use wavelets for something like this.

My intuition would say to filter out the spikes as noise and then model the resulting series. Then at each period, make a random draw from the distribution of the noise that I filtered away to simulate that noise. Then, use Monte Carlo simulation to see where the density of passing that threshold is high and report a range subjectively from that Monte Carlo simulation.

I am using Python for this, so if anyone has any suggestions on the practical side of doing this in Python it would be greatly appreciated.
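A rough sketch of my intuition in Python (the file name `attempts.txt`, the rolling-median window, and the simulation settings are all placeholders I made up) might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
durations = np.loadtxt("attempts.txt")   # hypothetical file: one attempt length (seconds) per line

# Separate a slow-moving "skill" level from the spiky noise with a trailing rolling median.
window = 51
skill = np.array([np.median(durations[max(0, i - window + 1):i + 1])
                  for i in range(len(durations))])
noise = durations - skill                # the spikes I would otherwise filter away

# Monte Carlo: hold the skill level flat and re-draw the noise from its empirical distribution.
n_sims, horizon, threshold = 2000, 5000, 60.0
first_cross = []
for _ in range(n_sims):
    future = skill[-1] + rng.choice(noise, size=horizon, replace=True)
    hits = np.nonzero(future >= threshold)[0]
    first_cross.append(hits[0] + 1 if hits.size else np.inf)

first_cross = np.array(first_cross)
print("P(beating the game within 1000 more attempts):", np.mean(first_cross <= 1000))
```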

Update 1

I have posted my data here so that anyone can take a crack at this problem.

Best Answer

Your "toy" problem (opportunity) arises naturally in real life when companies need to make available sufficient capacity to deal with possible extraordinary demand. I have been involved with a number of communications/power companies in this regard ...thus the historical and ever-evolution of AUTOBOX to meet critical planning/forecasting requirements including incorporating the uncertainty in user-specified predictor series that need to be forecasted and used in a SARMAX model https://autobox.com/pdfs/SARMAX.pdf

At the heart of the issue is a forecasting problem. Your approach implicitly assumes 1100 independent values with a constant mean plus some (many) one-time pulses. In general these 1100 observations may be serially related, so the correct forecasting model may be something other than white noise once the spikes/pulses have been removed.

You say " Let's say during my play, at about attempt 1100 , I wanted to know how much longer I need to play in order to beat the game. How could I estimate when one of my performance spikes would go over a certain threshold, in this case 60 seconds?"

I say "This is unanswerable because you have not specified a level of confidence BUT what is answerable is "what is the probablity of exceeding a specific threshhold value" for any future period (trial #) . To do so one needs to predict the future probablity density function for each period in the future and examine it to determine the probability of exceeding the threshold value." Essentially you select the level of confidence and you obtain the forecast period value and then you compare it your aforementioned critical value ( say 60 ) and determine if the threshold value has been crossed at that level of confidence.

You say "My intuition would say to filter out the spikes as noise and then model the resulting series."

I say "you need to filter out the spikes and then model the resulting/adjusted series to obtain a prediction based upon evidented recursive relationships (signal) yielding an adequate noise series" . Thus a distribution of possible values ( allowing for spikes) can be made for each forecasted period in the future

You say " Then at each period, make a random draw from the distribution of the noise that I filtered away to simulate that noise. Then, use Monte Carlo simulation to see where the density of passing that threshold is high and report a range subjectively from that Monte Carlo simulation."

I say "Then at each period, make a random draw from the probability density function predicted for each future period that was based on the deterministically adjusted series Then, review these Monte Carlo simulations to see where the density of passing that threshold is and report that probability .

Your approach used all 1100 observations as the basis for the simulation, assuming that the 1100 values had one and only one mean. I say that after adjusting for the spikes, observations 1-389 had one mean and observations 390-1100 had a significantly different mean, so only the post-shift observations (390-1100) should be used. The two means differed by 1.8868 (see the coefficient for the level/step shift below).
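One way to check for such a level/step shift yourself in Python is an ordinary least-squares regression on a 0/1 step dummy (the breakpoint at attempt 390 and the variable name `adjusted_durations` are taken from the discussion above, not from any AUTOBOX output):

```python
import numpy as np
import statsmodels.api as sm

def step_shift_fit(y, break_at):
    """Regress the pulse-adjusted series on a constant plus a 0/1 step dummy that switches
    on at index `break_at`; the dummy's coefficient estimates the level shift."""
    step = (np.arange(len(y)) >= break_at).astype(float)
    X = sm.add_constant(step)
    return sm.OLS(y, X).fit()

# e.g. res = step_shift_fit(adjusted_durations, break_at=389)   # 0-based index of attempt 390
# print(res.params, res.pvalues)
```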

With that said, I now report the results of using AUTOBOX to analyze your 1100 observations.

Your 1100 observations yielded an ARIMA model (a slight adjustment for memory) along with a level shift and a number of spikes. Here are the actuals, fit, and forecast for the next 50 periods (trials), showing 95% prediction limits for the forecasting horizon 1101-1150.

The identified model is shown here: [Model summary image] and here: [Model coefficient details image]. The residual plot, showing the effect of memory, a constant, a level shift, and numerous spikes/pulses, is here: [Residual plot], suggesting an adequate extraction of noise.

[Plot of actuals, fit, and forecast with 95% prediction limits for periods 1101-1150]
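This is not something I can reproduce exactly outside AUTOBOX, but a rough statsmodels analogue of an ARIMA model with a level-shift regressor (the AR order, the breakpoint at attempt 390, and the file name are placeholders taken from the discussion above) would look like:

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

y = np.loadtxt("attempts.txt")                        # hypothetical data file
n, h = len(y), 50

# Deterministic regressor: a level shift switching on at attempt 390 (0-based index 389).
step = (np.arange(n + h) >= 389).astype(float)

model = SARIMAX(y, exog=step[:n].reshape(-1, 1), order=(3, 0, 0), trend="c")
res = model.fit(disp=False)

fc = res.get_forecast(steps=h, exog=step[n:].reshape(-1, 1))
print(fc.summary_frame(alpha=0.05).head())            # mean, se, and 95% prediction limits
```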

The forecasting equation is then used to obtain 1000 simulations for each future period, explicitly allowing for spikes/pulses to be present while incorporating the uncertainty that grows as we go further into the future (not really important for your data, as you have no trends, heavy autoregressive memory, or seasonal pulses). Here is the histogram of the 1000 Monte Carlo simulations for period 1101:

[Histogram of simulated values for period 1101]
and for period 1102: [Histogram of simulated values for period 1102]
and for period 1150: [Histogram of simulated values for period 1150]
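A crude way to reproduce that kind of per-period Monte Carlo exercise in Python, re-using `res`, `step`, and `n` from the sketch above and bootstrapping the fitted model's own residuals so that future spikes remain possible, might be:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims, h, threshold = 1000, 50, 60.0

# Point forecasts for the next h trials, plus the in-sample residuals (which contain the spikes).
mean_fc = np.asarray(res.get_forecast(steps=h, exog=step[n:].reshape(-1, 1)).predicted_mean)
resid = np.asarray(res.resid)

# Each simulated path = point forecast + bootstrapped residuals (so spikes can recur).
sims = mean_fc + rng.choice(resid, size=(n_sims, h), replace=True)

p_exceed = (sims >= threshold).mean(axis=0)   # P(duration > 60) at each future trial
p_by_then = 1 - np.cumprod(1 - p_exceed)      # P(at least one success by that trial),
                                              # treating trials as independent (a simplification)
print(p_exceed[:3], p_by_then[-1])
```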

I would rate your intuition as "very high," and your professor should be pleased by your findings. You didn't consider the possible time-series forecasting complications, the possibility of spikes in the future, or the need to incorporate uncertainties in possible user-specified predictor series. There were few time-series complications here, as the lag-3 effect (.0994) is possibly/probably spurious and certainly small. Additionally, you ignored the shift in the mean as you got better with experience after about 390 tries. That would have been a bias in your approach, as you adjusted only for the one-time anomalies (spikes) and ignored the statistically significant sequential "spikes" (read: level/step shift) starting at period 391. N.B. The level/step shift is now "visually obvious" after being pointed out by analytics with "sharper eyes".

Finally, a picture of the 1000 simulations for forecast period 1150: [Plot of the 1000 simulated values for period 1150]