Change point detection

Tags: algorithms, decision-problems, log-likelihood

I am studying this book because I want to learn change detection algorithms. The first approach, called "Limit Checking Detectors and Shewhart Control Charts", is on page 26, and the derivation of the algorithm is on page 28. Did I understand the approach correctly?

  1. Take two consecutive samples of size N.
  2. Calculate the decision function as the log-likelihood ratio of the two distributions.
  3. Compare the result of the decision function against a threshold.
  4. Decide in favor of one of the two hypotheses.
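The four steps above can be sketched in Python. This is a minimal illustration under stated assumptions, not the book's exact algorithm: it assumes the standard textbook case of a Gaussian signal with known variance, a change in mean from a known mu0 to a known mu1 (rather than means estimated from the two windows), and non-overlapping blocks of size N. The function names `decision_function` and `shewhart_detect` are my own.

```python
import numpy as np

def decision_function(sample, mu0, mu1, sigma):
    # Log-likelihood ratio of one block for a Gaussian mean change
    # with known variance:
    #   s_i = ln p1(x_i)/p0(x_i)
    #       = (mu1 - mu0)/sigma^2 * (x_i - (mu0 + mu1)/2)
    s = (mu1 - mu0) / sigma**2 * (sample - (mu0 + mu1) / 2)
    return s.sum()

def shewhart_detect(x, N, mu0, mu1, sigma, h):
    """Process x in non-overlapping blocks of size N and raise an
    alarm (decide for the post-change hypothesis) whenever the
    decision function meets or exceeds the threshold h."""
    alarms = []
    for start in range(0, len(x) - N + 1, N):
        S = decision_function(x[start:start + N], mu0, mu1, sigma)
        if S >= h:
            alarms.append(start + N)  # alarm at the end of the block
    return alarms
```

For a clear mean shift, blocks drawn before the change give a strongly negative score and blocks after it a strongly positive one, so even a rough threshold separates them; the difficulty the question raises is choosing h for borderline cases.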

I want to know from experts how this threshold is found; there is not much information about this topic on the Internet.
The threshold appears as "h" in (2.1.11).
How does (2.1.13) relate to this algorithm? Is it the threshold value?
I already tried this algorithm in Python, using the right-hand side of inequality (2.1.13) as the threshold, but I didn't get good results.

Best Answer

You can try to determine a suitable threshold empirically. Take a test case and note where you judge that a change should be detected. Then instrument your code to print the value that is compared against the threshold at the moment the first change occurs.

Now you can assign that value to the threshold and continue processing until the next change, and so on. It can be useful to reset the threshold each time, because it can influence the evolution of the decision variable. (I don't think you should also adjust the threshold after a false detection.) Finally, observe how the threshold varies.
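A related empirical recipe, sketched below under the same assumptions as before (Gaussian mean change with known parameters, block-wise scores): run the decision function over a stretch of data you believe is change-free, record the block scores, and set h just above the largest score observed. The name `calibrate_threshold` and the `margin` parameter are hypothetical choices of mine, not from the book.

```python
import numpy as np

def calibrate_threshold(reference, N, mu0, mu1, sigma, margin=1.0):
    """Pick h from change-free reference data: evaluate the
    log-likelihood-ratio score on blocks known to contain no change
    and return a threshold just above the largest score seen.
    `margin` is an arbitrary safety gap against false alarms."""
    scores = []
    for start in range(0, len(reference) - N + 1, N):
        block = reference[start:start + N]
        s = (mu1 - mu0) / sigma**2 * (block - (mu0 + mu1) / 2)
        scores.append(s.sum())
    return max(scores) + margin
```

A more principled variant would set h from a high quantile of the score distribution under the no-change hypothesis (which is what bounds like (2.1.13) aim at analytically), but the max-plus-margin rule is the direct translation of "note the value at the first change and tune from there".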
