Way to remember the definitions of Type I and Type II Errors

Tags: terminology, type-i-and-ii-errors

I'm not a statistician by education; I'm a software engineer. Yet statistics comes up a lot. In fact, questions specifically about Type I and Type II errors keep appearing as I study for the Certified Software Development Associate exam (mathematics and statistics make up 10% of the exam). I have trouble consistently recalling the right definitions. Although I'm memorizing them now (and can remember them most of the time), I really don't want to freeze up on the exam trying to remember which is which.

I know that a Type I error is a false positive: rejecting the null hypothesis when it's actually true. A Type II error is a false negative: failing to reject the null hypothesis when it's actually false.
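
For a software engineer, a quick simulation may make those definitions concrete. Here is a minimal sketch in Python, assuming a one-sample t-test; the sample size, effect size, and 0.05 significance level are arbitrary choices for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05           # significance level (illustrative choice)
n, trials = 30, 10_000

# Case 1: the null hypothesis (true mean = 0) is actually TRUE.
# Rejecting it here is a Type I error (false positive).
type_i = sum(
    stats.ttest_1samp(rng.normal(0.0, 1.0, n), 0.0).pvalue < alpha
    for _ in range(trials)
)

# Case 2: the null hypothesis is actually FALSE (true mean = 0.5).
# Failing to reject it here is a Type II error (false negative).
type_ii = sum(
    stats.ttest_1samp(rng.normal(0.5, 1.0, n), 0.0).pvalue >= alpha
    for _ in range(trials)
)

print(f"Type I  error rate: {type_i / trials:.3f}  (near alpha = {alpha})")
print(f"Type II error rate: {type_ii / trials:.3f}  (depends on effect size and n)")
```

Running it shows the Type I rate hovering near the chosen alpha, while the Type II rate falls as the sample size or effect size grows.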

Is there an easy way to remember the difference, such as a mnemonic? How do professional statisticians keep it straight? Is it just something they know from using or discussing it often?

(Side Note: This question can probably use some better tags. One that I wanted to create was "terminology", but I don't have enough reputation to do it. If someone could add that, it would be great. Thanks.)

Best Answer

Since a Type II error means a "false negative", or sort of a "false false", I remember them by the number of falses.

  • Type I: "I falsely think the alternate hypothesis is true" (one false)
  • Type II: "I falsely think the alternate hypothesis is false" (two falses)