Confidence Interval – How to Understand the Calculation Method

ab-test, confidence interval, standard error

I am currently reading Math Behind A/B Testing, written by Amazon, and I got stuck. At some point they say:

To determine the 95% confidence interval on each side of conversion
rate, we multiply the standard error with the 95th percentile (one
tailed) of a standard normal distribution (a constant value equal to
1.65).

Then they use that constant to calculate the confidence interval:

range = conversion rate ± (1.65 × Standard Error)
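
For concreteness, here is a small sketch of how I read that formula; the conversion rate and visitor count below are made-up values, and I am assuming the standard error of a proportion, $\sqrt{p(1-p)/n}$:

```python
# Sketch of the quoted formula with made-up numbers.
# Assumes SE = sqrt(p * (1 - p) / n), the standard error of a proportion.
import math

p = 0.10   # hypothetical conversion rate (10%)
n = 1000   # hypothetical number of visitors

se = math.sqrt(p * (1 - p) / n)
z = 1.65   # the constant quoted in the article

lower = p - z * se
upper = p + z * se
print(f"SE = {se:.4f}, interval = ({lower:.4f}, {upper:.4f})")
```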

I read somewhere that this constant should be taken from the following table:

http://www.sjsu.edu/faculty/gerstman/StatPrimer/t-table.pdf

The problem is that I can't see 1.65 anywhere for 95%; the closest value is 1.960, hence my confusion.

Could someone explain to me where the 1.65 comes from?

Best Answer

I think it's a mistake. For a two-sided confidence interval the two-sided critical value is appropriate, so for a 95% interval your value of 1.96 is correct. The one-sided value (1.65, or more precisely 1.645) would be appropriate only if you wanted a confidence region running from $-\infty$ to $c$ or from $c$ to $\infty$.
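
To make that concrete, here is a minimal sketch using scipy; the conversion rate and standard error at the end are made-up numbers purely for illustration. It shows that the one-tailed 95th percentile of the standard normal is about 1.645, while the critical value for a symmetric 95% interval is 1.96:

```python
# Where 1.65 and 1.96 come from: quantiles of the standard normal distribution.
from scipy.stats import norm

one_sided = norm.ppf(0.95)    # 95th percentile (one-tailed)   -> ~1.6449
two_sided = norm.ppf(0.975)   # 97.5th percentile (two-tailed) -> ~1.9600

print(f"one-sided 95% critical value: {one_sided:.4f}")
print(f"two-sided 95% critical value: {two_sided:.4f}")

# For a symmetric 95% interval around a conversion rate, the two-sided value
# applies: rate +/- 1.96 * SE. The rate and SE below are made up.
rate, se = 0.10, 0.0095
print(f"95% CI: ({rate - two_sided * se:.4f}, {rate + two_sided * se:.4f})")
```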
