Literature: Most of the answers you need are certainly in the book by Lehmann and Romano. The book by Ingster and Suslina treats more advanced topics and might give you additional answers.
Answer: Things are actually quite simple: $L_1$ (or $TV$) is the "true" distance to be used. However, it is not convenient for formal computation (especially with product measures, i.e. when you have an iid sample of size $n$), so other distances, which are upper bounds of $L_1$, can be used instead.
Let me give you the details.
Development: Let us denote by
- $g_1(\alpha_0,P_1,P_0)$ the minimal type II error achievable with type I error $\leq\alpha_0$, with $P_0$ the null and $P_1$ the alternative.
- $g_2(t,P_1,P_0)$ the minimal possible value of $t\times$(type I error) $+\,(1-t)\times$(type II error), with $P_0$ the null and $P_1$ the alternative.
These are the minimal errors you need to analyze. Exact equalities (not just lower bounds) are given by Theorem 1 below, in terms of the $L_1$ distance (or the TV distance, if you wish). Inequalities between the $L_1$ distance and other distances are given by Theorem 2 (note that to lower bound the errors you need upper bounds on $L_1$ or $TV$).
Which bound to use is then a matter of convenience, because $L_1$ is often more difficult to compute than Hellinger, Kullback, or $\chi^2$. The main example of such a difference appears when $P_1$ and $P_0$ are product measures $P_i=p_i^{\otimes n}$, $i=0,1$, which arise when you want to test $p_1$ versus $p_0$ with an iid sample of size $n$. In this case $h(P_1,P_0)$ is obtained easily from $h(p_1,p_0)$ (the same holds for $KL$ and $\chi^2$), but you can't do that with $L_1$ ...
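To see the tensorization concretely, here is a minimal numerical sketch for two Bernoulli-type distributions (the distributions and the normalization $h^2(P_1,P_0)=\int(\sqrt{dP_1}-\sqrt{dP_0})^2$ are my choices for illustration): the Hellinger affinity $\rho(p_1,p_0)=\int\sqrt{dp_1\,dp_0}$ of the product measure is just $\rho(p_1,p_0)^n$, which a brute-force enumeration of the product space confirms.

```python
import numpy as np
from itertools import product

p0 = np.array([0.5, 0.5])   # null, one observation
p1 = np.array([0.2, 0.8])   # alternative, one observation
n = 5                       # sample size

# Hellinger affinity rho(p1, p0) = sum_x sqrt(p1(x) * p0(x))
rho = np.sum(np.sqrt(p0 * p1))

# For product measures the affinity tensorizes: rho(P1, P0) = rho(p1, p0)^n,
# hence h^2(P1, P0) = 2 * (1 - rho^n) with h^2 = int (sqrt(dP1) - sqrt(dP0))^2.
h2_product = 2 * (1 - rho ** n)

# Brute-force check: enumerate all 2^n outcomes of the product space.
P0 = np.array([np.prod(p0[list(w)]) for w in product(range(2), repeat=n)])
P1 = np.array([np.prod(p1[list(w)]) for w in product(range(2), repeat=n)])
h2_brute = np.sum((np.sqrt(P1) - np.sqrt(P0)) ** 2)

print(h2_product, h2_brute)  # identical

# By contrast, L1(P1, P0) admits no such closed form in terms of L1(p1, p0);
# it has to be computed on the full product space.
l1_product = np.sum(np.abs(P1 - P0))
```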
Definition: The affinity $A_1(\nu_1,\nu_0)$ between two measures $\nu_1$ and $\nu_0$ is defined as $$A_1(\nu_1,\nu_0)=\int \min(d\nu_1,d\nu_0).$$
Theorem 1 If $|\nu_1-\nu_0|_1=\int|d\nu_1-d\nu_0|$ (twice the total variation distance $\sup_A|\nu_1(A)-\nu_0(A)|$), then
- $2A_1(\nu_1,\nu_0)=\int (d\nu_1+d\nu_0)-|\nu_1-\nu_0|_1$.
- $g_1(\alpha_0,P_1,P_0)=\sup_{t\in [0,1/\alpha_0]} \left ( A_1(P_1,tP_0)-t\alpha_0 \right )$
- $g_2(t,P_1,P_0)=A_1(t P_0,(1-t)P_1)$
I wrote the proof here.
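For discrete distributions, the $g_2$ identity of Theorem 1 is easy to verify directly: minimizing the weighted error over all rejection regions reproduces the affinity formula (the optimal test being the likelihood-ratio test). A small sketch with illustrative distributions of my choosing:

```python
import numpy as np

p0 = np.array([0.5, 0.3, 0.2])   # null P_0
p1 = np.array([0.1, 0.3, 0.6])   # alternative P_1
t = 0.5                          # weight on the type I error

# Theorem 1: g_2(t) = A_1(t P_0, (1-t) P_1) = sum_x min(t p0(x), (1-t) p1(x))
g2_affinity = np.sum(np.minimum(t * p0, (1 - t) * p1))

# Brute force over all deterministic tests (rejection regions R).
best = 1.0
for mask in range(2 ** len(p0)):
    R = [i for i in range(len(p0)) if (mask >> i) & 1]
    type1 = p0[R].sum()          # P_0(reject): type I error
    type2 = 1.0 - p1[R].sum()    # P_1(accept): type II error
    best = min(best, t * type1 + (1 - t) * type2)

print(g2_affinity, best)  # both 0.3: the affinity formula is attained
```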
Theorem 2 For $P_1$ and $P_0$ probability distributions:
$$\frac{1}{2}|P_1-P_0|_1\leq h(P_1,P_0)\leq \sqrt{K(P_1,P_0)} \leq \sqrt{\chi^2(P_1,P_0)}$$
These bounds are due to several well-known statisticians (Le Cam, Pinsker, ...). $h$ is the Hellinger distance, $K$ the Kullback-Leibler divergence, and $\chi^2$ the chi-square divergence. They are all defined here, where the proofs of these bounds are also given (further material can be found in the book by Tsybakov). There is also something that is almost a lower bound of $L_1$ by Hellinger ...
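The chain of Theorem 2 can be checked numerically for a pair of discrete distributions (my illustrative choice below), using the normalization $h^2(P_1,P_0)=\int(\sqrt{dP_1}-\sqrt{dP_0})^2$, under which the chain holds as stated:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])   # P_1
q = np.array([0.2, 0.3, 0.5])   # P_0

l1   = np.sum(np.abs(p - q))                            # |P_1 - P_0|_1
hell = np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))  # Hellinger distance h
kl   = np.sum(p * np.log(p / q))                        # K(P_1, P_0)
chi2 = np.sum((p - q) ** 2 / q)                         # chi^2(P_1, P_0)

# the chain (1/2)|P_1 - P_0|_1 <= h <= sqrt(K) <= sqrt(chi^2)
print(0.5 * l1, hell, np.sqrt(kl), np.sqrt(chi2))
assert 0.5 * l1 <= hell <= np.sqrt(kl) <= np.sqrt(chi2)
```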
There is a purely statistical approach to Kullback-Leibler divergence: take a sample $X_1,\ldots,X_n$ iid from an unknown distribution $p^\star$ and consider the potential fit by a family of distributions $$\mathfrak{F}=\{p_\theta\,,\ \theta\in\Theta\}.$$ The corresponding likelihood is defined as
$$L(\theta|x_1,\ldots,x_n)=\prod_{i=1}^n p_\theta(x_i)$$
and its logarithm is
$$\ell(\theta|x_1,\ldots,x_n)=\sum_{i=1}^n \log p_\theta(x_i)$$
Therefore, as $n\to\infty$, the law of large numbers gives $$\frac{1}{n}\,\ell(\theta|x_1,\ldots,x_n) \longrightarrow \mathbb{E}[\log p_\theta(X)]=\int \log p_\theta(x)\,p^\star(x)\,\text{d}x$$
which is the interesting part of the Kullback-Leibler divergence between $p_\theta$ and $p^\star$: $$\mathfrak{H}(p_\theta|p^\star)\stackrel{\text{def}}{=}\int \log \{p^\star(x)/p_\theta(x)\}\,p^\star(x)\,\text{d}x.$$ The other part, $$\int \log \{p^\star(x)\}\,p^\star(x)\,\text{d}x,$$ is there so that the minimum [in $\theta$] of $\mathfrak{H}(p_\theta|p^\star)$ equals zero.
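A quick simulation illustrates this convergence. The Gaussian location family $p_\theta=\mathcal{N}(\theta,1)$ with $p^\star=\mathcal{N}(0,1)$ is my choice for illustration: maximizing the normalized log-likelihood over a grid of $\theta$ values is essentially minimizing $\mathfrak{H}(p_\theta|p^\star)=\theta^2/2$, whose minimizer is $\theta=0$.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
x = rng.normal(0.0, 1.0, n)      # iid sample from p* = N(0, 1)

def avg_loglik(theta):
    # (1/n) * ell(theta | x_1, ..., x_n) for p_theta = N(theta, 1)
    return np.mean(-0.5 * np.log(2 * np.pi) - 0.5 * (x - theta) ** 2)

thetas = np.linspace(-1.0, 1.0, 201)
theta_hat = thetas[np.argmax([avg_loglik(th) for th in thetas])]

# Maximizing the average log-likelihood minimizes H(p_theta | p*) = theta^2 / 2,
# so theta_hat should be close to the true minimizer theta = 0.
print(theta_hat)
```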
A book that connects divergence, information theory and statistical inference is Rissanen's Optimal estimation of parameters, which I reviewed here.
Best Answer
A (metric) distance $D$ must be symmetric, i.e. $D(P,Q) = D(Q,P)$. But, by its very definition, $KL$ is not.
Example: $\Omega = \{A,B\}$, $P(A) = 0.2, P(B) = 0.8$, $Q(A) = Q(B) = 0.5$.
We have:
$$KL(P,Q) = P(A)\log \frac{P(A)}{Q(A)} + P(B) \log \frac{P(B)}{Q(B)} \approx 0.19$$
and
$$KL(Q,P) = Q(A)\log \frac{Q(A)}{P(A)} + Q(B) \log \frac{Q(B)}{P(B)} \approx 0.22$$
thus $KL(P,Q) \neq KL(Q,P)$ and therefore $KL$ is not a (metric) distance.
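The computation above is easy to reproduce numerically (a minimal sketch, using the natural logarithm as in the example):

```python
import numpy as np

P = np.array([0.2, 0.8])
Q = np.array([0.5, 0.5])

def kl(a, b):
    # KL(a, b) = sum_x a(x) * log(a(x) / b(x)), natural logarithm
    return np.sum(a * np.log(a / b))

print(round(kl(P, Q), 2))  # 0.19
print(round(kl(Q, P), 2))  # 0.22  -> not symmetric, hence not a metric
```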