Assuming that $X>0$, a lognormal distribution for such a random variable is considered right-tailed because it skews towards the right.
Out of curiosity, is there a well-known PDF that is left-tailed, i.e., skews to the left for $X>0$?
Thanks.
distributions, probability
Let's construct all possible examples of random variables $X$ for which $E[X]E[1/X]=1$. Then, among them, we may follow some heuristics to obtain the simplest possible example. These heuristics consist of giving the simplest possible values to all expressions that drop out of a preliminary analysis. This turns out to be the textbook example.
This requires only a little bit of analysis based on definitions. The solution is of only secondary interest: the main objective is to develop insights to help us understand the results intuitively.
First observe that Jensen's Inequality (or the Cauchy-Schwarz Inequality) implies that for a positive random variable $X$, $E[X]E[1/X] \ge 1$, with equality holding if and only if $X$ is "degenerate": that is, $X$ is almost surely constant. When $X$ is a negative random variable, $-X$ is positive and the same conclusion holds, because $E[X]E[1/X] = E[-X]E[1/(-X)]$. Consequently, any nondegenerate example where $E[1/X]=1/E[X]$ must have positive probability of being negative and positive probability of being positive.
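A quick Monte Carlo sketch (not part of the original argument) illustrates both halves of this claim; the Log-Normal here is an arbitrary choice of nondegenerate positive variable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nondegenerate positive variable (Log-Normal, chosen arbitrarily):
# Jensen's Inequality says E[X] E[1/X] > 1.
x = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)
product = x.mean() * (1.0 / x).mean()
print(product)  # comfortably greater than 1 (theoretical value is e)

# Degenerate variable (almost surely constant): the product is 1.
c = np.full(10, 3.0)
print(c.mean() * (1.0 / c).mean())  # 1, up to rounding
```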
The insight here is that any $X$ with $E[X]E[1/X]=1$ must somehow be "balancing" the inequality from its positive part against the inequality in the other direction from its negative part. This will become clearer as we go along.
Consider any nonzero random variable $X$. An initial step in formulating a definition of expectation (at least when this is done in full generality using measure theory) is to decompose $X$ into its positive and negative parts, both of which are nonnegative random variables:
$$\eqalign{ Y &= \operatorname{Positive part}(X) = \max(0, X);\\ Z &= \operatorname{Negative part}(X) = -\min(0, X). }$$
Let's think of $X$ as a mixture of $Y$ with weight $p$ and $-Z$ with weight $1-p$ where $$p=\Pr(X\gt 0),\ 1-p = \Pr(X \lt 0).$$ Obviously $$0 \lt p \lt 1.$$ This will enable us to write expectations of $X$ and $1/X$ in terms of the expectations of the positive variables $Y$ and $Z$.
To simplify the forthcoming algebra a little, note that uniformly rescaling $X$ by a nonzero number $\sigma$ does not change $E[X]E[1/X]$--but it does multiply $E[Y]$ and $E[Z]$ each by $\sigma$. For positive $\sigma$, this simply amounts to selecting the units of measurement of $X$. A negative $\sigma$ switches the roles of $Y$ and $Z$. Choosing the sign of $\sigma$ appropriately we may therefore suppose $$E[Z]=1\text{ and }E[Y] \ge E[Z].\tag{1}$$
That's it for preliminary simplifications. To create a nice notation, let us therefore write
$$\mu = E[Y];\ \nu = E[1/Y];\ \lambda=E[1/Z]$$
for the three expectations we cannot control. All three quantities are positive. Jensen's Inequality asserts
$$\mu\nu \ge 1\text{ and }\lambda \ge 1.\tag{2}$$
The Law of Total Probability expresses the expectations of $X$ and $1/X$ in terms of the quantities we have named:
$$E[X] = E[X\mid X\gt 0]\Pr(X \gt 0) + E[X\mid X \lt 0]\Pr(X \lt 0) = \mu p - (1-p) = (\mu + 1)p - 1$$
and, since $1/X$ has the same sign as $X$,
$$E\left[\frac{1}{X}\right] = E\left[\frac{1}{X}\mid X\gt 0\right]\Pr(X \gt 0) + E\left[\frac{1}{X}\mid X \lt 0\right]\Pr(X \lt 0) = \nu p - \lambda(1-p) = (\nu + \lambda)p - \lambda.$$
Equating the product of these two expressions with $1$ provides an essential relationship among the variables:
$$1 = E[X]E\left[\frac{1}{X}\right] = ((\mu +1)p - 1)((\nu + \lambda)p - \lambda).\tag{*}$$
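These two expectation formulas are easy to verify numerically. The sketch below uses hypothetical two-point parts (values chosen purely for illustration, with $E[Z]=1$ as the normalisation $(1)$ requires) and exact rational arithmetic:

```python
from fractions import Fraction as F

# Hypothetical two-point parts, chosen only for illustration:
# Y takes values 1 and 2, Z takes values 1/2 and 3/2 (so E[Z] = 1, as in (1)),
# each value with probability 1/2; the mixture weight p is 3/5.
y_vals = [F(1), F(2)]
z_vals = [F(1, 2), F(3, 2)]
p = F(3, 5)

mu = sum(y_vals) / 2                   # mu  = E[Y]
nu = sum(1 / v for v in y_vals) / 2    # nu  = E[1/Y]
lam = sum(1 / v for v in z_vals) / 2   # lam = E[1/Z]

# Exact enumeration of the mixture X: weight p on Y, weight 1 - p on -Z.
support = [(v, p / 2) for v in y_vals] + [(-v, (1 - p) / 2) for v in z_vals]
EX = sum(v * w for v, w in support)
EinvX = sum(w / v for v, w in support)

# The Law of Total Probability formulas reproduce the enumerated values.
assert EX == (mu + 1) * p - 1
assert EinvX == (nu + lam) * p - lam
print(EX, EinvX)  # 1/2 and -1/12
```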
Suppose the parts of $X$--$Y$ and $Z$--are any positive random variables (degenerate or not). That determines $\mu, \nu,$ and $\lambda$. When can we find $p$, with $0 \lt p \lt 1$, for which $(*)$ holds?
This clearly articulates the "balancing" insight previously stated only vaguely: we are going to hold $Y$ and $Z$ fixed and hope to find a value of $p$ that appropriately balances their relative contributions to $X$. Although it's not immediately evident that such a $p$ need exist, what is clear is that it depends only on the moments $E[Y]$, $E[1/Y]$, $E[Z]$, and $E[1/Z]$. The problem thereby is reduced to relatively simple algebra--all the analysis of random variables has been completed.
This algebraic problem isn't too hard to solve, because $(*)$ is at worst a quadratic equation for $p$ and the governing inequalities $(1)$ and $(2)$ are relatively simple. Indeed, $(*)$ tells us the product of its roots $p_1$ and $p_2$ is
$$p_1p_2 = (\lambda - 1)\frac{1}{(\mu+1)(\nu+\lambda)} \ge 0$$
and the sum is
$$p_1 + p_2 = (2\lambda + \lambda \mu + \nu)\frac{1}{(\mu+1)(\nu+\lambda)} \gt 0.$$
Therefore both roots must be positive. Furthermore, their average is less than $1$, because
$$ 1 - \frac{(p_1+p_2)}{2} = \frac{\lambda \mu + \nu + 2 \mu \nu}{2(\mu+1)(\nu+\lambda)} \gt 0.$$
(By doing a bit of algebra, it's not hard to show the larger of the two roots does not exceed $1$, either.)
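These claims can be probed numerically. The sketch below samples hypothetical moment triples $(\mu, \nu, \lambda)$ satisfying the constraints $(1)$ and $(2)$ and checks that the roots of $(*)$ are real and lie in $(0, 1]$:

```python
import math
import random

random.seed(1)

# Hypothetical moment triples (mu, nu, lam) satisfying the constraints:
# mu >= 1 from (1); mu * nu >= 1 and lam >= 1 from (2).
for _ in range(10_000):
    mu = 1 + random.expovariate(1)
    nu = 1 / mu + random.expovariate(1)
    lam = 1 + random.expovariate(1)

    # (*) rearranged into standard form a p^2 + b p + c = 0.
    a = (mu + 1) * (nu + lam)
    b = -(lam * mu + nu + 2 * lam)
    c = lam - 1

    disc = b * b - 4 * a * c
    assert disc >= 0                 # the roots are always real
    r = math.sqrt(disc)
    p1, p2 = (-b - r) / (2 * a), (-b + r) / (2 * a)
    assert 0 < p1 <= p2 <= 1 + 1e-9  # both roots lie in (0, 1]

print("all sampled triples give real roots in (0, 1]")
```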
Here is what we have found:
Let $Y$ and $Z$ be any two positive random variables (at least one of which is nondegenerate) for which $E[Y]$, $E[1/Y]$, $E[Z]$, and $E[1/Z]$ exist and are finite. Then there exist either one or two values $p$, with $0 \lt p \lt 1$, determining a mixture variable $X$ with weight $p$ for $Y$ and weight $1-p$ for $-Z$, for which $E[X]E[1/X]=1$. Every such random variable $X$ with $E[X]E[1/X]=1$ arises in this form.
That gives us a rich set of examples indeed!
Having characterized all examples, let's proceed to construct one that is as simple as possible.
For the negative part $Z$, let's choose a degenerate variable--the very simplest kind of random variable--scaled so that its value is $1$, whence $\lambda=1$. With $\lambda=1$, one root of $(*)$ is $p_1=0$, reducing it to an easily solved linear equation whose only positive root is
$$p = \frac{1}{1+\mu} + \frac{1}{1+\nu}.\tag{3}$$
For the positive part $Y$, we obtain nothing useful if $Y$ is degenerate, so let's give it positive probability at just two distinct positive values $a \lt b$, say $\Pr(Y=b)=q$. In this case the definition of expectation gives
$$\mu = E[Y] = (1-q)a + qb;\ \nu = E[1/Y] = (1-q)/a + q/b.$$
To make this even simpler, let's arrange for $Y$ and $1/Y$ to have the same distribution: this forces $q=1-q=1/2$ and $a=1/b$. Now
$$\mu = \nu = \frac{b + 1/b}{2}.$$
The solution $(3)$ simplifies to
$$p = \frac{2}{1+\mu} = \frac{4}{2 + b + 1/b}.$$
How can we make this involve simple numbers? Since $a\lt b$ and $ab=1$, necessarily $b\gt 1$. Let's choose the simplest number greater than $1$ for $b$; namely, $b=2$. The foregoing formula yields $p = 4/(2+2+1/2) = 8/9$ and our candidate for the simplest possible example therefore is
$$\eqalign{ \Pr(X=2) = \Pr(X=b) = \Pr(Y=b)p = qp = \frac{1}{2}\frac{8}{9} = \frac{4}{9};\\ \Pr(X=1/2) = \Pr(X=a) = \Pr(Y=a)p = qp = \cdots = \frac{4}{9};\\ \Pr(X=-1) = \Pr(Z=1)(1-p) = 1-p = \frac{1}{9}. }$$
This is the very example offered in the textbook.
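It takes only a few lines to confirm the arithmetic; exact rational arithmetic avoids any rounding questions:

```python
from fractions import Fraction as F

# The candidate example: atoms of X and their probabilities.
dist = {F(2): F(4, 9), F(1, 2): F(4, 9), F(-1): F(1, 9)}

assert sum(dist.values()) == 1           # a valid probability distribution
EX = sum(x * w for x, w in dist.items())
EinvX = sum(w / x for x, w in dist.items())
print(EX, EinvX)                         # both equal 1
assert EX * EinvX == 1                   # hence E[1/X] = 1/E[X]
```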
Best Answer
In general, whether a distribution is right- or left-tailed depends on the nature of the random variable in question. Consider, for example, a random variable $X$ representing rainfall, revenue, or household income. These all take only positive values, $X \ge 0$, and tend to follow strongly right-tailed distributions, such as the Log-Normal, Gamma, Weibull, and other common families.
However, there are far fewer examples in the common literature of random variables that are left-tailed, i.e., whose values skew to the left. A wide variety of data exhibits this property, such as human survival times (age at mortality), scores on easy tests, or simply flips of a weighted coin. It is generally tricky to parameterise a distribution with a finite upper cutoff, so we do not see many formal, well-known distributions that are left-skewed. Nevertheless, for a random variable $Y \ge 0$, as mentioned in the comment above, the Beta, Binomial, Dirichlet, and even the Generalised Extreme Value distributions can be left-tailed.
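To illustrate the contrast with the Log-Normal from the question, here is a small simulation comparing sample skewness; the Beta parameters are an arbitrary left-skewed choice:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_skewness(x):
    """Standardised third central moment, g1 = E[(X - mean)^3] / sd^3."""
    d = x - x.mean()
    return (d ** 3).mean() / d.std() ** 3

# A positive variable skewing right: the Log-Normal from the question.
right = rng.lognormal(0.0, 1.0, size=500_000)
# A positive, bounded variable skewing left: Beta(5, 2), chosen arbitrarily.
left = rng.beta(5.0, 2.0, size=500_000)

print(sample_skewness(right))  # positive: long right tail
print(sample_skewness(left))   # negative: long left tail
```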
In fact, for a random variable $Z \in \Bbb R$, there are negatively skewed distributions that place no restriction of strict positivity; the skewed Normal, for instance, is used in economics for historical analyses of efficiency and by large banks and financial institutions to model expected profits and regulatory requirements.