You know that in a logit:
$$Pr[y = 1 \vert x,z] = p = \frac{\exp (\alpha + \beta \cdot \ln x + \gamma z)}{1+\exp (\alpha + \beta \cdot \ln x + \gamma z )}. $$
After some tedious calculus and simplification, the partial derivative of that probability with respect to $x$ becomes:
$$ \frac{\partial Pr[y=1 \vert x,z]}{\partial x} = \frac{\beta}{x} \cdot p \cdot (1-p). $$
This is (sort of) equivalent to
$$\frac{\Delta p}{\Delta x}=\frac{\beta}{x} \cdot p \cdot (1-p),$$
which can be re-written as
$$\frac{\Delta p}{100 \cdot \frac{ \Delta x}{x}}= \frac{\beta \cdot p \cdot (1-p)}{100}.$$
This is the definition of semi-elasticity, and can be interpreted as the change in probability for a 1% change in $x$.
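The closed-form derivative above is easy to check numerically. Here is a minimal Python sketch with made-up values of $\alpha$, $\beta$, $\gamma$, $x$, and $z$ (not estimates from any model), comparing the formula $\frac{\beta}{x} \cdot p \cdot (1-p)$ against a central finite difference:

```python
import math

# Illustrative parameter values only -- not estimated from data.
alpha, beta, gamma = -2.0, 1.5, 0.7
z = 1.0

def prob(x):
    """Pr[y = 1 | x, z] from the logit with ln(x) on the right-hand side."""
    xb = alpha + beta * math.log(x) + gamma * z
    return math.exp(xb) / (1 + math.exp(xb))

x = 3.0
p = prob(x)
analytic = beta / x * p * (1 - p)               # closed-form partial derivative
h = 1e-6
numeric = (prob(x + h) - prob(x - h)) / (2 * h)  # central finite difference

print(abs(analytic - numeric) < 1e-8)
```

The two agree to numerical precision, which confirms the chain-rule step: differentiating through $\ln x$ is what produces the $\beta / x$ factor.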
Here's an example in Stata.* Note that I am using margins instead of the out-of-date mfx to get the average marginal effect of $x$, $\frac{1}{N}\sum_{i=1}^N\frac{\beta \cdot p_i \cdot (1-p_i)}{100}$:
. sysuse auto, clear
(1978 Automobile Data)
. gen ln_price = ln(price)
. logit foreign ln_price mpg weight, nolog
Logistic regression Number of obs = 74
LR chi2(3) = 57.69
Prob > chi2 = 0.0000
Log likelihood = -16.185932 Pseudo R2 = 0.6406
------------------------------------------------------------------------------
foreign | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_price | 6.851215 2.11763 3.24 0.001 2.700737 11.00169
mpg | -.0880842 .1031317 -0.85 0.393 -.2902186 .1140503
weight | -.0062268 .0017269 -3.61 0.000 -.0096115 -.0028422
_cons | -41.32383 16.24003 -2.54 0.011 -73.15371 -9.493947
------------------------------------------------------------------------------
. margins, expression(_b[ln_price]*predict()*(1-predict())/100)
Predictive margins Number of obs = 74
Model VCE : OIM
Expression : _b[ln_price]*predict()*(1-predict())/100
------------------------------------------------------------------------------
| Delta-method
| Margin Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
_cons | .0046371 .0007965 5.82 0.000 .003076 .0061982
------------------------------------------------------------------------------
This means that a 1% increase in price raises the probability that a car is foreign by about 0.005 on the [0,1] probability scale, so a 10% increase gives you roughly a 0.05 increase. In these data, about 0.3 of the cars are foreign, so these effects are economically as well as statistically significant.
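The margins expression can be reproduced by hand. Below is a hedged Python sketch: the coefficients are rounded from the logit output above, but the covariate draws are invented (I don't reproduce the auto dataset here), so the resulting number will not match the margins output exactly; it only illustrates the $\frac{1}{N}\sum_i \frac{\beta \cdot p_i \cdot (1-p_i)}{100}$ calculation:

```python
import numpy as np

# Coefficients rounded from the logit output above; the data are invented,
# so the result illustrates the formula, not the auto-data estimate of 0.0046.
rng = np.random.default_rng(0)
N = 74
ln_price = rng.normal(8.5, 0.4, N)    # hypothetical log prices
mpg = rng.normal(21.0, 5.0, N)        # hypothetical mileage
weight = rng.normal(3000.0, 700.0, N) # hypothetical weights

b_lnp, b_mpg, b_wt, cons = 6.85, -0.088, -0.0062, -41.3
xb = cons + b_lnp * ln_price + b_mpg * mpg + b_wt * weight
p = 1 / (1 + np.exp(-xb))             # fitted probabilities p_i

# Average semi-elasticity: change in Pr[foreign] for a 1% change in price.
ame_pct = np.mean(b_lnp * p * (1 - p)) / 100
print(round(ame_pct, 4))
```

Because $p_i (1 - p_i) \le 0.25$, this average can never exceed $\beta \cdot 0.25 / 100$, which is a useful sanity check on any hand-rolled marginal-effect code.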
Edit:
A good way to do this in Stata 10 is to install the user-written command margeff:
. margeff, dydx(ln_price) replace
Average partial effects after margeff
y = Pr(foreign)
------------------------------------------------------------------------------
variable | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_price | .4637103 .0796514 5.82 0.000 .3075964 .6198241
mpg | -.0059616 .006781 -0.88 0.379 -.0192522 .007329
weight | -.0004214 .0000417 -10.11 0.000 -.0005031 -.0003398
------------------------------------------------------------------------------
. lincom _b[ln_price]/100
( 1) .01*ln_price = 0
------------------------------------------------------------------------------
variable | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
(1) | .0046371 .0007965 5.82 0.000 .003076 .0061982
------------------------------------------------------------------------------
*This is actually not a great empirical example since the relationship in the data has an inverted-U shape.
Best Answer
Stata is smart enough to ignore the at() assignment for x when you calculate the AME for x (since otherwise you would get a zero). In the end, you have asked Stata to calculate this average of finite differences: $$AME_x =\frac{1}{N}\sum_{i=1}^N \left[ \hat p(x=1,y=1,z=z_i)-\hat p(x=0,y=1,z=z_i) \right],$$
where $\hat p(.)$ is the predicted probability from the logit model. Stata used differences here rather than derivatives since all your regressors are binary/categorical.
This is probably not a very sensible AME, but perhaps you have your reasons for doing it this way. I am calling this an AME, but it is actually a hybrid of AME and MER (marginal effect at representative values).
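The finite-difference average above is straightforward to compute by hand. Here is a minimal Python sketch with invented logit coefficients and an invented binary sample of $z_i$ (nothing here comes from the original model); it holds $y$ at 1, switches $x$ from 0 to 1, and averages over the observed $z_i$:

```python
import numpy as np

# Hypothetical logit coefficients and data -- for illustration only.
rng = np.random.default_rng(1)
b0, b_x, b_y, b_z = -0.5, 0.8, 0.3, -0.4
z = rng.integers(0, 2, size=200)   # observed in-sample values of binary z_i

def phat(x, y, z):
    """Predicted probability from the hypothetical logit."""
    xb = b0 + b_x * x + b_y * y + b_z * z
    return 1 / (1 + np.exp(-xb))

# Hold y at 1, switch x from 0 to 1, keep z at its observed values.
ame_x = np.mean(phat(1, 1, z) - phat(0, 1, z))
print(round(ame_x, 4))
```

This mirrors what margins does with factor variables: a discrete change for the variable of interest, averaged over the sample values of the other covariates (here, with $y$ pinned at a representative value, which is what makes it the AME/MER hybrid described above).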
Here's a toy example showing the margins calculation by hand:
According to this model, when all cars are assumed to be heavy but keep their actual in-sample values of the high repair record indicator as observed, the probability of a car being foreign falls by 5.3 percentage points when it is high-MPG (relative to low-MPG).
Stata Code: