You know that in a logit:
$$Pr[y = 1 \vert x,z] = p = \frac{\exp (\alpha + \beta \cdot \ln x + \gamma z)}{1+\exp (\alpha + \beta \cdot \ln x + \gamma z )}. $$
After some tedious calculus and simplification, the partial of that with respect to $x$ becomes:
$$ \frac{\partial Pr[y=1 \vert x,z]}{\partial x} = \frac{\beta}{x} \cdot p \cdot (1-p). $$
This is (sort of) equivalent to
$$\frac{\Delta p}{\Delta x}=\frac{\beta}{x} \cdot p \cdot (1-p),$$
which can be re-written as
$$\frac{\Delta p}{100 \cdot \frac{ \Delta x}{x}}= \frac{\beta \cdot p \cdot (1-p)}{100}.$$
This is the definition of semi-elasticity, and can be interpreted as the change in probability for a 1% change in $x$.
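As a sanity check of the derivative above, here is a quick Python sketch comparing the analytic formula $\frac{\beta}{x} \cdot p \cdot (1-p)$ to a central finite difference. The coefficient values are made up for illustration:

```python
import math

# Sanity check of d p / d x = (beta / x) * p * (1 - p) against a finite
# difference. Coefficients are made up for illustration.
alpha, beta, gamma = -2.0, 0.8, 0.5

def prob(x, z):
    """Pr[y = 1 | x, z] for the logit with ln(x) as a regressor."""
    xb = alpha + beta * math.log(x) + gamma * z
    return 1.0 / (1.0 + math.exp(-xb))

x, z = 10.0, 1.0
p = prob(x, z)

analytic = beta / x * p * (1.0 - p)                      # formula from the text
h = 1e-6
numeric = (prob(x + h, z) - prob(x - h, z)) / (2.0 * h)  # central difference

print(analytic, numeric)  # the two agree to many decimal places
```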
Here's an example in Stata.* Note that I am using margins instead of the out-of-date mfx to get the average marginal effect of $x$, $\frac{1}{N}\sum_{i=1}^N\frac{\beta \cdot p_i \cdot (1-p_i)}{100}$:
. sysuse auto, clear
(1978 Automobile Data)
. gen ln_price = ln(price)
. logit foreign ln_price mpg weight, nolog
Logistic regression Number of obs = 74
LR chi2(3) = 57.69
Prob > chi2 = 0.0000
Log likelihood = -16.185932 Pseudo R2 = 0.6406
------------------------------------------------------------------------------
foreign | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_price | 6.851215 2.11763 3.24 0.001 2.700737 11.00169
mpg | -.0880842 .1031317 -0.85 0.393 -.2902186 .1140503
weight | -.0062268 .0017269 -3.61 0.000 -.0096115 -.0028422
_cons | -41.32383 16.24003 -2.54 0.011 -73.15371 -9.493947
------------------------------------------------------------------------------
. margins, expression(_b[ln_price]*predict()*(1-predict())/100)
Predictive margins Number of obs = 74
Model VCE : OIM
Expression : _b[ln_price]*predict()*(1-predict())/100
------------------------------------------------------------------------------
| Delta-method
| Margin Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
_cons | .0046371 .0007965 5.82 0.000 .003076 .0061982
------------------------------------------------------------------------------
This means that a 1% increase in price raises the probability that a car is foreign by about 0.005 on the [0,1] probability scale, so a 10% increase in price gives you roughly a 0.05 increase. In these data, about 0.3 of the cars are foreign, so this effect is both economically and statistically significant.
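The quantity that margins evaluates above is just a sample average, which can be sketched by hand. A minimal Python sketch with made-up coefficients and synthetic data (these are not the auto-data estimates):

```python
import math
import random

# Sketch of what margins computes above: the sample average of
# beta * p_i * (1 - p_i) / 100. Coefficients and data are made up for
# illustration; they are not the auto-data estimates.
random.seed(42)
b0, b_lnx, b_z = -1.0, 1.2, 0.5
data = [(random.uniform(1, 20), random.gauss(0, 1)) for _ in range(200)]

def p_hat(x, z):
    """Fitted probability for one observation."""
    xb = b0 + b_lnx * math.log(x) + b_z * z
    return 1.0 / (1.0 + math.exp(-xb))

total = 0.0
for x, z in data:
    p = p_hat(x, z)
    total += b_lnx * p * (1 - p) / 100
ame = total / len(data)  # average change in probability per 1% change in x
print(round(ame, 4))
```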
Edit:
A good way to do this in Stata 10 is to install the user-written command margeff:
. margeff, dydx(ln_price) replace
Average partial effects after margeff
y = Pr(foreign)
------------------------------------------------------------------------------
variable | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_price | .4637103 .0796514 5.82 0.000 .3075964 .6198241
mpg | -.0059616 .006781 -0.88 0.379 -.0192522 .007329
weight | -.0004214 .0000417 -10.11 0.000 -.0005031 -.0003398
------------------------------------------------------------------------------
. lincom _b[ln_price]/100
( 1) .01*ln_price = 0
------------------------------------------------------------------------------
variable | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
(1) | .0046371 .0007965 5.82 0.000 .003076 .0061982
------------------------------------------------------------------------------
*This is actually not a great empirical example since the relationship in the data has an inverted-U shape.
It is easier to interpret dichotomous predictors using the concept of the odds ratio.
Let me give you an example. Imagine you are trying to predict smoking status, where the smoking variable is 1 if you smoke and 0 if you don't (a dichotomous outcome, so we can use logistic regression). Now, as in your case, imagine that you have a predictor variable called white, which is 1 if you are white and 0 if you are not. In this example, you can fit a logistic regression model that looks something like this:
$$\text{logit}(p)=\beta_0+\beta_1\times \text{white}$$
And now, let's assume that you get an estimate of $\beta_1=-0.5108256$. Converting the estimate onto the odds ratio scale is as simple as exponentiating the parameter estimate, i.e., on the odds ratio scale we have
$$e^{\beta_1}=e^{-0.5108256}=0.6.$$
What this tells us is that if you are white, your odds of being a smoker are 0.6 times the odds for someone who is not white, i.e., 40% lower. Note that this is a statement about odds, not probabilities: an odds ratio of 0.6 does not mean whites are 60% less likely to smoke.
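A quick numeric check of the odds-ratio arithmetic, with a reminder that an odds ratio scales odds rather than probabilities (the baseline probability below is a hypothetical value for illustration):

```python
import math

# Exponentiating the coefficient gives the odds ratio.
beta1 = -0.5108256
odds_ratio = math.exp(beta1)
print(round(odds_ratio, 4))  # 0.6

# An odds ratio multiplies odds, not probabilities. To see its effect on a
# baseline probability p0, convert to odds, scale, and convert back.
def apply_odds_ratio(p0, oratio):
    odds = p0 / (1 - p0) * oratio
    return odds / (1 + odds)

# With a hypothetical baseline probability of 0.5, the implied probability
# for the other group is 0.375, not 0.5 * 0.6 = 0.3.
print(round(apply_odds_ratio(0.5, odds_ratio), 3))
```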
And so to answer your direct question, you wouldn't say that "a 1% increase in being white affects your probability of the dependent variable by x amount" (a 1% change in a dummy variable is not meaningful); rather, you would say that the odds of observing the dependent variable are $e^{\beta_1}$ times as large when you are white as when you are not.
Best Answer
This has been answered before but I will try to include a very simple explanation which can hopefully get you on the right track.
A logit regression model, linking the probability of a dependent variable $y$ to a vector of independent variables $X$, is written as
$$Pr(y=1) = \Lambda(X\beta),$$ where $\Lambda(\cdot)$ is the logistic c.d.f.
The marginal effect can be thought of as the impact that a change in some variable $x_j$ has on the response probability $Pr(y=1)$, and can be written as
$$\frac{\partial Pr(y=1)}{\partial x_j} = \beta_j \lambda(X\beta),$$ where $\lambda$ is the logistic p.d.f. (the first derivative of $\Lambda$ with respect to its argument).
Notice that for different values of $X$ you get different values of $\lambda(X\beta)$, and hence different marginal effects.
To calculate the average marginal effect, you take the average of the logistic p.d.f. over all the values of $X$ in your sample and multiply it by your coefficient $\beta_j$:
$$\frac{\partial Pr(y=1)}{\partial x_j} = \beta_j \, E[\lambda(X\beta)], \quad \text{estimated as} \quad \beta_j \cdot \frac{1}{N}\sum_{i=1}^{N}\lambda(x_i\beta).$$
Aside note: this is different from the marginal effect at the average, $\beta_j \, \lambda(\bar{X}\beta)$, which evaluates the p.d.f. at the mean of $X$ rather than averaging over the sample.
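A minimal Python sketch of that distinction, with made-up coefficients and a deliberately skewed sample so the gap between the two quantities is visible:

```python
import math

# Average marginal effect (average over the sample) vs marginal effect at
# the average (evaluate at mean X). Coefficients and data are made up.
beta0, beta1 = -1.0, 2.0
xs = [0.0, 0.5, 1.0, 1.5, 4.0]  # skewed sample to make the gap visible

def lam(t):
    """Logistic p.d.f., the derivative of the logistic c.d.f."""
    e = math.exp(-t)
    return e / (1 + e) ** 2

# AME: beta1 times the sample average of lambda(x_i * beta)
ame = beta1 * sum(lam(beta0 + beta1 * x) for x in xs) / len(xs)

# MEM: beta1 times lambda evaluated at the average x
xbar = sum(xs) / len(xs)
mem = beta1 * lam(beta0 + beta1 * xbar)

print(ame, mem)  # the two generally differ
```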