Solved – Interpreting Marginal Effect Results

marginal-effect, ordered-probit, regression

I'm having a little bit of difficulty interpreting the marginal effects results I obtained, since I am quite new to this sort of stuff. For my model I'm running an ordered probit regression with 6 levels of customer satisfaction, where 1 is very high and 6 is very low. Conditions A, B, C, and D are different factors that can affect customer satisfaction, and the marginal effects results are given below:

                       Customer Satisfaction Level
                  1         2         3         4         5         6
Condition A    0.0003    0.0007    0.0055    0.0013   -0.0020   -0.0059
Condition B   -0.0016   -0.0024   -0.0327   -0.0020   -0.0066    0.0455
Condition C   -0.0357   -0.0487   -0.3493   -0.0334   -0.0561    0.5234
Condition D    0.0940    0.1320    1.3187    0.1834    0.4578   -2.1861

Basically, I'm having a bit of difficulty interpreting what these results actually mean. I was reading through this question (Average Marginal Effects interpretation) and tried to apply it to my results, but I'm not sure how. For example, what does the 0.0055 in row "Condition A", column "Customer Satisfaction Level 3" mean, and what does the -2.1861 in row "Condition D", column "Customer Satisfaction Level 6" mean?

If anyone could help me with this then I'd appreciate it, thanks!

EDIT: I've been reading more about marginal effects in ordered probit regressions, and it looks like marginal effects show percentage-point changes in the outcome probabilities. If that's the case, how can a marginal effect correspond to a change of more than 100 percentage points? I'm just a bit confused about it. As a (really bad) example of my problem, I've included a small reproducible example that generates the marginal effects of a data set in R using the oglmx package:

library(oglmx)  # provides oglmx() and margins.oglmx()

set.seed(10)
n <- 1000
x <- rnorm(n, mean = 0, sd = 0.002)  # x has a very small spread
y <- ifelse(pnorm(1 + 0.5 * x + rnorm(n)) > 0.5, 1, 0)
data <- data.frame(y, x)
fit <- oglmx(y ~ x, data = data, link = "probit", constantMEAN = FALSE,
             constantSD = FALSE, delta = 0, threshparam = NULL)
margins.oglmx(fit, AME = TRUE)

This code outputs the following marginal effects table:

Marginal Effects on Pr(Outcome==0)
  Marg. Eff   Std. error  t value Pr(>|t|)  
x -11.95828594   5.88897011 -2.03062 0.042293 *
------------------------------------ 
Marginal Effects on Pr(Outcome==1)
  Marg. Eff  Std. error t value Pr(>|t|)  
x 11.95828594  5.88897011 2.03062 0.042293 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Now, if we look at the x variable, it ranges from a minimum of -0.0060243275671 to a maximum of 0.0070822805553, so is this spread the reason the marginal effect coefficients are so large? There are also values such as 0.0003295985976000 in between, so some values do get quite small. This suggests to me that marginal effects show percentage-point changes rather than marginal probabilities, but can anyone tell me whether I'm right or wrong? Thanks!

Best Answer

There is no problem with the code: a marginal effect is not bounded between 0 and 1, or between -1 and 1. The marginal effect measures the slope of the probability curve at a particular point, and a slope can be arbitrarily steep. For an example that illustrates that the marginal effect is unbounded, suppose we have a continuous variable that perfectly predicts the outcome: the outcome is 1 if x > 0.5 and 0 otherwise. The probability jumps from 0 to 1 at x = 0.5, so its slope evaluated at that point is infinite.
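To make this concrete, here is a minimal numerical sketch (in Python rather than R, purely for illustration). For a probit model Pr(y = 1 | x) = Φ(a + bx), the marginal effect of x is the slope b·φ(a + bx). The density φ peaks at about 0.399, but b is unbounded, so the slope at the steepest point of the curve grows without limit as b grows:

```python
from scipy.stats import norm

# Probit: Pr(y = 1 | x) = Phi(a + b*x); the marginal effect of x is the
# slope of that curve, b * phi(a + b*x).  phi() peaks at about 0.3989,
# but b is unbounded, so the slope can be arbitrarily large.
for b in [1, 10, 100, 1000]:
    print(b, b * norm.pdf(0))  # slope at the steepest point of the curve
```

With b = 1000 the slope at the steepest point is already nearly 400, even though the probability itself never leaves [0, 1].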

In your example, the estimated coefficient on x is about 47. Because x has such a tiny standard deviation (0.002), that coefficient is estimated very imprecisely, and its sheer size is why the marginal effect is so large: in a probit model the marginal effect is the coefficient multiplied by the normal density evaluated at the linear predictor, so a coefficient of 47 easily produces marginal effects far outside [-1, 1].
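You can check the magnitude yourself. The sketch below (in Python, with an assumed intercept of 1, since only the slope estimate of 47 is reported above) recomputes an average marginal effect as the mean of b·φ(a + b·x) over x values drawn with the same tiny spread as in the R example:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
x = rng.normal(0.0, 0.002, size=1000)  # same tiny spread as the R example

a, b = 1.0, 47.0  # b = 47 from the fitted model; a = 1 is an assumed intercept
# Average marginal effect of x on Pr(y = 1): mean slope of Phi(a + b*x)
ame = np.mean(b * norm.pdf(a + b * x))
print(ame)  # around 11 -- far outside [-1, 1], the same scale as the oglmx output
```

The probability Pr(y = 1) itself stays in [0, 1]; it is only the slope, inflated by the huge coefficient, that reaches double digits.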
