Solved – How to interpret p-values of a summary output in R when testing for a one-sided hypothesis

hypothesis testing, multiple regression, p-value, regression, statistical significance

I'm currently doing research for my thesis and have conducted a multiple regression to test a couple of hypotheses. One of the hypotheses is one-sided and reads like this:
The higher variable d is, the higher the return of the stock. I built a regular multiple linear regression model using the lm function, and the coefficient table from the summary output looks like this:

Coefficients:
                       Estimate Std. Error t value Pr(>|t|)   
(Intercept)           -0.226753   0.819065  -0.277  0.78231   
a                      0.617556   0.217732   2.836  0.00524 **
b                     -0.009962   0.018424  -0.541  0.58955   
c                      0.228283   0.101857   2.241  0.02658 * 
d                      0.075328   0.050703   1.486  0.09610 .   

To my knowledge, these p-values are based on a two-sided test and would need to be divided by 2 to get the p-value for a one-sided test, which would give me a p-value of 0.04805. If I set α = 5%, does this mean I can reject the null hypothesis that d has no or a negative impact on the stock return and decide in favor of my alternative hypothesis that d has a positive impact? Or do I still base my decision on the p-values stated in the output? And if I were to produce a regular LaTeX regression table like in many scientific journals, would I base the significance stars in that table on a two-sided or a one-sided test?

Many thanks!

Best Answer

I'd like to refine and comment on some of your statements.

First, to start with the mechanics: depending on the sign of the difference between the estimate and the value under the null hypothesis, the one-sided p-value is either the two-sided p-value divided by 2, or one minus that quantity. In your case the null is that the coefficient is $0$ and your estimate of $d$ is positive, so you indeed take the two-sided p-value divided by two, as you did. Mechanically, that is fine.
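As a concrete illustration of that mechanic, here is a minimal sketch on simulated data (the variable names a, b, c, d only mirror your question; none of the numbers correspond to your data). It pulls the two-sided p-value for d out of summary(lm) and converts it to the one-sided p-value for the alternative that the coefficient of d is positive, both via the halving rule and directly from the t distribution:

# Simulated data; names chosen to match the question for illustration only.
set.seed(1)
n   <- 150
dat <- data.frame(a = rnorm(n), b = rnorm(n), c = rnorm(n), d = rnorm(n))
dat$return <- 0.5 * dat$a + 0.1 * dat$d + rnorm(n)

fit   <- lm(return ~ a + b + c + d, data = dat)
coefs <- summary(fit)$coefficients

t_d   <- coefs["d", "t value"]
p_two <- coefs["d", "Pr(>|t|)"]

# Halving rule: divide by two if the estimate (and hence t) has the
# hypothesized sign, otherwise take one minus that quantity.
p_one <- if (t_d > 0) p_two / 2 else 1 - p_two / 2

# Equivalent direct computation from the upper tail of the t distribution:
p_one_direct <- pt(t_d, df = fit$df.residual, lower.tail = FALSE)

all.equal(p_one, p_one_direct)  # TRUE

Again, the point here is only the conversion of the reported two-sided p-value, not anything about your particular estimates.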

Now to answer

If I set α = 5%, does this mean I can reject the null hypothesis that d has no or a negative impact on the stock return and decide in favor of my alternative hypothesis that d has a positive impact?... If I were to produce a regular LaTeX regression table like in many scientific journals, would I base the significance stars in that table on a two-sided or a one-sided test?

the answer is that you are mostly correct, but only conditional on the assumptions required for your test being satisfied. These include the usual statistical assumptions behind the t-test and your linear regression model, which I won't go into because it is standard to assume them in most cases. However, another key assumption is that you did not choose to do a one-sided test after seeing that the two-sided test is not significant at your chosen level. If you did, the p-value you get loses much of its meaning, and you certainly could not draw the conclusion you stated about the estimate.

In general, it is quite unconventional to perform one-sided tests, and it is especially concerning when the two-sided test fails to reject the null at your chosen significance level but the one-sided test does. If you were to report the one-sided test in a table, you would have to make it extremely clear that you are indeed performing a one-sided test; most scientific journals will question that decision and be further critical when they realize the two-sided test is not significant. And why use a one-sided test at all here? Your question is about stock returns, which can easily be negative. I would be extremely careful and wary about performing a one-sided test in this setting.

EDIT:

To answer your comment: rejecting the null under a two-sided test typically supports the same conclusions as rejecting the null under a one-sided test. If you reject the null of a two-sided test, you conclude that the effect is significantly different from the null value and that it is in the direction of the estimate. So in your case, a positive estimate that is significant under a two-sided test against a null of $0$ would let you conclude exactly what you wanted.

Think of a one-sided test as 'buying information', where the cost is that you cannot detect any difference on the other side of the direction you posit. Recall that you need to come up with your hypothesis before observing the data. So in your case, if you had decided to do a one-sided test of the effect being positive and then observed a negative effect, you would not be able to say anything about it, because by starting off with a one-sided positive test you already assumed that a negative value is impossible! Modifying the test after the fact to be one-sided negative (or even two-sided) is wrong, and you lose the ability to interpret your p-value. Since it is very rare to truly know the sign of the effect beforehand (intuition or experience is not good enough, because then you would just be confirming your biases without ever testing them), you should almost always avoid one-sided tests. But rejecting the null of a two-sided test corresponds to what you would expect: you reject the null, and the effect is in the direction that you observe (in your case, greater than the null of $0$).
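To make that 'cost' concrete, here is a small simulation sketch (entirely made-up numbers, not your data) in which the true effect is negative: the two-sided test detects it routinely, while an upper-tailed one-sided test of a positive effect essentially never rejects.

set.seed(42)
n_sim <- 2000
reject_two <- reject_one <- logical(n_sim)
for (i in seq_len(n_sim)) {
  x <- rnorm(100)
  y <- -0.3 * x + rnorm(100)                     # true effect is negative
  s <- summary(lm(y ~ x))$coefficients
  t_x   <- s["x", "t value"]
  p_two <- s["x", "Pr(>|t|)"]
  p_one <- pt(t_x, df = 98, lower.tail = FALSE)  # H1: effect > 0
  reject_two[i] <- p_two < 0.05
  reject_one[i] <- p_one < 0.05
}
mean(reject_two)  # rejects often: the two-sided test sees the negative effect
mean(reject_one)  # essentially zero: the one-sided 'positive' test is blind to it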