Let's think about regular linear regression, and to make it concrete, let's say we are trying to predict people's heights. When you regress heights against just an intercept term and no predictors, the intercept will be the height averaged over all the people in your sample. Let's call this term $\beta_0^{\text{no predictor}}$.
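As a quick numerical check of that claim, here is a minimal sketch (the heights are hypothetical) using NumPy's least-squares solver:

```python
import numpy as np

heights = np.array([170.0, 165.0, 180.0, 175.0])  # hypothetical sample, in cm
X = np.ones((len(heights), 1))                    # design matrix: intercept only
beta, *_ = np.linalg.lstsq(X, heights, rcond=None)
# beta[0] is exactly heights.mean(), i.e. 172.5
```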
Now we want to add a predictor for sex, so we create an indicator variable that takes the value 0 when the sampled person is male and 1 when the person is female. When we fit this model, we get estimates for an intercept term, $\beta_0^{\text{male reference}}$, and a coefficient for the sex variable, $\beta_1^{\text{male reference}}$. The estimated intercept is no longer the average height of everybody, but the average height of males; the coefficient of the sex variable is the difference in average height between females and males.
Now consider coding our indicator variable differently, so that the sex variable takes the value 0 if the person is female and 1 if the person is male. In this specification of the model we get estimates of the intercept and coefficient $\beta_0^{\text{female reference}}, \beta_1^{\text{female reference}}$. Now $\beta_0^{\text{female reference}}$, the intercept term, is the average height of females, and the coefficient is the difference in average height between males and females. So
$$
\begin{align}
\beta_1^{\text{male reference}} &= -\beta_1^{\text{female reference}}\\
\beta_0^{\text{male reference}} + \beta_1^{\text{male reference}} &= \beta_0^{\text{female reference}}\\
\beta_0^{\text{female reference}} + \beta_1^{\text{female reference}} &= \beta_0^{\text{male reference}}
\end{align}
$$
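These identities can be checked numerically. Here is a small sketch with simulated heights (all numbers hypothetical), fitting both codings by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
sex = rng.integers(0, 2, size=200)                  # 0 = male, 1 = female
height = 178.0 - 13.0 * sex + rng.normal(0, 7, size=200)

def fit(indicator):
    # OLS fit of height on an intercept and one binary indicator
    X = np.column_stack([np.ones_like(indicator), indicator])
    return np.linalg.lstsq(X, height, rcond=None)[0]

b_male_ref = fit(sex.astype(float))    # indicator = 1 for females
b_female_ref = fit(1.0 - sex)          # indicator = 1 for males

# the relations above hold exactly (up to floating point):
# b_male_ref[1] == -b_female_ref[1]
# b_male_ref[0] + b_male_ref[1] == b_female_ref[0]
```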
So, by changing how we coded the indicator variable, we changed the values of both the intercept term and the coefficient term, and this is exactly what we should want. When we have a multi-level categorical variable, you will see the same kinds of changes as you specify different reference levels, i.e. the level for which all the indicators take the value 0.
In the binary indicator case, the p-value of the $\beta_1$ term does not change depending on how we code, but in the multi-level case it will, because the p-value is a function of the size of the effect, and the average difference between a group and the reference group will generally change with the choice of reference group. For example, suppose we have three groups: babies, teenagers, and adults. The average height difference between adults and teenagers is smaller than that between adults and babies, so the p-value for the coefficient of the adult indicator should be larger when teenagers are the reference group than when babies are.
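To illustrate with the three-group example, here is a sketch (all group means hypothetical) that refits the same data under different reference levels and compares the adult contrast:

```python
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 3, size=300)       # 0 = baby, 1 = teenager, 2 = adult
means = np.array([70.0, 160.0, 170.0])     # hypothetical mean heights, in cm
height = means[group] + rng.normal(0, 8, size=300)

def fit_with_reference(ref):
    # dummy-code the two non-reference levels and fit by OLS
    levels = [g for g in range(3) if g != ref]
    X = np.column_stack([np.ones(len(group))] +
                        [(group == g).astype(float) for g in levels])
    beta = np.linalg.lstsq(X, height, rcond=None)[0]
    return dict(zip(["intercept"] + [f"group{g}" for g in levels], beta))

baby_ref = fit_with_reference(0)  # adult contrast near 100: big effect, tiny p
teen_ref = fit_with_reference(1)  # adult contrast near 10: smaller effect, larger p
```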
Regarding the model: Don't make dummies out of your ordinal dependent variable. You need to use an ordinal logistic regression model. It's hard to answer fully without more details on your data or which statistical package you use. If your dependent variable were categorical, you would use a multinomial logistic regression model. This is a decent tutorial on fitting and interpreting the ordinal model in R.
Edit: Ordinal logistic regression with SAS, and Interpreting ordinal logistic output in SAS.
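The tutorials above cover R and SAS; as a language-agnostic sketch, the same proportional-odds model can be fit by maximum likelihood from scratch. Everything below (data, cutpoints, the true slope of 1.5) is simulated for illustration; the model is $P(Y \le k) = \operatorname{logit}^{-1}(\alpha_k - x\beta)$, with one slope shared across all category boundaries:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
latent = 1.5 * x + rng.logistic(size=n)    # latent scale, hypothetical slope 1.5
y = np.digitize(latent, bins=[-1.0, 1.0])  # 3 ordered categories: 0 < 1 < 2

def neg_log_lik(params):
    a1, a2, beta = params                  # two cutpoints, one shared slope
    inv_logit = lambda z: 1.0 / (1.0 + np.exp(-z))
    p_le0 = inv_logit(a1 - beta * x)       # P(Y <= 0)
    p_le1 = inv_logit(a2 - beta * x)       # P(Y <= 1)
    probs = np.column_stack([p_le0, p_le1 - p_le0, 1.0 - p_le1])
    probs = np.clip(probs, 1e-12, 1.0)     # guard against log(0)
    return -np.log(probs[np.arange(n), y]).sum()

res = minimize(neg_log_lik, x0=[-1.0, 1.0, 0.0], method="Nelder-Mead")
a1_hat, a2_hat, beta_hat = res.x           # beta_hat should recover roughly 1.5
```

In practice you would use a packaged implementation (e.g. `MASS::polr` in R) rather than hand-rolling the likelihood, but the sketch shows what those packages estimate.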
Regarding stepwise regression: Note that in order to find which of the covariates best predicts the dependent variable (or the relative importance of the variables), you don't need to perform a stepwise regression. You need standardized coefficients. In R you can obtain them by applying the scale() function to your data set, and other statistical packages have equivalent (or easier) mechanisms. Comparing the magnitudes of the standardized coefficients will give you the answer. Stepwise regression will help you find which model is most economical, in that it incorporates only those variables that benefit the model. However, it is not a recommended method, as it may not find the best model. You might prefer to rely on theoretical considerations instead.
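Here is a sketch of why standardization matters when comparing importance (simulated data; the standardization below mirrors what R's scale() does):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x1 = rng.normal(0, 1, n)           # predictor on a small scale
x2 = rng.normal(0, 10, n)          # predictor on a 10x larger scale
y = 2.0 * x1 + 0.3 * x2 + rng.normal(0, 1, n)

def slopes(X, y):
    # OLS slopes, intercept dropped
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

standardize = lambda v: (v - v.mean()) / v.std()   # what scale() does in R

raw = slopes(np.column_stack([x1, x2]), y)
std = slopes(np.column_stack([standardize(x1), standardize(x2)]), standardize(y))

# raw slopes make x1 look more important (about 2.0 vs 0.3); the standardized
# slopes reveal that x2 actually accounts for more of the variation in y
```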
Edit: regarding explained percent variance: If the previous method of finding relative importance is not good enough and you need the explained percent of variance per variable, you are sadly out of luck: $R^2$ does not exist for logistic models. See this explanation of pseudo $R^2$ from the UCLA stat help pages (from which all links here are taken):
> The model estimates from a logistic regression are maximum likelihood estimates arrived at through an iterative process. They are not calculated to minimize variance, so the OLS approach to goodness-of-fit does not apply. However, to evaluate the goodness-of-fit of logistic models, several pseudo R-squareds have been developed. These are "pseudo" R-squareds because they look like R-squared in the sense that they are on a similar scale, ranging from 0 to 1 (though some pseudo R-squareds never achieve 0 or 1), with higher values indicating better model fit, but they cannot be interpreted as one would interpret an OLS R-squared, and different pseudo R-squareds can arrive at very different values.
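As one concrete instance, McFadden's pseudo $R^2$ compares the fitted model's log-likelihood to that of an intercept-only model (the log-likelihood values below are made up for illustration):

```python
# McFadden's pseudo R-squared: 1 - ll_full / ll_null
ll_full = -250.0   # hypothetical log-likelihood of the fitted model
ll_null = -340.0   # hypothetical log-likelihood of the intercept-only model
pseudo_r2 = 1.0 - ll_full / ll_null   # roughly 0.265
```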
Best Answer
It's very important to first define the nature of your dependent variable. If it is qualitative and ordinal, then an ordinal probit (or logit) model is the right choice. With this model you will have a single slope parameter per explanatory variable, whatever the category, as only the constant changes with categories. If your dependent variable is social status, then it can readily be considered ordinal. Thus, inference on an independent variable's effect becomes straightforward.
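The "single slope, category-specific constant" structure can be seen directly from the cumulative log-odds of the model (the slope and cutpoints below are hypothetical):

```python
import numpy as np

beta = 0.8                              # hypothetical shared slope, one predictor
cutpoints = np.array([-1.0, 0.5, 2.0])  # one constant per category boundary

def cumulative_log_odds(x):
    # log-odds of P(Y <= k) for each boundary k, at predictor value x
    return cutpoints - beta * x

# raising x by 1 shifts every cumulative log-odds by the same amount, -beta:
shift = cumulative_log_odds(1.0) - cumulative_log_odds(0.0)
# shift == [-0.8, -0.8, -0.8]: the slope is the same whatever the category
```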