Interpreting regression output is quite easy if you know what you are looking at. Let's take a hypothetical example, because you have not presented your data or coefficients. For simplicity's sake we will work with purely binary variables.
This is a stupid example, and not correct in any way, shape, or form. Let's say that you are interested in the number of people eaten by sharks on a given day. You want to know if:
$H_{01}$: Bathing suit colour (red = 0, blue = 1) affects rate of predation.
$H_{02}$: Whether the day is sunny (0) or cloudy (1) affects rate of predation.
$H_{03}$: Whether weather (sunny or cloudy) affects the effect of bathing suit colour on predation (this is our interaction term).
We run our regression and find the following, with an intercept of 1:
$$
Predation = 15*BathingSuit - 10*Weather + 5*BathingSuit*Weather + 1
$$
Let's just assume all terms are significant. What you'll find is that we can read the result for every combination of covariates straight off this single fitted equation, without having to report each situation separately.
When we have $BS = 0$, $W = 0$ (Red and sunny),
$$
Predation = (15*0) - (10*0) + (5*0*0) + 1 = 1
$$
Thus it is equal to our base rate (the intercept).
When we have $BS = 1$, $W = 0$ (Blue and sunny),
$$
Predation = (15*1) - (10*0) + (5*1*0) + 1 = 16
$$
Thus you add the term for BS to the intercept.
This is true for the weather as well.
However, when both terms are present, the interaction term comes into play:
$$
Predation = (15*1) -(10*1) + (5*1*1) + 1 = (15-10+5+1) = 11
$$
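The four combinations above can be sketched in a few lines of Python (the coefficients are the hypothetical ones from this example):

```python
# Hypothetical coefficients from the example regression above.
def predation(bs, w):
    """Predicted predation: bs = bathing suit (0 red, 1 blue),
    w = weather (0 sunny, 1 cloudy)."""
    return 15 * bs - 10 * w + 5 * bs * w + 1

# All four combinations, read straight off the one fitted equation:
# (0, 0) ->  1  (intercept only)
# (1, 0) -> 16  (intercept + bathing suit)
# (0, 1) -> -9  (intercept + weather)
# (1, 1) -> 11  (intercept + both main effects + interaction)
for bs in (0, 1):
    for w in (0, 1):
        print(f"BS={bs}, W={w}: {predation(bs, w)}")
```

Note that the interaction coefficient only ever contributes when both covariates are 1, which is exactly why it can be reported as a single number.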
This can be applied to any regression coefficients. The coefficient, whether a slope coefficient (m), a beta coefficient, a standardized m, or an OR, represents the increase in the response when moving from one level of the independent variable to the other.
Thus we return to your question. It is sufficient, and preferable, to report only one OR, which can then be interpreted for both situations by including the magnitude of the interaction term.
Hope that helps.
Approaches that naively select model terms based on p-values or AIC cut-offs (either in a multivariable model via stepwise or similar selection, or by screening many univariate models) lead to models that may fit the particular dataset well but will otherwise not be useful. Models constructed in such a fashion tend to wrongly identify irrelevant variables as relevant (while missing truly relevant ones - assuming the model used is some reasonable approximation to nature, in which some variables are relevant and some are not) and have poor predictive properties on new datasets. Such approaches are nevertheless still often used, and one can even occasionally get such work published in well-respected journals, but they are quite thoroughly discredited in the statistical community. There are many more appropriate approaches, e.g. bootstrapping naive model-building procedures, cross-validation, random forests, model averaging, variable selection priors, etc., that should be used instead.
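As one illustration of the alternatives, here is a minimal cross-validation sketch in pure Python; the `fit` and `predict` helpers are hypothetical placeholders for whatever model-fitting routine you use, and the point is to score candidate models on held-out data rather than on in-sample p-values:

```python
import random

def kfold_mse(xs, ys, fit, predict, k=5, seed=0):
    """Estimate out-of-sample mean squared error by k-fold cross-validation.
    `fit(train_xs, train_ys) -> model` and `predict(model, x) -> prediction`
    are user-supplied; both names are illustrative, not a real library API."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)          # reproducible random fold assignment
    folds = [idx[i::k] for i in range(k)]
    fold_errors = []
    for fold in folds:
        held_out = set(fold)
        train = [i for i in idx if i not in held_out]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        mse = sum((predict(model, xs[i]) - ys[i]) ** 2 for i in fold) / len(fold)
        fold_errors.append(mse)
    return sum(fold_errors) / len(fold_errors)
```

Comparing candidate models by this held-out error, rather than by in-sample significance, guards against exactly the overfitting described above.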
Best Answer
A rule of thumb used in epidemiology, though not a statistically rigorous one, is called "the 10% rule". It states that when the odds ratio (OR) changes by 10% or more upon including a confounder in your model, the confounder must be controlled for by leaving it in the model. If a 10% change in the OR is not observed, you can remove the variable from your model, as it does not need to be controlled for.
EDIT:
What I think you are asking is:
1.) How to adjust for confounding via analysis
2.) Whether interaction is different from confounding.
An important thing to understand about confounding is that it is generally assessed on a dataset-by-dataset basis. This works with the 10% rule. Essentially, if the OR of your exposure/outcome relationship does not change by 10% or more after adding the third variable into the model, there is not good enough evidence of confounding to keep it in the model. This is the case even if other literature suggests the third variable may be a confounder. You could hypothetically put one exposure and 10 extra variables into your model, but if none of the variables change the exposure/outcome relationship (OR), then you should not keep them in the model. Leaving the variables in would still control for them, but if they do not change your exposure/outcome relationship, they are unnecessary and will only cloud your conclusions. This article gives some other methods for addressing confounding, including the 10% rule (just click on view PDF): Hernan 2002
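The 10% rule itself is just arithmetic on the two estimates; a minimal sketch (the function name, interface, and example numbers are illustrative, not from the cited article):

```python
def confounder_changes_or(crude_or, adjusted_or, threshold=0.10):
    """10% rule sketch: True if adding the candidate confounder moved the
    exposure/outcome OR by at least `threshold` (10%) relative to the
    crude (unadjusted) estimate, i.e. the variable should stay in the model."""
    return abs(adjusted_or - crude_or) / crude_or >= threshold

# Crude OR 2.0 vs adjusted OR 2.5: a 25% change, so keep the confounder.
# Crude OR 2.0 vs adjusted OR 2.1: only a 5% change, so it can be dropped.
```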
Interaction differs from confounding in that your exposure/outcome relationship is different at different levels of a third variable. Essentially, rather than the third variable influencing the OR the way a confounder does, the third variable produces different ORs for its different categories. An easy example uses gender (Male/Female). Say you are interested in the effect of cigarette smoking on cancer, and you conclude that smoking gives an OR of 3.0 for cancer. However, you wish to examine whether gender is an interactive term. After entering the interaction term into your model, you find that the OR for cigarette smoking and cancer is 2.0 among males and 4.0 among females. In this case, reporting a single OR of 3.0 would be misleading, as there is a clear difference between males and females.
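To make the stratified ORs concrete, here is a small sketch with made-up 2x2 counts (all numbers purely illustrative) chosen to reproduce ORs of 2.0 and 4.0 in the two gender strata:

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds ratio from a 2x2 table: (a*d) / (b*c)."""
    return (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Hypothetical stratum-specific counts (smokers vs non-smokers, cancer vs not):
or_males = odds_ratio(40, 20, 20, 20)    # (40*20)/(20*20) = 2.0
or_females = odds_ratio(40, 10, 20, 20)  # (40*20)/(10*20) = 4.0
# The stratum-specific ORs differ, so one pooled OR of ~3 would be misleading.
```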
Please let me know if this has clarified any issues you have.