It depends on whether you are doing...
a) predictive research, where you don't care about what is causally responsible, only what serves as an efficient set of indicators, or
b) explanatory research, where you want to disentangle causal relationships as much as you can.
In the latter, when multiple correlated predictors vie for a role in your equation, you would care about such things as giving "causal credit" to earlier factors over later ones, since what comes later can never cause what came before, while the reverse is sometimes true. You would care about giving more "credit" to relatively objective, relatively fixed variables such as marital status or ethnicity than to relatively subjective, changeable ones such as attitudes and opinions. And (here I'm paraphrasing James Davis's The Logic of Causal Order) you would want to choose more generative factors such as socioeconomic status over less generative ones such as what brand of toothpaste a person uses.
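To make the "causal credit" point concrete, here is a minimal simulation (my own illustration, not from Davis; the variable names and effect sizes are made up). An earlier, generative factor x1 drives both a later variable x2 and the outcome y; fit y on x2 alone and x2 absorbs credit that, causally, belongs to x1:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5_000
x1 = rng.normal(size=n)                    # earlier, generative factor
x2 = 0.8 * x1 + rng.normal(size=n)         # later variable, driven by x1
y = 1.0 * x1 + 0.2 * x2 + rng.normal(size=n)

# Fit y on x2 alone: x2 soaks up credit that really belongs to x1.
m_alone = sm.OLS(y, sm.add_constant(x2)).fit()

# Fit y on both: with x1 in the model, x2's coefficient falls back
# toward its true (small) causal contribution.
X_both = sm.add_constant(np.column_stack([x1, x2]))
m_both = sm.OLS(y, X_both).fit()

print("x2 alone:  ", m_alone.params[1])    # inflated, roughly 0.7
print("x2 with x1:", m_both.params[2])     # close to the true 0.2
```

No algorithm looking only at fit could tell you which of those two coefficients for x2 is the "right" one; that judgment rests on knowing x1 comes causally first.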
When your candidate predictors are correlated, no statistical algorithm (such as stepwise regression) can resolve these issues of explanation for you. It is up to you as a researcher to think through your candidate variables and choose those that will best serve your purpose. Only in purely predictive research can you ignore such issues and simply choose the predictors that account for the most variance in the outcome (or, in your case, produce the highest pseudo-R-squared).
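For the purely predictive case, the screening meant here might look like the following sketch. It assumes a binary outcome and uses McFadden's pseudo-R-squared, 1 - logLik(model)/logLik(null), since the question mentions a pseudo-R-squared; the data are simulated purely for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2_000
X = rng.normal(size=(n, 3))                  # three candidate predictors
p = 1 / (1 + np.exp(-(0.1 * X[:, 0] + 1.0 * X[:, 1])))
y = rng.binomial(1, p)

# Log-likelihood of the intercept-only (null) model.
null_llf = sm.Logit(y, np.ones((n, 1))).fit(disp=0).llf

# Rank each candidate predictor by McFadden's pseudo-R-squared.
for j in range(X.shape[1]):
    m = sm.Logit(y, sm.add_constant(X[:, j])).fit(disp=0)
    pseudo_r2 = 1 - m.llf / null_llf
    print(f"x{j}: McFadden pseudo-R2 = {pseudo_r2:.3f}")
```

For prediction, picking whichever predictors score highest here is defensible; for explanation, it is not, for the reasons above.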
Your question gets to the heart of important issues in many types of multivariate modelling, and if more than 5 tags were allowed I would also have listed multicollinearity, model-building, and/or variable selection.
Best Answer
In general, if the null hypothesis that a regression coefficient equals 0 cannot be rejected, that suggests the variable has little influence on the outcome and so should not be included. However, it could be that the sample size is small, so the variance of the coefficient estimate is too large to rule out the possibility that it is zero. The question then becomes: how much bigger than 0 does the coefficient need to be before you would want it in the model? Decide on that minimum magnitude, then work out how large a sample you need to detect a departure from 0 of at least that size.

If the sample size is large and all the estimates are close to 0, exclude those variables and look for other predictors that might do better. But if the sample size is small and, say, a 95% confidence interval for a predictor is wide enough to contain both 0 and a slope large enough to matter, consider increasing the sample size before you decide.
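One way to act on that advice is a quick power simulation: fix the smallest slope you would still care about, then see how large n must be before a test of the coefficient against 0 reliably detects it. This sketch is illustrative only (the effect size beta_min, the noise level, and the candidate sample sizes are all assumptions):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def power_for_slope(beta_min, n, sigma=1.0, alpha=0.05, n_sims=500):
    """Fraction of simulated datasets in which H0: beta = 0 is rejected."""
    rejections = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        y = beta_min * x + rng.normal(scale=sigma, size=n)
        m = sm.OLS(y, sm.add_constant(x)).fit()
        if m.pvalues[1] < alpha:        # p-value for the slope
            rejections += 1
    return rejections / n_sims

# Increase n until the smallest slope we care about is reliably detected.
for n in (25, 50, 100, 200):
    print(f"n={n:4d}: estimated power = {power_for_slope(0.3, n):.2f}")
```

If the power at your current sample size is low for the smallest effect you care about, a non-significant coefficient tells you little, and collecting more data before dropping the variable is the safer course.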