It's generally unwise to throw away information, which is what you do with complete-case analysis or by throwing out predictors.
One of the advantages of multiple imputation over a single imputation of missing data is that the result incorporates the variability introduced by the imputation process while, in principle, using all the available information. Coefficients associated with a variable having 30% missing values may thus have larger standard errors than coefficients for variables with few missing values, but there is no a priori reason to omit such a variable. Omitting it might even be worse, as information in the cases having values for that variable might improve the imputations for other variables. Even if for some reason you don't keep it as a predictor variable, it can still be included as part of the imputation process.
The link above provides a simple introduction to the process of generating and using the multiple sets of imputations. You draw the imputations from a probability distribution, perform your regressions on each of the imputation sets, and then pool the results across the sets. With this number of predictors it might be best to do the imputations first and then do feature selection, if feature selection is really necessary. With only 25 predictors you might be better off doing a ridge regression that keeps all the predictors, with appropriate penalization, and tends to treat collinear predictors together.
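To make the impute / fit / pool cycle concrete, here is a minimal Python sketch, using scikit-learn's `IterativeImputer` as a stand-in for mice and pooling with Rubin's rules. The data, model, and all names here are invented for illustration, not taken from the question:

```python
# Sketch of multiple imputation + Rubin's-rules pooling (Python analogue of
# the mice workflow; simulated data, invented names, linear model assumed).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n, p = 200, 4
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -0.5, 0.0, 2.0]) + rng.normal(size=n)
X_miss = X.copy()
X_miss[rng.random((n, p)) < 0.2] = np.nan  # ~20% of values missing at random

m = 5  # number of imputed data sets
coefs, variances = [], []
for seed in range(m):
    imp = IterativeImputer(sample_posterior=True, random_state=seed)
    Xi = imp.fit_transform(X_miss)                  # one imputed data set
    Xd = np.column_stack([np.ones(n), Xi])          # design matrix + intercept
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)   # OLS fit on this imputation
    resid = y - Xd @ beta
    sigma2 = resid @ resid / (n - Xd.shape[1])
    cov = sigma2 * np.linalg.inv(Xd.T @ Xd)
    coefs.append(beta)
    variances.append(np.diag(cov))

coefs, variances = np.array(coefs), np.array(variances)
pooled = coefs.mean(axis=0)            # pooled point estimates
W = variances.mean(axis=0)             # within-imputation variance
B = coefs.var(axis=0, ddof=1)          # between-imputation variance
total_var = W + (1 + 1 / m) * B        # Rubin's rules total variance
pooled_se = np.sqrt(total_var)
```

The key point is the (1 + 1/m)·B term: the between-imputation variance is what propagates imputation uncertainty into the pooled standard errors, which is exactly why a variable with 30% missingness can end up with wider CIs without being dropped.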
The mice package in R provides the tools that you need. The chained-equation approach makes it straightforward to deal with imputations of several variables at a time. You should devote some effort to setting up the structure of the imputations in a way that makes sense based on your understanding of the subject matter.
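The chained-equations (fully conditional specification) idea that mice implements is simple at its core: cycle through the incomplete variables, regress each on the current values of the others, and refill its missing entries. A bare-bones Python illustration of just that loop, without the posterior draws, predictive mean matching, and diagnostics that mice adds (all names invented):

```python
# Bare-bones chained-equations imputation: cycle through incomplete columns,
# regress each on the current values of the others, and refill its gaps.
import numpy as np

def chained_impute(X, n_iter=10):
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):                      # start from mean imputation
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])
            beta, *_ = np.linalg.lstsq(
                A[~miss[:, j]], X[~miss[:, j], j], rcond=None
            )                                        # fit on observed rows only
            X[miss[:, j], j] = A[miss[:, j]] @ beta  # refill from the regression
    return X

rng = np.random.default_rng(1)
Z = rng.normal(size=(100, 3))
Z[:, 2] = Z[:, 0] + 0.1 * rng.normal(size=100)       # correlated columns help
Z_miss = Z.copy()
Z_miss[rng.random((100, 3)) < 0.15] = np.nan
Z_hat = chained_impute(Z_miss)
```

Setting up the structure of the imputations, as mentioned above, corresponds in mice to choices like the predictor matrix and per-variable imputation methods rather than the uniform linear regressions used here.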
Two warnings. First, if one of your predictors is really "missing not at random" (MNAR) in the technical sense, then you will need to use special care and develop a joint model of the outcome variable and the predictor. It's possible, however, to think that data are MNAR when they really might be MAR, as this question illustrates. MAR only requires "given the observed data, [missingness] does not depend on the unobserved data". So consider carefully whether your predictor really threatens to be MNAR.
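A tiny simulation can make the MAR/MNAR distinction concrete. Below, missingness in x2 depends only on the observed x1 (MAR) in one scenario and on x2's own unobserved value (MNAR) in the other; the setup and names are invented for illustration:

```python
# MAR vs MNAR in miniature: under MAR, the probability that x2 is missing
# depends only on the observed x1; under MNAR it depends on x2 itself.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)

mar_missing = rng.random(n) < 1 / (1 + np.exp(-x1))    # driven by observed x1
mnar_missing = rng.random(n) < 1 / (1 + np.exp(-x2))   # driven by x2 itself

# Look at the part of x2 that x1 cannot explain. Under MAR its mean among
# the observed cases stays near 0; under MNAR it is clearly shifted, because
# missingness tracks x2 itself even after accounting for x1.
resid = x2 - 0.8 * x1
resid_mar = resid[~mar_missing].mean()
resid_mnar = resid[~mnar_missing].mean()
```

The MAR case still produces plenty of missingness correlated with the data, which is why it is easy to mistake for MNAR; what matters is that, given the observed x1, the missingness carries no further information about x2.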
Second, you should think about how you will be using this model for prediction. If there are some predictors that are likely to be missing in many cases going forward, not just frequently omitted from your present data set, and you are going to be making predictions on a case-by-case basis, then you have to consider carefully how you would make your predictions in such cases and whether that variable should be included in your model.
Best Answer
If you are referring to a multivariate analysis, the approach of "dropping mostly incomplete factors" may be called a complete factor analysis. Here, inclusion of a variable in a model is conditional upon the completeness of its observations. If, for instance, 20% or more of the values for a variable are missing, we might make a rule to omit that variable.
Getting more n into the analysis sample may tighten CIs, which is the only reason to prefer this over complete-case analysis (listwise deletion), or it may not. Dropping factors will change the interpretation and long-run behavior of the estimates, however. There are reasons to consider complete factor analysis as a sensitivity analysis, but not as a primary analysis.
We generally don't do complete factor analyses because such analyses are biased. The factor you chose to omit was part of a prespecified analysis plan, and thus served an important role in the model in one or both of two ways: 1) it is prognostic of the outcome, and/or 2) it stratifies or reduces confounding. Because of the non-collapsibility of the logit link, you cannot simply "throw measures out" because they do not have the data properties you desire; the inference and estimates will change, and you wind up answering a different question. If the omitted factor is a confounder, these problems are even more severe.
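Non-collapsibility is easy to verify numerically. In the sketch below Z is independent of X, so Z is not a confounder, yet the marginal odds ratio for X (averaging over Z) differs from the conditional odds ratio; the coefficients are made up for illustration:

```python
# Non-collapsibility of the odds ratio: even when Z is independent of X
# (so Z is NOT a confounder), the marginal OR for X differs from the
# conditional OR. Coefficients are an invented illustration.
import math

def sigmoid(t):
    return 1 / (1 + math.exp(-t))

b1, b2 = math.log(2), 3.0            # conditional OR for X is exactly 2
p_z = 0.5                            # Z ~ Bernoulli(0.5), independent of X

def p_y(x):
    """Marginal P(Y=1 | X=x), averaging the logistic model over Z."""
    return (1 - p_z) * sigmoid(b1 * x) + p_z * sigmoid(b1 * x + b2)

def odds(p):
    return p / (1 - p)

conditional_or = math.exp(b1)              # 2.0 at every level of Z
marginal_or = odds(p_y(1)) / odds(p_y(0))  # about 1.73, noticeably below 2
```

So omitting a strong prognostic factor changes the estimand itself, not just the precision, even in the absence of confounding.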
In general, it is preferable for a main analysis to conduct inference that is inefficient rather than biased.
By contrast, if your analysis proposes several separate comparisons, I frequently see uneven n's between those comparisons. This is mainly due to complete-case analysis, a.k.a. listwise deletion. That approach is generally considered reasonable when the assumption that responders form a representative sample holds in each case: one comparison having N=500 and another having N=450 simply means there were 500 responders in one analysis sample and 450 in the next. Describing the missing data carefully helps readers understand the meaning and impact of this approach.