It's generally unwise to throw away information, which is what you do with complete-case analysis or by dropping predictors.
One of the advantages of multiple imputation over a single imputation of missing data is that the result incorporates the variability introduced by the imputation process while in principle using all the available information. Coefficients associated with the variable having 30% missing values thus may have larger standard errors than coefficients for variables with few missing values, but there is no a priori reason to omit such a variable. Omitting it might even be worse, as information in the cases that do have values for it can improve the imputations of other variables. Even if for some reason you don't keep it as a predictor in the final model, it can still be included as part of the imputation process.
The link above provides a simple introduction to the process of generating and using the multiple sets of imputations. You draw the imputations from a probability distribution, perform your regressions on each of the imputed data sets, and then pool the results across the sets. With this number of predictors it might be best to do the imputations first and then do feature selection, if feature selection is really necessary. With only 25 predictors you might be better off doing a ridge regression, which keeps all the predictors under appropriate penalization and tends to shrink collinear predictors together.
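In R's mice package, that impute / fit / pool cycle takes only a few lines. A minimal sketch, using the toy `nhanes` data set that ships with mice (your own data and model formula would differ):

```r
library(mice)

# nhanes ships with mice and has missing values in bmi, hyp, and chl
imp <- mice(nhanes, m = 5, seed = 123, printFlag = FALSE)

# Fit the same regression on each of the m completed data sets
fits <- with(imp, lm(chl ~ age + bmi))

# Pool the m sets of estimates with Rubin's rules; the pooled standard
# errors include the between-imputation variability
summary(pool(fits))
```

The pooled standard errors are the point of the exercise: they reflect both the usual sampling variability and the extra uncertainty from not knowing the missing values.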
The mice package in R provides the tools that you need. The chained-equation approach makes it straightforward to impute several variables at a time. You should devote some effort to setting up the structure of the imputations in a way that makes sense based on your understanding of the subject matter.
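One common way to impose that structure in mice is to edit the predictor matrix and method vector before imputing. A sketch, again with the `nhanes` toy data; the specific exclusion shown is purely hypothetical:

```r
library(mice)

# A "dry run" (maxit = 0) returns the default setup without imputing
setup <- mice(nhanes, maxit = 0)
pred <- setup$predictorMatrix
meth <- setup$method

# Encode subject-matter knowledge, e.g. (hypothetically) do not use
# hyp as a predictor when imputing bmi: set that cell to 0
pred["bmi", "hyp"] <- 0

imp <- mice(nhanes, predictorMatrix = pred, method = meth,
            m = 5, seed = 123, printFlag = FALSE)
```

Rows of the predictor matrix correspond to variables being imputed and columns to the variables used to impute them, so individual cells can be switched off without disturbing the rest of the setup.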
Two warnings. First, if one of your predictors is really "missing not at random" (MNAR) in the technical sense, then you will need to use special care and develop a joint model of the outcome variable and the predictor. It's possible, however, to think that data are MNAR when they really might be MAR, as this question illustrates. MAR only requires "given the observed data, [missingness] does not depend on the unobserved data". So consider carefully whether your predictor really threatens to be MNAR.
Second, you should think about how you will be using this model for prediction. If there are some predictors that are likely to be missing in many cases going forward, not just frequently omitted from your present data set, and you are going to be making predictions on a case-by-case basis, then you have to consider carefully how you would make your predictions in such cases and whether that variable should be included in your model.
Is there a way to identify if your data is MNAR, MAR, or MCAR?
There is Little's MCAR test, which can evaluate whether your missings are MCAR. More information can be found here on page 12. As far as I know there is no test available that differentiates between MAR and MNAR. In practice I would say that many people just assume MAR, since the treatment of MNAR is very difficult. However, some information about appropriate methods for MNAR can be found here.
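If you work in R, one implementation of Little's test is `mcar_test()` in the naniar package (an assumption here is that naniar is available; other packages implement the test too). A sketch on a built-in data set with missing values:

```r
library(naniar)  # provides an implementation of Little's MCAR test

# Returns a chi-squared statistic, degrees of freedom, and p-value.
# Small p-values are evidence against MCAR; the test says nothing
# about whether non-MCAR data are MAR or MNAR.
mcar_test(airquality)
```

As noted above, a rejection only rules out MCAR; distinguishing MAR from MNAR cannot be done from the observed data alone.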
And when performing multiple imputation, should you include all predictor variables even if only 1 or 2 variables have missing values?
That depends strongly on your specific data. For data sets with few variables it is often a good approach to use all of them. With larger data sets you should usually do a variable selection, mainly for computational reasons and to exclude noisy predictors (see IWS' comment below). You can find some guidelines here on page 128. Three groups of variables should be included in imputation models: variables that are used in later analyses of the imputed data, variables that are related to the missingness structure, and variables that are strong predictors of the variable you want to impute.
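mice automates a rough version of this selection with `quickpred()`, which keeps, for each incomplete variable, the predictors that correlate with either that variable or its missingness indicator. A sketch (the thresholds shown are illustrative, not recommendations):

```r
library(mice)

# Build a predictor matrix automatically: keep predictors whose
# correlation with the target or its missingness exceeds mincor,
# and require a minimum proportion of usable cases (minpuc)
pred <- quickpred(nhanes, mincor = 0.1, minpuc = 0.25)

imp <- mice(nhanes, predictorMatrix = pred,
            m = 5, seed = 123, printFlag = FALSE)
```

The resulting matrix is a starting point; it is still worth overriding individual cells based on subject-matter knowledge.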
Also once I run my MI and build my logistic model, how do I decide if it is better to go with a model that excludes all missing values through list-wise deletion or with my imputed model?
If done right, it should almost always be better to use the imputed data: you keep a larger data set, and you may also be able to reduce the bias that results from the missingness.
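A side-by-side sketch of the two approaches in R, again on the `nhanes` toy data (treating `hyp == 2` as a stand-in binary outcome is an illustrative choice, not part of the original question):

```r
library(mice)

# Complete-case (list-wise deletion) logistic model: only rows with
# no missing values contribute
cc_fit <- glm(hyp == 2 ~ age + bmi, family = binomial,
              data = na.omit(nhanes))

# Multiply imputed logistic model: every row contributes
imp <- mice(nhanes, m = 20, seed = 123, printFlag = FALSE)
mi_fit <- pool(with(imp, glm(hyp == 2 ~ age + bmi, family = binomial)))

summary(cc_fit)
summary(mi_fit)
```

The imputed analysis uses all the rows rather than only the complete cases, which is where the gains in precision and the potential bias reduction come from.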
Best Answer
There can be "a process behind the generation of the data that influence the missing values" while the data are nevertheless "missing at random" (MAR) in the technical sense (and thus suitable for multiple imputation). What's required for data to be MAR is that "the missingness can be explained by variables on which you have full information".
The problem with data "missing not at random" (MNAR) is that your data by themselves do not contain adequate information about the missingness. Data MNAR could be due to a relation between the probability of missingness and the "true" value itself, but they could also be due to a relation of missingness to some other variable that was not included in the data. That's also why it's impossible to prove that data are MAR; you never know about possible unknown unknowns.