Cronbach's $\alpha$ is designed only for measures that are essentially $\tau$-equivalent, meaning each item contributes equally to the underlying construct. One way to test this is to check whether the items have equal factor loadings in a factor model. If your measures are not $\tau$-equivalent, $\alpha$ will underestimate reliability, regardless of whether the data are continuous or dichotomous.
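To make the $\alpha$ formula concrete, here is a minimal sketch of computing it from a raw score matrix in Python with numpy (the function name and data are just illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

With perfectly parallel items the statistic reaches its ceiling of 1; with weakly related items it drops accordingly.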
There are many indices of internal-consistency reliability. $\omega$ (omega), as described by McDonald (1999), is one of the most flexible for unidimensional constructs, and it extends readily to multidimensional constructs. Here is the procedure I recommend for deciding which measure of reliability to use:
1) First assess dimensionality. Do you have one construct or many? If there are many, then no measure of unidimensional reliability will be accurate. Do this with factor analysis, ideally confirmatory factor analysis (CFA); if you don't have the background or the software, you can use exploratory factor analysis (EFA) instead. If you have more than one substantive factor, you have a multidimensional construct. In that case, look for a measure of multidimensional reliability (these exist for both $\alpha$ and $\omega$; see here). Alternatively, identify the items that don't fit your desired construct and remove them (though take caution here: there are a number of other psychometric tests you should run as well).
2) Assess $\tau$-equivalence. Again, doing this in a factor model may be easiest. Basically, you test whether the loadings are all equal: in a CFA you can constrain the loadings to equality and compare model fit; in an EFA you just have to eyeball the loadings to see if they are reasonably close. If you have $\tau$-equivalence, go ahead and use $\alpha$. If not, use $\omega$.
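The two steps above can be illustrated numerically from a fitted one-factor model's standardized loadings and uniquenesses. A minimal numpy sketch (the loading values are hypothetical, and items are assumed standardized) showing that $\alpha$ equals $\omega$ under $\tau$-equivalence and falls below it otherwise:

```python
import numpy as np

def omega_total(loadings, uniquenesses):
    # McDonald's omega for a one-factor model:
    # (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)
    lam = np.asarray(loadings, dtype=float)
    theta = np.asarray(uniquenesses, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + theta.sum())

def alpha_from_model(loadings, uniquenesses):
    # Population alpha computed from the model-implied covariance
    # matrix: sigma = lam lam' + diag(theta).
    lam = np.asarray(loadings, dtype=float)
    theta = np.asarray(uniquenesses, dtype=float)
    sigma = np.outer(lam, lam) + np.diag(theta)
    k = len(lam)
    return k / (k - 1) * (1 - np.trace(sigma) / sigma.sum())

# Tau-equivalent: four items with equal loadings of 0.7.
lam_eq = [0.7] * 4
theta_eq = [1 - 0.7 ** 2] * 4

# Not tau-equivalent: same construct, unequal loadings.
lam_uneq = [0.9, 0.7, 0.5, 0.3]
theta_uneq = [1 - l ** 2 for l in lam_uneq]
```

With the equal loadings the two indices coincide (about 0.79 here); with the unequal loadings $\alpha$ (about 0.68) falls below $\omega$ (about 0.71), illustrating the underestimation.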
From what I can tell, SPSS does not calculate $\omega$ (see here). In my view, R is one of the best environments out there for psychometrics because it has the flexibility to do all of this. If you don't know R and don't have the time/energy to learn it (it's a big leap from SPSS), then you can probably go with $\alpha$ safely, provided your construct is unidimensional; just keep in mind that the true reliability will be somewhat higher than what $\alpha$ gives you.
Reference:
McDonald, R. P. (1999). Test Theory: A Unified Treatment. New York: Psychology Press.
Before you conclude that the factors are poor, check whether any items correlate negatively with the others. Factor analysis will adjust for this automatically (through negative loadings), but alpha will not.
Then, rather than throwing away whole factors, I would look at each item in each factor and examine its item-rest correlation. Also check for poor item quality (e.g., everyone or almost everyone giving the same answer).
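One quick way to run that check is to correlate each item with the sum of the remaining items; a negative value flags a reverse-keyed or otherwise problematic item. A minimal numpy sketch (the scores below are made up for illustration):

```python
import numpy as np

def item_rest_correlations(items):
    # Correlation of each item with the sum of all *other* items.
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

# Three items keyed the same way plus one reverse-keyed item.
scores = np.column_stack([
    [1, 2, 3, 4, 5],
    [2, 3, 4, 5, 6],
    [1, 1, 3, 5, 5],
    [5, 4, 3, 2, 1],   # negatively related to the rest
])
```

Here `item_rest_correlations(scores)` returns positive values for the first three items and a negative value for the last, which is the pattern to watch for before computing alpha.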
Best Answer
As mentioned in the comments by ttnphs, Cronbach's alpha ($\alpha$), a reliability measure of internal consistency, is not appropriate for ordinal and nominal data, as it was designed for scale (i.e., metric) data. Factor analysis, however, can easily accommodate ordinal and nominal data. When using factor analysis, omega ($\omega$) is typically used as the measure of internal consistency. Unlike $\alpha$, $\omega$ is a model-based estimate of reliability; thus it can only be calculated after the factor analysis has been run (it is returned by default in many software packages), regardless of your item type (just make sure the appropriate link function is used for each item).
I am not sure what you mean by item selection; if you could elaborate on the context, I may be able to help.
Below are a couple of useful articles regarding the use of $\alpha$ in scenarios where factor analysis is the appropriate measurement model.
McNeish, D. (2018). Thanks coefficient alpha, we'll take it from here. Psychological Methods, 23(3), 412.
Raykov, T., & Marcoulides, G. A. (2019). Thanks coefficient alpha, we still need you! Educational and Psychological Measurement, 79(1), 200-210.