I disagree that a 50% cut-off is either inherently valid or supported by the literature. The only case where such a cut-off might be justified is a case-control design where the prevalence of the outcome is exactly 50%, and even then the choice would be subject to a few conditions. I think the principal rationale for the choice of cut-off is the desired operating characteristic of the diagnostic test.
A cut-off may be chosen to achieve a desired sensitivity or specificity; for examples of this, consult the medical devices literature, where sensitivity is often fixed in advance at a value such as 80%, 90%, 95%, 99%, 99.9%, or 99.99%. The sensitivity/specificity trade-off should be weighed against the harms of Type I and Type II errors (false positives and false negatives). Often, as with statistical testing, the harm of one type of error dominates and so we control that risk; fixing sensitivity, as above, controls the Type II error rate. Still, these harms are rarely quantifiable. Because of that, I have major objections to cut-off selection methods that rely on a single measure of predictive accuracy: they convey, incorrectly, that the harms can be and have been quantified.
Your issue of too many false positives is an example of the contrary: there the Type I error (a false positive) may be the more harmful one. Then you may set the threshold to achieve a desired specificity, and report the achieved sensitivity at that threshold.
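As a concrete sketch of that recipe (the function name is my own, and `y_true`/`y_score` are assumed to hold the binary outcomes and the model's predicted probabilities, with both classes present):

```python
import numpy as np

def threshold_for_specificity(y_true, y_score, target_spec):
    """Return the smallest cut-off whose empirical specificity reaches
    target_spec, together with the sensitivity achieved there."""
    y_true = np.asarray(y_true, dtype=bool)
    y_score = np.asarray(y_score, dtype=float)
    for t in np.unique(y_score):              # candidate cut-offs, ascending
        pred_pos = y_score >= t
        spec = np.mean(~pred_pos[~y_true])    # true-negative rate
        if spec >= target_spec:
            # first hit = smallest such cut-off, which maximizes
            # sensitivity subject to the specificity constraint
            sens = np.mean(pred_pos[y_true])  # true-positive rate
            return t, sens
    return None, None                         # no cut-off attains the target
```

Swapping the roles of the two classes gives the analogous routine for a target sensitivity, as in the device examples above.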
If you find that both are too low to be acceptable in practice, your risk model does not work and should be rejected.
Sensitivity and specificity are easily calculated, or looked up from a table, over the entire range of possible cut-off values. The trouble with the ROC curve is that it omits the specific cut-off information from the graphic; the ROC curve is therefore irrelevant for choosing a cut-off value.
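Producing that table takes only a few lines. Here is a minimal sketch assuming scikit-learn is available (the arrays are stand-ins); note that `roc_curve` returns the cut-offs alongside the curve's coordinates, which is exactly the information the ROC graphic itself discards:

```python
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1])                # stand-in binary outcomes
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.6])  # stand-in predicted probabilities

# Tabulate sensitivity and specificity at every candidate cut-off.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
for t, sens, spec in zip(thresholds, tpr, 1 - fpr):
    print(f"cut-off {t:>5.2f}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```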
Best Answer
That depends on what you mean by "optimal": there is no universally best cut-off. You need to choose a loss function, i.e. to state what each kind of misclassification costs you.
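For the simplest loss, a fixed cost c_FP per false positive and c_FN per false negative, the expected loss is minimized by predicting "positive" exactly when the predicted probability p satisfies p > c_FP / (c_FP + c_FN); a 50% cut-off is the special case of equal costs. A minimal sketch, with placeholder costs and stand-in probabilities:

```python
import numpy as np

# Hypothetical costs: a missed case taken to be 10x worse than a false alarm.
c_fp, c_fn = 1.0, 10.0

# Predicting "positive" costs (1 - p) * c_fp in expectation and predicting
# "negative" costs p * c_fn, so the break-even probability is:
cutoff = c_fp / (c_fp + c_fn)           # 1/11, nowhere near 50%

y_score = np.array([0.05, 0.20, 0.70])  # stand-in predicted probabilities
y_pred = y_score > cutoff               # array([False, True, True])
```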
That said, as mentioned in the comments, logistic regression is a method for probabilistic classification rather than discrete classification: it estimates class probabilities instead of assigning labels outright. So if all you need as a predicted output is a class, is logistic regression really what you want?