As I see it, the possibility of refusing classification as "too uncertain" is the whole point of choosing a threshold (as opposed to always assigning the class with the highest predicted probability).
Of course, you should have some justification for setting the threshold to 0.5: you could just as well set it to 0.9 or any other value that is reasonable for your application.
You describe a setup with mutually exclusive classes (closed-world problem). "No class reaches the threshold" can always happen as soon as that threshold is higher than $1/n_{classes}$, i.e. the same problem occurs in a 2-class problem with a threshold of, say, 0.9. For a threshold of exactly $1/n_{classes}$ it could happen in theory, but in practice it is highly unlikely.
So your problem is not specific to the 3-class set-up, it is just more pronounced there.
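A minimal sketch of such a reject option, assuming an sklearn-style classifier with `predict_proba` (the data, the model, and the 0.5 threshold are placeholders):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder 3-class data and model
X, y = make_classification(n_samples=300, n_classes=3, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

proba = clf.predict_proba(X_test)      # shape (n_samples, n_classes)
threshold = 0.5                        # needs a justification for your application
top = proba.max(axis=1)
# assign the most probable class only if it reaches the threshold, else refuse (-1)
pred = np.where(top >= threshold, proba.argmax(axis=1), -1)
print(f"refused as 'too uncertain': {(pred == -1).mean():.1%} of the test samples")
```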
To your second question: you can compute ROC curves for any kind of continuous output score; the scores don't even need to claim to be probabilities. Personally, I don't calibrate, because I don't want to waste another test set on that (I work with very restricted sample sizes). The shape of the ROC won't change anyway.
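As a small illustration (placeholder data, with an SVC's decision function as the score source): the AUC computed from the raw scores is identical to the AUC after any monotone transformation, e.g. the sigmoid used in calibration, because the ROC depends only on the ranking of the samples.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scores = SVC().fit(X_train, y_train).decision_function(X_test)  # not probabilities
print(roc_auc_score(y_test, scores))                            # AUC from raw scores
print(roc_auc_score(y_test, 1 / (1 + np.exp(-scores))))         # identical after a sigmoid
```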
Answer to your comment:
The ROC conceptually belongs to a set-up that in my field is called single-class classification: does a patient have a particular disease or not. From that point of view, you can assign a 10% probability that the patient does have the disease. But this does not imply that he has something specific with 90% probability: the complementary 90% actually belongs to a "dummy" class, namely not having that disease. For some diseases and tests, finding every affected patient may be so important that you set your working point at a threshold of 0.1. A textbook example of choosing such an extreme working point is HIV testing of blood donations.
So for constructing the ROC for class A (you'd say: the patient is A-positive), you look at the class A posterior probabilities only. For binary classification with probability(not A) = 1 - probability(A), you don't need to plot the second ROC, as it does not contain any information that is not readily accessible from the first one.
In your 3-class set-up you can plot a ROC for each class. Depending on how you choose your thresholds, no class, exactly one class, or more than one class may be assigned. Whether that is sensible depends on your problem. E.g. if the classes are "Hepatitis", "HIV", and "broken arm", then this policy is appropriate, as a patient may have none or all of these.
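A sketch of these per-class ("class k vs. rest") ROC curves for a 3-class problem, using each class's posterior probability as the score (data and model are placeholders):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_classes=3, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
proba = LogisticRegression(max_iter=1000).fit(X_train, y_train).predict_proba(X_test)

for k in range(proba.shape[1]):
    fpr, tpr, _ = roc_curve(y_test == k, proba[:, k])   # class k vs. everything else
    plt.plot(fpr, tpr, label=f"class {k} (AUC = {auc(fpr, tpr):.2f})")
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend()
plt.show()
```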
I'd like to highlight two possible options for multiclass performance metrics under class imbalance:
For the latter: since you have $N$ classes and ROC/AUC are conceptually designed for 2-class problems, you will likely need to calculate one ROC curve and AUC value per class. This can be done e.g. in a "1-vs-all" manner, where you test for each class how much it is confused with the other classes. The $N$ metrics obtained this way can then be used e.g. to look at the distribution of AUC values (boxplots or similar) in order to compare multiple models and select the best-suited one. If this process needs to be fully automated, consider computing the mean/median and sd/mad of the AUC over all classes (the former indicates the "average" performance over the classes, the latter the spread of that performance). By doing this for all models you obtain scalar values which you can use to select a model suited for your problem.
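A sketch of that automated variant, computing a 1-vs-all AUC per class and summarizing it by mean and standard deviation for two placeholder models:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_classes=3, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("logreg", LogisticRegression(max_iter=1000)),
                    ("forest", RandomForestClassifier(random_state=0))]:
    proba = model.fit(X_train, y_train).predict_proba(X_test)
    # one "1-vs-all" AUC per class
    aucs = np.array([roc_auc_score(y_test == k, proba[:, k]) for k in range(proba.shape[1])])
    print(f"{name}: per-class AUC = {np.round(aucs, 3)}, "
          f"mean = {aucs.mean():.3f}, sd = {aucs.std():.3f}")
```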
SVC's predict just uses its decision function, i.e. the signed distance from the separating hyperplane.
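A small check of this for the binary case (placeholder data): predict can be reconstructed from the sign of the decision function.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

svc = SVC().fit(X_train, y_train)
scores = svc.decision_function(X_test)                   # signed distance to the hyperplane
pred_from_scores = svc.classes_[(scores > 0).astype(int)]
print(np.array_equal(pred_from_scores, svc.predict(X_test)))  # True
```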
According to the sklearn documentation, SVC's predict_proba calibrates the decision values with Platt scaling: a sigmoid is fitted to the SVM scores via an additional cross-validation on the training data. This is expensive, and the resulting probabilities may be inconsistent with predict.
There are many more details in the sklearn user guide (the "Scores and probabilities" section for SVMs). You will have to read the Wu et al. (2004) paper mentioned in that section to figure out how exactly they did it; I am not familiar with it.
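For illustration (placeholder data): predict_proba requires probability=True, and because the probabilities come from a separately fitted sigmoid, the argmax of predict_proba can occasionally disagree with predict.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_classes=3, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

svc = SVC(probability=True, random_state=0).fit(X_train, y_train)
proba = svc.predict_proba(X_test)
disagree = np.mean(svc.classes_[proba.argmax(axis=1)] != svc.predict(X_test))
print(f"predict vs. argmax(predict_proba) disagree on {disagree:.1%} of samples")
```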