It is fundamentally difficult to tell exactly what is meant by a "parametric test" and a "non-parametric test", though there are many concrete examples where most will agree on whether a test is parametric or non-parametric (but never both). A quick search gave this table, which I imagine represents a common practical distinction in some areas between parametric and non-parametric tests.
Just above the table referred to there is a remark:
"... parametric data has an underlying normal distribution .... Anything else is non-parametric."
It may be an accepted criterion in some areas that either we assume normality and use ANOVA, and this is parametric, or we don't assume normality and use non-parametric alternatives.
It's perhaps not a very good definition, and in my opinion not really correct, but it may be a practical rule of thumb $-$ mostly because the end goal in, say, the social sciences is to analyze data, and what good is it to be able to formulate a parametric model based on a non-normal distribution and then not be able to analyze the data?
An alternative definition is to define "non-parametric tests" as tests that do not rely on distributional assumptions, and parametric tests as anything else.
Both definitions presented define one class of tests and then define the other class as the complement (anything else). By construction, this rules out that a test can be parametric as well as non-parametric.
The truth is that the latter definition is problematic as well. What if there are certain natural "non-parametric" assumptions, such as symmetry, that can be imposed? Will that turn a test statistic that does not otherwise rely on any distributional assumptions into a parametric test? Most would say no!
Hence there are tests in the class of non-parametric tests that are allowed to make some distributional assumptions $-$ as long as they are not "too parametric". The borderline between "parametric" and "non-parametric" tests has become blurred, but I believe most will uphold that a test is either parametric or non-parametric; perhaps it can be neither, but saying that it is both makes little sense.
Taking a different point of view, many parametric tests are (equivalent to) likelihood ratio tests. This makes a general theory possible, and we have a unified understanding of the distributional properties of likelihood ratio tests under suitable regularity conditions. Non-parametric tests are, on the contrary, not equivalent to likelihood ratio tests per se $-$ there is no likelihood $-$ and without the unifying methodology based on the likelihood we have to derive distributional results on a case-by-case basis. The theory of empirical likelihood developed mainly by Art Owen at Stanford is, however, a very interesting compromise. It offers a likelihood-based approach to statistics (an important point to me, as I regard the likelihood as a more important object than a $p$-value, say) without the need for typical parametric distributional assumptions. The fundamental idea is a clever use of the multinomial distribution on the empirical data; the methods are very "parametric", yet valid without restrictive parametric assumptions.
Tests based on empirical likelihood have, IMHO, the virtues of parametric tests and the generality of non-parametric tests; hence, among the tests I can think of, they come closest to qualifying as parametric as well as non-parametric, though I would not use this terminology.
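To make the empirical likelihood idea concrete, here is a minimal sketch (my own illustration, not Owen's code; the function name and bracketing scheme are assumptions) of the empirical log-likelihood-ratio statistic for a single mean. Multinomial weights $p_i$ on the observed data points are chosen to maximize $\prod_i n p_i$ subject to $\sum_i p_i = 1$ and $\sum_i p_i x_i = \mu$; the Lagrange multiplier is found by one-dimensional root-finding.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio_stat(x, mu):
    """-2 log empirical likelihood ratio for the mean mu (sketch).

    The maximizing weights have the form p_i = 1 / (n * (1 + lam * (x_i - mu))),
    where the multiplier lam solves sum((x_i - mu) / (1 + lam * (x_i - mu))) = 0.
    """
    x = np.asarray(x, dtype=float)
    d = x - mu
    if not (x.min() < mu < x.max()):
        raise ValueError("mu must lie strictly inside the data range")
    # lam must keep every 1 + lam * d_i strictly positive
    eps = 1e-10
    lo = -1.0 / d.max() + eps
    hi = -1.0 / d.min() - eps
    g = lambda lam: np.sum(d / (1.0 + lam * d))   # decreasing in lam
    lam = brentq(g, lo, hi)
    # -2 log R(mu) = 2 * sum log(1 + lam * (x_i - mu)); approx chi^2_1 under H0
    return 2.0 * np.sum(np.log1p(lam * d))

# The statistic is ~0 at the sample mean and grows as mu moves away,
# just like a parametric log-likelihood-ratio statistic would.
x = np.arange(1.0, 11.0)
stat = el_log_ratio_stat(x, 4.0)
```

The point of the sketch is the "parametric yet non-parametric" flavor: the statistic behaves like a likelihood ratio (compare it with a $\chi^2_1$ quantile), but nothing about the data distribution beyond the mean constraint was assumed.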
The question implies that the standard deviation (SD) is somehow normalized and so can be used to compare the variability of two different populations. Not so. As Peter and John said, that kind of normalization is what the coefficient of variation (CV) provides, which equals SD/Mean. The SD is in the same units as the original data. In contrast, the CV is a unitless ratio.
Your choice 1 (IQR/Median) is analogous to the CV. Like the CV, it only makes sense when the data are ratio data, meaning that zero is really zero: a weight of zero is no weight, a length of zero is no length. As a counterexample, it would not make sense for temperature in C or F, as zero degrees (C or F) does not mean there is no temperature. Simply switching between the C and F scales would give you a different value for the CV or for the ratio IQR/Median, which makes both those ratios meaningless for such data.
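A quick numerical check of this point (the helper names `cv` and `iqr_over_median` are my own): a pure rescaling such as kg $\to$ lb leaves both ratios unchanged, while an affine change of units such as C $\to$ F, which shifts the zero point, changes both.

```python
import numpy as np

def cv(a):
    """Coefficient of variation: SD / mean (only meaningful for ratio data)."""
    return a.std(ddof=1) / a.mean()

def iqr_over_median(a):
    """Interquartile range divided by the median."""
    q75, q25 = np.percentile(a, [75, 25])
    return (q75 - q25) / np.median(a)

# Pure rescaling (kg -> lb): zero stays zero, both ratios are unchanged.
weights_kg = np.array([50.0, 60.0, 70.0, 80.0])
weights_lb = weights_kg * 2.20462

# Affine unit change (C -> F): the zero point shifts, both ratios change.
temps_c = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
temps_f = temps_c * 9 / 5 + 32
```

Running this shows `cv(weights_kg)` agreeing with `cv(weights_lb)` to machine precision, while `cv(temps_c)` and `cv(temps_f)` differ substantially; the same holds for IQR/Median.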
I agree with Peter and John that your second idea (Range/IQR) would not be very robust to outliers, so probably wouldn't be useful.
The coefficient of variation is not strongly associated with the normal distribution at all. It is most obviously pertinent for distributions like the lognormal or gamma. See e.g. this thread.
Looking at ratios such as interquartile range/median is possible. In many situations that ratio might be more resistant to extreme values than the coefficient of variation. The measure seems neither common nor especially useful, but it certainly predates 2010. Tastes vary, but I see no reason to call that ratio nonparametric; it just uses different parameters.
A much better developed approach is to use the ratio of the second and first $L$-moment. The first $L$-moment is just the mean, but the second $L$-moment has more resistance than the standard deviation. Start (e.g.) here for more on $L$-moments.
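To illustrate, here is a short sketch of the first two sample $L$-moments via the standard unbiased probability-weighted-moment estimators ($l_1 = b_0$, $l_2 = 2 b_1 - b_0$); the function name is my own.

```python
import numpy as np

def l_moments_12(x):
    """First two sample L-moments via unbiased PWM estimators."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()                                   # PWM beta_0
    b1 = np.sum(np.arange(n) / (n - 1) * x) / n     # PWM beta_1
    return b0, 2 * b1 - b0                          # l1, l2

# The ratio l2/l1 (the "L-CV") is the L-moment analogue of SD/mean.
l1, l2 = l_moments_12([1.0, 2.0, 3.0, 4.0])
tau = l2 / l1
```

Note that $l_2$ equals half the mean absolute difference between two randomly drawn observations, which is why it is more resistant than the standard deviation: extreme values enter linearly rather than quadratically.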
Whenever the coefficient of variation seems natural, that's usually a sign that analyses should be conducted on a logarithmic scale. If CV is (approximately) constant, then SD is proportional to the mean, which goes with comparisons and changes being multiplicative rather than additive, which implies thinking logarithmically.
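A simulated check of that chain of reasoning (my own toy example): two lognormal groups sharing a common $\sigma$ have means differing roughly tenfold, yet essentially the same CV, and on the log scale their variability is additive and constant.

```python
import numpy as np

rng = np.random.default_rng(0)
# Common sigma => SD proportional to the mean => (about) constant CV.
g1 = rng.lognormal(mean=0.0, sigma=0.5, size=200_000)
g2 = rng.lognormal(mean=np.log(10.0), sigma=0.5, size=200_000)

cv1, cv2 = g1.std() / g1.mean(), g2.std() / g2.mean()

# On the log scale the same variability is additive and constant (~0.5).
sd_log1, sd_log2 = np.log(g1).std(), np.log(g2).std()
```

Here `cv1` and `cv2` come out nearly equal even though the group means differ by a factor of about ten, while both log-scale SDs sit near the common $\sigma = 0.5$ $-$ the multiplicative structure has become additive.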
Note: The paper cited starts quite well, but then focuses on testing the CV when the distribution is normal. As above, if the distribution is normal, then the CV seems utterly uninteresting in practice, so the emphasis is puzzling to me. Your inclinations may differ.