The coefficient of variation is not strongly associated with the normal distribution at all. It is most obviously pertinent for distributions like the lognormal or gamma. See e.g. this thread.
Looking at ratios such as interquartile range/median is possible. In many situations that ratio might be more resistant to extreme values than the coefficient of variation. The measure seems neither common nor especially useful, but it certainly predates 2010. Tastes vary, but I see no reason to call that ratio nonparametric; it just uses different parameters.
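To see the resistance in action, here is a minimal numpy sketch comparing IQR/median with the coefficient of variation; the lognormal sample and the single gross outlier are made up purely for illustration:

```python
import numpy as np

def iqr_over_median(a):
    """Robust spread/location ratio: interquartile range divided by the median."""
    q1, med, q3 = np.percentile(a, [25, 50, 75])
    return (q3 - q1) / med

cv = lambda a: a.std() / a.mean()  # ordinary coefficient of variation

rng = np.random.default_rng(0)
x = rng.lognormal(size=10_000)
x_out = np.append(x, 1e6)  # contaminate with one gross outlier

print(iqr_over_median(x), iqr_over_median(x_out))  # barely moves
print(cv(x), cv(x_out))                            # inflates dramatically
```

The quartiles and median shift by at most one rank when a single point is added, while the mean and standard deviation are both dragged far upward by the outlier.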
A much better developed approach is to use the ratio of the second and first $L$-moment. The first $L$-moment is just the mean, but the second $L$-moment has more resistance than the standard deviation. Start (e.g.) here for more on $L$-moments.
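The sample version of that ratio (often called the L-CV) is easy to compute from Hosking's probability-weighted-moment estimators; the following is a minimal numpy sketch, with the simulated data chosen only for illustration:

```python
import numpy as np

def l_cv(x):
    """L-CV: ratio of the second to the first sample L-moment,
    via Hosking's unbiased probability-weighted-moment estimators."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()                             # PWM beta_0
    b1 = np.sum((i - 1) * x) / (n * (n - 1))  # PWM beta_1
    l1 = b0           # first L-moment: the mean
    l2 = 2 * b1 - b0  # second L-moment: a resistant scale measure
    return l2 / l1
```

For reference, the population L-CV is 1/2 for an exponential distribution and 1/3 for a uniform distribution, which gives a quick sanity check on the estimator.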
Whenever the coefficient of variation seems natural, that's usually a sign that analyses should be conducted on a logarithmic scale. If CV is (approximately) constant, then SD is proportional to the mean, which goes with comparisons and changes being multiplicative rather than additive, which implies thinking logarithmically.
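A quick simulation makes the connection concrete: lognormal groups with a common log-scale SD (the value 0.4 here is an arbitrary choice) show a roughly constant CV on the raw scale and a constant SD after taking logs.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.4  # SD on the log scale, held constant across groups
for mu in (0.0, 1.0, 2.0):  # group means on the log scale
    x = rng.lognormal(mean=mu, sigma=sigma, size=100_000)
    cv = x.std() / x.mean()   # approx. constant: sqrt(exp(sigma**2) - 1)
    sd_log = np.log(x).std()  # approx. sigma in every group
    print(f"mu={mu}: CV={cv:.3f}, SD(log x)={sd_log:.3f}")
```

Constant CV on the raw scale is exactly constant SD on the log scale, which is why the logarithmic analysis is the natural one here.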
Note: The paper cited starts quite well, but then focuses on testing the CV when the distribution is normal. As above, if the distribution is normal, then the CV seems utterly uninteresting in practice, so the emphasis is puzzling to me. Your inclinations may differ.
As Johnnyboycurtis has answered, non-parametric methods are those that make no assumptions about the population distribution or sample size when generating a model.
A k-NN model is an example of a non-parametric model, as it makes no assumptions in order to build the model. Naive Bayes and k-means are examples of parametric methods, as each assumes a distribution when creating its model.
For instance, k-means assumes the following to develop a model:
- All clusters are spherical (i.i.d. Gaussian).
- All axes have the same distribution and thus the same variance.
- All clusters are evenly sized.
k-NN, by contrast, uses the complete training set for prediction: it finds the nearest neighbours of each test point and assumes no distribution when creating the model.
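That idea can be sketched in a few lines of numpy; the toy points below are made up, and this is a bare-bones illustration rather than a full implementation:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """k-NN keeps the entire training set and makes no distributional
    assumptions: each test point is labelled by a majority vote among
    its k nearest training neighbours (Euclidean distance)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)  # distance to every training point
        nearest = y_train[np.argsort(d)[:k]]     # labels of the k closest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

X_train = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
y_train = np.array([0, 0, 1, 1])
X_test = np.array([[0.2, 0.3], [5.5, 5.2]])
print(knn_predict(X_train, y_train, X_test, k=3))
```

Note that no parameters are estimated and nothing is fitted: the "model" is the training data itself, which is exactly the sense in which k-NN is non-parametric.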
For more info:
- http://pages.cs.wisc.edu/~jerryzhu/cs731/stat.pdf
- https://stats.stackexchange.com/a/133841/86202
- https://stats.stackexchange.com/a/133694/86202
It is fundamentally difficult to tell exactly what is meant by a "parametric test" and a "non-parametric test", though there are many concrete examples where most will agree on whether a test is parametric or non-parametric (but never both). A quick search gave this table, which I imagine represents a common practical distinction in some areas between parametric and non-parametric tests.
Just above the table referred to there is a remark:
"... parametric data has an underlying normal distribution .... Anything else is non-parametric."
It may be an accepted criterion in some areas that we either assume normality and use ANOVA, and this is parametric, or we don't assume normality and use non-parametric alternatives.
It's perhaps not a very good definition, and it's not really correct in my opinion, but it may be a practical rule of thumb. Mostly because the end goal in the social sciences, say, is to analyze data, and what good is it to be able to formulate a parametric model based on a non-normal distribution and then not be able to analyze the data?
An alternative definition is to define "non-parametric tests" as tests that do not rely on distributional assumptions, and parametric tests as anything else.
Both definitions presented define one class of tests and then define the other class as its complement (anything else). By definition, this rules out a test being parametric as well as non-parametric.
The truth is that the latter definition is also problematic. What if there are certain natural "non-parametric" assumptions, such as symmetry, that can be imposed? Will that turn a test statistic that does not otherwise rely on any distributional assumptions into a parametric test? Most would say no!
Hence there are tests in the class of non-parametric tests that are allowed to make some distributional assumptions $-$ as long as they are not "too parametric". The borderline between the "parametric" and the "non-parametric" tests has become blurred, but I believe that most will uphold that either a test is parametric or it is non-parametric; perhaps it can be neither, but saying that it is both makes little sense.
Taking a different point of view, many parametric tests are (equivalent to) likelihood ratio tests. This makes a general theory possible, and we have a unified understanding of the distributional properties of likelihood ratio tests under suitable regularity conditions. Non-parametric tests are, on the contrary, not equivalent to likelihood ratio tests per se $-$ there is no likelihood $-$ and without the unifying methodology based on the likelihood we have to derive distributional results on a case-by-case basis. The theory of empirical likelihood developed mainly by Art Owen at Stanford is, however, a very interesting compromise. It offers a likelihood based approach to statistics (an important point to me, as I regard the likelihood as a more important object than a $p$-value, say) without the need of typical parametric distributional assumptions. The fundamental idea is a clever use of the multinomial distribution on the empirical data, the methods are very "parametric" yet valid without restricting parametric assumptions.
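To make Owen's construction concrete in the simplest case, testing a mean, here is a numpy-only sketch; the bisection tolerance, iteration count, and test data are arbitrary choices for illustration:

```python
import numpy as np

def el_log_ratio(x, mu0):
    """-2 log empirical likelihood ratio for the hypothesis E[X] = mu0.
    Weights w_i = 1 / (n * (1 + lam * (x_i - mu0))) maximise prod(n * w_i)
    subject to sum(w_i) = 1 and sum(w_i * (x_i - mu0)) = 0 (Owen's setup).
    Asymptotically chi-squared with 1 df under the null."""
    z = np.asarray(x, dtype=float) - mu0
    if z.min() >= 0 or z.max() <= 0:
        return float("inf")  # mu0 outside the data's convex hull: EL is zero
    # Solve sum(z / (1 + lam * z)) = 0 for lam by bisection; the weights
    # stay positive only for lam in (-1/max(z), -1/min(z)).
    lo, hi = -1.0 / z.max(), -1.0 / z.min()
    lo, hi = lo + 1e-9 * (hi - lo), hi - 1e-9 * (hi - lo)
    for _ in range(200):  # the objective is strictly decreasing in lam
        mid = 0.5 * (lo + hi)
        if np.sum(z / (1.0 + mid * z)) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * np.sum(np.log1p(lam * z))
```

At the sample mean the statistic is zero (the multinomial weights are uniform), and it grows as the hypothesised mean moves away, mirroring the behaviour of a parametric likelihood ratio without any parametric distributional assumption.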
Tests based on empirical likelihood have, IMHO, the virtues of parametric tests and the generality of non-parametric tests; hence, among the tests I can think of, they come closest to qualifying as parametric as well as non-parametric, though I would not use this terminology.