Solved – “Changing” standard deviation

curves, normal distribution, standard deviation

I will admit to being just a hair above a novice when it comes to stats, but I feel like I have a decent working knowledge of things such as normal distributions, linear curves, standard deviations, etc. That said, I have a colleague who is proposing something that just doesn't make sense to me, and I'd like some input from people who know better:

I work in education, and my department recently gave midterm exams. This year, we used new assessments, and as such, we didn't know exactly what to expect in terms of how the students would perform.

The grades for some classes were about what we might have anticipated, but in several classes the students performed much worse than similar cohorts have on past midterms. So, as a department, we agreed to apply a curve to these scores to account for the difference and give a boost to the groups that struggled.

My colleague took it upon herself to spearhead the curving. While I think applying a linear curve would have been best, she felt that applying a normal distribution curve made more sense. In explaining her methodology, she writes:

"I set the standard deviation on the curve to mirror how similarly each particular group performed on each exam. However, if the top score was pushed over 100, then I modified the number slightly until the top score was at or less than 100. This, too, can be altered. If we decide that the top student should always earn a perfect/nearly perfect score, then the standard deviation can be altered to push that top score closer to 100."

Am I missing something? I didn't think standard deviation was something that could be "set," as I understood it to be a reflection of the raw data available in the sample set.

I can't find evidence anywhere online of anyone changing or modifying a standard deviation to make a curve fit a desired outcome.

Can anyone clarify what might be going on here?

Best Answer

While I don't have a complete picture of the problem at hand, I believe the following comments may help:

  1. It is reasonable to assume a roughly normal distribution for the marks, at least if: (a) the sample sizes are large enough to allow an assessment by eye that supports this assumption, or at least does not glaringly contradict it, and (b) some outliers are ignored.

  2. You write: "While I think applying a linear curve would have been best..." An affine-linear transformation actually works very well with a normal distribution, since if X is normally distributed, then so is Y = aX + b for any constants a and b. Here the (random) raw exam mark is modeled by X and the "adjusted" mark by Y, so in this application we need a > 0. The standard deviation of Y is then a times the standard deviation of X.

  3. Since Y does not represent the raw data, its mean and standard deviation can be set freely through the choice of a and b in the affine-linear transformation above (a concrete sketch follows this list).

  4. If your colleague used only a (that is, if she took b = 0) to transform the data, it would not raise the average mark and thus would not remedy the problem of poor overall performance; it would merely widen the gap between the strongest and weakest results (for a > 1).

  5. If the marks are raised by adding a positive constant b, some of the top marks may be pushed past 100, making it necessary to cap them at the maximum of 100.
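
To make points 3 and 5 concrete, here is a minimal Python sketch. It is my own illustration, not necessarily what your colleague did: the function name, the target mean and standard deviation, and the sample marks are all invented for the example.

    # A minimal sketch, assuming the goal is for the adjusted marks
    # Y = a*X + b to hit a chosen target mean and standard deviation.
    # The function name, targets, and sample data are hypothetical.
    from statistics import mean, stdev

    def curve_scores(raw, target_mean, target_sd, max_score=100):
        """Affine-linearly rescale raw marks to a target mean/SD, capped at max_score."""
        m, s = mean(raw), stdev(raw)
        a = target_sd / s            # a > 0, so the ranking of students is preserved
        b = target_mean - a * m      # b shifts the whole distribution up or down
        return [min(a * x + b, max_score) for x in raw]

    # Hypothetical raw marks from a class that struggled (mean 59, sample SD ~12.8):
    raw = [35, 48, 52, 55, 58, 60, 63, 67, 71, 81]
    # The top raw mark of 81 would map to about 102 here, so it is capped
    # at 100 -- exactly the situation point 5 describes.
    print([round(y, 1) for y in curve_scores(raw, target_mean=78, target_sd=14)])

Note that the final capping step (the min) is itself non-linear, so the capped marks are no longer exactly an affine transform of the raw ones; that is the trade-off point 5 describes.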