Since your response is ordinal, you should use ordinal regression. At a very high level, the main difference between ordinal regression and linear regression is the nature of the dependent variable: in linear regression it is continuous, while in ordinal regression it is ordinal.
Now, you can usually run linear regression with an ordinal dependent variable, but you will typically see that the diagnostic plots do not look good. When you say SPSS won't run the linear regression, what do you mean? Are you getting an error?
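If you want to try ordinal regression in R, a minimal sketch using a proportional-odds logistic model might look like this. The data are simulated purely for illustration; `polr()` comes from the recommended MASS package:

```r
library(MASS)  # for polr(), proportional-odds ordinal regression

set.seed(1)
n <- 200
x <- rnorm(n)            # a continuous predictor
latent <- x + rnorm(n)   # latent scale driving the response

# Cut the latent variable into 5 ordered categories:
y <- cut(latent, quantile(latent, 0:5/5), include.lowest = TRUE, labels = FALSE)
y <- factor(y, ordered = TRUE)

fit <- polr(y ~ x, Hess = TRUE)  # proportional-odds model
summary(fit)                     # one slope for x plus 4 threshold estimates
```

The summary reports one coefficient for `x` and four intercepts (thresholds) separating the five ordered categories.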
One way I approach this is to not take people's word for it, based on what appears to be either their beliefs, or precedent, but to try it out and see if (in your case) it matters in a way that you care about.
Here's a simple example: A 5 point Likert scale, with a uniform distribution. 100 people per group, and we'll do a two sample t-test. I'll repeat this 10000 times when the null hypothesis is true (i.e. there is no difference).
> mean(sapply(1:10000, function(x) {
t.test(sample(1:5, 100, TRUE), sample(1:5, 100, TRUE))$p.value
} ) < 0.05)
[1] 0.0499
It appears that I get a significant value 4.99% of the time. Given that I expect a significant value 5% of the time, it does not appear that violating the assumptions of normality and interval measurement has had any effect on my results - at least in terms of type I errors. (There might be power issues, of course.)
If someone has a specific criticism, you can investigate and see if it's an issue.
Here's another example: Now I have 5 people in one group, and 100 in the other.
> mean(sapply(1:10000, function(x) { t.test(sample(1:5, 5, TRUE), sample(1:5, 100, TRUE))$p.value } ) < 0.05)
[1] 0.0733
Now I have a 7.3% type I error rate. This is probably enough to worry about.
What about 5 per group?
> mean(sapply(1:10000, function(x) { t.test(sample(1:5, 5, TRUE), sample(1:5, 5, TRUE))$p.value } ) < 0.05)
Now a 4.5% significance rate - that indicates a slight loss of power, but I prefer that (a lot) over an inflated type I error rate.
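In the same spirit, you can simulate whether a rank-based alternative holds its type I error rate better in the unbalanced case. This is a sketch of the check, not a recommendation; note that `wilcox.test()` falls back to a normal approximation here because Likert data produce many ties:

```r
set.seed(1)
# Type I error rate of the Wilcoxon rank-sum test with n = 5 vs n = 100,
# under the same uniform-null Likert setup as above:
rate <- mean(sapply(1:10000, function(i) {
  suppressWarnings(  # ties prevent exact p-values; the normal approximation is used
    wilcox.test(sample(1:5, 5, TRUE), sample(1:5, 100, TRUE))$p.value
  )
}) < 0.05)
rate
```

Whatever the result, the point stands: simulate the scenario you actually have, and judge the procedures by how they behave there.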
Best Answer
Entering your 7 levels as dummy variables would be more appropriate to the ordinal level of measurement, but there are a few caveats to consider:
Overfitting can be resisted, and detection of a simple trend facilitated somewhat, by using penalized regression to smooth the dummy coefficients (i.e., by shrinking the differences between adjacent levels of TV). E.g., if the regression coefficients for the dummy variables corresponding to levels {3,4,5} are {.2,.5,.3} in an OLS model, they might be {.25,.43,.315} in a penalized regression model. If the reference level really is more different from level 4 than from levels 3 and 5, smoothing might not improve your predictions; but if the relationship between TV and WC in the population is actually monotonic, predictions would improve as the spurious bump in the trend at level 4 (in this example) is reduced, if not eliminated. I always suggest Gertheiss and Tutz (2009) for an overview of penalized regression with ordinal predictors.
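As a concrete starting point, dummy coding the 7 levels in R just means treating the predictor as a factor. The variable names TV and WC follow the answer above, and the data are simulated only to show the mechanics; the penalized smoothing of the dummy coefficients discussed by Gertheiss and Tutz would be a separate step (e.g., via their ordPens package):

```r
set.seed(2)
n <- 300
tv <- sample(1:7, n, TRUE)   # ordinal predictor with 7 levels
wc <- 0.3 * tv + rnorm(n)    # simulated outcome with a monotonic trend

# Treating TV as a factor gives one dummy coefficient per non-reference level:
fit <- lm(wc ~ factor(tv))
coef(fit)                    # intercept + 6 dummy coefficients
```

With a monotonic population trend like this one, you would expect the 6 dummy coefficients to increase roughly in steps, and smoothing adjacent levels toward each other would mainly remove sampling noise.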
References
Bollen, K. A., & Barb, K. H. (1981). Pearson's $r$ and coarsely categorized measures. American Sociological Review, 46, 232–239. Retrieved from http://www.statpt.com/correlation/bollen_barb_1981.pdf.
Gertheiss, J., & Tutz, G. (2009). Penalized regression with ordinal predictors. International Statistical Review, 77(3), 345–365. Retrieved from http://epub.ub.uni-muenchen.de/2100/1/tr015.pdf.