As @gung says in the comments, your question title and text conflict. The F-test for the joint significance of all parameters is computed on a single model fit; it is displayed every time you call summary() on a fitted model.
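As a minimal sketch (with made-up data, not anything from your problem), the overall F-test appears in the last line of the summary() output for a single fitted model:

```r
## Simulated data purely for illustration
set.seed(1)
d <- data.frame(x1 = rnorm(30), x2 = rnorm(30))
d$y <- 1 + 2 * d$x1 + rnorm(30)

fit <- lm(y ~ x1 + x2, data = d)
summary(fit)  ## bottom line reports "F-statistic: ... on 2 and 27 DF, p-value: ..."
```

That F-statistic tests the joint null hypothesis that all slope coefficients are zero; no second model needs to be fitted.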
Comparing models is a whole different ball game: for the inference to be valid, the models need to be nested.
The lmtest package adds a number of common econometric tests for linear models. As an illustration, here is the beginning of example(lrtest), which uses a likelihood-ratio test to compare two nested models:
R> ## with data from Greene (1993):
R> data("USDistLag")
R> usdl <- na.contiguous(cbind(USDistLag, lag(USDistLag, k = -1)))
R> colnames(usdl) <- c("con", "gnp", "con1", "gnp1")
R> fm1 <- lm(con ~ gnp + gnp1, data = usdl)
R> fm2 <- lm(con ~ gnp + con1 + gnp1, data = usdl)
R> lrtest(fm2, fm1)
Likelihood ratio test
Model 1: con ~ gnp + con1 + gnp1
Model 2: con ~ gnp + gnp1
  #Df LogLik Df Chisq Pr(>Chisq)
1   5 -56.07
2   4 -65.87 -1 19.61   9.52e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
R>
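For comparison, the same nested pair can be tested with base R's anova(), which reports an F-test rather than a chi-squared statistic (repeating the setup from the transcript above so the snippet is self-contained):

```r
## Base-R alternative to lrtest() for nested linear models
library(lmtest)                     # only needed for the USDistLag data
data("USDistLag")
usdl <- na.contiguous(cbind(USDistLag, lag(USDistLag, k = -1)))
colnames(usdl) <- c("con", "gnp", "con1", "gnp1")

fm1 <- lm(con ~ gnp + gnp1, data = usdl)
fm2 <- lm(con ~ gnp + con1 + gnp1, data = usdl)
anova(fm1, fm2)                     # F-test comparing the nested fits
```

With Gaussian linear models the F-test and the likelihood-ratio test generally lead to the same qualitative conclusion here.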
"curvilinear" could mean anything geometrically not a straight line on the scale being used. So, that could mean many things, including behaviour best tackled with powers of another variable, exponentials, logarithms, trigonometric and hyperbolic functions, etc., etc.
Using logistic regression does not change what is standard in any kind of regression-like modelling: you can include whatever predictors (so-called independent variables) make sense for your model, so long as there are sufficient data.
Those general statements aside, adding a quadratic term to your model alongside the linear term is often a good, simple way of introducing some curvature. Because you are working on a logit scale, intuition needs some refining here. In particular, if the coefficient on the squared term is negative, you are fitting a kind of bell shape on the probability scale. This is often a feature in, e.g., ecology, where the probability of occurrence of organisms is greatest for some intermediate value of an environmental predictor. In simple terms, conditions can be too hot, about right, or too cold, and so forth. See http://www.cambridge.org/gb/knowledge/isbn/item5708032/ for one good account.
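A hedged sketch of the idea in R, with simulated data (everything here is made up for illustration): the probability of success peaks at an intermediate value of x, and a quadratic logistic regression recovers that bell shape via a negative coefficient on the squared term.

```r
## Simulated example: bell-shaped response on the probability scale
set.seed(2)
x <- runif(200, -3, 3)
p <- plogis(1 - x^2)                 # true probability peaks at x = 0
y <- rbinom(200, 1, p)

fit <- glm(y ~ x + I(x^2), family = binomial)
summary(fit)                         # expect a negative coefficient on I(x^2)
```

The I() wrapper tells R to treat x^2 arithmetically rather than as formula syntax; poly(x, 2) is a common alternative that uses orthogonal polynomials.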
I trust that others will add advice about SPSS.
To run the lack-of-fit test in SPSS, go to Analyze >> Compare Means >> Means.
In the dialogue box that appears, assign your independent and dependent variables. Then select Options; in the new dialogue box, check the option at the bottom of the screen that says "Test for Linearity".