Regression – Linear Regression Not Fitting Well

Tags: r, regression

I do a linear regression using R's lm function:

x = log(errors)
plot(x,y)
lm.result = lm(formula = y ~ x)
abline(lm.result, col="blue") # showing the "fit" in blue

[Scatterplot of y against x with the blue regression line]

but it does not fit well. Unfortunately I can't make sense of the manual.

Can someone point me in the right direction to fit this better?

By fitting I mean I want to minimize the Root Mean Squared Error (RMSE).


Edit:
I have posted a related question (it's the same problem) here:
Can I decrease further the RMSE based on this feature?

and the raw data here:

http://tny.cz/c320180d

except that on that link x is what is called errors here, and there are fewer samples (1,000 vs. 3,000 in the plot above). I wanted to make things simpler in the other question.

Best Answer

One of the simplest solutions recognizes that changes among probabilities that are small (like 0.1) or whose complements are small (like 0.9) are usually more meaningful and deserve more weight than changes among middling probabilities (like 0.5).

For instance, a change from 0.1 to 0.2 (a) doubles the probability while (b) changing the complementary probability by only 1/9 (dropping it from 1 - 0.1 = 0.9 to 1 - 0.2 = 0.8), whereas a change from 0.5 to 0.6 (a) increases the probability by only 20% while (b) decreasing the complementary probability by only 20%. In many applications that first change is, or at least ought to be, considered to be almost twice as great as the second.
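
To make that arithmetic concrete, here is a small R check (the numbers are just the ones from the paragraph above):

# Ratios of probabilities and of their complements for the two changes above.
0.2 / 0.1                 # 2: the probability doubles
(1 - 0.1) / (1 - 0.2)     # 1.125 = 9/8: the complement shrinks by 1/9
0.6 / 0.5                 # 1.2: the probability grows by 20%
(1 - 0.5) / (1 - 0.6)     # 1.25: the complement shrinks by 20%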

In any situation where it would be equally meaningful to use a probability (of something occurring) or its complement (that is, the probability of the something not occurring), we ought to respect this symmetry.

These two ideas--of respecting the symmetry between probabilities $p$ and their complements $1-p$ and of expressing changes relatively rather than absolutely--suggest that when comparing two probabilities $p$ and $p'$ we should be tracking both their ratios $p'/p$ and the ratios of their complements $(1-p)/(1-p')$. When tracking ratios it is simpler to use logarithms, which convert ratios into differences. Ergo, a good way to express a probability $p$ for this purpose is to use $$z = \log p - \log(1-p),$$ which is known as the log odds or logit of $p$. Fitted log odds $z$ can always be converted back into probabilities by inverting the logit; $$p = \exp(z)/(1+\exp(z)).$$ The last line of the code below shows how this is done.
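
Before getting to that code, here is a minimal base-R illustration of the round trip from a probability to its log odds and back:

# Convert a probability to log odds (the logit) and back again.
p <- 0.2
z <- log(p) - log(1 - p)            # logit: z = log(p) - log(1 - p)
exp(z) / (1 + exp(z))               # inverse logit: returns 0.2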

This reasoning is rather general: it leads to a good default initial procedure for exploring any set of data involving probabilities. (There are better methods available, such as Poisson regression, when the probabilities are based on observing ratios of "successes" to numbers of "trials," because probabilities based on more trials have been measured more reliably. That does not seem to be the case here, where the probabilities are based on elicited information. One could approximate the Poisson regression approach by using weighted least squares in the example below to allow for data that are more or less reliable.)
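
That weighted-least-squares remark can be sketched as follows. Everything in this snippet is hypothetical (the data in the question do not come with trial counts); it only shows how weights would enter lm:

# Hypothetical illustration of weighted least squares: simulate probabilities
# that are based on differing numbers of trials, then weight by those counts,
# so the more reliably measured probabilities get more influence.
set.seed(1)
n <- sample(10:100, 50, replace = TRUE)      # hypothetical numbers of trials
X <- rnorm(50)
Y <- rbinom(50, n, plogis(0.5 * X)) / n      # simulated success proportions
Y <- pmin(pmax(Y, 0.01), 0.99)               # avoid infinite log odds at 0 or 1
fit.w <- lm(qlogis(Y) ~ X, weights = n)      # qlogis is base R's logit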

Let's look at an example.

[Figure: two scatterplots side by side, "Least Squares Fit" (log odds vs. X) on the left and "LS Fit Re-expressed" (probability vs. X) on the right]

The scatterplot on the left shows a dataset (similar to the one in the question) plotted in terms of log odds. The red line is its ordinary least squares fit. It has a low $R^2$, indicating a lot of scatter and strong "regression to the mean": the regression line has a smaller slope than the major axis of this elliptical point cloud. This is a familiar setting; it is easy to interpret and analyze using R's lm function or the equivalent.

The scatterplot on the right expresses the data in terms of probabilities, as they were originally recorded. The same fit is plotted: now it looks curved due to the nonlinear way in which log odds are converted to probabilities.

In the sense of root mean squared error in terms of log odds, this curve is the best fit.

Incidentally, the approximately elliptical shape of the cloud on the left and the way it tracks the least squares line suggest that the least squares regression model is reasonable: the data can be adequately described by a linear relation--provided log odds are used--and the vertical variation around the line is roughly the same size regardless of horizontal location (homoscedasticity). (There are some unusually low values in the middle that might deserve closer scrutiny.) Evaluate this in more detail by following the code below with the command plot(fit) to see some standard diagnostics. This alone is a strong reason to use log odds to analyze these data instead of the probabilities.


#
# Read the data from a table of (X,Y) = (X, probability) pairs.
#
x <- read.table("F:/temp/data.csv", sep=",", col.names=c("X", "Y"))
#
# Define functions to convert between probabilities `p` and log odds `z`.
# (When some probabilities actually equal 0 or 1, a tiny adjustment--given by a positive
# value of `e`--needs to be applied to avoid infinite log odds.)
#
logit <- function(p, e=0) {x <- (p-1/2)*(1-e) + 1/2; log(x) - log(1-x)}
logistic <- function(z, e=0) {y <- exp(z)/(1 + exp(z)); (y-1/2)/(1-e) + 1/2}
#
# Fit the log odds using least squares.
#
b <- coef(fit <- lm(logit(x$Y) ~ x$X))
#
# Plot the results in two ways.
#
par(mfrow=c(1,2))
plot(x$X, logit(x$Y), cex=0.5, col="Gray",
     main="Least Squares Fit", xlab="X", ylab="Log odds")
abline(b, col="Red", lwd=2)

plot(x$X, x$Y, cex=0.5, col="Gray",
     main="LS Fit Re-expressed", xlab="X", ylab="Probability")
curve(logistic(b[1] + b[2]*x), col="Red", lwd=2, add=TRUE)
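
To tie this back to the RMSE criterion in the question, the following is an added sketch (reusing the objects defined above) that shows the diagnostics mentioned earlier and computes the RMSE on both scales:

#
# Optional follow-up: diagnostics and RMSE on both scales.
#
par(mfrow=c(2,2)); plot(fit); par(mfrow=c(1,1))   # standard lm diagnostics
sqrt(mean(residuals(fit)^2))                      # RMSE on the log-odds scale
p.hat <- logistic(b[1] + b[2]*x$X)                # back-transform the fitted values
sqrt(mean((x$Y - p.hat)^2))                       # RMSE on the probability scale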