R-square is a concept generally associated with least squares, where it equals the square of the correlation between the observed values and the fitted values. One possibility is to compute the fitted values from the robust fit and apply this same correlation definition. A consequence is that the R-square for the robust fit will always be less than or equal to the R-square for the least squares fit: least squares maximizes R-square, so if the robust fit differs, its R-square cannot exceed that maximum. This makes it a useful measure of how different the two fits are.
% Simulate data from a line, then turn the last point into an outlier
x = (1:10)';
y = 10 - 2*x + randn(10,1);
y(10) = 0;
% Ordinary least squares fit; stats_linreg(1) is the R-square
[b_ls,~,~,~,stats_linreg] = regress(y,[ones(size(x)) x]);
% Robust fit via iteratively reweighted least squares
[b_rob,stats_rob] = robustfit(x,y);
% Plot the data and both fitted lines
scatter(x,y,'filled'); grid on; hold on
plot(x,b_ls(1)+b_ls(2)*x,'r','LineWidth',2);
plot(x,b_rob(1)+b_rob(2)*x,'g','LineWidth',2)
legend('Data','Ordinary Least Squares','Robust Regression')
% R-square for each fit: regress returns it directly; for the robust
% fit, square the correlation between observed and fitted values
rsquare_linreg = stats_linreg(1)
rsquare_robustfit = corr(y,b_rob(1)+b_rob(2)*x)^2
Additionally, in least squares the familiar sum-of-squares decomposition produces the same R^2:
sse = error sum of squares
ssr = sum of squares of fitted values around their mean
sst = sum of squares of observed values around their mean
sst = sse + ssr   (this identity holds for least squares with an intercept term)
R^2 = 1 - sse/sst = 1 - sse/(sse+ssr)
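As a quick numerical check of the decomposition, the following sketch fits a line by least squares and computes all three sums of squares (the simulated data and variable names here are illustrative):

% Verify sst = sse + ssr and the equivalent R^2 formulas for least squares
x = (1:10)';
y = 3 + 2*x + randn(10,1);
b = [ones(size(x)) x] \ y;          % least squares coefficients
yhat = b(1) + b(2)*x;               % fitted values
sse = norm(y - yhat)^2;             % error sum of squares
ssr = norm(yhat - mean(yhat))^2;    % sum of squares of fitted values about their mean
sst = norm(y - mean(y))^2;          % sum of squares of observed values about their mean
% sst - (sse + ssr) is zero up to rounding error, and
% 1 - sse/sst matches corr(y,yhat)^2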
The following code demonstrates how one may compute a possible R-square value for the robust fit:
% Estimate sse from robustfit's robust scale estimate of sigma
sse = stats_rob.dfe * stats_rob.robust_s^2;
% Fitted values from the robust fit
phat = b_rob(1) + b_rob(2)*x;
% Sum of squares of the fitted values about their mean
ssr = norm(phat-mean(phat))^2;
possible_rsquare_robustfit = 1 - sse / (sse + ssr)
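For comparison, one could also compute an R-square for the robust fit directly from the raw residuals using the standard 1 - sse/sst definition, rather than from the robust scale estimate. This is only a sketch of one alternative; note that this version is not bounded below by zero for a poor fit:

% Alternative: R-square from raw (unweighted) residuals of the robust fit
phat = b_rob(1) + b_rob(2)*x;       % robust fitted values
sse_raw = norm(y - phat)^2;         % raw error sum of squares
sst = norm(y - mean(y))^2;          % total sum of squares
rsquare_raw_robustfit = 1 - sse_raw/sst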