Yes, you're trying to calculate the extra sum of squares. In short, you are partitioning the regression sum of squares. Assume we have two $X$ variables, $X_1$ and $X_2$. The $SSTO$ (total sum of squares, made up of the $SSR$ and $SSE$) is the same regardless of how many $X$ variables we have. Write the $SSR$ and $SSE$ with arguments to indicate which $X$ variables are in the model, e.g.
$SSR(X_1,X_2) = 385$ and $SSE(X_1,X_2) = 110$
Now let's assume we ran the regression on $X_1$ alone, e.g.
$SSR(X_1) = 352$ and $SSE(X_1) = 143$.
The (marginal) increase in the regression sum of squares in $X_2$ given that $X_1$ is already in the model is:
\begin{eqnarray} SSR(X_2|X_1)& = &SSR(X_1,X_2) - SSR(X_1)\\
& = & 385 - 352\\
& = & 33
\end{eqnarray}
or equivalently, the extra reduction in the error sum of squares associated with $X_2$ given that $X_1$ is already in the model is:
\begin{eqnarray} SSR(X_2|X_1) & = & SSE(X_1) - SSE(X_1,X_2)\\
&=& 143 - 110\\
&=& 33
\end{eqnarray}
In the same way we can find:
\begin{eqnarray} SSR(X_1|X_2) &=& SSE(X_2) - SSE(X_1,X_2)\\
&=& SSR(X_1,X_2) - SSR(X_2)
\end{eqnarray}
Of course, this also works for more $X$ variables as well.
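Here is a minimal R sketch of the decomposition, using made-up data (the variables and coefficients are my own assumptions for illustration, not from the question):

```r
set.seed(1)
n  <- 50
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 1 + 2 * x1 + 0.5 * x2 + rnorm(n)

fit1  <- lm(y ~ x1)        # model with X1 only
fit12 <- lm(y ~ x1 + x2)   # model with X1 and X2

sse1  <- sum(resid(fit1)^2)   # SSE(X1)
sse12 <- sum(resid(fit12)^2)  # SSE(X1, X2)

sse1 - sse12  # SSR(X2 | X1): the drop in SSE from adding X2

# The same quantity is the sequential sum of squares that
# anova() reports on the x2 row:
anova(fit12)
```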
I really think this is a good question and deserves an answer. The linked page was written by a psychologist who claims that some home-brew method is a better way of doing time series analysis than Box-Jenkins. I hope that my attempt at an answer will encourage others who are more knowledgeable about time series to contribute.
From his introduction, it looks like Darlington is championing the approach of just fitting an AR model by least-squares. That is, if you want to fit the model
$$z_t = \alpha_1 z_{t-1} + \cdots + \alpha_k z_{t-k} + \varepsilon_t$$
to the time series $z_t$, you can just regress the series $z_t$ on the series with lag $1$, lag $2$, and so on up to lag $k$, using an ordinary multiple regression. This is certainly allowed; in R, it's even an option in the ar function. I tested it out, and it tends to give answers similar to those from R's default method for fitting an AR model.
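As a quick illustration (the simulated series and the AR order below are my own assumptions, not anything from Darlington), the two approaches can be compared directly:

```r
set.seed(1)
z <- arima.sim(model = list(ar = c(0.6, -0.3)), n = 200)

# Least-squares (regression) fit of the AR(2) model
fit_ols <- ar(z, order.max = 2, aic = FALSE, method = "ols")

# R's default Yule-Walker fit, for comparison
fit_yw  <- ar(z, order.max = 2, aic = FALSE, method = "yule-walker")

fit_ols$ar  # coefficients from the regression approach
fit_yw$ar   # typically close to the OLS estimates
```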
He also advocates regressing $z_t$ on things like $t$ or powers of $t$ to find trends. Again, this is absolutely fine. Lots of time series books discuss this, for example Shumway-Stoffer and Cowpertwait-Metcalfe. Typically, a time series analysis might proceed along the following lines: you find a trend, remove it, then fit a model to the residuals.
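A minimal sketch of that workflow, with an invented quadratic trend (everything here is made up for illustration):

```r
set.seed(1)
t <- 1:120
z <- 0.05 * t + 0.001 * t^2 + arima.sim(model = list(ar = 0.5), n = 120)

trend <- lm(z ~ t + I(t^2))  # regress the series on powers of t
detrended <- resid(trend)    # remove the fitted trend

# then fit a time-series model to the residuals
ar(detrended, order.max = 5)
```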
But it seems like he is also advocating over-fitting and then using the reduction in the mean-squared error between the fitted series and the data as evidence that his method is better. For example:
I feel correlograms are now obsolescent. Their primary purpose was to
allow workers to guess which models will fit the data best, but the
speed of modern computers (at least in regression if not in
time-series model-fitting) allows a worker to simply fit several
models and see exactly how each one fits as measured by mean squared
error. [The issue of capitalization on chance is not relevant to this
choice, since the two methods are equally susceptible to this problem.]
This is not a good idea because the test of a model is supposed to be how well it can forecast, not how well it fits the existing data. In his three examples, he uses "adjusted root mean-squared error" as his criterion for the quality of the fit. Of course, over-fitting a model is going to make an in-sample estimate of error smaller, so his claim that his models are "better" because they have smaller RMSE is wrong.
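To make the point concrete, here is a sketch of the out-of-sample comparison I mean: hold out the end of the series, fit on the rest, and score the forecasts. (The simulated series and model order are assumptions for illustration.)

```r
set.seed(1)
z <- arima.sim(model = list(ar = c(0.6, -0.3)), n = 220)
train <- z[1:200]
test  <- z[201:220]

fit <- arima(train, order = c(2, 0, 0))
fc  <- predict(fit, n.ahead = 20)$pred

# Forecast RMSE on data the model never saw; this, not the
# in-sample (adjusted) RMSE, is the fair criterion.
sqrt(mean((test - fc)^2))
```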
In a nutshell, since he is using the wrong criterion for assessing how good a model is, he reaches the wrong conclusions about regression vs. ARIMA. I'd wager that, if he had tested the predictive ability of the models instead, ARIMA would have come out on top. Perhaps someone can try it if they have access to the books he mentions here.
[Supplemental: for more on the regression idea, you might want to check out older time series books written before ARIMA became the most popular approach. For example, Kendall's Time-Series (1973) devotes a whole chapter (Chapter 11) to this method and to comparisons with ARIMA.]
Running this regression in R provides the $R^2$ and $R^2_{\text{adj}}$ of the regression.
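The original code and output are not shown here, so the following is a minimal sketch with made-up data matching the stated setup ($n = 10$ observations, one regressor); summary() reports both quantities:

```r
set.seed(1)
x <- 1:10
y <- 2 + 3 * x + rnorm(10)

fit <- lm(y ~ x)
summary(fit)$r.squared      # R^2
summary(fit)$adj.r.squared  # adjusted R^2
```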
Note that $R^2$ and $R^2_{\text{adj}}$ are related by the formula,
$$ R^2_{\text{adj}} = 1 - \frac{n-1}{n-p-1}(1-R^2), $$
with $n$ being the number of observations ($10$ in this case) and $p$ the number of regressors ($1$ in this case). We see that $R^2_{\text{adj}}$ penalizes models with large $p$, in contrast to $R^2$, which never decreases when a regressor is added.
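Continuing the sketch above, the formula can be checked numerically against lm's own output:

```r
s <- summary(fit)
n <- 10  # observations
p <- 1   # regressors

1 - (n - 1) / (n - p - 1) * (1 - s$r.squared)  # the formula
s$adj.r.squared                                # agrees
```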