It is true that $SS_{tot}$ will change when you swap $x$ and $y$, but you are forgetting that the regression sum of squares changes as well. Consider the simple regression model and write the squared correlation coefficient as $r_{xy}^2=\dfrac{S_{xy}^2}{S_{xx}S_{yy}}$, where the subscript $xy$ emphasizes that $x$ is the independent variable and $y$ is the dependent variable. Obviously, $r_{xy}^2$ is unchanged if you swap $x$ with $y$.

We can easily show that $SSR_{xy}=S_{yy}\,r_{xy}^2$, where $SSR_{xy}$ is the regression sum of squares and $S_{yy}$ is the total sum of squares when $x$ is the independent and $y$ the dependent variable. Therefore: $$R_{xy}^2=\dfrac{SSR_{xy}}{S_{yy}}=\dfrac{S_{yy}-SSE_{xy}}{S_{yy}},$$ where $SSE_{xy}$ is the corresponding residual sum of squares. Note that in this case we have $SSR_{xy}=b^2_{xy}S_{xx}$ with $b_{xy}=\dfrac{S_{xy}}{S_{xx}}$ (see e.g. Eqs. (34)–(41) here). Therefore: $$R_{xy}^2=\dfrac{\dfrac{S^2_{xy}}{S^2_{xx}}\,S_{xx}}{S_{yy}}=\dfrac{S^2_{xy}}{S_{xx}S_{yy}}.$$ Clearly this expression is symmetric with respect to $x$ and $y$. In other words: $$R_{xy}^2=R_{yx}^2.$$ To summarize: when you swap $x$ and $y$ in the simple regression model, both the numerator and the denominator of $R_{xy}^2=\dfrac{SSR_{xy}}{S_{yy}}$ change in such a way that $R_{xy}^2=R_{yx}^2$.
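As a quick numerical check of the symmetry, here is a small Python sketch (with simulated data; the function name and the data are made up for illustration) that computes $R^2=\dfrac{S_{xy}^2}{S_{xx}S_{yy}}$ both ways:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2 * x + rng.normal(size=50)  # simulated data for the demonstration

def r_squared(x, y):
    """R^2 of the simple regression of y on x, via the sums of squares."""
    Sxx = np.sum((x - x.mean()) ** 2)
    Syy = np.sum((y - y.mean()) ** 2)
    Sxy = np.sum((x - x.mean()) * (y - y.mean()))
    return Sxy ** 2 / (Sxx * Syy)

# Swapping the roles of x and y leaves R^2 unchanged.
print(np.isclose(r_squared(x, y), r_squared(y, x)))  # True
```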
$$R^2 = \dfrac{SSTot-SSRes}{SSTot}$$
Let's break down what those mean.
$$SSTot = \sum_i (y_i - \bar{y})^2$$
The regression that you're doing wants to predict the mean of the distribution of your response variable conditioned on some predictors. In the absence of knowing anything about how your data are generated, why not guess the overall mean? $SSTot$ is the total sum of squares and measures your error when you use the overall mean of $y$ as the prediction, no matter what predictors you have. This may be a naive approach, but it's a good baseline.
$$SSRes = \sum_i (y_i - \hat{y})^2$$
However, now that you've run your regression, you think you have more insight than you did when you were just guessing the overall mean of all $y$ values. Now see how much error you have when you use your predictions from the regression!
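To make those two quantities concrete, here is a Python sketch (with simulated data and a hand-rolled least-squares fit, both invented for the example) computing $R^2$ from $SSTot$ and $SSRes$:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=100)
y = 3 + 0.5 * x + rng.normal(size=100)  # simulated data

# Fit a simple linear regression by least squares.
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

ss_tot = np.sum((y - y.mean()) ** 2)  # error of the naive "guess the mean" model
ss_res = np.sum((y - y_hat) ** 2)     # error of the fitted regression
r2 = (ss_tot - ss_res) / ss_tot
print(r2)
```

For simple regression this agrees with the squared sample correlation between $x$ and $y$.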
With those two values calculated, you can do the arithmetic to find $R^2$. Now you want to do it on a subset of the data, and I see two options. Let the index $j$ run over the observations in your subset.
1) $\dfrac{\sum_j (y_j - \bar{y})^2 - \sum_j (y_j - \hat{y})^2}{\sum_j (y_j - \bar{y})^2}$
2) $\dfrac{\sum_j (y_j - \bar{y}_{subset})^2 - \sum_j (y_j - \hat{y})^2}{\sum_j (y_j - \bar{y}_{subset})^2}$
The first option uses the same average value as you get when you look at the whole data set, while the second computes the average of your subset. I think I can squint and see a reason to do option #2, but I wouldn't do it. $R^2$ is a way of measuring how you do compared to naively guessing the overall mean, so I'd want to see how each subset does compared to guessing the overall mean.
Edit: Thinking about it more, I completely reject option #2. If you want to compare to the mean of the subset, rerun the regression on just the subset and calculate $R^2$ the usual way, but then you're not using the same regression equation or even the same significant parameters (it's a totally different problem).
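For illustration, here is a Python sketch of both options, where the data, the full-data fit, and the subset rule are all invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=200)
y = 1 + 2 * x + rng.normal(size=200)  # simulated data

# Fit the regression on the FULL data set.
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

subset = x > 5  # an arbitrary example subset
ys, yhs = y[subset], y_hat[subset]
sse = np.sum((ys - yhs) ** 2)  # residual error on the subset

# Option 1: compare against the OVERALL mean of y.
sst1 = np.sum((ys - y.mean()) ** 2)
r2_opt1 = (sst1 - sse) / sst1

# Option 2: compare against the subset's own mean.
sst2 = np.sum((ys - ys.mean()) ** 2)
r2_opt2 = (sst2 - sse) / sst2
```

Since a sum of squares is minimized about a sample's own mean, option #2's denominator can only be smaller, so option #2 never reports a higher $R^2$ than option #1.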
"They say that its range is $[0,1]$" — and they are wrong: $R^2$ can indeed be negative, although for it to be substantially negative the model has to be intentionally bad. The maximum is indeed $1.0$.
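A minimal Python sketch of how $R^2$ goes negative, substituting an intentionally bad constant prediction (made up for the example) in place of a fitted model:

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(size=50)          # simulated response, roughly centered at 0
y_bad = -5 * np.ones_like(y)     # an intentionally bad constant "prediction"

ss_tot = np.sum((y - y.mean()) ** 2)
ss_res = np.sum((y - y_bad) ** 2)
r2 = (ss_tot - ss_res) / ss_tot

# The bad predictions do far worse than guessing the mean, so R^2 < 0.
print(r2 < 0)  # True
```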