Because linear regression does not assume any particular distribution for the predictors, your setup should be fine as long as
- they are not perfectly collinear, and
- none of them is constant.
Your example is just like using regression as an ANOVA without an interaction (i.e., not a full-factorial design). If the additional effect due to the joint influence of A and B is of interest, compute an interaction term (by multiplying your two dummy variables) and include it as a predictor as well, as in the sketch below.
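A minimal sketch of that idea, assuming simulated data and made-up coefficients (nothing here comes from the original question):

```python
# Dummy-coded factors A and B plus their product as an interaction term,
# fitted by OLS with NumPy. All data below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100
A = rng.integers(0, 2, n)                    # dummy for factor A (0/1)
B = rng.integers(0, 2, n)                    # dummy for factor B (0/1)
y = 1.0 + 2.0 * A - 1.5 * B + 0.8 * (A * B) + rng.normal(0, 1, n)

# Design matrix: intercept, A, B, and the interaction A*B
X = np.column_stack([np.ones(n), A, B, A * B])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)                              # estimates for intercept, A, B, A:B
```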
I hope I understood you correctly: let $X$ be the covariate (design) matrix and $y$ the response variable. The OLS coefficient estimate is defined as $\hat{\beta}=(X^TX)^{-1}X^Ty$ and the predicted values are defined as $\hat{y}=X\hat{\beta}=X(X^TX)^{-1}X^Ty$, which is the projection of $y$ onto the subspace spanned by the columns of $X$.
Under the normal model you also get $\hat{\beta}\sim N(\beta,\sigma^2(X^TX)^{-1})$ and $\hat{y}\sim N(\mu,\sigma^2X(X^TX)^{-1}X^T)$, where $\mu=X\beta$.
Looking at the marginal distributions, we get $\hat{\beta}_j\sim N(\beta_j,\sigma^2(X^TX)^{-1}_{jj})$ and $\hat{y}_i\sim N(\mu_i,\sigma^2x_i(X^TX)^{-1}x_i^T)$, but this does not mean the covariance matrix of $\hat{\beta}$ is diagonal (and the same applies to the predictions). In fact, when discussing GLM submodels (linear regression included), it is highly unlikely to encounter diagonal covariance matrices.
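As a quick sanity check, here is a short simulation (all data and coefficients below are made up) that computes $\hat{\beta}$ and the estimated covariance matrix $\hat{\sigma}^2(X^TX)^{-1}$; with correlated predictors its off-diagonal entries are clearly nonzero:

```python
# beta-hat and its estimated covariance matrix sigma^2 (X^T X)^{-1}
# on simulated data with correlated predictors.
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
X[:, 2] += 0.7 * X[:, 1]            # make the two predictors correlated
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(0, 1, n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y        # (X^T X)^{-1} X^T y
y_hat = X @ beta_hat                # projection of y onto col(X)
sigma2_hat = np.sum((y - y_hat) ** 2) / (n - p)
cov_beta = sigma2_hat * XtX_inv
print(np.round(cov_beta, 4))        # off-diagonal entries are nonzero
```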
Now, let $e$ be the residuals: $e=y-\hat{y}=y-X(X^TX)^{-1}X^Ty=(I-X(X^TX)^{-1}X^T)y$.
$SSE=e^Te=y^T(I-X(X^TX)^{-1}X^T)^T(I-X(X^TX)^{-1}X^T)y=
y^T(I-X(X^TX)^{-1}X^T)(I-X(X^TX)^{-1}X^T)y=
y^T(I-X(X^TX)^{-1}X^T)y$,
where the second equality holds because $X(X^TX)^{-1}X^T$ is symmetric and the third because $I-X(X^TX)^{-1}X^T$ is idempotent.
$R^2$ is defined as $R^2=1-\frac{SSE}{SST}$, where $SST=\sum_i{(y_i-\bar{y})^2}$.
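A numeric check of the identities above (toy simulated data): $SSE$ computed both as $e^Te$ and as $y^T(I-H)y$ with $H=X(X^TX)^{-1}X^T$, followed by $R^2$:

```python
# Verify SSE = e^T e = y^T (I - H) y, then compute R^2 = 1 - SSE/SST.
# The identity relies on H being symmetric and idempotent.
import numpy as np

rng = np.random.default_rng(2)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(0, 1, n)

H = X @ np.linalg.inv(X.T @ X) @ X.T
e = (np.eye(n) - H) @ y             # residuals
sse_1 = e @ e                       # e^T e
sse_2 = y @ (np.eye(n) - H) @ y     # y^T (I - H) y
print(np.isclose(sse_1, sse_2))     # True: the two forms agree

sst = np.sum((y - y.mean()) ** 2)
print(1 - sse_1 / sst)              # R^2
```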
Now for some intuitive handwaving: as you can see, $SSE$ "contains information" from the whole covariance matrix of $\hat{\beta}$ (i.e., both unique and shared contributions) and not just from its diagonal (which stands for the unique contributions). This explains how the shared contribution ends up in $R^2$.
Leaving the algebra aside, let me try to simplify the math: $SSE=\sum_i{(y_i-\hat{y}_i)^2}$ is the sum of squared prediction errors, so $R^2=1-\frac{SSE}{SST}$ is indeed computed using the full regression equation.
Furthermore, since $X$ is the predictor matrix ($x_1, x_2$, etc.) and the regression coefficients are computed jointly from the whole $X$ matrix, each coefficient carries some covariance information. A coefficient could carry only unique information if the predictors had no covariance, or if you computed a separate regression for each coefficient, which is very wrong; the contrast is illustrated below.
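To see the difference, a small simulation (hypothetical data) comparing the joint multiple-regression coefficients with the slopes from separate simple regressions when the predictors are correlated:

```python
# Joint multiple regression vs. one simple regression per predictor,
# on simulated data where x1 and x2 are correlated.
import numpy as np

rng = np.random.default_rng(3)
n = 500
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)   # x2 correlated with x1
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

X_joint = np.column_stack([np.ones(n), x1, x2])
b_joint, *_ = np.linalg.lstsq(X_joint, y, rcond=None)
print(b_joint[1:])                  # close to the true slopes (2.0, -1.0)

for x in (x1, x2):                  # one simple regression per predictor
    b, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)
    print(b[1])                     # slopes absorb the shared covariance
```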
Best Answer
The thing with an equation is that at each step you do things that preserve the equality*
* (loosely expressed as "do the same thing to both sides", which is accurate as long as you don't do things that remove solutions or add solutions that don't solve the original). For example, you can add the same number to both sides.
$ 5.44 + 0.26X_1 - 3.19=10$
... now add or subtract the same numbers on both sides to isolate $X_1$:
$ 5.44 - 5.44 + 0.26X_1 - 3.19=10-5.44$ gives
$0.26X_1 - 3.19=10-5.44$
$0.26X_1 - 3.19+3.19=10-5.44+3.19$
$0.26X_1=10-5.44+3.19$ ... now divide both sides by the same (non-zero) quantity
$0.26X_1/0.26=(10-5.44+3.19)/0.26$
$X_1=(10-5.44+3.19)/0.26 \approx 29.80769$
You can do similar manipulations when $X_2$ takes a different value. Or you can keep $X_2$ just as $X_2$ until after it has been rearranged to make $X_1$ the subject.
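For completeness, a quick numeric check of the arithmetic above; I'm assuming the $-3.19$ is the $X_2$ term of the fitted equation at its current value (e.g. $-3.19 \cdot X_2$ with $X_2 = 1$):

```python
# Solve 5.44 + 0.26*X1 - 3.19 = 10 for X1; the -3.19 is assumed to be
# the X2 term of the fitted equation evaluated at its current value.
y_target = 10.0
intercept = 5.44
x2_term = -3.19
b1 = 0.26

x1 = (y_target - intercept - x2_term) / b1
print(x1)   # 29.80769...
```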