I'm converting a previous comment to an answer, expanding a bit based on a follow-up comment from the OP. The original, unedited comment was:
There is no silver bullet for decomposing variation in that situation. One thing you can do with two collinear predictors, $x_1, x_2$, is fit a model $x_1 \sim x_2$, take the residuals from that model, $\eta$, and replace $x_1$ with $\eta$ in the model $y \sim x_1 + x_2$. This way, you will, definitionally, have uncorrelated predictors, and the contribution of $\eta$ can be thought of as the variance explained by $x_1$ that is not subsumed by $x_2$. Of course, which variable is $x_1$ and which is $x_2$ is a judgment call (though the overall model fit will be identical).
In response to the OP's comment:
@Macro, this is a nice thing... maybe worth posting as an answer, so we can discuss it in more detail? This is very interesting, because then $x_1 = x_2 + \eta$, and if you replace $x_1$ with $\eta$ in the original model, you get $y \sim \eta + x_2 = x_1$, which means you lose $x_2$ from the overall fit of the model! And this is strange, a paradox! Please post your comment as an answer so we can discuss it in more detail.
Be careful here, because $x_1 \sim x_2$ is R pseudo-code for the model $x_1 = \beta_0 + \beta_1 x_2 + \eta$, not $x_1 = x_2 + \eta$. So, by my back-of-the-envelope calculation, the model $y \sim \eta + x_2$, which is shorthand for $y = \alpha_0 + \alpha_1 \eta + \alpha_2 x_2 + \varepsilon$, can be written as
$$ y = (\alpha_0 - \alpha_1 \beta_0) + \alpha_1 x_1 + (\alpha_2 - \alpha_1 \beta_1) x_2 + \varepsilon $$
So $x_2$ does not drop out of the model. Indeed the model $y \sim \eta + x_2$ can be seen to have identical degrees of freedom, fit statistics, etc. to the model $y \sim x_1 + x_2$, but the predictors are now uncorrelated.
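This is easy to verify numerically. A quick sketch with NumPy (the sample size and coefficients below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Two collinear predictors: x1 is partly determined by x2
x2 = rng.normal(size=n)
x1 = 0.8 * x2 + rng.normal(scale=0.5, size=n)
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

def ols(y, *predictors):
    """OLS with an intercept; returns coefficients and residuals."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

# Step 1: regress x1 on x2; eta is the part of x1 not explained by x2
_, eta = ols(x1, x2)

# eta is uncorrelated with x2 by construction
print(np.corrcoef(eta, x2)[0, 1])  # ~0, up to floating-point error

# Step 2: y ~ x1 + x2 and y ~ eta + x2 produce identical fits,
# because {1, eta, x2} spans the same column space as {1, x1, x2}
_, resid_orig = ols(y, x1, x2)
_, resid_eta = ols(y, eta, x2)
print(np.allclose(resid_orig, resid_eta))  # True
```

The residuals of the two models agree exactly because replacing $x_1$ with $\eta$ is just a change of basis for the same design space, which is the algebraic point made above.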
Do you have any values of the response that are exactly 0 or 1? (those will cause problems with a logit transform)
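A minimal illustration of why exact 0s and 1s are a problem (the values here are arbitrary):

```python
import numpy as np

p = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
with np.errstate(divide="ignore"):
    logit = np.log(p / (1.0 - p))
print(logit)  # the endpoints map to -inf and +inf
```

The interior values transform fine, but 0 and 1 map to infinities, so any regression on the transformed scale breaks down for those observations.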
Have you tried plotting your data? What exploratory techniques have you used? What have other researchers in the area done?
You could try simulating some data that fits with a logit transform or a beta regression model (or anything else that you consider trying) and see how that compares to your data to get a better feel for which model may be more appropriate.
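A sketch of that simulation idea (the parameter values are invented; tune them to roughly match your data, then compare the simulated quantiles against your observed proportions):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Candidate 1: what a logit transform + normal errors implicitly assumes
z = rng.normal(loc=-0.5, scale=1.2, size=n)
logit_normal = 1.0 / (1.0 + np.exp(-z))

# Candidate 2: a beta distribution with a roughly comparable mean
beta_draws = rng.beta(2.0, 4.0, size=n)

# Compare summary quantiles of each candidate (do the same for the real data)
for name, sim in [("logit-normal", logit_normal), ("beta", beta_draws)]:
    q = np.quantile(sim, [0.05, 0.25, 0.5, 0.75, 0.95])
    print(f"{name}: {np.round(q, 3)}")
```

Note that neither candidate ever produces exact 0s or 1s; if your data contain them, that is itself diagnostic information about model choice.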
With what you have given us, we can only make suggestions; you need to decide what makes the most sense based on your understanding of the data, the science behind it, and the questions you are trying to ask. You may also need to consult with an expert in the area and/or a professional statistician. Choosing not to do a beta regression because it is beyond you is like your doctor saying that you may need brain surgery, but that he will take out your appendix instead because brains are beyond his experience while he is good with appendixes.
Best Answer
I don't think beta regression, as suggested by @O_Devinyak, will work well for this case as there are exact 0s and 1s in the data and the beta distribution only works for values between, but not including, 0 and 1.
A solution that has become more popular in economics is the so-called fractional logit model, which economists tend to attribute to Papke and Wooldridge (1996), though the basic idea can be traced back to at least Wedderburn (1974). Nowadays it is fairly easy to estimate such models. For example, in Stata (the statistical program I know best) you would use the `glm` command in combination with the `link(logit) family(binomial) vce(robust)` options.

Wedderburn, R. W. 1974. Quasi-likelihood functions, generalized linear models, and the Gauss–Newton method. Biometrika, 61(3): 439–447.
Papke, Leslie E. and Jeffrey M. Wooldridge. 1996. Econometric methods for fractional response variables with an application to 401(k) plan participation rates. Journal of Applied Econometrics, 11(6): 619–632.
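For readers without Stata, the quasi-likelihood fractional logit can be sketched in pure NumPy via IRLS (this is my own illustrative implementation, not the original answer's code; the simulated data and coefficients are invented). The key fact is that the Bernoulli score equations remain valid when $y$ is a fraction in $[0, 1]$, including exact 0s and 1s:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x = rng.normal(size=n)

# Fractional response in [0, 1], including exact 0s and 1s
mu_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))
y = np.clip(mu_true + rng.normal(scale=0.15, size=n), 0.0, 1.0)

# Quasi-likelihood fractional logit via IRLS: the quasi-score
# sum_i (y_i - mu_i) x_i = 0 is the same as for a Bernoulli GLM
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(50):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    W = mu * (1.0 - mu)                  # IRLS weights
    grad = X.T @ (y - mu)                # quasi-score
    hess = X.T @ (X * W[:, None])        # expected information
    step = np.linalg.solve(hess, grad)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break

# Robust (sandwich) standard errors, analogous to Stata's vce(robust)
bread = np.linalg.inv(hess)
meat = X.T @ (X * ((y - mu) ** 2)[:, None])
se_robust = np.sqrt(np.diag(bread @ meat @ bread))

print(beta)       # roughly near the true (0.5, 1.5), up to clipping bias
print(se_robust)
```

The robust variance is essential here: because the variance function is only a working assumption under quasi-likelihood, model-based standard errors are not trustworthy, which is why the Stata recipe above includes `vce(robust)`.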