[Math] get standard error from correlation coefficient

st.statistics

Hi, I have a paper that I'm reading and they propose an equation,

$$a = e^{bT},$$

that is fitted to their measurements, and they give the value of b as well as the coefficient of determination. Is this sufficient information to constrain the value of the standard error, and if so, how might I go about doing that? Could I just add values sampled from a normal distribution with mean zero, play around with the standard deviation of this distribution until I get something that reproduces the reported R^2, and take the standard error of that fit? (Would that be a form of bootstrapping?)
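
Concretely, something like this rough Python sketch is what I have in mind; the value of b, the target R^2, and the temperature range/sample size are all placeholders, since the paper only reports b and R^2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder values standing in for the paper's reported fit
b_paper = 0.05                     # reported exponent b (made up here)
r2_target = 0.90                   # reported coefficient of determination
T = np.linspace(0.0, 40.0, 50)     # assumed temperature range and sample size

def mean_r_squared(sigma, n_trials=200):
    """Average R^2 of a straight-line fit of ln(a) on T when Gaussian noise
    with standard deviation `sigma` is added to ln(a) = b*T."""
    r2 = []
    for _ in range(n_trials):
        ln_a = b_paper * T + rng.normal(0.0, sigma, size=T.size)
        r = np.corrcoef(T, ln_a)[0, 1]
        r2.append(r ** 2)
    return float(np.mean(r2))

# Crude search: increase the noise level until the simulated R^2 falls to
# the reported value, then treat that sigma as the implied scatter.
for sigma in np.linspace(0.01, 2.0, 200):
    if mean_r_squared(sigma) <= r2_target:
        print(f"noise sd of about {sigma:.3f} reproduces R^2 = {r2_target}")
        break
```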

Thanks!

Best Answer

Look for the space in which your data would have a normal distribution, or equivalently the space in which the relationship becomes linear.

This requires knowing (or assuming) the distribution of your experimental data a priori.

The correlation coefficient $R$ is most meaningful for linearly related data expressible as $y=mx+b$. While $R$ may still indicate the sign of a relationship (positive or negative correlation) for non-linearly related variables, it does not have a clear interpretation in non-linear cases.

Take the logarithm of both sides to get

$$\ln(a) = bT$$

and see whether you can compute the correlation and do a linear regression to find the best-fit linear relation between $\ln(a)$ and $T$. This way you can see whether the correlation coefficient is informative for you.
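
For concreteness, here is a minimal Python sketch of that log-transform-then-regress idea; the $(T, a)$ values are made up, since your actual data aren't in the post, and `scipy.stats.linregress` reports both the correlation and the standard error of the slope:

```python
import numpy as np
from scipy import stats

# Hypothetical measurements of a at known temperatures T; in practice these
# would be the paper's raw data, which the post does not include.
T = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
a = np.array([1.3, 1.7, 2.1, 2.9, 3.6, 4.8])

# Fit ln(a) = b*T + c; for the model a = exp(bT) the intercept should be ~0.
result = stats.linregress(T, np.log(a))

print("slope b       :", result.slope)
print("intercept     :", result.intercept)
print("correlation r :", result.rvalue)   # r^2 is the R^2 of the linear fit
print("std err of b  :", result.stderr)   # standard error of the slope
```

Incidentally, under the usual simple-regression assumptions the slope estimate, its standard error, the correlation, and the sample size are tied together by $SE(\hat b) = |\hat b|\sqrt{(1-R^2)/\big(R^2(n-2)\big)}$, so if the paper also states how many points were fitted, the reported $b$ and $R^2$ do constrain the standard error of the slope, assuming the fit was done by least squares in log space.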

But the key thing is knowing the distribution (or expected distribution) of your data. If it is not normally distributed in the space in which you are working, try to find a transform that moves the data into a space where the relationship is linear and the scatter is normally (Gaussian) distributed.
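
As a rough way to check that assumption, one could look at the residuals of the linear fit in log space; here is a small sketch using the same made-up data as above (the Shapiro-Wilk test is just one convenient normality check, not the only option):

```python
import numpy as np
from scipy import stats

# Same made-up (T, a) values as above; the point is only to show the check.
T = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
a = np.array([1.3, 1.7, 2.1, 2.9, 3.6, 4.8])

# Residuals of the straight-line fit in log space
fit = stats.linregress(T, np.log(a))
residuals = np.log(a) - (fit.slope * T + fit.intercept)

# Shapiro-Wilk test for normality of the residuals; with only a handful of
# points this has very little power, so treat it as a rough sanity check.
stat, p = stats.shapiro(residuals)
print(f"Shapiro-Wilk statistic = {stat:.3f}, p-value = {p:.3f}")
```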
