Suppose that we have two measured values $x$ and $y$ with maximum absolute errors of $e_x$ and $e_y$.
Is there a formula giving a good upper bound for the absolute and relative errors of $x^y$?
error-propagation
The first assertion deals with mean squared errors, which in probabilistic terms correspond to standard deviations.
Now, probability says that the variance of the sum of two independent variables is the sum of the variances. That is, if $z = x + y$ ($x$ and $y$ indep), then $\sigma_z^2 = \sigma_x^2 + \sigma_y^2 $ and $$e_z = \sigma_z = \sqrt{\sigma_x^2 + \sigma_y^2} = \sqrt{e_x^2 + e_y^2} $$
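A quick way to convince yourself of this identity is to sample it. Here is a minimal Maxima check (all values hypothetical: $\sigma_x = 0.3$, $\sigma_y = 0.4$, so $e_z$ should come out near $0.5$):
load(distrib) $
load(descriptive) $
/* hypothetical inputs: sigma_x = 0.3, sigma_y = 0.4 */
x_sample : random_normal(0, 0.3, 100000) $
y_sample : random_normal(0, 0.4, 100000) $
z_sample : x_sample + y_sample $  /* element-wise sum of the two samples */
[std(z_sample), sqrt(0.3^2 + 0.4^2)];  /* both should be close to 0.5 */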
Knowing this, and knowing that $Var(aX) = a^2\,Var(X)$, if $z = a x + (1-a) y$ (a weighted mean when $0 \le a \le 1$) we get:
$$\sigma_z^2 = a^2\sigma_x^2 + (1-a)^2\sigma_y^2 $$
$$e_z = \sqrt{a^2 e_x^2 + (1-a)^2 e_y^2} = a \sqrt{ e_x^2 + \left(\frac{1-a}{a}\right)^2 e_y^2} $$
In particular, if $a=1/2$ , then $e_z = \frac{1}{2}\sqrt{ e_x^2 + e_y^2} $
Another particular case: if $e_x = e_y$ then
$$e_z = e_x \sqrt{a^2 +(1-a)^2}$$
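The same sampling check works for the weighted mean. A sketch with a hypothetical weight $a = 0.3$ and $e_x = e_y = 1$, where the formula above predicts $e_z = \sqrt{0.3^2 + 0.7^2} \approx 0.762$:
load(distrib) $
load(descriptive) $
a : 0.3 $  /* hypothetical weight */
x_sample : random_normal(0, 1, 100000) $
y_sample : random_normal(0, 1, 100000) $
z_sample : a*x_sample + (1 - a)*y_sample $  /* weighted mean, element-wise */
[std(z_sample), sqrt(a^2 + (1 - a)^2)];  /* both should be close to 0.762 */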
What you need is to figure out the distribution of $T_{xy}$ given distributions for $R$ and $\tan \theta$. I'll assume that $R$ and $\theta$ are Gaussian variables with standard deviations given by the stated errors or maybe half of the stated errors (you would have to try to puzzle it out from whatever product literature you have). Note that this is already demonstrably incorrect, since $\theta$ has a finite range and a Gaussian variable must have an infinite range.
There are at least three ways to go from there:
(1) Approximate $R \tan \theta$ as linear in $R$ and $\theta$. This is inaccurate because you're ignoring higher-order terms, and it probably becomes more inaccurate as $\theta$ gets farther from 0. However, it does make the problem much simpler, and it may be the only way to get an exact formula. A linear combination of Gaussian variables is again Gaussian. (Even if you don't assume that $R$ and $\theta$ are Gaussian, assuming linearity is still a big simplification.) The mean of a linear combination of Gaussian variables is the same linear combination of the means, and, for independent variables, the variance is the combination of the variances with the coefficients squared.
(2) Try harder to get an exact formula. Use the so-called change of variables method. That isn't guaranteed to work.
(3) Use a Monte Carlo method to approximate the distribution of $R \tan \theta$ with a histogram.
My advice is to try the Monte Carlo method first, in order to get a feeling for the problem, and then the linearization method. I can help you work out the formulas for the linearization if you want. If the linearization method is too inaccurate, fall back to the Monte Carlo method. Does it matter how long it takes to compute the error estimate? If not, then generating a million samples via Monte Carlo won't hold you back.
EDIT: Here's a derivation of a linearized approximation and Monte Carlo approximation for comparison. In summary, mean and standard deviation of the linearized Gaussian approximation are:
$$\mu_{T_{x,y}}=R_{0}\,\tan \theta_{0}$$
$$\sigma_{T_{x,y}}=\sqrt{\tan ^2\theta_{0}\,\sigma_{R}^2+R_{0}^2\, \tan ^4\theta_{0}\,\sigma_{\theta}^2+2\,R_{0}^2\,\tan ^2 \theta_{0}\,\sigma_{\theta}^2+R_{0}^2\,\sigma_{\theta}^2}$$
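(For context: this is just the first-order delta method. Since $\partial T/\partial R = \tan\theta$ and $\partial T/\partial\theta = R\sec^2\theta$, the linearized variance is
$$\sigma_{T_{x,y}}^2 \approx \tan^2\theta_0\,\sigma_R^2 + R_0^2\,\sec^4\theta_0\,\sigma_\theta^2,$$
and expanding $\sec^4\theta_0 = \tan^4\theta_0 + 2\tan^2\theta_0 + 1$ reproduces the three $\sigma_\theta^2$ terms above.)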
Maxima session:
(%i2) T[x,y] : R*tan(theta) $
(%i3) foo : taylor(T[x,y], [R, theta], [R_0, theta_0], [1, 1]);
(%o3)/T/ tan(theta_0)*R_0 + (tan(theta_0)*(R - R_0) + (tan(theta_0)^2 + 1)*R_0*(theta - theta_0)) + ...
(%i4) foo : subst([R_0 = R[0], theta_0 = theta[0]], expand(foo));
(%o4) tan(theta[0])*R + R[0]*tan(theta[0])^2*theta + R[0]*theta - theta[0]*R[0]*tan(theta[0])^2 - theta[0]*R[0]
(%i5) A : coeff(foo, R);
(%o5) tan(theta[0])
(%i6) B : coeff(foo, theta);
(%o6) R[0]*tan(theta[0])^2 + R[0]
(%i7) C : expand(-B*theta - A*R + foo);
(%o7) - theta[0]*R[0]*tan(theta[0])^2 - theta[0]*R[0]
(%i8) mu[T[x,y]] : expand(C + B*theta[0] + A*R[0]);
(%o8) R[0]*tan(theta[0])
(%i9) sigma[T[x,y]] : sqrt(expand(B^2*sigma[theta]^2 + A^2*sigma[R]^2));
(%o9) sqrt(tan(theta[0])^2*sigma[R]^2 + R[0]^2*tan(theta[0])^4*sigma[theta]^2 + 2*R[0]^2*tan(theta[0])^2*sigma[theta]^2 + R[0]^2*sigma[theta]^2)
(%i10) [R[0], sigma[R], theta[0], sigma[theta]] : [17.29, 0.25, 0.9, 0.01] $
(%i11) ev([A, B, C]);
(%o11) [1.260158217550339, 44.7464980980593, - 40.27184828825338]
(%i12) load(distrib) $
(%i13) load(descriptive) $
(%i14) R_sample : random_normal(R[0], sigma[R], 10000) $
(%i15) theta_sample : random_normal(theta[0], sigma[theta], 10000) $
(%i16) T_xy_sample : R_sample*tan(theta_sample) $
(%i17) histogram(T_xy_sample, nclasses = 20) $
(%i18) [mean(T_xy_sample), ev(mu[T[x,y]])];
(%o18) [21.79822439822625, 21.78813558144536]
(%i19) [std(T_xy_sample), ev(sigma[T[x,y]])];
(%o19) [0.5516597186615362, 0.5472429351144614]
(%i20) tex('(mu[T[x,y]]) = mu[T[x,y]]);
$$\mu_{T_{x,y}}=R_{0}\,\tan \vartheta_{0}$$
(%o20) false
(%i21) tex('(sigma[T[x,y]]) = sigma[T[x,y]]);
$$\sigma_{T_{x,y}}=\sqrt{\tan ^2\vartheta_{0}\,\sigma_{R}^2+R_{0}^2\,\tan ^4\vartheta_{0}\,\sigma_{\vartheta}^2+2\,R_{0}^2\,\tan ^2\vartheta_{0}\,\sigma_{\vartheta}^2+R_{0}^2\,\sigma_{\vartheta}^2}$$
Maxima input script:
T[x, y] : R*tan(theta) $
foo : taylor (T[x, y], [R, theta], [R_0, theta_0], [1, 1]);
foo : subst ([R_0 = R[0], theta_0 = theta[0]], expand (foo));
A : coeff (foo, R);
B : coeff (foo, theta);
C : expand (foo - A*R - B*theta);
mu[T[x, y]] : expand (A*R[0] + B*theta[0] + C);
sigma[T[x, y]] : sqrt (expand (A^2*sigma[R]^2 + B^2*sigma[theta]^2));
[R[0], sigma[R], theta[0], sigma[theta]] : [17.29, 0.25, 0.9, 0.01] $
ev ([A, B, C]);
load(distrib) $
load(descriptive) $
R_sample : random_normal (R[0], sigma[R], 10000) $
theta_sample : random_normal (theta[0], sigma[theta], 10000) $
T_xy_sample : R_sample * tan(theta_sample) $
histogram (T_xy_sample, nclasses = 20) $
[mean(T_xy_sample), ev (mu[T[x,y]])];
[std(T_xy_sample), ev (sigma[T[x,y]])];
tex ('(mu[T[x, y]]) = mu[T[x, y]]);
tex ('(sigma[T[x, y]]) = sigma[T[x, y]]);
Best Answer
The notation gets confusing, so allow me to set $e_x=\epsilon_x$ and $e_y=\epsilon_y$. The trick here is to observe that $x^y = e^{y \log{x}}$ and Taylor expand as follows:
$$\begin{align}(x+\epsilon_x)^{y+\epsilon_y} &= e^{(y+\epsilon_y)\log{(x+\epsilon_x)}}\\ &=e^{y \log{(x+\epsilon_x)}} e^{\epsilon_y \log{(x+\epsilon_x)}} \\ &= e^{y \log{x} + y \log{(1+\epsilon_x/x)}}\left [1+ \epsilon_y \log{(x+\epsilon_x)}+ O(\epsilon_y^2)\right]\\ &= x^y e^{y[\epsilon_x/x + O(\epsilon_x^2)] } \left [1+ \epsilon_y \log{x} +\epsilon_y \log{(1+\epsilon_x/x)}+ O(\epsilon_y^2)\right]\\ &= x^y \left (1+ \epsilon_x\frac{y}{x} + O(\epsilon_x^2)\right )\left [1+ \epsilon_y \log{x} +\epsilon_y \frac{\epsilon_x}{x}+ O(\epsilon_y^2) + O(\epsilon_x^2 \epsilon_y)\right]\\ &= x^y \left (1 + \epsilon_x\frac{y}{x} + \epsilon_y \log{x} + \left( \frac{1+y \log{x}}{x}\right )\epsilon_x \epsilon_y+ O(\epsilon_y^2) + O(\epsilon_x^2 \epsilon_y) \right) \end{align}$$
If you wish to ignore all second-order error terms, then you have
$$(x+\epsilon_x)^{y+\epsilon_y} \approx x^y \left (1 + \epsilon_x\frac{y}{x} + \epsilon_y \log{x} \right ).$$
To first order, then, the relative error of $x^y$ is bounded by $\left|\frac{y}{x}\right| e_x + \left|\log{x}\right| e_y$, and the absolute error by $|x^y|$ times that quantity.
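As a sanity check of this first-order formula, here is a minimal Maxima sketch with hypothetical values $x = 2$, $y = 3$, $\epsilon_x = \epsilon_y = 0.01$:
x : 2.0 $
y : 3.0 $
ex : 0.01 $
ey : 0.01 $
exact : (x + ex)^(y + ey) $  /* exact perturbed value */
approx : x^y*(1 + ex*y/x + ey*log(x)) $  /* first-order approximation */
[exact, approx, abs(exact - approx)];  /* discrepancy is second order in the errors */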