Look at the simplest functions $x(t)$ and compute the exact expressions
$v(h,t) = \frac{x(t+h)-x(t)}{h}$.
For $x(t) = at$ you have $v(h,t) = \frac{x(t+h)-x(t)}{h} = a$ and therefore
the error term $O(h)$ is zero.
For $x(t) = at^2$ you have $v(h,t) = 2at + ah$ and $O(h) = ah.\;$ Thus the
error is constant in time; it depends only on $a$ and $h$.
For $x(t) = at^3$ you have $v(h,t) = 3at^2 + 3ath + ah^2$ and $O(h) = 3ath + ah^2.\;$ Once again you have a constant term, here $ah^2,\;$ but now
the second error term $3ath$ is time-dependent and its absolute value
increases with $t$.
In the first two cases you can state an obvious error of the form $\pm x.y$ m/s, but in the last case it is not so easy. You can either give a maximum error over the considered range of $t$, or, if you have to bound the error, give a maximum time interval over which the approximation is valid.
What you need to do is figure out the distribution of $T_{xy}$ given distributions for $R$ and $\tan \theta$. I'll assume that $R$ and $\theta$ are Gaussian variables with standard deviations given by the stated errors, or maybe half of the stated errors (you would have to puzzle it out from whatever product literature you have). Note that this is already demonstrably incorrect, since $\theta$ has a finite range while a Gaussian variable must have an infinite range.
There are at least three ways to go from there:
(1) Approximate $R \tan \theta$ as linear in $R$ and $\theta$. This is inaccurate because you're ignoring higher-order terms, and it probably becomes more inaccurate as $\theta$ gets farther from 0. However, it does make the problem much simpler, and it may be the only way to get an exact formula. A linear combination of Gaussian variables is again Gaussian. (Even if you don't assume that $R$ and $\theta$ are Gaussian, assuming linearity is still a big simplification.) The mean of a linear combination of Gaussian variables is the same linear combination of the means, and, for independent variables, the variance is the combination of the variances with squared coefficients.
(2) Try harder to get an exact formula. Use the so-called change of variables method. That isn't guaranteed to work.
(3) Use a Monte Carlo method to approximate the distribution of $R \tan \theta$ with a histogram.
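For readers without Maxima, here is a minimal Python sketch of options (1) and (3), using the same example values ($R_0 = 17.29$, $\sigma_R = 0.25$, $\theta_0 = 0.9$, $\sigma_\theta = 0.01$) as the Maxima session below; the function names are mine:

```python
import math
import random

def linearized_stats(R0, sR, th0, sTh):
    """Delta-method (linearized) mean and std of T = R*tan(theta),
    assuming independent Gaussian R ~ N(R0, sR^2), theta ~ N(th0, sTh^2)."""
    A = math.tan(th0)               # dT/dR at (R0, th0)
    B = R0 / math.cos(th0) ** 2     # dT/dtheta = R0*sec^2(th0)
    mu = R0 * math.tan(th0)
    sigma = math.sqrt(A**2 * sR**2 + B**2 * sTh**2)
    return mu, sigma

def monte_carlo_stats(R0, sR, th0, sTh, n=100_000, seed=0):
    """Sample-based estimate of the same mean and standard deviation."""
    rng = random.Random(seed)
    samples = [rng.gauss(R0, sR) * math.tan(rng.gauss(th0, sTh))
               for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var)

mu_lin, sigma_lin = linearized_stats(17.29, 0.25, 0.9, 0.01)
mu_mc, sigma_mc = monte_carlo_stats(17.29, 0.25, 0.9, 0.01)
print(mu_lin, sigma_lin)   # roughly 21.79 and 0.547
print(mu_mc, sigma_mc)     # close to the linearized values for these inputs
```

For these inputs the two methods agree closely, which suggests the linearization is adequate here; with a larger $\sigma_\theta$ or $\theta_0$ nearer $\pi/2$ the gap would widen.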
My advice is to try the Monte Carlo method first, in order to get a feeling for the problem, and then the linearization method. I can help you work out the formulas for the linearization if you want. If the linearization method is too inaccurate, consider using the Monte Carlo method -- does it matter how long it takes to compute the error estimate? If not, then generating a million numbers via Monte Carlo won't hold you back.
EDIT: Here's a derivation of a linearized approximation and Monte Carlo approximation for comparison. In summary, mean and standard deviation of the linearized Gaussian approximation are:
$$\mu_{T_{x,y}}=R_{0}\,\tan \theta_{0}$$
$$\sigma_{T_{x,y}}=\sqrt{\tan ^2\theta_{0}\,\sigma_{R}^2+R_{0}^2\,
\tan ^4\theta_{0}\,\sigma_{\theta}^2+2\,R_{0}^2\,\tan ^2
\theta_{0}\,\sigma_{\theta}^2+R_{0}^2\,\sigma_{\theta}^2}$$
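The three $\sigma_{\theta}$ terms come from the $\theta$-sensitivity of $R\tan\theta$ at $(R_{0},\theta_{0})$, which is $B = R_{0}\sec ^2\theta_{0}$; squaring it gives

$$B^2\,\sigma_{\theta}^2 = R_{0}^2\,\sec ^4\theta_{0}\,\sigma_{\theta}^2 = R_{0}^2\left(1+\tan ^2\theta_{0}\right)^2\sigma_{\theta}^2 = R_{0}^2\left(\tan ^4\theta_{0}+2\,\tan ^2\theta_{0}+1\right)\sigma_{\theta}^2,$$

while the $R$-sensitivity $A = \tan \theta_{0}$ contributes the $\tan ^2\theta_{0}\,\sigma_{R}^2$ term.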
Maxima session:
(%i2) T[x,y]:R*tan(theta)
(%i3) foo:taylor(T[x,y],[R,theta],[R_0,theta_0],[1,1])
(%o3)/T/ tan(theta_0)*R_0 + (tan(theta_0)*(R - R_0) + (tan(theta_0)^2 + 1)*R_0*(theta - theta_0)) + ...
(%i4) foo:subst([R_0 = R[0],theta_0 = theta[0]],expand(foo))
(%o4) tan(theta[0])*R + R[0]*tan(theta[0])^2*theta + R[0]*theta - theta[0]*R[0]*tan(theta[0])^2 - theta[0]*R[0]
(%i5) A:coeff(foo,R)
(%o5) tan(theta[0])
(%i6) B:coeff(foo,theta)
(%o6) R[0]*tan(theta[0])^2 + R[0]
(%i7) C:expand(-B*theta-A*R+foo)
(%o7) -theta[0]*R[0]*tan(theta[0])^2 - theta[0]*R[0]
(%i8) mu[T[x,y]]:expand(C+B*theta[0]+A*R[0])
(%o8) R[0]*tan(theta[0])
(%i9) sigma[T[x,y]]:sqrt(expand(B^2*sigma[theta]^2+A^2*sigma[R]^2))
(%o9) sqrt(tan(theta[0])^2*sigma[R]^2 + R[0]^2*tan(theta[0])^4*sigma[theta]^2 + 2*R[0]^2*tan(theta[0])^2*sigma[theta]^2 + R[0]^2*sigma[theta]^2)
(%i10) [R[0],sigma[R],theta[0],sigma[theta]]:[17.29,0.25,0.9,0.01]
(%i11) ev([A,B,C])
(%o11) [1.260158217550339, 44.7464980980593, -40.27184828825338]
(%i12) load(distrib)
(%i13) load(descriptive)
(%i14) R_sample:random_normal(R[0],sigma[R],10000)
(%i15) theta_sample:random_normal(theta[0],sigma[theta],10000)
(%i16) T_xy_sample:R_sample*tan(theta_sample)
(%i17) histogram(T_xy_sample,nclasses = 20)
(%i18) [mean(T_xy_sample),ev(mu[T[x,y]])]
(%o18) [21.79822439822625, 21.78813558144536]
(%i19) [std(T_xy_sample),ev(sigma[T[x,y]])]
(%o19) [0.5516597186615362, 0.5472429351144614]
(%i20) tex('(mu[T[x,y]]) = mu[T[x,y]])
$$\mu_{T_{x,y}}=R_{0}\,\tan \vartheta_{0}$$
(%o20) false
(%i21) tex('(sigma[T[x,y]]) = sigma[T[x,y]])
$$\sigma_{T_{x,y}}=\sqrt{\tan ^2\vartheta_{0}\,\sigma_{R}^2+R_{0}^2\,\tan ^4\vartheta_{0}\,\sigma_{\vartheta}^2+2\,R_{0}^2\,\tan ^2\vartheta_{0}\,\sigma_{\vartheta}^2+R_{0}^2\,\sigma_{\vartheta}^2}$$
Maxima input script:
T[x, y] : R*tan(theta) $
foo : taylor (T[x, y], [R, theta], [R_0, theta_0], [1, 1]);
foo : subst ([R_0 = R[0], theta_0 = theta[0]], expand (foo));
A : coeff (foo, R);
B : coeff (foo, theta);
C : expand (foo - A*R - B*theta);
mu[T[x, y]] : expand (A*R[0] + B*theta[0] + C);
sigma[T[x, y]] : sqrt (expand (A^2*sigma[R]^2 + B^2*sigma[theta]^2));
[R[0], sigma[R], theta[0], sigma[theta]] : [17.29, 0.25, 0.9, 0.01] $
ev ([A, B, C]);
load(distrib) $
load(descriptive) $
R_sample : random_normal (R[0], sigma[R], 10000) $
theta_sample : random_normal (theta[0], sigma[theta], 10000) $
T_xy_sample : R_sample * tan(theta_sample) $
histogram (T_xy_sample, nclasses = 20) $
[mean(T_xy_sample), ev (mu[T[x,y]])];
[std(T_xy_sample), ev (sigma[T[x,y]])];
tex ('(mu[T[x, y]]) = mu[T[x, y]]);
tex ('(sigma[T[x, y]]) = sigma[T[x, y]]);
Best Answer
We have $\frac{\partial}{\partial x}\arctan\left(\frac{x}{y}\right) = \frac{y}{x^2+y^2}$, so the error in the angle due to an error in $a$ is $\frac{b}{a^2+b^2}\,\Delta a$. You can do the same for the error in $b$ and add the two contributions (in quadrature, if the errors are independent) to get the total error in $\theta$.
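As a quick numerical sketch of this (my own; the function name and test values are arbitrary), the propagated error in $\theta = \arctan(a/b)$ with the two independent contributions combined in quadrature:

```python
import math

def theta_error(a, b, da, db):
    """Propagated error in theta = arctan(a/b) for independent errors da, db.

    d(theta)/da = b/(a^2 + b^2),  d(theta)/db = -a/(a^2 + b^2);
    independent contributions combine in quadrature."""
    s = a * a + b * b
    return math.hypot(b * da / s, a * db / s)

# Example values: a = 3, b = 4 with small independent errors.
print(theta_error(3.0, 4.0, 0.01, 0.02))
```

For small errors this agrees with directly perturbing $a$ and $b$ in $\arctan(a/b)$ and measuring the change in the angle.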