What you need is to figure out the distribution of $T_{xy}$ given distributions for $R$ and $\theta$. I'll assume that $R$ and $\theta$ are Gaussian variables with standard deviations given by the stated errors, or maybe half of the stated errors (you would have to puzzle that out from whatever product literature you have). Note that this is already, strictly speaking, incorrect: $\theta$ is confined to a finite range, while a Gaussian variable has unbounded support.
There are at least three ways to go from there:
(1) Approximate $R \tan \theta$ as linear in $R$ and $\theta$. This is inaccurate because you're ignoring higher-order terms, and it probably becomes more inaccurate as $\theta$ moves away from 0, since the curvature of $\tan \theta$ grows there. However, it makes the problem much simpler, and it may be the only way to get an exact formula: a linear combination of Gaussian variables is again Gaussian. (Even if you don't assume that $R$ and $\theta$ are Gaussian, assuming linearity is still a big simplification.) The mean of a linear combination of Gaussian variables is the same linear combination of the means, and (for independent variables) the variance is the combination of the variances weighted by the squared coefficients.
(2) Try harder to get an exact formula. Use the so-called change of variables method. That isn't guaranteed to work.
(3) Use a Monte Carlo method to approximate the distribution of $R \tan \theta$ with a histogram.
My advice is to try the Monte Carlo method first, in order to get a feeling for the problem, and then the linearization method. I can help you work out the formulas for the linearization if you want. If the linearization method turns out to be too inaccurate, fall back on the Monte Carlo method. Does it matter how long it takes to compute the error estimate? If not, then generating a million samples via Monte Carlo won't hold you back.
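To get a feel for what the two approaches produce, here is a minimal sketch in Python (standard library only, for anyone without Maxima handy). For concreteness it uses the same example values, $R_0 = 17.29$, $\sigma_R = 0.25$, $\theta_0 = 0.9$, $\sigma_\theta = 0.01$, that appear in the Maxima session below:

```python
import math
import random
import statistics

# Example values (the same ones used in the Maxima session).
R0, sigma_R = 17.29, 0.25        # mean and standard deviation of R
theta0, sigma_theta = 0.9, 0.01  # mean and standard deviation of theta (radians)

# Linearized (first-order Taylor) Gaussian approximation of T = R*tan(theta):
# sigma^2 ~ (dT/dR)^2 sigma_R^2 + (dT/dtheta)^2 sigma_theta^2.
mu_lin = R0 * math.tan(theta0)
dT_dR = math.tan(theta0)                # partial derivative w.r.t. R
dT_dtheta = R0 / math.cos(theta0) ** 2  # partial derivative w.r.t. theta
sigma_lin = math.hypot(dT_dR * sigma_R, dT_dtheta * sigma_theta)

# Monte Carlo check: sample R and theta independently and summarize T = R*tan(theta).
random.seed(0)  # fixed seed so the run is reproducible
samples = [random.gauss(R0, sigma_R) * math.tan(random.gauss(theta0, sigma_theta))
           for _ in range(100_000)]
mu_mc = statistics.fmean(samples)
sigma_mc = statistics.stdev(samples)

print(mu_lin, sigma_lin)
print(mu_mc, sigma_mc)
```

The two pairs of numbers should agree to a couple of decimal places, which is a quick sanity check that the linearization is adequate at these parameter values.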
EDIT: Here's a derivation of the linearized approximation, together with a Monte Carlo approximation for comparison. In summary, the mean and standard deviation of the linearized Gaussian approximation are:
$$\mu_{T_{x,y}}=R_{0}\,\tan \theta_{0}$$
$$\sigma_{T_{x,y}}=\sqrt{\tan ^2\theta_{0}\,\sigma_{R}^2+R_{0}^2\,
\tan ^4\theta_{0}\,\sigma_{\theta}^2+2\,R_{0}^2\,\tan ^2
\theta_{0}\,\sigma_{\theta}^2+R_{0}^2\,\sigma_{\theta}^2}$$
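Incidentally, the three $\sigma_{\theta}^2$ terms under the square root can be collapsed, since $\tan^4\theta_0 + 2\tan^2\theta_0 + 1 = (\tan^2\theta_0 + 1)^2 = \sec^4\theta_0$, which gives the more familiar form of first-order error propagation:

$$\sigma_{T_{x,y}}=\sqrt{\tan ^2\theta_{0}\,\sigma_{R}^2+R_{0}^2\,\sec ^4\theta_{0}\,\sigma_{\theta}^2}$$

i.e., $\sigma^2 \approx (\partial T/\partial R)^2\,\sigma_{R}^2 + (\partial T/\partial \theta)^2\,\sigma_{\theta}^2$ with $\partial T/\partial R = \tan\theta$ and $\partial T/\partial \theta = R\sec^2\theta$.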
Maxima session (2D output rewritten in linear notation for readability):

```
(%i2) T[x,y] : R*tan(theta);
(%i3) foo : taylor (T[x,y], [R, theta], [R_0, theta_0], [1, 1]);
(%o3)/T/ tan(theta_0)*R_0 + (tan(theta_0)*(R - R_0)
                           + (tan(theta_0)^2 + 1)*R_0*(theta - theta_0)) + ...
(%i4) foo : subst ([R_0 = R[0], theta_0 = theta[0]], expand (foo));
(%o4) tan(theta[0])*R + R[0]*tan(theta[0])^2*theta + R[0]*theta
      - theta[0]*R[0]*tan(theta[0])^2 - theta[0]*R[0]
(%i5) A : coeff (foo, R);
(%o5) tan(theta[0])
(%i6) B : coeff (foo, theta);
(%o6) R[0]*tan(theta[0])^2 + R[0]
(%i7) C : expand (-B*theta - A*R + foo);
(%o7) - theta[0]*R[0]*tan(theta[0])^2 - theta[0]*R[0]
(%i8) mu[T[x,y]] : expand (C + B*theta[0] + A*R[0]);
(%o8) R[0]*tan(theta[0])
(%i9) sigma[T[x,y]] : sqrt (expand (B^2*sigma[theta]^2 + A^2*sigma[R]^2));
(%o9) sqrt(tan(theta[0])^2*sigma[R]^2 + R[0]^2*tan(theta[0])^4*sigma[theta]^2
           + 2*R[0]^2*tan(theta[0])^2*sigma[theta]^2 + R[0]^2*sigma[theta]^2)
(%i10) [R[0], sigma[R], theta[0], sigma[theta]] : [17.29, 0.25, 0.9, 0.01] $
(%i11) ev ([A, B, C]);
(%o11) [1.260158217550339, 44.7464980980593, - 40.27184828825338]
(%i12) load (distrib) $
(%i13) load (descriptive) $
(%i14) R_sample : random_normal (R[0], sigma[R], 10000) $
(%i15) theta_sample : random_normal (theta[0], sigma[theta], 10000) $
(%i16) T_xy_sample : R_sample * tan(theta_sample) $
(%i17) histogram (T_xy_sample, nclasses = 20) $
(%i18) [mean (T_xy_sample), ev (mu[T[x,y]])];
(%o18) [21.79822439822625, 21.78813558144536]
(%i19) [std (T_xy_sample), ev (sigma[T[x,y]])];
(%o19) [0.5516597186615362, 0.5472429351144614]
(%i20) tex ('(mu[T[x,y]]) = mu[T[x,y]]);
$$\mu_{T_{x,y}}=R_{0}\,\tan \vartheta_{0}$$
(%o20) false
(%i21) tex ('(sigma[T[x,y]]) = sigma[T[x,y]]);
$$\sigma_{T_{x,y}}=\sqrt{\tan ^2\vartheta_{0}\,\sigma_{R}^2+R_{0}^2\,
\tan ^4\vartheta_{0}\,\sigma_{\vartheta}^2+2\,R_{0}^2\,\tan ^2
\vartheta_{0}\,\sigma_{\vartheta}^2+R_{0}^2\,\sigma_{\vartheta}^2}$$
(%o21) false
```
Maxima input script:

```
T[x, y] : R*tan(theta) $
foo : taylor (T[x, y], [R, theta], [R_0, theta_0], [1, 1]);
foo : subst ([R_0 = R[0], theta_0 = theta[0]], expand (foo));
A : coeff (foo, R);
B : coeff (foo, theta);
C : expand (foo - A*R - B*theta);
mu[T[x, y]] : expand (A*R[0] + B*theta[0] + C);
sigma[T[x, y]] : sqrt (expand (A^2*sigma[R]^2 + B^2*sigma[theta]^2));
[R[0], sigma[R], theta[0], sigma[theta]] : [17.29, 0.25, 0.9, 0.01] $
ev ([A, B, C]);
load(distrib) $
load(descriptive) $
R_sample : random_normal (R[0], sigma[R], 10000) $
theta_sample : random_normal (theta[0], sigma[theta], 10000) $
T_xy_sample : R_sample * tan(theta_sample) $
histogram (T_xy_sample, nclasses = 20) $
[mean(T_xy_sample), ev (mu[T[x,y]])];
[std(T_xy_sample), ev (sigma[T[x,y]])];
tex ('(mu[T[x, y]]) = mu[T[x, y]]);
tex ('(sigma[T[x, y]]) = sigma[T[x, y]]);
```
Best Answer
As @Harish has already shown, the maximum uncertainty is much smaller than what you have found. Given that you are using the general formula for error propagation (partial derivatives, etc.), I'm guessing you are not looking for the maximum uncertainty but for the "one-sigma" uncertainty, so your answer should be smaller than what he found. Also, he has accidentally found twice the maximum error, since he subtracted the lower bound from the upper bound (the full range) instead of quoting half the range.
Your problem is that you are not paying careful attention to units. In your second-to-last line you say

$\sigma_{\tan{(\theta)}} = 1.17 \times 1.2$

But notice that your "1.2" came from the original uncertainty in $\theta$, so it is really $1.2^\circ$. That means your answer for the uncertainty in $\tan{\theta}$ is in degrees, which can't be right, since $\tan{\theta}$ is a dimensionless number. You need to convert $\sigma_\theta$ to radians first. This will give you a far more reasonable answer.
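As a quick numerical check of this point, here is a short Python sketch. The question does not state $\theta$ itself, so the angle below is a hypothetical reconstruction: $\theta = 22.5^\circ$ gives $\sec^2\theta \approx 1.17$, matching the 1.17 factor above.

```python
import math

# Hypothetical reconstruction from the question: theta = 22.5 degrees gives
# sec^2(theta) ~ 1.17; the stated angular uncertainty is 1.2 degrees.
theta = math.radians(22.5)
sigma_theta_deg = 1.2

dtan_dtheta = 1 / math.cos(theta) ** 2  # d(tan theta)/d(theta) = sec^2(theta)

# Wrong: leaves sigma_theta in degrees, even though the derivative assumes radians.
sigma_tan_wrong = dtan_dtheta * sigma_theta_deg

# Right: convert sigma_theta to radians before propagating.
sigma_tan = dtan_dtheta * math.radians(sigma_theta_deg)

print(sigma_tan_wrong, sigma_tan)
```

The mistaken version is larger by exactly the factor $180/\pi \approx 57.3$, which is why the propagated error looked so implausibly big.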