If you have a random variable $X$ distributed with continuous distribution function $F$, and you define a random variable $Y=h(X)$, then what is its distribution function? Let's just use the definition of distribution function:
\begin{align}
G(y) &= P\{Y \le y\} \\
&= P\{h(X) \le y\}
\end{align}
If $h$ is monotonically increasing (hence invertible) and differentiable, then the next steps are easy:
\begin{align}
G(y) &= P\{X \le h^{-1}(y)\} \\
&= F(h^{-1}(y))\\
g(y) &= \frac{d}{dy}G(y) = f(h^{-1}(y))\frac{d}{dy}h^{-1}(y)
\end{align}
By considering the decreasing case, you can see that the general formula for monotonic $h$ is:
\begin{align}
g(y) &= f(h^{-1}(y))\left|\frac{d}{dy}h^{-1}(y)\right|
\end{align}
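As a quick sanity check of this formula, here is a sketch in R; the choice $h(x)=e^x$ with $X\sim N(0,1)$ is purely an illustrative assumption, under which $Y$ is standard lognormal, so the result can be compared against dlnorm:
h.inv = function(y) log(y)            # h^{-1}(y) for the assumed h(x) = exp(x)
h.inv.deriv = function(y) 1/y         # d/dy h^{-1}(y)
y = seq(0.1, 5, by = 0.1)
g = dnorm(h.inv(y)) * abs(h.inv.deriv(y))
all.equal(g, dlnorm(y))               # [1] TRUE -- matches the lognormal density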
You are interested, though, in cases where $h$ is not invertible, and in cases where $h$ takes many arguments and returns a single value while the random variables remain continuous. So, consider a collection of random variables $X_1,\ldots,X_K$ with continuous joint distribution function $F(X_1,\ldots,X_K)$ and a random variable $Y$ defined by a differentiable function $h$ as $Y=h(X_1,\ldots,X_K)$:
\begin{align}
G(y) &= P\{Y \le y\} \\
&= P\{h(X_1,\ldots,X_K) \le y\}\\
&= \int_{h(X_1,\ldots,X_K) \le y} f(X_1,\ldots,X_K) d X_1 d X_2 \ldots dX_K
\end{align}
The random variable $Y$ has density:
\begin{align}
g(y) &= \frac{d}{dy} \int_{h(X_1,\ldots,X_K) \le y} f(X_1,\ldots,X_K) d X_1 d X_2 \ldots dX_K
\end{align}
This is not that useful in practice, though. Generally, you are going to have to find a way, on a function-by-function basis, to make evaluating these two items tractable. In the case of $Y=\sin(X)$, $\sin$ is periodic, so you just chop up its domain into half-cycles (within which it is monotonic and invertible). You can get the density (except at the points $Y=\pm1$, where the derivative below blows up) from the infinite series (which, as a practical matter, you approximate by just leaving off the terms where $f(x)$ is very small):
\begin{align}
g(y) &= \sum_{x:\,\sin(x)=y} f(x) \left| \frac{d}{dy} \sin^{-1}(y) \right|
\end{align}
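Here is a sketch of that truncation in R, under the illustrative assumption $X\sim N(0,1)$ (so terms more than a few cycles from zero contribute essentially nothing):
g = function(y, kmax = 3) {           # truncated series, keeping kmax cycles each side
  k = -kmax:kmax
  roots = c(asin(y) + 2*pi*k,         # solutions of sin(x) = y on rising half-cycles
            pi - asin(y) + 2*pi*k)    # ... and on falling half-cycles
  sum(dnorm(roots)) / sqrt(1 - y^2)   # |d/dy asin(y)| = 1/sqrt(1 - y^2)
}
yy = seq(-0.99, 0.99, by = 0.01)
plot(density(sin(rnorm(1e5))))        # simulated density of sin(X)
lines(yy, sapply(yy, g), col = "red") # series approximation overlaid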
For your example of $Y=X_1X_2$:
\begin{align}
G(y) &= P\{Y \le y\} \\
&= \int_{X_1X_2 \le y} f(X_1,X_2) d X_1 d X_2
\end{align}
Because of the way sign and multiplication work, evaluating this integral is a bit annoying. Let's evaluate it for $y\ge0$. For such a $y$, $X_1X_2\le y$ any time exactly one of the $X$s is negative, any time both are positive but not too big, and any time both are negative but not too big in absolute value:
\begin{align}
G(y) &= \int_0^\infty \int_{-\infty}^0 f(X_1,X_2) \, d X_2 \, d X_1 + \int_{-\infty}^0 \int_0^\infty f(X_1,X_2) \, d X_2 \, d X_1\\
&+ \int_0^\infty \int_0^{\frac{y}{X_1}} f(X_1,X_2) \, d X_2 \, d X_1 + \int_{-\infty}^0 \int_{\frac{y}{X_1}}^0 f(X_1,X_2) \, d X_2 \, d X_1
\end{align}
Then the density of $Y$ is going to be:
\begin{align}
g(y) &= \frac{d}{dy}G(y)\\
&= \int_0^\infty \frac{1}{X_1}f(X_1,\frac{y}{X_1}) d X_1 + \int_{-\infty}^0 -\frac{1}{X_1} f(X_1,\frac{y}{X_1}) d X_1
\end{align}
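The two integrals combine into the standard product-density formula $g(y)=\int_{-\infty}^{\infty}\frac{1}{|X_1|}f\left(X_1,\frac{y}{X_1}\right)dX_1$. As a numerical sketch in R, assume (purely for illustration) that $X_1$ and $X_2$ are independent standard normals, in which case the product density has the known closed form $K_0(|y|)/\pi$:
g = function(y) {                     # g(y) = int 1/|x| f(x, y/x) dx, split at zero
  integrand = function(x) dnorm(x) * dnorm(y/x) / abs(x)
  integrate(integrand, 0, Inf)$value + integrate(integrand, -Inf, 0)$value
}
g(1)                                  # density of X1*X2 at y = 1, approx 0.134
besselK(1, 0) / pi                    # matches the closed form K_0(|y|)/pi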
For a standard linear regression that meets the normal assumptions, the variances of your parameter estimates can be taken from the variance-covariance matrix, $\Sigma$. For example, the variance of the intercept is the first element on the main diagonal, $\Sigma_{11}$; the variance of the slope on $X_1$ is the second element, $\Sigma_{22}$; and so on.
There are probably many ways to skin a cat, but the standard calculation of the variance-covariance matrix uses the residual variance from your model, $s^2$, and your design matrix, $X$:
$$
\mathrm{VCov}(\text{model}) = s^2(X' X)^{-1}
$$
Here's a worked example of the calculations with your data and R:
x = c(1:6); y = c(18, 14, 15, 12, 7, 6); m = lm(y ~ x)
summary(m)
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 20.4000 1.4119 14.45 0.000133 ***
# x -2.4000 0.3625 -6.62 0.002700 **
#
# Residual standard error: 1.517 on 4 degrees of freedom
s = summary(m)$sigma; s # [1] 1.516575
dm = model.matrix(m); dm
# (Intercept) x
# 1 1 1
# 2 1 2
# 3 1 3
# 4 1 4
# 5 1 5
# 6 1 6
s^2*solve(t(dm)%*%dm)
# (Intercept) x
# (Intercept) 1.993333 -0.4600000
# x -0.460000 0.1314286
vcov(m) # you can see that this is the same as the manual calculation above
# (Intercept) x
# (Intercept) 1.993333 -0.4600000
# x -0.460000 0.1314286
sqrt(diag(vcov(m))) # these are the same standard errors as the summary output
# (Intercept) x
# 1.4118546 0.3625308
Best Answer
Simulating from the density $$f(x)=\sin(x)\mathbb{I}_{(0,\pi/2)}(x)$$ can be done by the inverse cdf method, since the cdf is then $$F(x)=\int_0^x \sin(t)\,\text{d}t=1-\cos(x)\qquad 0\le x\le\pi/2$$ The inverse cdf method consists in simulating $U\sim\mathcal{U}(0,1)$, a standard Uniform, and then solving $F(x)=u$, which in this case means $$1-\cos(x)=u\quad\text{i.e.}\quad x=\arccos(1-u)$$
The Monte Carlo approximations to the mean and variance are then easily coded, for instance in R.
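A minimal sketch along those lines (the sample size is an arbitrary choice; the exact values are $\mathbb{E}[X]=1$ and $\text{var}(X)=\pi-3\approx0.1416$):
n = 1e5
u = runif(n)                          # U ~ Uniform(0,1)
x = acos(1 - u)                       # inverse cdf: F^{-1}(u) = arccos(1 - u)
mean(x)                               # Monte Carlo estimate of E[X], close to 1
var(x)                                # Monte Carlo estimate of var(X), close to pi - 3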
Note that the expectation coincides with your suggestion to use the identity $$\mathbb{E}[X]=\int_0^{\pi/2} \{1-F(x)\}\,\text{d}x=\int_0^{\pi/2} \cos(x)\,\text{d}x=1$$
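As a quick numerical check of that identity with base R's integrate:
integrate(function(x) 1 - (1 - cos(x)), 0, pi/2)$value  # integral of 1 - F(x), approx 1
integrate(function(x) x * sin(x), 0, pi/2)$value        # E[X] computed directly, approx 1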