For the first question, indeed there is no need to work directly with the pdf of $(X_1,X_2,X_3)$.
Let the desired probability be $$p=P(X_1>0,X_2>0,X_3>0)$$
Analogous to the two-variable case, we have $$(X_1,X_2,X_3)\stackrel{d}{=}(-X_1,-X_2,-X_3)$$
So due to symmetry we must have
$$p=P(-X_1>0,-X_2>0,-X_3>0)=P(X_1<0,X_2<0,X_3<0)\tag{1}$$
Continuing from $(1)$, note that the complement of $\{X_1<0,X_2<0,X_3<0\}$ is $\{X_1>0\}\cup\{X_2>0\}\cup\{X_3>0\}$ (up to events of probability zero), so by inclusion-exclusion,
\begin{align}
1-p&=P\left[\{X_1>0\}\cup\{X_2>0\}\cup\{X_3>0\}\right]
\\\\&=P(X_1>0)+P(X_2>0)+P(X_3>0)-P(X_1>0,X_2>0)
\\\\&\quad-P(X_2>0,X_3>0)-P(X_1>0,X_3>0)+p
\\\\&=\frac{3}{2}-\left[\frac{3}{4}+\frac{1}{2\pi}(\sin^{-1}\rho_{12}+\sin^{-1}\rho_{23}+\sin^{-1}\rho_{13})\right]+p
\end{align}
Or,
$$1-\left[\frac{3}{4}-\frac{1}{2\pi}(\sin^{-1}\rho_{12}+\sin^{-1}\rho_{23}+\sin^{-1}\rho_{13})\right]=2p$$
Finally,
$$p=\frac{1}{8}+\frac{1}{4\pi}(\sin^{-1}\rho_{12}+\sin^{-1}\rho_{23}+\sin^{-1}\rho_{13})$$
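As a quick sanity check (a sketch not in the original answer), the formula can be verified by Monte Carlo. In the equicorrelated case $\rho_{12}=\rho_{23}=\rho_{13}=1/2$ it predicts $p = \frac{1}{8} + \frac{3}{4\pi}\sin^{-1}(1/2) = \frac{1}{4}$ exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Equicorrelated trivariate normal with all pairwise correlations 1/2
rho = 0.5
Sigma = np.full((3, 3), rho)
np.fill_diagonal(Sigma, 1.0)

X = rng.multivariate_normal(np.zeros(3), Sigma, size=1_000_000)
p_mc = (X > 0).all(axis=1).mean()                   # Monte Carlo estimate of p
p_formula = 1/8 + 3 * np.arcsin(rho) / (4 * np.pi)  # closed form; equals 1/4 here

print(p_mc, p_formula)
```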
As a side note, I think that for the derivation of the expression for $P(X_1>0,X_2>0)$, it suffices to evaluate the corresponding double integral using a polar transformation. The details are not very messy. What you have done is perfectly fine by the way.
If you assume that $X_1$ and $X_2$ have the same variance $\sigma^2$, namely that $\Sigma = \sigma^2\begin{pmatrix} 1 & \rho\\ \rho & 1 \end{pmatrix}$, then by diagonalizing $\Sigma$ you can see that $x_1 := X_1+X_2$ and $x_2 := X_1-X_2$ are independent, with variances $2\sigma^2(1+\rho)$ and $2\sigma^2(1-\rho)$ respectively.
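This decomposition is easy to check by simulation (a sketch under the setup above; the values of $\sigma$ and $\rho$ below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, rho = 2.0, 0.3
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])

X = rng.multivariate_normal([0.0, 0.0], cov, size=500_000)
x1, x2 = X[:, 0] + X[:, 1], X[:, 0] - X[:, 1]   # sum and difference

print(np.corrcoef(x1, x2)[0, 1])                # ~0: uncorrelated, hence independent (jointly normal)
print(np.var(x1), 2 * sigma**2 * (1 + rho))     # ~2*sigma^2*(1+rho)
print(np.var(x2), 2 * sigma**2 * (1 - rho))     # ~2*sigma^2*(1-rho)
```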
Furthermore, from the bilinearity of the sample covariance we have: $$\text{svar}(X_1) + \text{svar}(X_2) = \frac{1}{2}\left(\text{svar}(x_1) + \text{svar}(x_2)\right)$$
(I write $\text{svar}$ for the sample variance, to distinguish it from the population variance.)
Therefore you can express $V_D$ and $V_p$ in terms of the two independent variables $s_1^2 = \text{svar}(x_1)$ and $s_2^2 = \text{svar}(x_2)$ as
$$ V_D = s_2^2 , \quad V_p = \frac{1}{4}(s_1^2 + s_2^2) $$
where, by the well-known distribution of the sample variance of a normal sample, we know that
$$ \frac{n-1}{2\sigma^2(1+\rho)}s_1^2 \sim \chi^2_{n-1}, \quad \frac{n-1}{2\sigma^2(1-\rho)}s_2^2 \sim \chi^2_{n-1}$$
or equivalently $s_{1,2}^2 \sim \text{Gamma}((n-1)/2,4\sigma^2(1 \pm \rho)/(n-1))$.
Your desired ratio is $$ Z = 4\frac{s_2^2}{s_1^2 + s_2^2} $$
Notice that this implies $0 \le Z \le 4$, so a $\chi^2$ approximation will probably not work very well. (Note also that when $\rho \to 1$, $s_2^2 \to 0$, so $Z$ becomes concentrated at $0$; likewise, when $\rho \to -1$, $s_1^2 \to 0$, so $Z$ becomes concentrated at $4$.)
In fact, the distribution of such a ratio of gamma variables is known in closed form (see e.g. here). In this case, $Z/4$ has density
$$f(z) = \frac{z^{k-1}(1-z)^{k-1}}{r^k B(k,k)} \left(1 + \frac{1-r}{r}z \right)^{-2k}, \quad 0 < z < 1.$$
where $k = (n-1)/2$, $r = \frac{1-\rho}{1+\rho}$, and $B(\cdot\,,\cdot)$ is the Beta function.
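This density can be checked against simulation (a sketch; the values of $n$, $\rho$, and $\sigma$ are arbitrary choices): draw $s_1^2, s_2^2$ from the Gamma distributions above and compare the empirical CDF of $Z/4$ with the closed form.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

rng = np.random.default_rng(2)
n, rho, sigma = 10, 0.5, 1.0
k = (n - 1) / 2
r = (1 - rho) / (1 + rho)

# s1^2, s2^2 ~ Gamma(k, scale) with the scales derived above
s1 = rng.gamma(k, 4 * sigma**2 * (1 + rho) / (n - 1), size=200_000)
s2 = rng.gamma(k, 4 * sigma**2 * (1 - rho) / (n - 1), size=200_000)
w = s2 / (s1 + s2)                                   # = Z/4

def f(z):
    """Closed-form density of Z/4 from the text."""
    return (z**(k - 1) * (1 - z)**(k - 1) / (r**k * beta(k, k))
            * (1 + (1 - r) / r * z) ** (-2 * k))

cdf_half, _ = quad(f, 0.0, 0.5)
print(cdf_half, np.mean(w < 0.5))                    # the two should nearly agree
```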
Furthermore, you can easily calculate the correlation between $V_D$ and $V_p$:
$$\text{Cov}(V_D,V_p) = \text{Cov}( s_2^2 ,\frac{1}{4}(s_1^2 + s_2^2)) = \frac{1}{4} \text{var}(s_2^2) = \frac{2\sigma^4(1-\rho)^2}{n-1}$$
$$\text{var}(V_p) = \frac{1}{16} (\text{var}(s_1^2) + \text{var}(s_2^2))= \frac{\sigma^4(1+\rho^2)}{n-1}$$
$$\rho_{V_D,V_p} = \frac{\text{Cov}(V_D,V_p)}{\sqrt{\text{var}(V_D)\text{var}(V_p)}} = \frac{1-\rho}{\sqrt{2(1+\rho^2)}}$$
(Note that when $\rho=1/2$ you get exactly $\frac{1/2}{\sqrt{2 \cdot 5/4}} = 1/\sqrt{10}$.)
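A quick simulation confirms this correlation (a sketch; the parameter values are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho, sigma = 8, 0.5, 1.0
k = (n - 1) / 2
reps = 200_000

# Draw s1^2, s2^2 from their Gamma distributions and form V_D, V_p
s1 = rng.gamma(k, 4 * sigma**2 * (1 + rho) / (n - 1), size=reps)
s2 = rng.gamma(k, 4 * sigma**2 * (1 - rho) / (n - 1), size=reps)
VD, Vp = s2, (s1 + s2) / 4

r_emp = np.corrcoef(VD, Vp)[0, 1]
r_theory = (1 - rho) / np.sqrt(2 * (1 + rho**2))   # = 1/sqrt(10) for rho = 1/2

print(r_emp, r_theory)
```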
Best Answer
I'll start by providing the definition of comonotonicity and countermonotonicity. Then, I'll mention why this is relevant to compute the minimum and maximum possible correlation coefficient between two random variables. And finally, I'll compute these bounds for the lognormal random variables $X_1$ and $X_2$.
Comonotonicity and countermonotonicity
The random variables $X_1, \ldots, X_d$ are said to be comonotonic if their copula is the Fréchet upper bound $M(u_1, \ldots, u_d) = \min(u_1, \ldots, u_d)$, which is the strongest type of "positive" dependence.
It can be shown that $X_1, \ldots, X_d$ are comonotonic if and only if $$ (X_1, \ldots, X_d) \stackrel{\mathrm{d}}{=} (h_1(Z), \ldots, h_d(Z)), $$ where $Z$ is some random variable, $h_1, \ldots, h_d$ are increasing functions, and $\stackrel{\mathrm{d}}{=}$ denotes equality in distribution. So, comonotonic random variables are only functions of a single random variable.
The random variables $X_1, X_2$ are said to be countermonotonic if their copula is the Fréchet lower bound $W(u_1, u_2) = \max(0, u_1 + u_2 - 1)$, which is the strongest type of "negative" dependence in the bivariate case. Countermonotonicity doesn't generalize to higher dimensions.
It can be shown that $X_1, X_2$ are countermonotonic if and only if $$ (X_1, X_2) \stackrel{\mathrm{d}}{=} (h_1(Z), h_2(Z)), $$ where $Z$ is some random variable, and $h_1$ and $h_2$ are respectively an increasing and a decreasing function, or vice versa.
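These characterizations are easy to illustrate numerically (a sketch; the monotone functions $e^z$, $z^3$, and $-z$ are arbitrary choices): a comonotonic pair has Spearman rank correlation exactly $1$, and a countermonotonic pair exactly $-1$.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
z = rng.normal(size=100_000)

r_co, _ = spearmanr(np.exp(z), z**3)     # both increasing in Z -> comonotonic
r_counter, _ = spearmanr(np.exp(z), -z)  # one increasing, one decreasing -> countermonotonic

print(r_co, r_counter)
```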
Attainable correlation
Let $X_1$ and $X_2$ be two random variables with strictly positive and finite variances, and let $\rho_{\min}$ and $\rho_{\max}$ denote the minimum and maximum possible correlation coefficient between $X_1$ and $X_2$. Then, it can be shown that every attainable correlation $\rho$ satisfies $\rho_{\min} \le \rho \le \rho_{\max}$ with $\rho_{\min} < 0 < \rho_{\max}$, and that $\rho_{\min}$ is attained if and only if $X_1$ and $X_2$ are countermonotonic, while $\rho_{\max}$ is attained if and only if they are comonotonic.
Attainable correlation for lognormal random variables
To obtain $\rho_{\max}$ we use the fact that the maximum correlation is attained if and only if $X_1$ and $X_2$ are comonotonic. The random variables $X_1 = e^{Z}$ and $X_2 = e^{\sigma Z}$, where $Z \sim {\rm N} (0, 1)$ and $\sigma > 0$, are comonotonic since the exponential function is a (strictly) increasing function, and thus $\rho_{\max} = {\rm corr} \left (e^Z, e^{\sigma Z} \right )$.
Using the properties of lognormal random variables, we have ${\rm E}(e^Z) = e^{1/2}$, ${\rm E}(e^{\sigma Z}) = e^{\sigma^2/2}$, ${\rm var}(e^Z) = e(e - 1)$, ${\rm var}(e^{\sigma Z}) = e^{\sigma^2}(e^{\sigma^2} - 1)$, and the covariance is \begin{align} {\rm cov}\left (e^Z, e^{\sigma Z}\right ) &= {\rm E}\left (e^{(\sigma + 1) Z}\right ) - {\rm E}\left (e^{\sigma Z}\right ){\rm E}\left (e^Z\right ) \\ &= e^{(\sigma + 1)^2/2} - e^{(\sigma^2 + 1)/2} \\ &= e^{(\sigma^2 + 1)/2} ( e^{\sigma} -1 ). \end{align} Thus, \begin{align} \rho_{\max} & = \frac{ e^{(\sigma^2 + 1)/2} ( e^{\sigma} -1 ) } { \sqrt{ e(e - 1) e^{\sigma^2}(e^{\sigma^2} - 1) } } \\ & = \frac{ ( e^{\sigma} -1 ) } { \sqrt{ (e - 1) (e^{\sigma^2} - 1) } }. \end{align}
Similar computations with $X_2 = e^{-\sigma Z}$ yield \begin{align} \rho_{\min} & = \frac{ ( e^{-\sigma} -1 ) } { \sqrt{ (e - 1) (e^{\sigma^2} - 1) } }. \end{align}
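Plugging in numbers (a sketch; $\sigma = 2$ is an arbitrary choice) shows how narrow the attainable interval can be, even though the bounds correspond to perfect functional dependence:

```python
import numpy as np

def correlation_bounds(sigma):
    """Attainable correlation bounds for X1 = e^Z, X2 = e^(sigma*Z), Z ~ N(0,1)."""
    denom = np.sqrt((np.e - 1) * np.expm1(sigma**2))
    return np.expm1(-sigma) / denom, np.expm1(sigma) / denom

rho_min, rho_max = correlation_bounds(2.0)
print(rho_min, rho_max)   # approximately -0.090 and 0.666
```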
Comment
This example shows that it is possible to have a pair of random variables that are strongly dependent (comonotonicity and countermonotonicity are the strongest kinds of dependence) but that have a very low correlation. The following chart shows these bounds as a function of $\sigma$.
This is the R code I used to produce the above chart.