First things first: more information is needed, as this question does not have a universally correct answer. Different types of distributions have to be handled with different procedures.

But just to show that this is possible, assume that each of the variables you have mentioned is normally distributed, but that the parameters of the normal distributions differ between any given pair.

Now take $n$ samples of each of these variables and calculate the correlation coefficient for each pair of variables. If we cannot reject the hypothesis that these correlation coefficients are zero, we have support for the variables being independent of each other (for jointly normal variables, zero correlation is equivalent to independence). So we have a set of variables which are plausibly independent of each other, yet have different probability distributions.
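As a minimal sketch of this procedure (the number of variables and the parameter values below are invented for illustration), one can draw samples from normals with different parameters and test each pairwise correlation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500  # samples per variable

# Three independent normal variables with different (mu, sigma) pairs
# (illustrative parameters, not from the question).
params = [(0.0, 1.0), (5.0, 2.0), (-3.0, 0.5)]
samples = [rng.normal(mu, sigma, size=n) for mu, sigma in params]

# Test H0: rho = 0 for every pair of variables.
for i in range(len(samples)):
    for j in range(i + 1, len(samples)):
        r, p = stats.pearsonr(samples[i], samples[j])
        print(f"pair ({i},{j}): r = {r:+.3f}, p = {p:.3f}")
```

With truly independent draws, the estimated correlations should be small and the tests should usually fail to reject the null.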

A clarification of your question (there seem to me to be two related, but different, parts): you are looking for (1) distribution of a sum of $n$ *independent* squared $t_{\alpha }$ random variables, and
(2) the *sampling* distribution of the variance (or the related standard deviation) of a random sample drawn from a $t_{\alpha }$ distribution (presumably your reason for asking about (1)).

### Distribution of Sum of Independent Squared $t_{\alpha }$ Variables

If $T_i\sim t_{\alpha }$ are independent random $t$ variables with $\alpha$ d.f., then it is false that $\sum _{i=1}^n T_i^2\sim F(n,\alpha )$ (which is what you seem to be claiming in your second "possible solution"). This is easily verified by comparing first moments: for $\alpha>2$, $E\big[\sum_{i=1}^n T_i^2\big]=n\alpha/(\alpha-2)$, which is $n$ times the mean $\alpha/(\alpha-2)$ of an $F(n,\alpha )$ variable.
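A quick Monte Carlo check of this moment argument (a sketch with arbitrarily chosen $n=5$, $\alpha=10$): the simulated mean of $\sum T_i^2$ is close to $n\alpha/(\alpha-2)$, i.e. $n$ times the $F(n,\alpha)$ mean.

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, reps = 5, 10, 200_000  # arbitrary example values

# Each row: n independent t(alpha) draws; sum the squares across the row.
T = rng.standard_t(df=alpha, size=(reps, n))
sum_sq = (T ** 2).sum(axis=1)

mean_sum = sum_sq.mean()
f_mean = alpha / (alpha - 2)   # mean of F(n, alpha), valid for alpha > 2
print(mean_sum, n * f_mean)    # simulated mean is ~ n * F-mean, not the F-mean
```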

The claim in your first "possible solution" is correct: $T_i^2\sim F(1,\alpha)$. Rather than resorting to characteristic functions, I think this result is more transparent when considering the characterisation of the $t$ distribution as the distribution of the ratio $\frac{Z}{\sqrt{U/\alpha}}$ where $Z$ is a standard normal variable and $U$ is a chi-squared variable with $\alpha$ degrees of freedom, independent of $Z$. The square of this ratio is then the ratio of two independent chi-squared variables scaled by their respective degrees of freedom i.e. $\frac{V/1}{U/\alpha}$ with $V=Z^2$, which is a standard characterisation of an $F(1,\alpha)$ distribution (with numerator d.f. equal to 1 and denominator d.f. equal to $\alpha$).
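This characterisation is easy to check numerically (a sketch; $\alpha=10$ and the sample size are arbitrary choices): squared $t_\alpha$ draws should be indistinguishable from the $F(1,\alpha)$ distribution under a Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, N = 10, 50_000  # arbitrary degrees of freedom and sample size

t_sq = rng.standard_t(df=alpha, size=N) ** 2

# Compare the empirical distribution of T^2 with the F(1, alpha) CDF.
ks = stats.kstest(t_sq, stats.f(dfn=1, dfd=alpha).cdf)
print(ks.statistic, ks.pvalue)
```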

Considering the note I made on first moments in the first paragraph above, it might seem that a better claim may be that $\sum _{i=1}^n T_i^2\sim n F(n,\alpha )$ [I have slightly abused notation here by using the same expression for the distribution as well as a random variable having that distribution.]. Whilst the first moments match, the second central moments do not (for $\alpha>4$ the variance of the first expression is less than the variance of the latter expression) - so this claim is false too. [That being said, it is interesting to observe that $\lim_{\alpha \to \infty } \, n F(n,\alpha)= \chi _n^2$, which is the result we expect when summing squared (standard) normal variates.]
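The variance comparison can be made exact using standard moment formulas (a sketch; the formulas $E[T^4]=3\alpha^2/((\alpha-2)(\alpha-4))$ for a $t_\alpha$ variable and the usual $F$-distribution variance are textbook results, and $n=5$, $\alpha=10$ are arbitrary example values):

```python
from fractions import Fraction as Fr

n, alpha = 5, 10  # arbitrary example with alpha > 4

# Var(T^2) = E[T^4] - E[T^2]^2 for T ~ t(alpha)
e_t2 = Fr(alpha, alpha - 2)
e_t4 = Fr(3 * alpha**2, (alpha - 2) * (alpha - 4))
var_sum = n * (e_t4 - e_t2**2)           # Var(sum of n iid T_i^2)

# Var(F(d1, d2)) = 2 d2^2 (d1 + d2 - 2) / (d1 (d2 - 2)^2 (d2 - 4))
var_f = Fr(2 * alpha**2 * (n + alpha - 2), n * (alpha - 2)**2 * (alpha - 4))
var_nf = n**2 * var_f                    # Var(n * F(n, alpha))

print(var_sum, var_nf)   # means match, but these variances differ
```

For these values the sum has variance $375/16 \approx 23.4$ while $nF(n,\alpha)$ has variance $1625/48 \approx 33.9$, consistent with the claim above.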

### Sampling Distribution of Variance When Sampling from a $t_{\alpha }$ Distribution

Considering what I have written above, the expression you obtain for "the density of the standard deviation of n-sample T variables" is incorrect. However, even if $F(n,\alpha)$ were the correct distribution, the standard deviation is not simply the square root of the sum of squares (as you seem to have used to arrive at your $g(u)$ density). You would instead be looking for the (scaled) sampling distribution of $\sum _{i=1}^n \left(T_i-\bar{T}\right){}^2=\sum _{i=1}^n T_i^2-n \bar{T}^2$. In the normal case, the LHS of this expression can be re-written as a sum of squared normal variables (each term inside the square is a linear combination of normal variables, which is again normally distributed), which leads to the familiar $\chi^2$ distribution. Unfortunately, a linear combination of $t$ variables (even with the same degrees of freedom) is not itself $t$-distributed, so a similar approach cannot be exploited.
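Even without a closed form, the sampling distribution of the sample variance can be explored by simulation (a sketch; $\alpha=10$, the sample size, and the replication count are arbitrary choices). The mean of the sample variances should be near $\operatorname{Var}(T)=\alpha/(\alpha-2)$, even though the distribution itself is not a scaled $\chi^2$:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, m, reps = 10, 10, 100_000  # df, sample size, replications (arbitrary)

# Draw many samples of size m from t(alpha); compute each sample variance.
samples = rng.standard_t(df=alpha, size=(reps, m))
s2 = samples.var(axis=1, ddof=1)

print(s2.mean(), alpha / (alpha - 2))  # E[s^2] = Var(T) = alpha/(alpha-2)
```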

Perhaps you should reconsider what it is you wish to demonstrate? It may be possible to achieve the objective using simulations, for example. However, you indicate an example with $\alpha=3$, a situation where only the first moment of $F(1,\alpha)$ is finite (the variance requires $\alpha>4$), so simulation won't help with such moment calculations.

## Best Answer

There is no closed-form distribution for this quantity, but it can be simulated in a number of ways. One way is to use the relationship between the $F(1,1)$ distribution and the uniform distribution: an $F(1,1)$ variable is the square of a standard Cauchy variable, so its quantile function is $\tan ^2(\pi u/2)$. (Note that the transformation $(1-U_i)/U_i = 1/U_i - 1$ yields an $F(2,2)$ variable, not $F(1,1)$; the $F(2,2)$ density is $(1+x)^{-2}$, whereas the $F(1,1)$ CDF is $\tfrac{2}{\pi}\arctan\sqrt{x}$.) If $U_1, ..., U_n \sim \text{IID U}(0,1)$ are uniform then we can form corresponding random variables $F_1, ..., F_n \sim \text{IID F}(1,1)$ using the transformation:

$$F_i \equiv \tan ^2 \Big( \frac{\pi U_i}{2} \Big).$$

So the sum of interest can be written as:

$$S \equiv \sum_{i=1}^n F_i = \sum_{i=1}^n \tan ^2 \Big( \frac{\pi U_i}{2} \Big) = \sum_{i=1}^n \sec ^2 \Big( \frac{\pi U_i}{2} \Big) - n .$$

The distribution of this sum is strongly positively skewed, so it is easiest to visualise on a log scale. This is easy to simulate in base R using the following code:
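As a rough equivalent sketch (in Python rather than the base-R code the answer refers to, which is not included in this excerpt; $n$ and the replication count are arbitrary choices), one can generate the $F(1,1)$ variates by pushing uniforms through the $F(1,1)$ quantile function and then inspect $\log S$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, reps = 10, 100_000  # arbitrary choices for illustration

# Inverse-CDF method: apply the F(1,1) quantile function to uniforms.
U = rng.uniform(size=(reps, n))
F = stats.f(dfn=1, dfd=1).ppf(U)
S = F.sum(axis=1)

log_S = np.log(S)  # S is heavily right-skewed; work on the log scale
print(np.median(S), log_S.mean())
```

A histogram of `log_S` (e.g. via matplotlib) then gives a usable picture of the distribution despite the heavy right tail.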