HINT:
Note that $$\dfrac{(n_1-1)S_1^2}{\sigma^2}\sim \chi^2_{n_1-1} \quad\text{and}\quad \dfrac{(n_2-1)S_2^2}{\sigma^2}\sim \chi^2_{n_2-1}.$$
Dividing each chi-squared variable by its degrees of freedom and taking the ratio (the definition of an $F$ random variable), the factors of $n_i - 1$ and $\sigma^2$ cancel, and we get $$\frac{\dfrac{(n_1-1)S_1^2}{\sigma^2}\Big/(n_1-1)}{\dfrac{(n_2-1)S_2^2}{\sigma^2}\Big/(n_2-1)} = \dfrac{S_1^2}{S_2^2}\sim F_{n_1-1,\,n_2-1}.$$
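As a quick sanity check of this hint (not part of the original exercise), here is a short Python simulation sketch under the hint's assumptions: normal populations with a common variance. The sample sizes match the Minitab example below; the variable names are my own. It compares an empirical quantile of $S_1^2/S_2^2$ with the corresponding $F_{13,11}$ quantile:

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(42)
n1, n2 = 14, 12          # sample sizes, matching the example below
sigma = 2.0              # common population SD (arbitrary choice)
reps = 100_000

# Sample variances (ddof=1) for many pairs of independent normal samples.
s1_sq = rng.normal(0.0, sigma, size=(reps, n1)).var(axis=1, ddof=1)
s2_sq = rng.normal(0.0, sigma, size=(reps, n2)).var(axis=1, ddof=1)
ratio = s1_sq / s2_sq

# The empirical 95th percentile of the variance ratio should be close
# to the 95th percentile of the F(n1-1, n2-1) distribution.
emp = np.quantile(ratio, 0.95)
theo = f.ppf(0.95, n1 - 1, n2 - 1)
print(emp, theo)
```

With this many replications the two quantiles should agree to within a few percent.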
There are two versions of the two-sample t test.
(1) The pooled test assumes that the two population variances are equal, computes the 'pooled' variance estimate as
$$S_p^2 = \frac{(n_1 - 1)S_1^2 + (n_2-1)S_2^2}{n_1 + n_2 -2},$$
and uses the statistic
$$T = \frac{\bar X_1 - \bar X_2}{S_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}},$$
which has Student's t distribution with $n_1 + n_2 -2$ degrees of freedom under the null hypothesis $H_0: \mu_1 = \mu_2.$
(2) Because the pooled test can perform quite badly if the two population variances are not equal, most practitioners prefer to routinely use the Welch (separate variances) test
which does not assume equal variances and uses the test statistic
$$T^\prime = \frac{\bar X_1 - \bar X_2}{\sqrt{\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}}},$$ which has approximately Student's t distribution with
degrees of freedom $\nu$ found from the Welch–Satterthwaite formula. The formula
always gives $\min(n_1 - 1, n_2 - 1) \le \nu \le n_1 + n_2 - 2,$ with
$\nu$ nearer the larger bound if $S_1^2 \approx S_2^2$ and nearer the smaller bound if the sample variances differ greatly. If the Welch test is not discussed
in your textbook, you can read about it and find the formula for the degrees of freedom on Wikipedia.
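For reference, the formula in question is the Welch–Satterthwaite approximation:
$$\nu \approx \frac{\left(\dfrac{S_1^2}{n_1} + \dfrac{S_2^2}{n_2}\right)^2}
{\dfrac{(S_1^2/n_1)^2}{n_1 - 1} + \dfrac{(S_2^2/n_2)^2}{n_2 - 1}}.$$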
The bad behavior of the pooled t test can be especially serious if sample
sizes are unequal and the smaller sample has the larger variance.
Results from Minitab for the pooled test are as follows:
```
Two-Sample T-Test

Sample   N   Mean  StDev  SE Mean
1       14  15.00   2.50     0.67
2       12  16.00   2.80     0.81

Difference = μ(1) - μ(2)
Estimate for difference: -1.00
T-Test of difference = 0 (vs ≠):
T-Value = -0.96  P-Value = 0.346  DF = 24
Both use Pooled StDev = 2.6417
```
Because the P-value exceeds 5%, you would not reject the null hypothesis
at the 5% level of significance. (This P-value is for a two-sided test; that
is, the alternative hypothesis is $H_a: \mu_1 \ne \mu_2.$) I will leave it to you to consult a printed
t table to find the critical value for a test at the 5% level, to verify the value of $S_p,$ and to
do the other intermediate computations required to find the $T$ statistic.
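If you want to check your hand computations afterwards, here is a minimal Python sketch (the function name `pooled_t` is my own; `scipy` supplies the t-distribution tail probability) that reproduces the Minitab numbers from the summary statistics alone:

```python
import math
from scipy.stats import t

def pooled_t(n1, xbar1, s1, n2, xbar2, s2):
    """Pooled two-sample t test from summary statistics (two-sided)."""
    df = n1 + n2 - 2
    # Pooled variance: weighted average of the two sample variances.
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    T = (xbar1 - xbar2) / (sp * math.sqrt(1 / n1 + 1 / n2))
    p = 2 * t.sf(abs(T), df)          # two-sided P-value
    return sp, T, df, p

sp, T, df, p = pooled_t(14, 15.00, 2.50, 12, 16.00, 2.80)
print(sp, T, df, p)   # Sp ≈ 2.6417, T ≈ -0.96, DF = 24, P ≈ 0.346
```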
For the Welch test the Minitab output is as follows:
```
Two-Sample T-Test

Sample   N   Mean  StDev  SE Mean
1       14  15.00   2.50     0.67
2       12  16.00   2.80     0.81

Difference = μ(1) - μ(2)
Estimate for difference: -1.00
T-Test of difference = 0 (vs ≠):
T-Value = -0.95  P-Value = 0.351  DF = 22
```
Notice that $\nu = 22;$ you should use the formula to verify this result. Then
find the critical value for a 5% level test, and perform the intermediate
computations necessary to get $T^\prime.$
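Again, a short Python sketch can serve as a check on your hand work (the function name `welch_t` is my own). Note that Minitab reports the Welch degrees of freedom truncated to an integer, so its P-value uses $\nu = 22$ and may differ very slightly from the value computed with the fractional $\nu$:

```python
import math
from scipy.stats import t

def welch_t(n1, xbar1, s1, n2, xbar2, s2):
    """Welch (separate-variances) two-sample t test from summary statistics."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    T = (xbar1 - xbar2) / math.sqrt(v1 + v2)
    # Welch-Satterthwaite degrees of freedom
    nu = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    p = 2 * t.sf(abs(T), nu)          # two-sided P-value
    return T, nu, p

T, nu, p = welch_t(14, 15.00, 2.50, 12, 16.00, 2.80)
print(T, nu, p)   # T ≈ -0.95, ν ≈ 22.34 (truncates to 22), P ≈ 0.35
```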
Best Answer
This question is essentially answered. The comment by user "cardinal" citing the B. L. Welch (1947) paper provides all there is to it as regards where the formula comes from. Welch derives the exact mathematical solution to the general problem of calculating the degrees of freedom when there are more than two samples, then develops an approximate solution through a Taylor expansion, and then states the resulting approximate df formula for the special case of two samples. There is no deep intuition behind the formula, just patient (but healthy) mathematics. Welch's style and notation are rather old-fashioned; for educational purposes, another paper of his, "The Significance of the Difference Between Two Means when the Population Variances are Unequal", Biometrika, Vol. 29, No. 3/4 (Feb., 1938), pp. 350-362, focuses on the two-sample case and is a bit more accessible.