Since the question was updated, I have updated my answer:
The first part (to compute the skewness, why not standardize both the mean and the variance?) is easy: that is precisely how it is done! See the definitions of skewness and kurtosis on Wikipedia.
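To make this concrete, here is a minimal Python sketch (variable names are mine; scipy is used only for the cross-check) showing that the skewness and kurtosis are exactly the third and fourth moments of the standardized variable $(X-\mu)/\sigma$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.exponential(size=100_000)   # a right-skewed test sample

# standardize: center by the mean, scale by the standard deviation
z = (x - x.mean()) / x.std()
skew_manual = np.mean(z**3)         # third standardized moment
kurt_manual = np.mean(z**4)         # fourth standardized moment

print(skew_manual, stats.skew(x))                     # both approx. 2 for the exponential
print(kurt_manual, stats.kurtosis(x, fisher=False))   # both approx. 9
```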
The second part is both easy and hard. On one hand, we could say that it is impossible to normalize a random variable to satisfy three moment conditions, since the linear transformation $X \to aX + b$ allows for only two. But on the other hand, why should we limit ourselves to linear transformations? Sure, shift and scale are by far the most prominent (perhaps because they suffice most of the time, say for limit theorems), but what about higher-order polynomials, taking logs, or convolving a variable with itself? In fact, isn't that what the Box-Cox transform is all about -- removing skew?
But in the case of more complicated transformations, I think, the context and the transformation itself become important, so maybe that is why there are no more "moments with names". That does not mean that random variables are not transformed or that the resulting moments are not calculated; on the contrary. You just choose your transformation, calculate what you need, and move on.
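As an illustration of the Box-Cox point, a small sketch (assuming scipy is available) that lets `scipy.stats.boxcox` pick the transformation by maximum likelihood and then compares the sample skewness before and after:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(size=50_000)   # strongly right-skewed, positive data

y, lmbda = stats.boxcox(x)       # scipy chooses the Box-Cox exponent by MLE
print(stats.skew(x), stats.skew(y))   # large before, near zero after
```

For lognormal data the estimated exponent comes out near zero, i.e. essentially a log transform, which is exactly the skew-removing transformation you would pick by hand.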
The old answer, about why centralized moments describe shape better than raw moments:
The keyword is shape. As whuber suggested, by shape we mean the properties of the distribution that are invariant to translation and scaling. That is, when you consider the variable $X + c$ instead of $X$, you get the same distribution function (just shifted to the right or left), so we would like to say that its shape stays the same.
The raw moments change when you translate the variable, so they reflect not only the shape but also the location. In fact, you can take any random variable and shift it, $X \to X + c$, to obtain any value you like for, say, its raw third moment.
The same observation holds for all odd raw moments, and to a lesser extent for even ones (they are bounded from below, and the lower bound does depend on the shape).
The centralized moments, on the other hand, do not change when you translate the variable, which is why they are more descriptive of the shape. For example, if an even centralized moment is large, you know that the random variable has some mass not too close to the mean. And if an odd centralized moment is zero, you know that the random variable has some symmetry around the mean.
The same argument extends to scale, which is the transformation $X \to cX$. The usual normalization in this case is division by the standard deviation, and the corresponding moments are called normalized moments, at least by Wikipedia.
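The whole argument can be checked numerically; in this sample-based sketch (helper names are mine), the raw third moment moves under an affine map $X \to aX + b$, the central third moment is shift-invariant but not scale-invariant, and the normalized third moment is invariant to both:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.gamma(2.0, size=200_000)   # a skewed sample

def raw3(v):
    return np.mean(v**3)

def central3(v):
    return np.mean((v - v.mean())**3)

def normalized3(v):   # the skewness: central third moment over sigma cubed
    return central3(v) / v.std()**3

y = 5.0 * x + 10.0   # affine transformation aX + b
print(raw3(x), raw3(y))                # raw: changed by both shift and scale
print(central3(x), central3(y))        # central: immune to the shift, not the scale
print(normalized3(x), normalized3(y))  # normalized: identical
```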
I do not know of any exact matching technique. If an approximate technique would work for you (meaning a density that approximately matches $m$ given moments), then you could consider an orthogonal polynomial series approach. You would choose a polynomial basis (Laguerre, Hermite, etc.) depending on the range of your data. I describe below a technique that I have used in Arbel et al. (1) for a compactly supported distribution (see details in Section 3 there).
In order to set the notation, let us consider a generic continuous random variable $X$ on $[0,1]$; denote its density (to be approximated) by $f$ and its raw moments by $\gamma_r=\mathbb{E}\big[X^r\big]$, with $r\in\mathbb{N}$. Denote the basis of Jacobi polynomials by $$G_i(s) = \sum_{r=0}^i G_{i,r}s^r,\,i\geq 0.$$
Such polynomials are orthogonal with respect to the $L^2$-product
$$\langle F,G \rangle=\int_0^1 F(s) G(s) w_{a,b}(s)d s,$$
where $w_{a,b}(s)=s^{a-1}(1-s)^{b-1}$
is named the weight function of the basis and is proportional to a beta density in the case of Jacobi polynomials.
Any univariate density $f$ supported on $[0,1]$ can be uniquely decomposed on such a basis and therefore there is a unique sequence of real numbers $(\lambda_i)_{i \geq 0}$ such that $$f(s)=w_{a,b}(s)\sum_{i=0}^\infty \lambda_i G_i(s).$$
From the evaluation of $\int_0^1 f(s)\, G_i(s)\,d s$ it follows that each $\lambda_i$ coincides with a linear combination of the first $i$ moments of $X$, specifically $\lambda_i=\sum_{r=0}^i G_{i,r}\gamma_r$.
Then, truncate the representation of $f$ in the Jacobi basis at a given level $N$, providing the approximation
$$f_N(s)=w_{a,b}(s)\sum_{i=0}^N \left(\sum_{r=0}^i G_{i,r}\gamma_r\right) G_i(s).$$
That polynomial approximation is not necessarily a density, as it might fail to be positive or to integrate to 1. In order to overcome this problem, one can consider the density $\pi$ proportional to its positive part, defined by $\pi(s)\propto\max(f_N(s),0)$. If sampling from $\pi$ is needed, one can resort to a rejection sampler; see for instance Robert & Casella (2).
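For what it's worth, here is a minimal Python sketch of the scheme in the special case $a = b = 1$, where the Jacobi basis reduces to the shifted Legendre polynomials (available as `scipy.special.sh_legendre`) and the weight $w_{a,b}$ is constant; the helper name `density_from_moments` is mine, not from the momentify package:

```python
import numpy as np
from scipy.special import sh_legendre
from scipy.stats import beta

def density_from_moments(moments, N):
    """Build the truncated approximation f_N on [0, 1] from raw moments
    moments[r] = E[X^r], r = 0..N, in the shifted Legendre basis
    (the a = b = 1 Jacobi case, where the weight w_{a,b} is constant)."""
    polys = [sh_legendre(i) for i in range(N + 1)]
    lam = []
    for i, p in enumerate(polys):
        c = p.coeffs[::-1]  # monomial coefficients G_{i,r}, ascending in r
        # lambda_i is a linear combination of the first i moments; the
        # factor (2i + 1) normalizes, since int_0^1 P*_i(s)^2 ds = 1/(2i+1)
        lam.append((2 * i + 1) * sum(c[r] * moments[r] for r in range(i + 1)))
    def f_N(s):
        return sum(l * p(s) for l, p in zip(lam, polys))
    return f_N

# Sanity check on Beta(2, 3): its raw moments have the closed form
# gamma_r = prod_{j=0}^{r-1} (a + j) / (a + b + j)
a_, b_, N = 2.0, 3.0, 5
moments = [1.0]
for j in range(N):
    moments.append(moments[-1] * (a_ + j) / (a_ + b_ + j))

f_N = density_from_moments(moments, N)
s = np.linspace(0.01, 0.99, 99)
approx = np.maximum(f_N(s), 0.0)  # keep the positive part, as in the text
err = np.max(np.abs(approx - beta(a_, b_).pdf(s)))
print(err)  # the Beta(2, 3) density is a cubic polynomial, so the fit is essentially exact
```

Note that the monomial coefficients of high-degree orthogonal polynomials grow quickly, so for large $N$ this naive formulation becomes numerically delicate; a recurrence-based evaluation is preferable in practice.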
There is a companion R package called momentify that builds the approximated distribution from given moments and allows one to sample from it, available at this link: https://www.researchgate.net/publication/287608666_momentify_R_package, and discussed in this blog post.
Below are two examples, with an increasing number of moments involved. Note that the fit is much better for the unimodal density than for the multimodal one.
References
(1) Julyan Arbel, Antonio Lijoi, and Bernardo Nipoti. Full Bayesian inference with hazard mixture models. Computational Statistics & Data Analysis 93 (2016): 359-372.
(2) Christian Robert and George Casella. Monte Carlo Statistical Methods. Springer-Verlag, New York (2004).
There are various ways to find the moments of the T distribution, but the simplest method is to use its representation as a scale mixture of normals. If $T$ has a Student's T distribution with $\varphi$ degrees of freedom then we can write it via the mixture $T|\lambda \sim \text{N}(0, \tfrac{1}{\lambda})$ with $\lambda \sim \text{Ga}(\tfrac{\varphi}{2}, \tfrac{\varphi}{2})$ (i.e., as a mixture of normal distributions whose precision is gamma distributed). Using this mixture representation, the density function for the Student's T distribution can be written as the integral:
$$\begin{equation} \begin {aligned} \text{St}(t|\varphi) &= \int \limits_0^\infty \text{N}(t|0,\tfrac{1}{\lambda}) \text{Ga}(\lambda|\tfrac{\varphi}{2},\tfrac{\varphi}{2}) \ d \lambda. \end{aligned} \end{equation}$$
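The mixture representation also gives a direct sampling recipe, which this short sketch (names are mine; scipy is used for the check) confirms: draw the precision from the gamma distribution, then draw a normal with that precision, and the result is Student-t distributed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
df = 5.0     # the degrees of freedom, phi
n = 50_000

# lambda ~ Ga(phi/2, rate = phi/2); numpy parametrizes by scale = 1/rate
lam = rng.gamma(shape=df / 2, scale=2 / df, size=n)
# T | lambda ~ N(0, 1/lambda), i.e. a standard normal divided by sqrt(lambda)
t_mix = rng.normal(size=n) / np.sqrt(lam)

# the mixture draws should be indistinguishable from Student-t draws
stat, pval = stats.kstest(t_mix, "t", args=(df,))
print(stat)                         # small Kolmogorov-Smirnov distance
print(t_mix.var(), df / (df - 2))   # sample variance vs phi/(phi - 2)
```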
Using this mixture representation, the raw moments of $T$ can be obtained via the law of iterated expectation, using the known moments of the normal distribution. The conditional moments are:
$$\mathbb{E}(T^k|\lambda) = \int \limits_{-\infty}^\infty t^k \text{ N}(t|0,\tfrac{1}{\lambda}) \text{ } dt = \begin{cases} 0 & & \text{if } k \text{ is odd}, \\[6pt] \frac{k!}{2^{k/2}(k/2)!} \lambda^{-k/2} & & \text{if } k \text{ is even}. \\[6pt] \end{cases}$$
For values $k \geqslant \varphi$ the moments of the T distribution do not exist. For odd values $0<k<\varphi$ the moments are zero and for even values $0<k<\varphi$ the moments are:
$$\begin{equation} \begin{aligned} \mathbb{E}(T^k) = \mathbb{E}( \mathbb{E}(T^k | \lambda ) ) &= \int \limits_0^\infty \frac{k!}{2^{k/2}(k/2)!} \lambda^{-k/2} \text{ Ga}(\lambda|\tfrac{\varphi}{2},\tfrac{\varphi}{2}) \text{ } d\lambda \\[6pt] &= \int \limits_0^\infty \frac{k!}{2^{k/2}(k/2)!} \lambda^{-k/2} \cdot \frac{\varphi^{\varphi/2}}{2^{\varphi/2} \Gamma(\tfrac{\varphi}{2})} \lambda^{\varphi/2-1} \exp \Big( - \frac{\varphi}{2} \lambda \Big) \text{ } d\lambda \\[6pt] &= \frac{k!}{2^k (k/2)!} \cdot \frac{\Gamma(\tfrac{\varphi-k}{2})}{\Gamma(\tfrac{\varphi}{2})} \cdot \varphi^{k/2} \\ &\quad \quad \quad \quad \times \int \limits_0^\infty \frac{\varphi^{(\varphi-k)/2}}{2^{(\varphi-k)/2} \Gamma(\tfrac{\varphi-k}{2})} \lambda^{(\varphi-k)/2-1} \exp \Big( -\frac{\varphi}{2} \lambda \Big) \text{ } d\lambda \\[6pt] &= \frac{k!}{2^k (k/2)!} \cdot \frac{\Gamma(\tfrac{\varphi-k}{2})}{\Gamma(\tfrac{\varphi}{2})} \cdot \varphi^{k/2} \int \limits_0^\infty \text{Ga}(\lambda|\tfrac{\varphi-k}{2},\tfrac{\varphi}{2}) \text{ } d\lambda \\[6pt] &= \frac{k!}{2^k (k/2)!} \cdot \frac{\Gamma(\tfrac{\varphi-k}{2})}{\Gamma(\tfrac{\varphi}{2})} \cdot \varphi^{k/2} \\[6pt] &= \frac{\Gamma(\tfrac{k+1}{2})}{\sqrt{\pi}} \cdot \frac{\Gamma(\tfrac{\varphi-k}{2})}{\Gamma(\tfrac{\varphi}{2})} \cdot \varphi^{k/2} \\[6pt] &= \frac{\Gamma(\tfrac{k+1}{2})}{\sqrt{\pi}} \cdot \frac{\varphi^{k/2}}{\prod_{i=1}^{k/2} (\tfrac{\varphi}{2}-i)}. \\[6pt] \end{aligned} \end{equation}$$
Application of this formula for the even moments yields:
$$\begin{equation} \begin{aligned} \mathbb{E}(T^2) &= \frac{\varphi}{\varphi-2} & & & \text{for } \varphi > 2, \\[6pt] \mathbb{E}(T^4) &= \frac{3 \varphi^2}{(\varphi-2) (\varphi-4)} & & & \text{for } \varphi > 4, \\[6pt] \mathbb{E}(T^6) &= \frac{15 \varphi^3}{(\varphi-2) (\varphi-4) (\varphi-6)} & & & \text{for } \varphi > 6. \\[6pt] \end{aligned} \end{equation}$$
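These values can be checked numerically; the sketch below (the helper `t_raw_moment` is mine) implements the closed form derived above and compares it with scipy's built-in moment method:

```python
import math
from scipy import stats

def t_raw_moment(k, phi):
    """Raw moment E(T^k) of a Student's T with phi degrees of freedom,
    via the closed form derived above (requires k < phi)."""
    if k % 2 == 1:
        return 0.0
    return (math.gamma((k + 1) / 2) / math.sqrt(math.pi)
            * math.gamma((phi - k) / 2) / math.gamma(phi / 2)
            * phi ** (k / 2))

phi = 10.0
for k in (2, 4, 6):
    print(k, t_raw_moment(k, phi), stats.t(phi).moment(k))
# for phi = 10 these give 1.25, 6.25 and 78.125
```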
You can now obtain the kurtosis, etc., via algebraic manipulation of the raw moments. In particular, the kurtosis is:
$$\begin{equation} \begin{aligned} \mathbb{Kurt}(T) = \frac{\mathbb{E}(T^4)}{\mathbb{E}(T^2)^2} &= \frac{3 \varphi^2}{(\varphi-2) (\varphi-4)} \Big/ \Big( \frac{\varphi}{\varphi-2} \Big)^2 \\[6pt] &= \frac{3 (\varphi-2)}{\varphi-4} \\[6pt] &= \frac{3 \varphi - 6}{\varphi-4} \\[6pt] &= \frac{3 \varphi - 12}{\varphi-4} + \frac{6}{\varphi-4} \\[6pt] &= 3 + \frac{6}{\varphi-4}. \\[6pt] \end{aligned} \end{equation}$$
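As a final sanity check (a sketch, assuming scipy is available): scipy reports the *excess* kurtosis, so adding 3 should reproduce the formula above:

```python
from scipy import stats

phi = 10.0
kurt_formula = 3 + 6 / (phi - 4)   # the formula above; equals 4 for phi = 10
# scipy reports the excess kurtosis, i.e. kurtosis minus 3
excess = float(stats.t(phi).stats(moments="k"))
print(kurt_formula, excess + 3)
```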