Since the question was updated, I have updated my answer:
The first part (to compute the skewness, why not standardize both the mean and the variance?) is easy: that is precisely how it's done! See the definitions of skewness and kurtosis on Wikipedia.
The second part is both easy and hard. On one hand, we could say that it is impossible to normalize a random variable to satisfy three moment conditions, as a linear transformation $X \to aX + b$ allows only for two. But on the other hand, why should we limit ourselves to linear transformations? Sure, shift and scale are by far the most prominent (maybe because they are sufficient most of the time, say for limit theorems), but what about higher-order polynomials, taking logs, or convolving a variable with itself? In fact, isn't that what the Box-Cox transform is all about -- removing skew?
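For instance, here is a minimal sketch in Python (assuming SciPy is available; the lognormal is an arbitrary choice of a skewed variable) of the Box-Cox transform removing skew:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.75, size=100_000)  # positive, right-skewed

print(stats.skew(x))           # clearly positive before the transform
y, lam = stats.boxcox(x)       # ML estimate of the Box-Cox exponent lambda
print(lam, stats.skew(y))      # lambda near 0 (i.e. a log), skewness near 0
```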
But in the case of more complicated transformations, I think, the context and the transformation itself become important, so maybe that is why there are no more "moments with names". That does not mean that r.v.s are not transformed or that the moments are not calculated; on the contrary. You just choose your transformation, calculate what you need and move on.
The old answer, about why centralized moments represent shape better than raw moments:
The keyword is shape. As whuber suggested, by shape we want to consider the properties of the distribution that are invariant to translation and scaling. That is, when you consider the variable $X + c$ instead of $X$, you get the same distribution function (just shifted to the right or left), so we would like to say that its shape stayed the same.
The raw moments do change when you translate the variable, so they reflect not only the shape, but also the location. In fact, you can take any random variable and shift it appropriately ($X \to X + c$) to obtain any value for, say, its raw third moment.
The same observation holds for all odd moments, and to a lesser extent for even moments (they are bounded from below, and the lower bound does depend on the shape).
The centralized moments, on the other hand, do not change when you translate the variable, which is why they are more descriptive of the shape. For example, if an even centralized moment is large, you know that the random variable has some mass not too close to the mean. And if an odd centralized moment is zero, you know that the random variable is, in a certain sense, balanced around the mean.
The same argument extends to scale, which corresponds to the transformation $X \to cX$. The usual normalization in this case is division by the standard deviation, and the corresponding moments are called normalized moments, at least by Wikipedia.
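To see this invariance concretely, here is a small numerical check (a Python sketch with SciPy; the gamma distribution and the constants are arbitrary choices): the normalized third and fourth moments are unchanged under $X \to aX + b$ with $a > 0$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.0, size=100_000)

# Normalized (standardized) moments are invariant to location and positive scale.
print(stats.skew(x), stats.kurtosis(x, fisher=False))
print(stats.skew(5*x - 3), stats.kurtosis(5*x - 3, fisher=False))  # identical
```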
$$\begin{align}
\boxed{\quad
\begin{aligned}
\mathbb{E}(\bar{X}_n) &= \mu
&\quad \mathbb{V}(\bar{X}_n) &= \frac{\sigma^2}{n}, \\[12pt]
\mathbb{Skew}(\bar{X}_n) &= \frac{\gamma}{\sqrt{n}}
&\quad \mathbb{Kurt}(\bar{X}_n) &= 3 + \frac{\kappa - 3}{n}.
\end{aligned}
\quad}
\end{align}$$
The mean, variance, skewness and kurtosis of the sample mean are shown in the box above. These formulae are valid for any case where the underlying values are IID with finite kurtosis. It is simple to confirm that $\mathbb{Skew}(\bar{X}_n) \rightarrow 0$ and $\mathbb{Kurt}(\bar{X}_n) \rightarrow 3$ as $n \rightarrow \infty$, which means that the sample mean is asymptotically unskewed and mesokurtic. This is also implied by the classical central limit theorem, which ensures that the standardised sample mean converges in distribution to the normal distribution.
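As a quick numerical sanity check on the boxed formulae (my own addition, not part of the derivation below), the following Python simulation uses IID Exponential(1) values, for which $\gamma = 2$ and $\kappa = 9$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, reps = 30, 200_000
means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

gamma, kappa = 2.0, 9.0  # skewness and kurtosis of the Exponential(1) distribution
print(stats.skew(means), gamma / np.sqrt(n))                      # both ~ 0.365
print(stats.kurtosis(means, fisher=False), 3 + (kappa - 3) / n)   # both ~ 3.2
```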
Proof via cumulants: The simplest way to prove these results is via moment/cumulant generating functions. Let $m_X$ and $K_X$ denote the moment generating function and cumulant generating function of the underlying IID random variables in the sequence. Since $\bar{X}_n = \frac{1}{n} \sum_{i=1}^n X_i$, it is simple to show that $m_{\bar{X}_n}(t) = m_X(t/n)^n$, so the cumulant generating function for the sample mean has the form:
$$K_{\bar{X}_n}(t) = n K_X(t/n).$$
Consequently, the cumulants of the sample mean are related to the cumulants of the underlying random variables by:
$$\begin{align}
\bar{\kappa}_r
\equiv \frac{d^r}{dt^r} K_{\bar{X}_n} (t) \Bigg|_{t=0}
&= n \frac{d^r }{dt^r} K_X(t/n) \Bigg|_{t=0} \\[6pt]
&= \frac{1}{n^{r-1}} K_X^{(r)}(t/n) \Bigg|_{t=0} \\[6pt]
&= \frac{1}{n^{r-1}} K_X^{(r)}(0) \\[6pt]
&= \frac{\kappa_r}{n^{r-1}}. \\[6pt]
\end{align}$$
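This cumulant scaling rule can be confirmed symbolically; here is a short SymPy check (my own addition, using the Exponential(1) distribution, whose cumulant generating function is $K_X(t) = -\log(1-t)$ with $\kappa_r = (r-1)!$):

```python
import sympy as sp

t, n = sp.symbols('t n', positive=True)
K = -sp.log(1 - t)               # cgf of Exponential(1); kappa_r = (r-1)!
Kbar = n * K.subs(t, t / n)      # cgf of the sample mean: K(t) -> n K(t/n)

for r in range(1, 5):
    kr = sp.diff(K, t, r).subs(t, 0)                       # kappa_r
    kbar_r = sp.simplify(sp.diff(Kbar, t, r).subs(t, 0))   # r-th cumulant of the mean
    print(r, sp.simplify(kbar_r - kr / n**(r - 1)))        # prints 0 each time
```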
Using the relationship of the cumulants to the moments of interest, we then have:
$$\begin{align}
\mathbb{V}(\bar{X}_n)
&= \bar{\kappa}_2 \\[12pt]
&= \frac{\kappa_2}{n} \\[6pt]
&= \frac{\sigma^2}{n}, \\[6pt]
\mathbb{Skew}(\bar{X}_n)
&= \frac{\bar{\kappa}_3}{\bar{\kappa}_2^{3/2}} \\[6pt]
&= \frac{\kappa_3 / n^2}{(\kappa_2/n)^{3/2}} \\[6pt]
&= \frac{1}{\sqrt{n}} \cdot \frac{\kappa_3}{\kappa_2^{3/2}} \\[6pt]
&= \frac{\gamma}{\sqrt{n}}, \\[6pt]
\mathbb{Kurt}(\bar{X}_n)
&= \frac{\bar{\kappa}_4 + 3 \bar{\kappa}_2^2}{\bar{\kappa}_2^2} \\[6pt]
&= \frac{\kappa_4/n^3 + 3 (\kappa_2/n)^2}{(\kappa_2/n)^2} \\[6pt]
&= \frac{\kappa_4/n + 3 \kappa_2^2}{\kappa_2^2} \\[6pt]
&= \frac{(\kappa \sigma^4 - 3\sigma^4)/n + 3 \sigma^4}{\sigma^4} \\[6pt]
&= 3 + \frac{\kappa - 3}{n}. \\[6pt]
\end{align}$$
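Continuing the same SymPy check (again with the Exponential(1) example, so $\gamma = 2$ and $\kappa - 3 = 6$), these cumulant-to-moment conversions reproduce the boxed skewness and kurtosis:

```python
import sympy as sp

t, n = sp.symbols('t n', positive=True)
Kbar = -n * sp.log(1 - t / n)    # cgf of the mean of n Exponential(1) variables

k2 = sp.diff(Kbar, t, 2).subs(t, 0)
k3 = sp.diff(Kbar, t, 3).subs(t, 0)
k4 = sp.diff(Kbar, t, 4).subs(t, 0)

print(sp.simplify(k3 / k2**sp.Rational(3, 2)))   # 2/sqrt(n), i.e. gamma/sqrt(n)
print(sp.simplify((k4 + 3*k2**2) / k2**2))       # 3 + 6/n, i.e. 3 + (kappa-3)/n
```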
This method gives the results of interest, and it can also be generalised to give corresponding results for higher-order moments. As can be seen, the cumulant results are particularly simple, but the corresponding relationships for higher-order moments become messy at high orders.
Proof via expansion to raw moments: An alternative method of deriving these results is to expand the relevant central moments of the sample mean and simplify them down to raw moments of the underlying random variables. Let $Y_i \equiv X_i - \mu$ and note that these random variables have mean zero, and their raw moments are the central moments of the $X_i$. The relevant higher-order central moments for the sample mean are:$^\dagger$
$$\begin{align}
\mathbb{E}((\bar{X}_n - \mu)^3)
&= \mathbb{E} \Bigg( \bigg( \frac{1}{n} \sum_{i=1}^n (X_i - \mu) \bigg)^3 \Bigg) \\[6pt]
&= \mathbb{E} \Bigg( \bigg( \frac{1}{n} \sum_{i=1}^n Y_i \bigg)^3 \Bigg) \\[6pt]
&= \frac{1}{n^3} \cdot \mathbb{E} \Bigg( \sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n Y_i Y_j Y_k \Bigg) \\[8pt]
&= \frac{1}{n^3} \cdot \mathbb{E} \Bigg( \sum_{i} Y_i^3 + 3 \sum_{i \neq j} Y_i^2 Y_j + \sum_{i \neq j \neq k} Y_i Y_j Y_k \Bigg) \\[8pt]
&= \frac{1}{n^3} \cdot \sum_{i} \mathbb{E}(Y_i^3) \\[12pt]
&= \frac{1}{n^3} \cdot \sum_{i} \gamma \sigma^3 \\[12pt]
&= \frac{1}{n^3} \cdot n \gamma \sigma^3 \\[12pt]
&= \frac{\gamma}{n^2} \cdot \sigma^3, \\[12pt]
\mathbb{E}((\bar{X}_n - \mu)^4)
&= \mathbb{E} \Bigg( \bigg( \frac{1}{n} \sum_{i=1}^n (X_i - \mu) \bigg)^4 \Bigg) \\[6pt]
&= \mathbb{E} \Bigg( \bigg( \frac{1}{n} \sum_{i=1}^n Y_i \bigg)^4 \Bigg) \\[6pt]
&= \frac{1}{n^4} \cdot \mathbb{E} \Bigg( \sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n \sum_{l=1}^n Y_i Y_j Y_k Y_l \Bigg) \\[6pt]
&= \frac{1}{n^4} \cdot \mathbb{E} \Bigg( \sum_{i} Y_i^4 + 4 \sum_{i \neq j} Y_i^3 Y_j + 3 \sum_{i \neq j} Y_i^2 Y_j^2 \\[12pt]
&\quad \quad \quad \quad \quad \quad + 6 \sum_{i \neq j \neq k} Y_i^2 Y_j Y_k + \sum_{i \neq j \neq k \neq l} Y_i Y_j Y_k Y_l \Bigg) \\[6pt]
&= \frac{1}{n^4} \cdot \Bigg[ \sum_{i} \mathbb{E}(Y_i^4) + 3 \sum_{i \neq j} \mathbb{E}(Y_i^2) \mathbb{E}(Y_j^2) \Bigg] \\[6pt]
&= \frac{1}{n^4} \cdot \Bigg[ \sum_{i} (\kappa \sigma^4) + 3 \sum_{i \neq j} \sigma^4 \Bigg] \\[6pt]
&= \frac{1}{n^4} \cdot \Bigg[ n \kappa \sigma^4 + 3n(n-1) \sigma^4 \Bigg] \\[6pt]
&= \frac{(\kappa + 3(n-1)) \sigma^4}{n^3} \\[6pt]
&= \frac{3n + (\kappa - 3)}{n^3} \cdot \sigma^4. \\[6pt]
\end{align}$$
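These two central moments can also be checked by simulation; a rough Monte Carlo sketch in Python (again using Exponential(1) data as an arbitrary example, so $\mu = \sigma = 1$, $\gamma = 2$, $\kappa = 9$):

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 20, 500_000
d = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1) - 1.0  # Xbar_n - mu

gamma, kappa = 2.0, 9.0
print(np.mean(d**3), gamma / n**2)              # third central moment, ~ 0.005
print(np.mean(d**4), (3*n + kappa - 3) / n**3)  # fourth central moment, ~ 0.00825
```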
Consequently, the skewness and kurtosis of the sample mean are given respectively by:
$$\begin{align}
\mathbb{Skew}(\bar{X}_n)
&= \frac{\mathbb{E}((\bar{X}_n - \mu)^3)}{\mathbb{V}(\bar{X}_n)^{3/2}} \\[6pt]
&= \frac{\gamma}{n^2} \cdot \sigma^3 \bigg/ \frac{\sigma^3}{n^{3/2}} \\[6pt]
&= \frac{\gamma}{\sqrt{n}}, \\[12pt]
\mathbb{Kurt}(\bar{X}_n)
&= \frac{\mathbb{E}((\bar{X}_n - \mu)^4)}{\mathbb{V}(\bar{X}_n)^2} \\[6pt]
&= \frac{3n + (\kappa - 3)}{n^3} \cdot \sigma^4 \bigg/ \frac{\sigma^4}{n^2} \\[6pt]
&= \frac{3n + (\kappa - 3)}{n} \\[12pt]
&= 3 + \frac{\kappa - 3}{n}. \\[12pt]
\end{align}$$
$^\dagger$ We have used a slight abuse of notation in the ranges of the summations: when we write e.g. $i \neq j \neq k$, we use this as shorthand for the set of indices in which all the indices are pairwise distinct. That is, we do not read the inequalities in their strict meaning (which would allow $i = k$), but read them as if they applied to every pair of indices.
Generalisation to weighted sums: To facilitate this analysis, define the weighted sum $H_n \equiv \sum_{i=1}^n c_i X_i$ with fixed coefficients $c_i$, and the sums $S_{n,r} \equiv \sum_{i=1}^n c_i^r$. Using these quantities, the mean, variance, skewness and kurtosis of $H_n$ can be written as shown in the box below. These formulae are valid for any case where the underlying values are IID with finite kurtosis; the sample mean is the special case $c_i = 1/n$.
$$\begin{align}
\boxed{\quad
\begin{aligned}
\mathbb{E}(H_n) &= \mu S_{n,1}
&\quad \mathbb{V}(H_n) &= \sigma^2 S_{n,2}, \\[12pt]
\mathbb{Skew}(H_n) &= \gamma \cdot \frac{S_{n,3}}{S_{n,2}^{3/2}}
&\quad \mathbb{Kurt}(H_n) &= 3 + (\kappa-3) \frac{S_{n,4}}{S_{n,2}^2}.
\end{aligned}
\quad}
\end{align}$$
These results are simplest to derive via the cumulant generating function of the random variable of interest. To do this, observe that the random variable $H_n$ has moment generating function:
$$\begin{align} m_{H_n}(t) \equiv \mathbb{E}(e^{t H_n}) = \prod_{i=1}^n \mathbb{E}(e^{t c_i X_i}) = \prod_{i=1}^n m_{X}(t c_i), \end{align}$$
which gives the cumulant generating function:
$$\begin{align} K_{H_n}(t) = \log m_{H_n}(t) = \sum_{i=1}^n \log m_{X}(t c_i) = \sum_{i=1}^n K_{X}(t c_i). \end{align}$$
Now, let $\kappa_r$ denote the $r$th cumulant of the underlying random variables $X_i$. The cumulants of $H_n$ are related to these cumulants by:
$$\begin{align} \bar{\kappa}_r \equiv \frac{d^r K_{H_n}}{dt^r}(t) \Bigg|_{t=0} = \sum_{i=1}^n c_i^r \cdot \frac{d^r K_{X}}{dt^r}(t c_i) \Bigg|_{t=0} = \sum_{i=1}^n c_i^r \cdot \kappa_r. \end{align}$$
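As before, this relationship is easy to verify symbolically (a SymPy sketch; the coefficients and the Exponential(1) cgf are arbitrary choices for illustration):

```python
import sympy as sp

t = sp.Symbol('t')
c = [sp.Rational(1, 2), 1, 2]    # arbitrary fixed coefficients c_i
K = -sp.log(1 - t)               # cgf of Exponential(1); kappa_r = (r-1)!

KH = sum(K.subs(t, t * ci) for ci in c)   # K_{H_n}(t) = sum_i K_X(t c_i)
for r in range(1, 5):
    lhs = sp.diff(KH, t, r).subs(t, 0)
    rhs = sum(ci**r for ci in c) * sp.factorial(r - 1)   # S_{n,r} * kappa_r
    print(r, sp.simplify(lhs - rhs))                     # prints 0 each time
```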
Using the relationship of the cumulants to the moments of interest, we then have:
$$\begin{align}
\mathbb{E}(H_n)
&= \bar{\kappa}_1 \\[6pt]
&= \sum_{i=1}^n c_i \cdot \kappa_1 \\[6pt]
&= \sum_{i=1}^n c_i \cdot \mu \\[6pt]
&= \mu \sum_{i=1}^n c_i \\[6pt]
&= \mu S_{n,1}, \\[12pt]
\mathbb{V}(H_n)
&= \bar{\kappa}_2 \\[6pt]
&= \sum_{i=1}^n c_i^2 \cdot \kappa_2 \\[6pt]
&= \sum_{i=1}^n c_i^2 \cdot \sigma^2 \\[6pt]
&= \sigma^2 \sum_{i=1}^n c_i^2 \\[6pt]
&= \sigma^2 S_{n,2}, \\[12pt]
\mathbb{Skew}(H_n)
&= \frac{\bar{\kappa}_3}{\bar{\kappa}_2^{3/2}} \\[6pt]
&= \frac{\sum_{i=1}^n c_i^3 \cdot \kappa_3}{(\sum_{i=1}^n c_i^2 \cdot \kappa_2)^{3/2}} \\[6pt]
&= \frac{\sum_{i=1}^n c_i^3 \cdot \gamma \sigma^3}{(\sum_{i=1}^n c_i^2 \cdot \sigma^2)^{3/2}} \\[6pt]
&= \frac{\gamma \sum_{i=1}^n c_i^3}{(\sum_{i=1}^n c_i^2)^{3/2}} \\[6pt]
&= \gamma \cdot \frac{S_{n,3}}{S_{n,2}^{3/2}}, \\[12pt]
\mathbb{Kurt}(H_n)
&= \frac{\bar{\kappa}_4 + 3 \bar{\kappa}_2^2}{\bar{\kappa}_2^2} \\[6pt]
&= \frac{\sum_{i=1}^n c_i^4 \cdot \kappa_4 + 3 (\sum_{i=1}^n c_i^2 \cdot \kappa_2)^2}{(\sum_{i=1}^n c_i^2 \cdot \kappa_2)^2} \\[6pt]
&= \frac{\sum_{i=1}^n c_i^4 \cdot (\kappa-3) \sigma^4 + 3 (\sum_{i=1}^n c_i^2 \cdot \sigma^2)^2}{(\sum_{i=1}^n c_i^2 \cdot \sigma^2)^2} \\[6pt]
&= \frac{(\kappa-3) \sum_{i=1}^n c_i^4 + 3 (\sum_{i=1}^n c_i^2)^2}{(\sum_{i=1}^n c_i^2)^2} \\[6pt]
&= \frac{(\kappa-3) S_{n,4} + 3 S_{n,2}^2}{S_{n,2}^2} \\[6pt]
&= 3 + (\kappa-3) \frac{S_{n,4}}{S_{n,2}^2}.
\end{align}$$
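Finally, a Monte Carlo sanity check of the boxed formulae for $H_n$ (a Python sketch with arbitrary coefficients and Exponential(1) data, so $\mu = \sigma = 1$, $\gamma = 2$, $\kappa = 9$):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, reps = 10, 400_000
c = np.linspace(0.5, 2.0, n)                        # arbitrary fixed coefficients
S1, S2, S3, S4 = c.sum(), (c**2).sum(), (c**3).sum(), (c**4).sum()

H = rng.exponential(scale=1.0, size=(reps, n)) @ c  # H_n = sum_i c_i X_i

gamma, kappa = 2.0, 9.0
print(H.mean(), S1)                                 # E(H_n) = mu S_{n,1}
print(H.var(), S2)                                  # V(H_n) = sigma^2 S_{n,2}
print(stats.skew(H), gamma * S3 / S2**1.5)          # Skew(H_n)
print(stats.kurtosis(H, fisher=False), 3 + (kappa - 3) * S4 / S2**2)  # Kurt(H_n)
```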