I don't think there is a positive answer to your question. The beauty of the normal distribution, univariate or multivariate, is that it is characterized by all cumulants of order higher than two being zero. (The cumulant of order $k$ is the normalized $k$-th derivative of the log of the characteristic function: $\kappa_k = i^{-k} \frac{\partial^k}{\partial t^k} \ln \phi (t) \big|_{t=0}$.) The CLT states, essentially, that for every $k \ge 3$ the $k$-th cumulant of the standardized sum satisfies $\kappa_k = o(1)$ as $n\to\infty$ (and the rate can be established).
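As a quick numerical illustration of that cumulant decay, here is a minimal sketch in Python using `scipy.stats.kstat`, which computes unbiased cumulant estimators (k-statistics) up to order four; the choice of exponential summands is mine, not from the question:

```python
import numpy as np
from scipy.stats import kstat  # k-statistics: unbiased cumulant estimators

# Standardized sums of n i.i.d. Exp(1) variables: the theoretical cumulants
# are kappa_3 = 2/sqrt(n) and kappa_4 = 6/n, both o(1) as n grows.
rng = np.random.default_rng(0)
for n in (1, 4, 16, 64):
    s = rng.exponential(size=(200_000, n)).sum(axis=1)
    z = (s - n) / np.sqrt(n)  # Exp(1) sums have mean n and variance n
    print(n, round(kstat(z, 3), 3), round(kstat(z, 4), 3))
```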
However, the property of zero cumulants is very fragile. Once you depart from a zero third cumulant, the higher-order cumulants cannot all vanish either: by Marcinkiewicz's theorem, the normal distribution is the only distribution with a finite number of non-zero cumulants, so if $\kappa_3\neq 0$, infinitely many higher-order cumulants must be non-zero as well. So for each value of skewness (which is the standardized third cumulant), there is a range of plausible values for the other cumulants, and the beauty of the distribution will be in the eyes of the beholder.
In a way, the "closest relative" of the multivariate normal distribution is the skew-normal distribution. Its density is the normal density "filtered" by a normal cdf: $f(x;\Sigma,\alpha) = 2\,f(x;\Sigma)\,\Phi(\alpha'x)$, where $f(x;\Sigma)$ is the density of a multivariate normal with zero mean vector and covariance matrix $\Sigma$, and $\Phi(z)$ is the standard normal cdf. So you don't really have a tensor here, only a skewing vector $\alpha$. Nicely enough, for $\alpha=0$ you recover the multivariate normal distribution as a special case.
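For concreteness, here is a minimal sketch of that density in Python (the function name `skew_normal_pdf` is mine; it just transcribes the formula above, with zero location):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def skew_normal_pdf(x, Sigma, alpha):
    """2 * f(x; Sigma) * Phi(alpha'x): multivariate skew-normal density
    with zero location, scale matrix Sigma, and skewing vector alpha."""
    return 2.0 * multivariate_normal.pdf(x, mean=np.zeros(len(x)), cov=Sigma) \
               * norm.cdf(alpha @ x)

Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
x = np.array([0.3, -0.1])
# alpha = 0 recovers the plain multivariate normal density:
print(skew_normal_pdf(x, Sigma, np.zeros(2)))
print(multivariate_normal.pdf(x, mean=np.zeros(2), cov=Sigma))
```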
Another way to approach the issue is through stable laws. A stable law is a distribution such that a linear combination of independent random variables with this distribution again has this distribution, possibly scaled and shifted. The normal distribution is an obvious example; the Cauchy distribution is another, although a far less obvious one. Stable laws are defined implicitly through their characteristic function: $\phi(t) = \exp\bigl[\, i t \mu - |ct|^\alpha \bigl(1-i \beta \, {\rm sign} (t)\, \Phi\bigr)\bigr]$. Here, $\mu$ is the shift parameter, $c$ is the scale parameter, $\alpha$ is the stability parameter, $\beta$ is the asymmetry parameter, and $\Phi=\tan(\pi\alpha/2)$ for $\alpha\neq1$, while $\Phi=-\frac{2}{\pi} \ln|t|$ for $\alpha=1$. ($\alpha=2, \beta=0$ gives the normal distribution; $\alpha=1, \beta=0$ gives the Cauchy distribution; moments beyond the first exist only for $\alpha=2$.) Wikipedia provides a bunch of pictures. Multivariate extensions of stable laws exist, too (thanks to @mpiktas for pointing them out!).
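If you want to experiment with these, `scipy.stats.levy_stable` implements (a version of) this parameterization; a minimal sketch checking the two boundary cases mentioned above:

```python
import numpy as np
from scipy.stats import levy_stable, norm, cauchy

x = np.linspace(-4.0, 4.0, 5)
# alpha=2, beta=0: normal; with scale c=1 the variance is 2, i.e. sd sqrt(2)
print(levy_stable.pdf(x, alpha=2.0, beta=0.0))
print(norm.pdf(x, scale=np.sqrt(2.0)))
# alpha=1, beta=0: standard Cauchy
print(levy_stable.pdf(x, alpha=1.0, beta=0.0))
print(cauchy.pdf(x))
```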
This is straightforward in the case you're interested in (${\boldsymbol \mu} = 0$) without using matrix algebra.
To clarify the notation: ${\bf y} = (y_{1}, \dots, y_{n})'$ is a multivariate normal random vector, ${\bf a} = (a_{1}, \dots, a_{n})'$ is a fixed vector, and ${\bf H}$ is an $n \times n$ matrix with entries $\{ h_{jk} \}_{j,k=1}^{n}$. By the definition of covariance (see e.g. page 3 here), you can rewrite this covariance as $$ {\rm cov}({\bf a}'{\bf y}, {\bf y}' {\bf H} {\bf y}) = {\rm cov} \left( \sum_{i=1}^{n} a_i y_i, \sum_{j=1}^{n} \sum_{k=1}^{n} h_{jk} y_{j} y_{k} \right) = \sum_{i,j,k} {\rm cov}( a_i y_i, h_{jk} y_{j} y_{k} ), $$ where the second equality follows from the bilinearity of covariance. When ${\boldsymbol \mu} = 0$, each term in the sum is $0$ because $${\rm cov}( a_i y_i, h_{jk} y_{j} y_{k} ) \propto E(y_i y_j y_k) - E(y_i) E(y_j y_k) = 0.$$ The second term is zero because $E(y_i) = 0$. The first term is zero because the third-order mean-centered moments of a multivariate normal random vector are $0$; this can be seen by examining each case (a short Monte Carlo check follows the list):
- when $i,j,k$ are distinct, then $E(y_i y_j y_k)=0$ by Isserlis' Theorem
- when $i\neq j = k$, we have $E(y_i y_j y_k) = E(y_i y_{j}^2)$. From the conditional distribution of a multivariate normal (see here) we can deduce that $E(y_i \mid y_j) = y_j \cdot \Sigma_{ij}/\Sigma_{jj}$, and therefore $E(y_{i} y_{j}^2 \mid y_{j}) = y_{j}^3 \cdot \Sigma_{ij}/\Sigma_{jj}$. By the law of total expectation, $$E(y_i y_{j}^2) = E\bigl( E(y_{i} y_{j}^2 \mid y_{j}) \bigr) = E\bigl(y_{j}^3 \, \Sigma_{ij}/\Sigma_{jj} \bigr) = \Sigma_{ij}/\Sigma_{jj} \cdot E(y_{j}^3) = 0, $$ where $E(y_{j}^3) = 0$ because $y_j$ is symmetrically distributed with mean $0$, which implies that $y_{j}^3$ is also symmetrically distributed with mean $0$.
- when $i=j=k$, $E(y_i y_j y_k) = E(y_{i}^3) = 0$ by the same rationale just given.
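Here is the promised Monte Carlo check of the whole identity (a sketch; the particular ${\bf a}$, ${\bf H}$, and $\Sigma$ are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 3
A = rng.normal(size=(n, n))
Sigma = A @ A.T                            # an arbitrary covariance matrix
a = rng.normal(size=n)                     # arbitrary vector a
H = rng.normal(size=(n, n))                # arbitrary matrix H
y = rng.multivariate_normal(np.zeros(n), Sigma, size=500_000)
lin = y @ a                                # a'y for each draw
quad = np.einsum('ij,jk,ik->i', y, H, y)   # y'Hy for each draw
print(np.cov(lin, quad)[0, 1])             # ~0 up to Monte Carlo error
```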
The moments of such a transformation can probably be found in Sec. 2.2.3 of Kollo and von Rosen (2005). Transformations of this kind have been used in some multivariate simulations. I understand there is a book on polynomials of multivariate distributions, but I have not seen it and don't know whether you would find closed-form expressions for the density of this transformation there. In the univariate case, you get a (scaled and shifted) non-central $\chi^2$ distribution, and its density expression is somewhat unwieldy (Bessel and hypergeometric functions, or an infinite series of Poisson-weighted gamma densities).
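In Python terms, the univariate case can be checked against `scipy.stats.ncx2` (a sketch; the values of $\mu$ and $\sigma$ are arbitrary choices of mine): if $y \sim N(\mu, \sigma^2)$, then $(y/\sigma)^2$ is non-central $\chi^2$ with one degree of freedom and noncentrality $(\mu/\sigma)^2$, so $y^2$ is that distribution scaled by $\sigma^2$.

```python
import numpy as np
from scipy.stats import ncx2

mu, sigma = 1.5, 2.0
rng = np.random.default_rng(7)
y = rng.normal(mu, sigma, size=500_000)
dist = ncx2(df=1, nc=(mu / sigma) ** 2, scale=sigma ** 2)
# empirical vs. theoretical quantiles of y^2 should agree closely
for q in (0.25, 0.50, 0.90):
    print(q, np.quantile(y ** 2, q), dist.ppf(q))
```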