The concept you're calling 'exceptionality' is simply a composite variable, formed as a weighted average of two or more variables that have each been standardized to a Z-score. If you could observe 'exceptionality' directly as sampled data, you could fit a (standardized) multiple regression on your variables to find the best weights to use.
Let's consider two random variables $A$ and $B$, which are standardized to $Z_A$ and $Z_B$ respectively (meaning each follows a standard normal distribution, i.e. mean of 0 and variance of 1).
The weighted average of $Z_A$ and $Z_B$, where $w_A$ and $w_B$ are the respective weights for $Z_A$ and $Z_B$, is then:
$$
W = \frac{w_A}{w_A+w_B} \cdot Z_A + \frac{w_B}{w_A+w_B} \cdot Z_B
$$
Note that $w_A$ and $w_B$ are constants, whereas $Z_A$ and $Z_B$ are random variables.
Therefore, the expected value of $W$ is as follows:
$$
\text{E}(W) = \frac{w_A}{w_A+w_B} \cdot \text{E}(Z_A) + \frac{w_B}{w_A+w_B} \cdot \text{E}(Z_B) = 0
$$
The variance of $W$, assuming the independence of $Z_A$ and $Z_B$, is:
$$
\begin{aligned}
\text{Var}(W) &= \left(\frac{w_A}{w_A+w_B}\right)^2 \cdot \text{Var}(Z_A) + \left(\frac{w_B}{w_A+w_B}\right)^2 \cdot \text{Var}(Z_B) \\
&= \left(\frac{w_A}{w_A+w_B}\right)^2 + \left(\frac{w_B}{w_A+w_B}\right)^2
\end{aligned}
$$
Depending on how unequal the weights $w_A$ and $w_B$ are, the variance of $W$ falls in the interval $[0.5, 1)$ (for positive weights): it equals $0.5$ when the weights are equal and approaches $1$ as one weight dominates. Although the mean is 0, the variance is not 1, so $W$ does not follow a standard normal distribution and therefore cannot be treated as a $Z$-score.
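A quick simulation bears this out. This is a minimal Python sketch (the weights 2 and 1 are arbitrary illustrative choices): the mean of $W$ comes out near 0, but the variance matches the formula above rather than 1.

```python
import random
import statistics

# Hypothetical weights; any positive values illustrate the point
w_a, w_b = 2.0, 1.0
a = w_a / (w_a + w_b)  # normalized weight for Z_A
b = w_b / (w_a + w_b)  # normalized weight for Z_B

rng = random.Random(0)
n = 100_000
# W = a*Z_A + b*Z_B with independent standard-normal Z_A and Z_B
w = [a * rng.gauss(0, 1) + b * rng.gauss(0, 1) for _ in range(n)]

print(statistics.mean(w))      # close to 0
print(statistics.variance(w))  # close to a**2 + b**2 = 5/9, not 1
```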
To make inferences like "a value of $W$ (the weighted average of Z-scores) $= 1$ is greater than ~84% of observations," you must first standardize by dividing $W$ by its standard deviation. The Z-score of $W$ therefore becomes:
$$
Z_W = \frac{\frac{w_A}{w_A+w_B} \cdot Z_A + \frac{w_B}{w_A+w_B} \cdot Z_B}{\sqrt{\left(\frac{w_A}{w_A+w_B}\right)^2 + \left(\frac{w_B}{w_A+w_B}\right)^2}}
$$
A value of $1$ for $Z_W$ would indicate that it is greater than ~84% of observations of $Z_W$, since $\Phi(1) \approx 0.8413$ for a standard normal distribution.
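The ~84% figure can be checked numerically. A small self-contained sketch (again with arbitrary weights of 2 and 1):

```python
import random

w_a, w_b = 2.0, 1.0  # arbitrary illustrative weights
a = w_a / (w_a + w_b)
b = w_b / (w_a + w_b)
sd_w = (a**2 + b**2) ** 0.5  # theoretical standard deviation of W

rng = random.Random(1)
n = 100_000
# Standardized scores Z_W = W / sd(W)
z_w = [(a * rng.gauss(0, 1) + b * rng.gauss(0, 1)) / sd_w for _ in range(n)]

# Fraction of observations at or below 1; should be near Phi(1) ~ 0.8413
frac = sum(z <= 1 for z in z_w) / n
print(frac)
```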
Please let me know if you have any follow-up questions.
Best Answer
Maybe someone else can explain the math behind it, but consider this quick demonstration: I generate five vectors, each 100 numbers long. Each of these vectors is on a different scale, so I standardize them (i.e., create z-scored variables). That is, the mean is zero and the standard deviation is 1 for each of these five latent construct variables:
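In Python (the original demonstration may well have used R; the five scales below are arbitrary choices), that setup might look like:

```python
import random
import statistics

def zscore(xs):
    """Standardize a list: subtract the sample mean, divide by the sample SD."""
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [(x - m) / s for x in xs]

rng = random.Random(42)
# Five vectors of 100 numbers, each on a deliberately different scale
scales = [(0, 1), (50, 10), (-3, 0.5), (1000, 200), (0.1, 0.02)]
z = [zscore([rng.gauss(mu, sd) for _ in range(100)]) for mu, sd in scales]
```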
Let's check to make sure they are actually z-scores:
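A sketch of that check (self-contained, so it repeats the setup; the scales are arbitrary):

```python
import random
import statistics

def zscore(xs):
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [(x - m) / s for x in xs]

rng = random.Random(42)
scales = [(0, 1), (50, 10), (-3, 0.5), (1000, 200), (0.1, 0.02)]
z = [zscore([rng.gauss(mu, sd) for _ in range(100)]) for mu, sd in scales]

for v in z:
    # Each line: mean ~ 0 (up to floating-point error), SD = 1
    print(statistics.mean(v), statistics.stdev(v))
```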
So, now let's say we average all five of these together:
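Averaging elementwise across the five standardized vectors (again a self-contained sketch repeating the setup):

```python
import random
import statistics

def zscore(xs):
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [(x - m) / s for x in xs]

rng = random.Random(42)
scales = [(0, 1), (50, 10), (-3, 0.5), (1000, 200), (0.1, 0.02)]
z = [zscore([rng.gauss(mu, sd) for _ in range(100)]) for mu, sd in scales]

# Elementwise mean of the five z-scored vectors
avg = [statistics.mean(col) for col in zip(*z)]
```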
Is this new variable a z-score? We can check to see if the mean is zero and standard deviation is one:
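Checking the mean and SD of that average (self-contained sketch; for independent vectors the SD should land near $1/\sqrt{5} \approx 0.45$, not 1):

```python
import random
import statistics

def zscore(xs):
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [(x - m) / s for x in xs]

rng = random.Random(42)
scales = [(0, 1), (50, 10), (-3, 0.5), (1000, 200), (0.1, 0.02)]
z = [zscore([rng.gauss(mu, sd) for _ in range(100)]) for mu, sd in scales]
avg = [statistics.mean(col) for col in zip(*z)]

print(statistics.mean(avg))   # still ~ 0
print(statistics.stdev(avg))  # well below 1
```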
The variable is not a z-score, because the standard deviation is not one. However, we could now z-score this mean variable. Let's do that and compare the distributions:
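Re-standardizing the averaged variable and comparing spreads (self-contained sketch):

```python
import random
import statistics

def zscore(xs):
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [(x - m) / s for x in xs]

rng = random.Random(42)
scales = [(0, 1), (50, 10), (-3, 0.5), (1000, 200), (0.1, 0.02)]
z = [zscore([rng.gauss(mu, sd) for _ in range(100)]) for mu, sd in scales]
avg = [statistics.mean(col) for col in zip(*z)]

avg_z = zscore(avg)  # z-score the mean-of-z-scores variable
print(statistics.stdev(avg))    # noticeably below 1 (near 1/sqrt(5) here)
print(statistics.stdev(avg_z))  # exactly 1 after re-standardization
```

The re-standardized variable has the same shape as the raw average, but its spread is stretched back out to an SD of 1, which is why the two distributions look so different.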
The z-scored aggregate variable of z-scores looks a lot different from the aggregate variable of z-scores.
In short: No, a mean of z-scored variables is not a z-score itself.