I do not know if this helps, but here are some upper bounds, standard in operator space theory. The first inequality, attributed to Haagerup, is an analog of the Cauchy-Schwarz inequality in your setting:
$$\Vert\sum_{\alpha}S_{\alpha}\otimes B_{\alpha}\Vert\leq \Vert\sum_{\alpha}S_{\alpha}\otimes \overline{S_{\alpha}}\Vert^{1/2} \Vert\sum_{\alpha}B_{\alpha}\otimes \overline{B_{\alpha}}\Vert^{1/2}.$$
Here for a matrix $A = (A_{i,j})$, $\overline{A}$ denotes the matrix $(\overline{A_{i,j}})$. The expressions appearing on the right-hand side of this inequality are the norms of $(S_\alpha)$ and $(B_\alpha)$ in the operator Hilbert space OH. For a proof, see for example page 123 in Pisier's Introduction to Operator Space Theory.
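As a quick numerical sanity check of this inequality, here is a small numpy sketch (all names are ad hoc) that verifies it on random complex matrices, computing the minimal tensor norm as the operator norm of a sum of Kronecker products:

```python
import numpy as np

rng = np.random.default_rng(0)

def opnorm(A):
    """Operator (spectral) norm."""
    return np.linalg.norm(A, 2)

def rand(d):
    return rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

dS, dB, N = 3, 4, 5
S = [rand(dS) for _ in range(N)]
B = [rand(dB) for _ in range(N)]

# Left-hand side: norm of sum S_a ⊗ B_a.
lhs = opnorm(sum(np.kron(s, b) for s, b in zip(S, B)))
# Right-hand side: product of the OH norms of (S_a) and (B_a).
oh_S = opnorm(sum(np.kron(s, s.conj()) for s in S)) ** 0.5
oh_B = opnorm(sum(np.kron(b, b.conj()) for b in B)) ** 0.5

assert lhs <= oh_S * oh_B + 1e-9
```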
Another inequality (no longer symmetric) that reduces to the usual Cauchy-Schwarz inequality when the matrices are of size $1$ is the following (and the same with the role of S and B reversed):
$$\Vert\sum_{\alpha}S_{\alpha}\otimes B_{\alpha}\Vert\leq \Vert\sum_{\alpha}S_{\alpha}S_{\alpha}^*\Vert^{1/2} \Vert\sum_{\alpha}B_{\alpha}^* B_{\alpha}\Vert^{1/2}.$$
Now the terms appearing on the right-hand side are, in the language of operator spaces, the row (resp. column) norm of $(S_\alpha)$ (resp. $(B_\alpha)$). The row (resp. column) norm of $S=(S_\alpha)$ is just the norm of the matrix $ROW(S)$ (resp. $COLUMN(S)$) obtained, in a block-decomposition, by putting the $S_\alpha$'s on the first row (resp. column) and $0$'s on the other rows (resp. columns).
This last inequality is very easy to prove, and more generally we have $\Vert\sum_i a_i b_i\Vert\leq \Vert\sum_i a_i a_i^*\Vert^{1/2} \Vert\sum_i b_i^* b_i\Vert^{1/2}$ for any matrices $a_i$ and $b_i$. Indeed, the LHS of this inequality is $\Vert ROW(a) COLUMN(b)\Vert$, and its RHS is $\Vert ROW(a)\Vert \Vert COLUMN(b)\Vert$. This inequality is thus just expressing that the operator norm is sub-multiplicative.
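To see this proof in action, here is a short numpy sketch (ad hoc names) that builds $ROW(a)$ and $COLUMN(b)$ as explicit block matrices and checks both the identity and the inequality:

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 4, 6
a = [rng.standard_normal((d, d)) for _ in range(N)]
b = [rng.standard_normal((d, d)) for _ in range(N)]

ROW = np.hstack(a)      # block row (a_1 ... a_N)
COLUMN = np.vstack(b)   # block column with blocks b_1, ..., b_N

# ROW(a) COLUMN(b) is exactly sum_i a_i b_i.
assert np.allclose(ROW @ COLUMN, sum(x @ y for x, y in zip(a, b)))

lhs = np.linalg.norm(ROW @ COLUMN, 2)
# ||ROW(a)||^2 = ||sum a_i a_i^*||, ||COLUMN(b)||^2 = ||sum b_i^* b_i||.
row_norm = np.linalg.norm(sum(x @ x.T for x in a), 2) ** 0.5
col_norm = np.linalg.norm(sum(x.T @ x for x in b), 2) ** 0.5

assert lhs <= row_norm * col_norm + 1e-9
```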
Edit (for a lower bound, without the typo this time). In the case when the $S_\alpha$'s form an orthonormal family for the scalar product $\langle A,B\rangle = Tr(B^* A)/d_S$, you get the following lower bound:
$$\Vert\sum_{\alpha}S_{\alpha}\otimes B_{\alpha}\Vert\geq \max(\Vert\sum_{\alpha}B_{\alpha}^* B_\alpha\Vert^{1/2}, \Vert \sum_{\alpha}B_{\alpha} B_\alpha^*\Vert^{1/2}).$$
This is because $\sum_{\alpha}B_{\alpha}^* B_\alpha$ is $(1/d_S) Tr \otimes id$ applied to $X^*X$, where $X=\sum_{\alpha}S_{\alpha}\otimes B_{\alpha}$ (and similarly $\sum_{\alpha}B_{\alpha} B_\alpha^*$ is obtained from $XX^*$). And since $(1/d_S) Tr$ is a state, $(1/d_S) Tr \otimes id$ has norm $1$ from $M_{d_S} \otimes M_{d_B}$ to $M_{d_B}$.
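Here is a small numpy check of this identity, using the three Pauli matrices as an example of a family that is orthonormal for $\langle A,B\rangle = Tr(B^* A)/d_S$ (the choice of family and the random $B_\alpha$'s are just for illustration):

```python
import numpy as np

# Pauli matrices: orthonormal for <A,B> = Tr(B^* A)/2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
S = [sx, sy, sz]
dS, dB = 2, 3

rng = np.random.default_rng(2)
B = [rng.standard_normal((dB, dB)) + 1j * rng.standard_normal((dB, dB)) for _ in S]

X = sum(np.kron(s, b) for s, b in zip(S, B))
XX = X.conj().T @ X

# (1/d_S)(Tr ⊗ id): trace out the first (d_S-dimensional) tensor factor.
partial = XX.reshape(dS, dB, dS, dB).trace(axis1=0, axis2=2) / dS

assert np.allclose(partial, sum(b.conj().T @ b for b in B))
```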
In the specific situation of your problem, here are the bounds one actually gets.
Your assumptions on the $S_\alpha$'s imply that they all are unitary, and orthonormal for $\langle A,B\rangle = Tr(B^* A)/d_S$. Therefore you have that
$$\Vert\sum_{\alpha}S_{\alpha}\otimes \overline{S_{\alpha}} \Vert = \Vert\sum_{\alpha}S_{\alpha}^*S_{\alpha}\Vert= \Vert\sum_{\alpha}S_{\alpha}S_{\alpha}^*\Vert = N$$ where $N$ is the number of terms in the sum (you want to take $N=d_S^2-1$).
You therefore get
$$ \max(\Vert\sum_{\alpha}B_{\alpha}^* B_\alpha\Vert^{1/2}, \Vert \sum_{\alpha}B_{\alpha} B_\alpha^*\Vert^{1/2}) \leq \Vert\sum_{\alpha}S_{\alpha}\otimes B_{\alpha} \Vert $$
and $$\Vert\sum_{\alpha}S_{\alpha}\otimes B_{\alpha} \Vert \leq \sqrt N \min(\Vert\sum_{\alpha}B_{\alpha}^* B_\alpha\Vert^{1/2}, \Vert \sum_{\alpha}B_{\alpha} B_\alpha^*\Vert^{1/2}).$$
The lower bound is tight (take all the $B_\alpha$'s but one equal to zero). The upper bound is tight too (take $B_\alpha = \overline{S_{\alpha}}$), and it implies the one you gave in your question.
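For a concrete instance of this situation, one can take the $d^2-1$ non-identity Weyl (generalized Pauli) unitaries $X^aZ^b$ as the $S_\alpha$'s; this particular choice and the random $B_\alpha$'s are my own, but any family satisfying your assumptions would do. A sketch checking both bounds:

```python
import numpy as np

d = 3
w = np.exp(2j * np.pi / d)
Xs = np.roll(np.eye(d), 1, axis=0)   # shift: e_j -> e_{j+1 mod d}
Zc = np.diag(w ** np.arange(d))      # clock: e_j -> w^j e_j
# The d^2 - 1 non-identity Weyl unitaries, orthonormal for Tr(B^* A)/d.
S = [np.linalg.matrix_power(Xs, a) @ np.linalg.matrix_power(Zc, b)
     for a in range(d) for b in range(d) if (a, b) != (0, 0)]
N = len(S)                           # N = d^2 - 1

rng = np.random.default_rng(3)
dB = 2
B = [rng.standard_normal((dB, dB)) + 1j * rng.standard_normal((dB, dB)) for _ in S]

lhs = np.linalg.norm(sum(np.kron(s, b) for s, b in zip(S, B)), 2)
col = np.linalg.norm(sum(b.conj().T @ b for b in B), 2) ** 0.5
row = np.linalg.norm(sum(b @ b.conj().T for b in B), 2) ** 0.5

assert max(row, col) <= lhs + 1e-9           # lower bound
assert lhs <= np.sqrt(N) * min(row, col) + 1e-9  # upper bound
```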
A computer search shows that the premise is first violated at $n = 19$. The obstruction is as follows: consider $\mu_1 = 1^{19}$, $\mu_2 = 1^{17}2$, $\mu_3 = 1^{16} 3$. We have $g_{\mu_1} = 1$, $g_{\mu_2} = {19 \choose 2} = 9 \cdot 19$, $g_{\mu_3} = 2{19 \choose 3} = 2 \cdot 3 \cdot 17 \cdot 19$. There are only two partitions $\lambda$ such that $f_{\lambda}$ divides any of the $g_{\mu_i}$, namely $\lambda = 1^{19}$ and $\lambda = (19)$, with $f_{1^{19}} = f_{19} = 1$. I'm not currently able to present a short proof of the latter fact.
To provide some insight, here are all partitions of $19$ with $f_{\lambda} \leq \max g_{\mu_i}$:
- $f_{19} = f_{1^{19}} = 1$,
- $f_{1^{17}2} = f_{1, 18} = 18 = 2 \cdot 3^2$,
- $f_{1^{16}3} = f_{1^2 17} = 153 = 3^2 \cdot 17$,
- $f_{1^{15}2^2} = f_{2, 17} = 152 = 2^3 \cdot 19$,
- $f_{1^{15}4} = f_{1^3 16} = 816 = 2^4 \cdot 3 \cdot 17$,
- $f_{1^{14}, 2, 3} = f_{1, 2, 16} = 1615 = 5 \cdot 17 \cdot 19$,
- $f_{1^{13} 2^3} = f_{3, 16} = 798 = 2 \cdot 3 \cdot 7 \cdot 19$.
The next bad value is $n = 25$, with the same $1^n$, $1^{n - 2} 2$, $1^{n - 3}3$ vs $1^n$, $(n)$ obstruction. $n = 31$ is the same.
I know a couple of ways to get a Shapovalov type form on a tensor product. The details of what I say depend on the exact conventions you use for quantum groups. I will follow Chari and Pressley's book.
The first method is to alter the adjoint slightly. If you choose a * involution that is also a coalgebra automorphism, you can just take the form on a tensor product to be the product of the form on each factor, and the result is contravariant with respect to *. There is a unique such involution up to some fairly trivial modifications (like multiplying $E_i$ by $z$ and $F_i$ by $z^{-1}$). It is given by: $$ *E_i = F_i K_i, \quad *F_i=K_i^{-1}E_i, \quad *K_i=K_i. $$ The resulting forms are Hermitian if $q$ is taken to be real, and will certainly satisfy your conditions 1) and 3). Since the $K_i$s only act on weight vectors as powers of $q$, it almost satisfies 2).
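To make this concrete, here is a small numpy sketch on the $2$-dimensional representation of $U_q(sl_2)$ with $q$ real; the explicit matrices, the coproduct convention, and the form $G$ below are my own assumptions for the sketch. It checks contravariance ($\rho(a)^T G = G\,\rho(*a)$) on one factor, and then that the product form is contravariant on the tensor square, which is where the coalgebra automorphism property of * enters:

```python
import numpy as np

q = 1.7
E = np.array([[0.0, 1.0], [0.0, 0.0]])
F = E.T
K = np.diag([q, 1 / q])
Kinv = np.diag([1 / q, q])
I = np.eye(2)
G = np.diag([q, 1.0])   # contravariant form on V_1: <v, w> = v^T G w

# Contravariance on V_1: rho(a)^T G = G rho(*a) for *E = FK, *F = K^{-1}E, *K = K.
assert np.allclose(E.T @ G, G @ (F @ K))
assert np.allclose(F.T @ G, G @ (Kinv @ E))
assert np.allclose(K.T @ G, G @ K)

# Coproduct (one standard convention, assumed here).
dE = np.kron(E, K) + np.kron(I, E)
dF = np.kron(F, I) + np.kron(Kinv, F)
dK = np.kron(K, K)
G2 = np.kron(G, G)      # product form on V_1 ⊗ V_1

# Since * is a coalgebra automorphism, Delta(*E) = Delta(F)Delta(K), etc.,
# and the product form is again contravariant.
assert np.allclose(dE.T @ G2, G2 @ (dF @ dK))
assert np.allclose(dF.T @ G2, G2 @ (np.linalg.inv(dK) @ dE))
assert np.allclose(dK.T @ G2, G2 @ dK)
```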
The second method is in case you really want * to interchange $E_i$ with exactly $F_i$. This is roughly contained in this http://www.ams.org/mathscinet-getitem?mr=1470857 paper by Wenzl, which I actually originally looked at when it was suggested in an answer to one of your previous questions.
It is absolutely essential that a * involution be an algebra-antiautomorphism. However, if it is a coalgebra anti-automorphism instead of a coalgebra automorphism there is a workaround to get a form on a tensor product. There is again an essentially unique such involution, given by
$$ *E_i=F_i, \quad *F_i=E_i, \quad *K_i=K_i^{-1}, \quad *q=q^{-1}. $$
Note that $q$ is inverted, so for this form one should think of $q$ as a complex number on the unit circle. By the same argument as you use to get the Shapovalov form, there is a unique sesquilinear *-contravariant form on each irreducible representation $V_\lambda$, up to overall rescaling.
To get a form on $V_\lambda \otimes V_\mu$, one should define $$(v_1 \otimes w_1, v_2 \otimes w_2)$$ to be the product of the form on each factor applied to $v_1 \otimes w_1$ and $R( v_2 \otimes w_2)$, where $R$ is the universal $R$ matrix. It is then straightforward to see that the result is *-contravariant, using the fact that $R \Delta(a) R^{-1} =\Delta^{op}(a).$
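The identity $R \Delta(a) R^{-1} = \Delta^{op}(a)$ can be checked numerically on the $2$-dimensional representation of $U_q(sl_2)$; the explicit matrices, the coproduct convention, and the resulting $4 \times 4$ matrix for $R$ below are my own assumptions for the sketch:

```python
import numpy as np

q = 1.3
E = np.array([[0.0, 1.0], [0.0, 0.0]])
F = E.T
K = np.diag([q, 1 / q])
Kinv = np.diag([1 / q, q])
I = np.eye(2)

dE = np.kron(E, K) + np.kron(I, E)
dF = np.kron(F, I) + np.kron(Kinv, F)
dK = np.kron(K, K)

# R matrix on V_1 ⊗ V_1 for these conventions (basis e00, e01, e10, e11).
R = np.diag([q, 1.0, 1.0, q])
R[1, 2] = q - 1 / q

# Flip P, so that Delta^op(a) = P Delta(a) P.
P = np.eye(4)[[0, 2, 1, 3]]

for a in (dE, dF, dK):
    assert np.allclose(R @ a @ np.linalg.inv(R), P @ a @ P)
```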
If you want to work with a larger tensor product, I believe you replace $R$ by the unique endomorphism $E$ on $\otimes_k V_{\lambda_k}$ such that $w_0 \circ E$ is the braid group element $T_{w_0}$ which reverses the order of the tensor factors, using the minimal possible number of positive crossings. Here $w_0$ is the symmetric group element that reverses the order of the tensor factors.
The resulting form is *-contravariant, but is not Hermitian. In Wenzl's paper he discusses how to fix this.
Now 1) and 2) on your wish list hold. As for 3): It is clear from standard formulas for the $R$-matrix (e.g. Chari-Pressley Theorem 8.3.9) that $R$ acts on a vector of the form $b_\lambda \otimes c \in V_\lambda \otimes V_\mu$ as multiplication by $q^{(\lambda, wt(c))}$. Thus if you embed $V_\mu$ into $V_\lambda \otimes V_\mu$ as $w \rightarrow b_\lambda \otimes w$, the result is isometric up to an overall scaling by a power of $q$. This extends to the type of embedding you want (up to scaling by powers of $q$), only with the order reversed. I don't quite understand what happens when you embed $V_\lambda$ in $V_\lambda \otimes V_\mu$, which confuses me, and I don't see your exact embeddings.