To me, continuity is more geometric and intuitive than the rest of the argument (which is purely algebraic manipulation). So I take the liberty of misreading your question as follows:
- Is it possible to derive linearity of the inner product from the parallelogram law using only algebraic manipulations?
By "only algebraic" I mean that you are not allowed to use inequalities. (It is triangle inequality that allows one to use continuity. In fact, one can derive continuity using only the inequality $|u|^2\ge 0$ and the parallelogram law.) Also, an algebraic argument must work over any field on characteristic 0.
The answer is that it is not possible. More precisely, the following theorem holds.
Theorem. There exists a field $F\subset\mathbb R$ and a function $\langle\cdot,\cdot\rangle: F^2\times F^2\to F$ which is symmetric and additive in each argument (i.e. $\langle u,v+w\rangle=\langle u,v\rangle+\langle u,w\rangle$), and satisfies the identity $\langle tu,tv\rangle = t^2\langle u,v\rangle$ for every $t\in F$, but is not bilinear.
Note that the above assumptions imply that the "quadratic form" $Q$ defined by $Q(v)=\langle v,v\rangle$ satisfies $Q(tv)=t^2Q(v)$ and the parallelogram identity, and the "product" $\langle\cdot,\cdot\rangle$ is determined by $Q$ in the usual way. [EDIT: an example exists for $F=\mathbb R$ as well, see Update.]
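For concreteness, "the usual way" here is polarization:
$$
\langle u,v\rangle = \tfrac12\bigl(Q(u+v)-Q(u)-Q(v)\bigr),
$$
and the parallelogram identity $Q(u+v)+Q(u-v)=2Q(u)+2Q(v)$ follows from additivity in each argument (the cross terms cancel, since additivity forces $\langle u,-v\rangle=-\langle u,v\rangle$).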
Proof of the theorem. Let $F=\mathbb Q(\pi)$. An element $x\in F$ is uniquely represented as $f_x(\pi)$ where $f_x$ is a rational function over $\mathbb Q$. Define a map $D:F\to F$ by $D(x) = (f_x)'(\pi)$. This map satisfies
$$
D(x+y) = D(x)+D(y), \qquad D(xy) = xD(y)+yD(x),
$$
since these are just the sum and product rules for differentiating rational functions.
Define $P:F\times F\to F$ by $P(x,y) = xD(y)-yD(x)$. From the above identities it is easy to see that $P$ is additive in each argument and satisfies $P(tx,ty)=t^2 P(x,y)$ for all $x,y,t\in F$. Finally, define a "scalar product" on $F^2$ by
$$
\langle (x_1,y_1), (x_2,y_2) \rangle = P(x_1,y_2) + P(x_2,y_1) .
$$
It satisfies all the desired properties but is not bilinear: if $u=(1,0)$ and $v=(0,1)$, then $\langle u,v\rangle=0$ but $\langle u,\pi v\rangle=1$.
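(If you want to see this computation concretely, here is a small sketch in Python with sympy; a symbol `t` stands in for the transcendental $\pi$, elements of $\mathbb Q(\pi)$ are rational expressions in `t`, and the names `D`, `P`, `mock` are mine.)

```python
import sympy as sp

t = sp.symbols('t')  # plays the role of the transcendental pi

def D(x):
    # the differentiation x = f_x(pi) |-> f_x'(pi): differentiate as a rational function of t
    return sp.diff(sp.sympify(x), t)

def P(x, y):
    # P(x, y) = x D(y) - y D(x)
    return sp.simplify(x * D(y) - y * D(x))

def mock(u, v):
    # the "mock scalar product" on F^2
    (x1, y1), (x2, y2) = u, v
    return sp.simplify(P(x1, y2) + P(x2, y1))

u, v = (1, 0), (0, 1)
print(mock(u, v))            # 0
print(mock(u, (0, t)))       # 1, although t * mock(u, v) = 0: not bilinear
print(mock((t, 0), (0, t)))  # 0 = t^2 * mock(u, v): scaling both arguments at once is fine
```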
Update. One can check that if $\langle\cdot,\cdot\rangle$ is a "mock scalar product" as in the theorem, then for any two vectors $u,v$, the map $t\mapsto \langle u,tv\rangle - t\langle u,v\rangle$ must be a differentiation of the base field. (A differentiation is a map $D:F\to F$ satisfying the above rules for sums and products.) Thus mock scalar products on $\mathbb R^2$ are actually classified by differentiations of $\mathbb R$.
And non-trivial differentiations of $\mathbb R$ do exist. In fact, a differentiation can be extended from a subfield to any ambient field (of characteristic 0). Indeed, by Zorn's Lemma it suffices to extend a differentiation $D$ from a field $F$ to a one-step extension $F(\alpha)$ of $F$. If $\alpha$ is transcendental over $F$, one can define $D(\alpha)$ arbitrarily and extend $D$ to $F(\alpha)$ by the rules of differentiation. And if $\alpha$ is algebraic, differentiating the identity $p(\alpha)=0$, where $p$ is the minimal polynomial of $\alpha$, yields a uniquely determined value $D(\alpha)\in F(\alpha)$, and then $D$ extends to $F(\alpha)$. The extensions are consistent because all the identities involved can be realized in the field of differentiable functions on $\mathbb R$, where the differentiation rules are consistent.
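Spelled out, if $p(X)=\sum_k c_k X^k$ with $c_k\in F$, then applying $D$ to $p(\alpha)=0$ via the sum and product rules gives
$$
0 = \sum_k D(c_k)\,\alpha^k + p'(\alpha)\,D(\alpha), \qquad\text{hence}\qquad D(\alpha) = -\frac{\sum_k D(c_k)\,\alpha^k}{p'(\alpha)},
$$
where $p'(\alpha)\ne 0$ because $\deg p' < \deg p$, $p$ is minimal, and the characteristic is 0.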
Thus there exists a mock scalar product on $\mathbb R^2$ such that $\langle e_1,e_2\rangle=0$ but $\langle e_1,\pi e_2\rangle=1$. And I am sure I reinvented the wheel here - all this should be well-known to algebraists.
I know a couple of ways to get a Shapovalov-type form on a tensor product. The details of what I say depend on the exact conventions you use for quantum groups. I will follow Chari and Pressley's book.
The first method is to alter the adjoint slightly. If you choose a * involution that is also a coalgebra automorphism, you can just take the form on a tensor product to be the product of the form on each factor, and the result is contravariant with respect to *. There is a unique such involution up to some fairly trivial modifications (like multiplying $E_i$ by $z$ and $F_i$ by $z^{-1}$). It is given by:
$$
*E_i = F_i K_i, \quad *F_i=K_i^{-1}E_i, \quad *K_i=K_i.
$$
The resulting forms are Hermitian if $q$ is taken to be real, and will certainly satisfy your conditions 1) and 3). Since the $K_i$s act on weight vectors only by powers of $q$, it almost satisfies 2).
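(To spell out why the first method works, independently of conventions: contravariance on the factors gives $\bigl((a\otimes b)x, y\bigr) = \bigl(x, (a^*\otimes b^*)y\bigr)$ for the product form, so if $*$ is a coalgebra automorphism, i.e. $({*}\otimes{*})(\Delta(a)) = \Delta(a^*)$, then
$$
\bigl(\Delta(a)\,x,\ y\bigr) = \bigl(x,\ ({*}\otimes{*})(\Delta(a))\,y\bigr) = \bigl(x,\ \Delta(a^*)\,y\bigr),
$$
which is exactly $*$-contravariance of the product form.)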
The second method is in case you really want * to interchange $E_i$ with exactly $F_i$. This is roughly contained in a paper by Wenzl (http://www.ams.org/mathscinet-getitem?mr=1470857), which I actually first looked at when it was suggested in an answer to one of your previous questions.
It is absolutely essential that a * involution be an algebra anti-automorphism. However, if it is a coalgebra anti-automorphism instead of a coalgebra automorphism, there is a workaround to get a form on a tensor product. There is again an essentially unique such involution, given by
$$
*E_i=F_i, \quad *F_i=E_i, \quad *K_i=K_i^{-1}, \quad *q=q^{-1}.
$$
Note that $q$ is inverted, so for this form one should think of $q$ as a complex number on the unit circle. By the same argument as you use to get the Shapovalov form, there is a unique sesquilinear *-contravariant form on each irreducible representation $V_\lambda$, up to overall rescaling.
To get a form on $V_\lambda \otimes V_\mu$, one should define
$$(v_1 \otimes w_1, v_2 \otimes w_2)$$
to be the product of the form on each factor applied to $v_1 \otimes w_1$ and $R( v_2 \otimes w_2)$, where $R$ is the universal $R$ matrix. It is then straightforward to see that the result is *-contravariant, using the fact that $R \Delta(a) R^{-1} =\Delta^{op}(a).$
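In more detail (suppressing the conjugate-linearity bookkeeping): write $\langle\cdot,\cdot\rangle$ for the product of the factor forms, so $\langle (a\otimes b)x, y\rangle = \langle x, (a^*\otimes b^*)y\rangle$, and note that $*$ being a coalgebra anti-automorphism means $({*}\otimes{*})(\Delta(a)) = \Delta^{op}(a^*)$. Then for the twisted form $(x,y) := \langle x, Ry\rangle$,
$$
(\Delta(a)\,x,\ y) = \langle x,\ \Delta^{op}(a^*)\,Ry\rangle = \langle x,\ R\,\Delta(a^*)\,y\rangle = (x,\ \Delta(a^*)\,y).
$$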
If you want to work with a larger tensor product, I believe you replace $R$ by the unique endomorphism $E$ of $\otimes_k V_{\lambda_k}$ such that $w_0 \circ E$ is the braid group element $T_{w_0}$ which reverses the order of the tensor factors, using the minimal possible number of positive crossings. Here $w_0$ is the symmetric group element that reverses the order of the tensor factors.
The resulting form is *-contravariant, but is not Hermitian. In Wenzl's paper he discusses how to fix this.
Now 1) and 2) on your wish list hold. As for 3): it is clear from standard formulas for the $R$-matrix (e.g. Chari-Pressley, Theorem 8.3.9) that $R$ acts on a vector of the form $b_\lambda \otimes c \in V_\lambda \otimes V_\mu$ as multiplication by $q^{(\lambda, \mathrm{wt}(c))}$. Thus if you embed $V_\mu$ into $V_\lambda \otimes V_\mu$ as $w \mapsto b_\lambda \otimes w$, the result is isometric up to an overall scaling by a power of $q$. This extends to the type of embedding you want (up to scaling by powers of $q$), only with the order reversed. I don't seem to understand what happens when you embed $V_\lambda$ in $V_\lambda \otimes V_\mu$, which confuses me, and I don't see your exact embeddings.
I'm not sure if this is what you're looking for. But to me, a Hermitian metric is just $g+i\omega$, where $g$ is a real inner product, and $\omega$ is a symplectic form (alternating, but still non-degenerate).
To share my simplest intuition, once you believe that this concept is useful: $g$ tells you how a pair of vectors measure up geometrically in $\mathbb R^{2n}\simeq\mathbb C^n$, as you've already noted. But $i\omega$ tells you how much closer to linearly dependent the vectors become once complex scalars are allowed, so it's still a lot like measuring an angle. That is, suppose $v,w$ are orthonormal in the real sense: if $i\omega(v, w)=i$, they lie in the same complex line; if $i\omega(v, w)=0$, they are just as orthogonal in the complex sense as in the real sense.
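A quick numerical illustration (a sketch; I take the standard Hermitian form on $\mathbb C^n$, conjugate-linear in the first slot, and the function names are mine; sign conventions for $\omega$ vary by author):

```python
import numpy as np

def h(v, w):
    # standard Hermitian inner product on C^n; np.vdot conjugates its first argument
    return np.vdot(v, w)

g = lambda v, w: h(v, w).real      # the real inner product on R^{2n} ~ C^n
omega = lambda v, w: h(v, w).imag  # the symplectic form

v = np.array([1, 0], dtype=complex)
w = 1j * v                          # same complex line as v, but g-orthogonal to it
u = np.array([0, 1], dtype=complex)

print(g(v, w), omega(v, w))  # 0.0 1.0 : real-orthonormal, yet complex-linearly dependent
print(g(v, u), omega(v, u))  # 0.0 0.0 : orthogonal in the complex sense as well
```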
It may be just the fact that I do symplectic geometry that makes me think of $\omega$ as such useful geometric information, but once you start looking there are many settings that can be made symplectic. Probably the best-known one is the symplectic geometry of a cotangent bundle as a setting for Hamiltonian mechanics: there are coordinates for position and velocity (er, momentum), and the complex structure, measured by the metric and the symplectic form, tells you how they're related.