Why avoid the Frobenius norm?

Tags: examples-counterexamples, matrices, matrix-norms, normed-spaces, numerical-methods

I vaguely remember that the Frobenius matrix norm ( $\|A\|_F = \sqrt{\sum_{i,j} |a_{i,j}|^2}$ ) was somehow considered unsuitable for numerical analysis applications. All I remember, however, is that it is not a subordinate matrix norm, but only because it does not take the identity matrix to $1$. It seems this latter problem could be fixed with a rescaling, though. My numerical analysis text did not consider this norm any further after introducing that fact, which apparently was its death knell for some reason.

The question, then: for fixed $n$, when looking at $n \times n$ matrices, are there any weird gotchas, deficiencies, oddities, etc., when using the (possibly rescaled) Frobenius norm? For example, is there some weird sequence of matrices $A_i$ such that the Frobenius norm of the $A_i$ approaches zero while the $\ell_2$-subordinate norm does not converge to zero? (It seems like that cannot happen, because the $\ell_2$ norm is the square root of the largest eigenvalue of $A^*A$ and is thus bounded from above by the Frobenius norm…)
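To make that last parenthetical precise, here is a quick sketch in terms of the singular values $\sigma_i$ of $A$:

$$\|A\|_2^2 = \sigma_{\max}^2 = \lambda_{\max}(A^*A) \le \operatorname{trace}(A^*A) = \sum_i \sigma_i^2 = \|A\|_F^2 \le n\,\sigma_{\max}^2 = n\,\|A\|_2^2,$$

so $\|A\|_2 \le \|A\|_F \le \sqrt{n}\,\|A\|_2$: the two norms are equivalent, and convergence to zero in one forces convergence to zero in the other.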

Best Answer

The Frobenius norm is actually quite nice, and also natural. It is defined simply by $$\|A\|_F^2 = \operatorname{trace}(A'A),$$ where $A'$ denotes the transpose (conjugate transpose in the complex case).

Since it is an inner-product norm (the norm induced by $\langle A, B \rangle = \operatorname{trace}(A'B)$), optimization with it becomes much easier (think quadratic programs instead of semidefinite programs).
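As a small illustration (not part of the original answer), here is a NumPy sketch checking that the trace definition, the entrywise sum of squares, and NumPy's built-in `'fro'` norm all agree, and that $\operatorname{trace}(A'B)$ is just the entrywise inner product:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

# Three equivalent ways to compute the squared Frobenius norm.
via_trace = np.trace(A.T @ A)            # trace(A'A)
via_sum   = np.sum(A**2)                 # sum of squared entries
via_numpy = np.linalg.norm(A, 'fro')**2  # NumPy's built-in Frobenius norm

print(via_trace, via_sum, via_numpy)     # all three agree up to rounding

# The underlying inner product: <A, B> = trace(A'B) = sum_ij a_ij * b_ij.
print(np.trace(A.T @ B), np.sum(A * B))
```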

Numerical analysts probably prefer the operator norm because they often exploit $\|Ax\| \le \|A\|\,\|x\|$, and with the operator $2$-norm this inequality is tighter (in general) than with the Frobenius norm.
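A quick numerical sketch of that point (again an illustration, not from the original answer): both norms give a valid upper bound on $\|Ax\|_2$, but the operator-norm bound is never looser.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
x = rng.standard_normal(6)

lhs = np.linalg.norm(A @ x)         # ||Ax||_2
op2 = np.linalg.norm(A, 2)          # operator 2-norm = largest singular value
fro = np.linalg.norm(A, 'fro')      # Frobenius norm

# Both products bound ||Ax||_2 from above; the operator-norm bound is tighter.
print(lhs, "<=", op2 * np.linalg.norm(x), "<=", fro * np.linalg.norm(x))
```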

Otherwise, which norm you use should be governed by the application in which you are trying to use it.