I am implementing the algorithm in "Approximating the Logarithm of a Matrix to Specified Accuracy" by Sheung Hun Cheng, Nicholas J. Higham, Charles S. Kenney, and Alan J. Laub, 2001.
In this algorithm, I would like to avoid computing the 2-norm of a real-valued square matrix $A\in\mathbb{R}^{n\times n}$. Numerical experiments suggest to me that the following upper bound holds:
$\|A\|_2 \leq \max ( \|A\|_1, \|A\|_\infty )$
Can anybody confirm whether this inequality always holds? Thank you and happy new year!
One user remarked that Cauchy-Schwarz implies
$\|A\|_2 \leq \sqrt n \min ( \|A\|_1, \|A\|_\infty )$
which in some cases improves on the bound, but not always. So I hope my initial question is still of relevance. A counterexample to the suggested inequality would also be appreciated, if one exists.
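For what it's worth, a minimal sketch of the kind of numerical experiment mentioned above (names and tolerances are my own choices, not from the paper): sample random matrices of several sizes and check the conjectured bound against NumPy's norm routines.

```python
# Hypothetical numerical check of the conjectured bound
# ||A||_2 <= max(||A||_1, ||A||_inf) on random matrices.
import numpy as np

rng = np.random.default_rng(0)

def bound_holds(A, tol=1e-12):
    """True if ||A||_2 <= max(||A||_1, ||A||_inf), up to rounding."""
    two = np.linalg.norm(A, 2)        # spectral norm (largest singular value)
    one = np.linalg.norm(A, 1)        # max absolute column sum
    inf = np.linalg.norm(A, np.inf)   # max absolute row sum
    return two <= max(one, inf) + tol

# Try many random matrices of various sizes.
results = [bound_holds(rng.standard_normal((n, n)))
           for n in (2, 5, 10, 50) for _ in range(100)]
print(all(results))  # True: no counterexample found in this sample
```

Of course a random search can only fail to find a counterexample; it cannot confirm the inequality.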
Best Answer
Indeed:
$\|A\|_2 \leq \max ( \|A\|_1, \|A\|_\infty )$
follows from
$\|A\|_2 \leq \sqrt { \|A\|_1 \|A\|_\infty } \leq \max ( \|A\|_1, \|A\|_\infty )$
where the first inequality is, according to Wikipedia, a special case of Hölder's inequality, and the second holds because the geometric mean of two nonnegative numbers is at most their maximum.
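For completeness, a sketch of a direct argument for the first inequality, using only the facts that the spectral radius is bounded by any induced norm and that $\|A^T\|_1 = \|A\|_\infty$:

```latex
\|A\|_2^2 = \rho(A^T A) \le \|A^T A\|_1 \le \|A^T\|_1 \, \|A\|_1
          = \|A\|_\infty \, \|A\|_1 .
```

Taking square roots gives $\|A\|_2 \leq \sqrt{\|A\|_1 \|A\|_\infty}$.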