Taking the Euclidean norm into account (the operator norm induced by the square-summable sequence space $\ell^2$), the condition number is defined as
\begin{equation*}
k(A)= \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}
\end{equation*}
where $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$ are the maximal and minimal singular values of $A$. So, if $A$ is normal, then
\begin{equation}\label{eq8_1}
k(A)=\left|\frac{\lambda_{\max}(A)}{\lambda_{\min}(A)}\right|.
\end{equation}
On the other hand,
\begin{equation}\label{eq8_2}
\rho(A)=\max\{|\lambda|:\lambda \in \sigma(A)\}=|\lambda_{\max}(A)|
\end{equation}
and if $A$ is invertible and has $\lambda$ as an eigenvalue, then $A^{-1}$ has $\frac{1}{\lambda}$ as an eigenvalue, so we have
\begin{equation}\label{eq8_3}
|\lambda_{\min}(A)|=\frac{1}{|\lambda_{\max}(A^{-1})|} \quad\Rightarrow\quad \frac{1}{|\lambda_{\min}(A)|}=|\lambda_{\max}(A^{-1})|=\rho(A^{-1}).
\end{equation}
Now substituting (\ref{eq8_3}) and (\ref{eq8_2}) into (\ref{eq8_1}) yields
\begin{equation*}
k(A)=\left|\frac{\lambda_{\max}(A)}{\lambda_{\min}(A)}\right|=\rho(A)\rho(A^{-1}).
\end{equation*}
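The identity $k(A)=\rho(A)\rho(A^{-1})$ for normal matrices can be checked numerically; the following sketch uses a small symmetric (hence normal) matrix chosen purely for illustration:

```python
import numpy as np

# Hypothetical example: a symmetric (hence normal) matrix A.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

rho_A = np.max(np.abs(np.linalg.eigvals(A)))                     # spectral radius of A
rho_Ainv = np.max(np.abs(np.linalg.eigvals(np.linalg.inv(A))))   # spectral radius of A^{-1}

k_A = np.linalg.cond(A, 2)        # 2-norm condition number sigma_max / sigma_min
print(k_A, rho_A * rho_Ainv)      # the two quantities agree for normal A
```

For a non-normal matrix the two sides generally differ, with $k(A)\geq\rho(A)\rho(A^{-1})$.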
Note that $\|A\|_2=\sigma_1$, where $\sigma_1$ is the greatest singular value of $A$. Moreover, $|\lambda_1|\leq \sigma_1$, where $\lambda_1$ is an eigenvalue of $A$ with greatest modulus. Thus, if $B$ is similar to $A$ (over $\mathbb{C}$), then $\|B\|_2\geq |\lambda_1|$, since similar matrices have the same eigenvalues.
Assume that $A$ is diagonalizable (over $\mathbb{C}$): $A=PDP^{-1}$ with $D$ diagonal. Then $\|D\|_2=|\lambda_1|$; that is, $|\lambda_1|$ is the minimum of the $2$-norm over all matrices similar to $A$.
EDIT. The general case. Proposition. Let $A\in M_n(\mathbb{C})$ and $\epsilon >0$. Then there is $B\in M_n(\mathbb{C})$ that is similar to $A$ with $|\lambda_1|\leq \|B\|_2<|\lambda_1|+\epsilon$.
Proof. According to Jordan theory, there is a sequence $(B_k)_k$ such that each $B_k$ is similar to $A$ and $\lim_k B_k=D$, where $D=\operatorname{diag}(\lambda_i)$ (conjugating a Jordan form by $\operatorname{diag}(1,t,\dots,t^{n-1})$ shrinks the superdiagonal entries). As $k\rightarrow \infty$, $\|B_k\|_2\rightarrow \|D\|_2=|\lambda_1|$, and we are done.
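The shrinking argument in the proof can be illustrated on a single $2\times 2$ Jordan block; the eigenvalue below is a hypothetical choice for demonstration:

```python
import numpy as np

# Conjugating a 2x2 Jordan block J by S_t = diag(t, 1) replaces the
# superdiagonal entry 1 by t, so ||B_t||_2 -> |lam| as t -> 0.
lam = 3.0
J = np.array([[lam, 1.0],
              [0.0, lam]])

for t in [1.0, 1e-2, 1e-6]:
    S = np.diag([t, 1.0])
    B = S @ J @ np.linalg.inv(S)     # similar to J; superdiagonal entry is t
    print(t, np.linalg.norm(B, 2))   # decreases toward |lam| = 3
```

Note that the infimum $|\lambda_1|$ is only attained in the limit here, which is exactly what motivates the conjecture below.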
Conjecture. (With the previous notations.) There is $B$ such that $\|B\|_2=|\lambda_1|$ iff $\lambda_1$ is a semisimple eigenvalue and is the unique eigenvalue with maximal modulus.
You can easily conclude about the condition number.
$\operatorname{cond}_2(A)=\sigma_1/\sigma_n$ and $\sigma_1\geq |\lambda_1|\geq |\lambda_n|\geq \sigma_n$. Then $\operatorname{cond}_2(A)\geq |\lambda_1/\lambda_n|=\operatorname{cond}_2(D)$, and we are done when $A$ is diagonalizable. Otherwise, by the reasoning above, for every $\epsilon >0$ there is $B$ similar to $A$ such that $\operatorname{cond}_2(B)<|\lambda_1/\lambda_n|+\epsilon$.
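The bound $\operatorname{cond}_2(A)\geq |\lambda_1/\lambda_n|$, and the fact that a suitable similarity drives the condition number toward that ratio, can be sketched as follows (the triangular matrix is a hypothetical non-normal example):

```python
import numpy as np

# Hypothetical non-normal example: an upper-triangular matrix
# with eigenvalues 2 and 1 on the diagonal.
A = np.array([[2.0, 5.0],
              [0.0, 1.0]])

eigvals = sorted(np.linalg.eigvals(A), key=abs, reverse=True)
eig_ratio = abs(eigvals[0] / eigvals[-1])   # |lambda_1 / lambda_n| = 2
cond_A = np.linalg.cond(A, 2)               # sigma_1 / sigma_n, larger than 2 here

# A diagonal similarity B = S A S^{-1} with S = diag(t, 1) shrinks the
# off-diagonal entry to 5t, driving cond_2(B) toward the eigenvalue ratio:
t = 1e-4
S = np.diag([t, 1.0])
B = S @ A @ np.linalg.inv(S)
print(cond_A, np.linalg.cond(B, 2))         # cond_2(B) is close to 2
```

This mirrors the argument above: the eigenvalue ratio is a lower bound over the whole similarity class, approached arbitrarily closely.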
Best Answer
Choose any matrix $E \in \mathbb{R}^{m \times m}$ and scale the rows such that $\|E\|_\infty = \epsilon < 1$. Consider the matrix $$A=I-E,$$ where $I$ is the identity matrix of dimension $m$. Then $A$ is nonsingular and $$A^{-1} = \sum_{j=0}^\infty E^j.$$ This is known as the Neumann series; it converges because $\|E\|_\infty < 1$. By the triangle inequality we have $$\|A\|_\infty \leq 1 + \|E\|_\infty$$ and $$\|A^{-1}\|_\infty \leq \sum_{j=0}^\infty \|E\|_\infty^j = \frac{1}{1 - \|E\|_\infty}.$$ It follows that $$\kappa_\infty(A) = \|A\|_\infty \|A^{-1}\|_\infty \leq \frac{1 + \epsilon}{1 - \epsilon}.$$
A complete MATLAB program that implements this strategy is given here.
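The linked MATLAB program is not reproduced above; a Python sketch of the same strategy (a random $E$ rescaled so that $\|E\|_\infty = \epsilon$, with the choice of $\epsilon$ and the random seed being arbitrary assumptions) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5
eps = 0.5

# Random E, rescaled so that ||E||_inf = eps < 1
# (for matrices, ord=inf is the maximum absolute row sum).
E = rng.standard_normal((m, m))
E *= eps / np.linalg.norm(E, np.inf)

A = np.eye(m) - E                     # A = I - E is nonsingular
kappa = np.linalg.cond(A, np.inf)     # ||A||_inf * ||A^{-1}||_inf
print(kappa, (1 + eps) / (1 - eps))   # kappa is bounded by (1+eps)/(1-eps) = 3
```

Taking $\epsilon$ small makes $A$ as well-conditioned as desired, since $(1+\epsilon)/(1-\epsilon)\to 1$ as $\epsilon\to 0$.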