The inequality does hold when $n\leq m$, at least in the case when all matrices involved have full rank equal to $n$. Since the singular values of $[A,B]$ are the square roots of the eigenvalues of $AA^T+BB^T$, we have
$$
\begin{split}
\sigma_n^2([A,B]) &= \lambda_n(AA^T+BB^T)
=\min_{\|x\|_2=1}x^T(AA^T+BB^T)x\\&\geq\min_{\|x\|_2=1}x^TAA^Tx=\lambda_n(AA^T)=\sigma_n^2(A)
\end{split}
$$
Similarly,
$$
\begin{split}
\sigma_n^2([A,B]) &= \lambda_n(AA^T+BB^T)
=\min_{\|x\|_2=1}x^T(AA^T+BB^T)x\\&\geq\min_{\|x\|_2=1}x^TBB^Tx=\lambda_n(BB^T)=\sigma_n^2(B)
\end{split}
$$
and hence
$$
\sigma_n([A,B])\geq\max\{\sigma_n(A),\sigma_n(B)\}.
$$
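As a quick numerical sanity check (not part of the original argument), the lower bound can be verified with NumPy on random full-rank matrices with $n\leq m$:

```python
import numpy as np

# Random n x m matrices with n <= m; Gaussian matrices have full rank
# with probability 1.
rng = np.random.default_rng(0)
n, m = 4, 6
A = rng.standard_normal((n, m))
B = rng.standard_normal((n, m))

def smin(M):
    # Smallest singular value; np.linalg.svd returns them in descending order.
    return np.linalg.svd(M, compute_uv=False)[-1]

# sigma_n([A, B]) >= max(sigma_n(A), sigma_n(B))
assert smin(np.hstack([A, B])) >= max(smin(A), smin(B))
```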
Similarly, the variational characterisation of singular values also implies that
$$
\sigma_1^2([A,B])\leq\sigma_1^2(A)+\sigma_1^2(B).
$$
Indeed,
$$
\begin{split}
\sigma_1^2([A,B]) &= \lambda_1(AA^T+BB^T)
=\max_{\|x\|_2=1}x^T(AA^T+BB^T)x\\&\leq\max_{\|x\|_2=1}x^TAA^Tx+\max_{\|x\|_2=1}x^TBB^Tx=\lambda_1(AA^T)+\lambda_1(BB^T)\\&=\sigma_1^2(A)+\sigma_1^2(B)
\end{split}
$$
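The upper bound admits the same kind of numerical sanity check (again, not part of the original argument):

```python
import numpy as np

# Random full-rank n x m matrices with n <= m.
rng = np.random.default_rng(1)
n, m = 4, 6
A = rng.standard_normal((n, m))
B = rng.standard_normal((n, m))

def s1(M):
    # Largest singular value.
    return np.linalg.svd(M, compute_uv=False)[0]

# sigma_1([A, B])^2 <= sigma_1(A)^2 + sigma_1(B)^2
assert s1(np.hstack([A, B]))**2 <= s1(A)**2 + s1(B)**2
```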
Hence
$$
\mathrm{cond}^2([A,B])\leq\frac{\sigma_1^2(A)+\sigma_1^2(B)}{\max\{\sigma_n^2(A),\sigma_n^2(B)\}}\leq\frac{\sigma_1^2(A)}{\sigma_n^2(A)}+\frac{\sigma_1^2(B)}{\sigma_n^2(B)}=\mathrm{cond}^2(A)+\mathrm{cond}^2(B)
$$
and
$$
\mathrm{cond}([A,B])\leq\sqrt{\mathrm{cond}^2(A)+\mathrm{cond}^2(B)}
\leq\mathrm{cond}(A)+\mathrm{cond}(B).
$$
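Both condition-number bounds can be checked numerically as well (this check is mine, not part of the original answer); `np.linalg.cond` with its default setting computes exactly the ratio $\sigma_1/\sigma_n$:

```python
import numpy as np

# Random full-rank n x m matrices with n <= m.
rng = np.random.default_rng(2)
n, m = 5, 8
A = rng.standard_normal((n, m))
B = rng.standard_normal((n, m))

cA, cB = np.linalg.cond(A), np.linalg.cond(B)      # sigma_1 / sigma_n
cAB = np.linalg.cond(np.hstack([A, B]))

# cond([A,B]) <= sqrt(cond(A)^2 + cond(B)^2) <= cond(A) + cond(B)
assert cAB <= np.sqrt(cA**2 + cB**2) <= cA + cB
```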
For $n>m$, the inequality generally fails (still assuming that $A$, $B$, and $[A,B]$ all have full rank). As an extreme example, take $m=1$: then $\mathrm{cond}(A)=\mathrm{cond}(B)=1$, but $\mathrm{cond}([A,B])$ can be arbitrarily large. Consider, e.g., $A=[1,0]^T$ and $B=[1,0.001]^T$.
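This counterexample is easy to reproduce numerically:

```python
import numpy as np

A = np.array([[1.0], [0.0]])    # single column, so cond(A) = sigma_1/sigma_1 = 1
B = np.array([[1.0], [1e-3]])   # nearly parallel to A, also cond(B) = 1
AB = np.hstack([A, B])          # an almost rank-deficient 2 x 2 matrix

print(np.linalg.cond(A), np.linalg.cond(B))  # both exactly 1.0
print(np.linalg.cond(AB))                    # on the order of 10^3; blows up as 0.001 -> 0
```

Shrinking the entry $0.001$ makes the two columns of $[A,B]$ ever closer to parallel, so $\sigma_2([A,B])\to 0$ while $\mathrm{cond}(A)=\mathrm{cond}(B)=1$ stay fixed.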
The rank-deficient case is a bit more delicate if one defines $\mathrm{cond}(A)=\sigma_1(A)/\sigma_r(A)$, where $r$ is the rank of $A$ (this is how the condition number is usually defined when $A$ does not have full rank). The trouble is not with the upper bound on $\sigma_1([A,B])$ but with the lower bound, which does not hold in general. Again, as an extreme example, take the full-rank matrices $A$ and $B$ from the cases above and augment them with a sufficient number of zero rows to make them rank deficient; since padding with zeros does not change the nonzero singular values, the counterexample carries over to this setting, where we already know the statement is false.
Best Answer
When $A$ and $B$ are square matrices, the inequality is true for every matrix norm (which, by definition, is submultiplicative: $\|AB\|\le \|A\|\,\|B\|$). Indeed,
$$
\operatorname{cond}(AB)=\|AB\|\,\|(AB)^{-1}\| \le \|A\|\,\|B\|\,\|B^{-1}\|\,\|A^{-1}\| =\operatorname{cond}(A)\,\operatorname{cond}(B).
$$
If $A$ and $B$ are non-square, then $A^{-1}$ is not meaningful, and the condition number has to be defined differently. The one definition I know for this case (which agrees with the above when the operator norm is used) is
$$
\operatorname{cond}(A)=\frac{\sigma_1(A)}{\sigma_n(A)} = \frac{\max\{|Ax|:|x|=1\}}{\min\{|Ax|:|x|=1\}}
$$
(here $\sigma_1$ and $\sigma_n$ are the greatest and smallest singular values of $A$, defined by the quotients on the right). This definition is of interest only when the kernel is trivial. The submultiplicative inequality still holds, because $\sigma_1(AB)\le \sigma_1(A)\,\sigma_1(B)$ and $\sigma_n(AB)\ge \sigma_n(A)\,\sigma_n(B)$.
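As a sanity check of the square case (note that $AB$ here is the matrix *product*, not the concatenation $[A,B]$ discussed above), one can verify the submultiplicative bound with NumPy:

```python
import numpy as np

# cond(AB) <= cond(A) * cond(B) for the product of two random square
# matrices, using the 2-norm condition number sigma_1 / sigma_n.
rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

assert np.linalg.cond(A @ B) <= np.linalg.cond(A) * np.linalg.cond(B)
```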