To use a standard Lagrangian approach, assume everything is real: in the complex case $x^HMy$ is complex and cannot be maximized, and if it is replaced by $Re(x^HMy)$, one may forget the complex structure and double the dimension from complex to real. The Lagrange function is
\begin{align}
L(M,\Lambda)&=\Vert MA-B\Vert^2_F-x^TMy-tr(\Lambda(M^TM-I))\\
&=\Vert A\Vert^2_F+\Vert B\Vert^2_F-2tr(MAB^T)-tr(yx^TM)-tr(\Lambda(M^TM-I))
\end{align}
with derivative
$$
0=\frac{\partial L}{\partial M}=-2AB^T-yx^T-(\Lambda+\Lambda^T)M^T
$$
or
$$
M(\Lambda+\Lambda^T)=-2BA^T-xy^T
$$
There is no unique solution to this problem, but one special solution can be found by noting that if $(\Lambda+\Lambda^T)$ is positive definite, then the left-hand side is exactly the polar decomposition (orthogonal factor times symmetric positive definite factor) of the right-hand side, which in turn can be computed from the SVD of the right-hand side.
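A minimal numerical sketch of this last step, with made-up shapes and data: form the right-hand side, take its SVD, and read off the orthogonal polar factor as $M$.

```python
import numpy as np

# Sketch: recover M from M(Λ+Λᵀ) = -2BAᵀ - xyᵀ via the SVD of the
# right-hand side. The shapes (k, n) and the data are made-up examples.
rng = np.random.default_rng(0)
k, n = 4, 6
A = rng.standard_normal((k, n))
B = rng.standard_normal((k, n))
x = rng.standard_normal(k)
y = rng.standard_normal(k)

R = -2 * B @ A.T - np.outer(x, y)   # right-hand side (k x k)
U, s, Vt = np.linalg.svd(R)         # R = U diag(s) Vt
M = U @ Vt                          # orthogonal polar factor
P = Vt.T @ np.diag(s) @ Vt          # symmetric positive (semi)definite factor

assert np.allclose(M @ P, R)            # R = M P is the polar decomposition
assert np.allclose(M.T @ M, np.eye(k))  # M is orthogonal
```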
The problem is trivial, I think. I assume $A,B,C$ are linearly independent. Let's write the solution in the form $t_*=aA+bB+cC+dD$, where $a,b,c,d$ are unknowns and $D$ is a vector orthogonal to the basis $\{A,B,C\}$; in other words, if we construct the matrix $M=[A\; B \;C]\in \mathbb{R}^{K\times3}$, then $D\in N(M^T)$, i.e. $D$ is in the nullspace of $M^T$. For a general $t$ the inner problem is
$$\max_{x_1\in \mathbb{R},x_2\in \mathbb{R}} [t_*^TA+x_1t_*^TB+x_2 t_*^TC]=\\\max_{x_1\in \mathbb{R},x_2\in \mathbb{R}} [a\Vert A\Vert_2^2+bA^TB+cA^TC+x_1(b\Vert B\Vert_2^2+aA^TB+cC^TB)+x_2(aA^TC+bB^TC+c\Vert C\Vert_2^2) ]$$
which is unbounded above unless $\begin{cases}b\Vert B\Vert_2^2+aA^TB+cC^TB = 0\\aA^TC+bB^TC+c\Vert C\Vert_2^2=0\end{cases}$. From these equations we can extract $b,c$ as functions of $a$:
$$\begin{cases}b = -\frac{\frac{A^TB}{\Vert B\Vert_2^2}-\frac{(A^TC)(C^TB)}{\Vert C\Vert_2^2\Vert B\Vert_2^2}}{1-\frac{(C^TB)(B^TC)}{\Vert B\Vert_2^2\Vert C\Vert_2^2}}a=m_1a\\c = -\frac{\frac{A^TC}{\Vert C\Vert_2^2}-\frac{(A^TB)(B^TC)}{\Vert B\Vert_2^2\Vert C\Vert_2^2}}{1-\frac{(C^TB)(B^TC)}{\Vert B\Vert_2^2\Vert C\Vert_2^2}}a=m_2a\end{cases}$$
(note the leading minus signs: solving the linear system $b\Vert B\Vert_2^2+cC^TB=-aA^TB$, $bB^TC+c\Vert C\Vert_2^2=-aA^TC$ carries the $-a$ through.)
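As a numeric sanity check, the two stationarity conditions can also be solved directly as a $2\times 2$ linear system; a sketch with made-up data and $a=1$:

```python
import numpy as np

# Numeric sanity check (made-up data, a = 1): solve the two stationarity
# conditions directly for b = m1*a and c = m2*a.
rng = np.random.default_rng(1)
A, B, C = rng.standard_normal((3, 5))

# [ ||B||^2   C.B  ] [b]      [ A.B ]
# [  B.C    ||C||^2] [c] = -a [ A.C ]
G = np.array([[B @ B, C @ B],
              [B @ C, C @ C]])
m1, m2 = np.linalg.solve(G, -np.array([A @ B, A @ C]))

a = 1.0
b, c = m1 * a, m2 * a
# Both conditions should vanish, so the inner max is bounded.
assert abs(b * (B @ B) + a * (A @ B) + c * (C @ B)) < 1e-10
assert abs(a * (A @ C) + b * (B @ C) + c * (C @ C)) < 1e-10

Delta = A @ A + m1 * (A @ B) + m2 * (A @ C)  # the quantity Δ
```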
If these conditions are fulfilled, the outer optimization tier becomes
$$\min_{t\geq 0_{K}} (a\Vert A \Vert^2+m_1aA^TB+m_2aA^TC)=\begin{cases}\min a &\text{if} \quad \Delta \ge 0 \\ \max a &\text{if} \quad \Delta < 0 \end{cases}$$
where $\Delta = \Vert A \Vert^2+m_1A^TB+m_2A^TC$. To solve the original problem, first we compute $m_1,m_2$ and then $\Delta$. Then we solve the following feasibility problem:
$$\text{find} \quad a\in\mathbb{R},\;n \in\mathbb{R}^K \\
\text{s.t.}\quad a(A+m_1B+m_2C)+(I-M(M^TM)^{-1}M^T)n \in\mathbb{R}_+^K$$
If this problem is feasible with solution $v$, then, since $\mathbb{R}_+^K$ is a convex cone, any $ev$ with $e\in \mathbb{R}_+$ is also a solution. Thus if $\Delta\ge 0$, the optimal value is zero; if $\Delta <0$, the problem is unbounded below. If the feasibility problem is infeasible, then the original problem is unbounded above.
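The matrix $I-M(M^TM)^{-1}M^T$ in the feasibility problem is the orthogonal projector onto the complement of $\mathrm{span}\{A,B,C\}$; a short sketch with made-up data:

```python
import numpy as np

# Sketch of the projector onto the orthogonal complement of span{A, B, C}
# used in the feasibility problem; K and the data are made up.
rng = np.random.default_rng(2)
K = 6
A, B, C = rng.standard_normal((3, K))
M = np.column_stack([A, B, C])                      # K x 3
P = np.eye(K) - M @ np.linalg.solve(M.T @ M, M.T)   # I - M(M^T M)^{-1} M^T

assert np.allclose(P @ M, 0)   # annihilates A, B, C
assert np.allclose(P @ P, P)   # idempotent: an orthogonal projector
```

So the term $(I-M(M^TM)^{-1}M^T)n$ sweeps out exactly the admissible "orthogonal part" $dD$ of the candidate solution.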
Best Answer
Transform the problem by introducing a new decision variable $z$ which captures $\min (|\mathbf{g}_1\mathbf{Ml}|^2,|\mathbf{g}_2 \mathbf{Ml}|^2)$. Your problem now becomes $$ \max_{\mathbf{l},z} z $$ subject to: $$ \begin{aligned} \|\mathbf{Ml}\|^2 &=k \\ z &\le |\mathbf{g}_1 \mathbf{Ml}|^2 \\ z &\le |\mathbf{g}_2 \mathbf{Ml}|^2 \end{aligned} $$
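A quick sanity check of this epigraph reformulation: for any fixed feasible $\mathbf{l}$, the largest $z$ satisfying both inequalities is exactly $\min(|\mathbf{g}_1\mathbf{Ml}|^2,|\mathbf{g}_2\mathbf{Ml}|^2)$, so maximizing $z$ recovers the original max-min. A sketch with made-up real-valued data:

```python
import numpy as np

# For a fixed l scaled onto the constraint set ||Ml||^2 = k, the tightest
# z allowed by both inequality constraints is min(|g1 Ml|^2, |g2 Ml|^2).
# All data (M, g1, g2, k) are made-up examples.
rng = np.random.default_rng(3)
n, k = 4, 2.0
M = rng.standard_normal((n, n))
g1, g2 = rng.standard_normal((2, n))

l = rng.standard_normal(n)
l *= np.sqrt(k) / np.linalg.norm(M @ l)   # rescale so that ||Ml||^2 = k
v1, v2 = (g1 @ (M @ l))**2, (g2 @ (M @ l))**2
z_best = min(v1, v2)                      # tightest feasible z for this l

assert np.isclose((M @ l) @ (M @ l), k)   # equality constraint holds
assert z_best <= v1 and z_best <= v2      # both epigraph constraints hold
assert z_best == v1 or z_best == v2       # one of them is active
```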