For any matrix $A$, $\|A\|_F^2=\sum_{i=1}^r \sigma_i^2$, where $\sigma_i$ is the $i$th singular value, and $r$ is the rank of $A$. For a rank-$1$ matrix $A=ba^T$, the singular value decomposition becomes trivial:
$$
A = \left( \frac{b}{\| b\|_2}\right)\left( \|b\|_2\|a^T\|_2\right)\left( \frac{a^T}{\| a^T\|_2}\right)
$$
So clearly $\| A\|_F^2 = \left( \|b\|_2\|a^T\|_2\right)^2 = \|b\|_2^2\|a\|_2^2$.
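As a quick numeric sanity check (a NumPy sketch, not part of the argument), the identity $\|ba^T\|_F^2 = \|b\|_2^2\|a\|_2^2$ can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(4)
b = rng.standard_normal(3)

A = np.outer(b, a)                  # rank-1 matrix b a^T
lhs = np.linalg.norm(A, 'fro')**2   # ||A||_F^2
rhs = np.linalg.norm(b)**2 * np.linalg.norm(a)**2
assert np.isclose(lhs, rhs)
```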
Recall that the Frobenius norm $\def\norm#1{\left\|#1\right\|_F}\norm{\cdot} \colon \mathrm{Mat}_{n,m}(\mathbf R) \to \mathbf R$ is given by
$$ \norm A = \def\t{\mathop{\rm tr}}\bigl(\t(A^t A)\bigr)^{1/2} $$
and hence the derivative of $\norm{\cdot}^2$ is (using $\t(A^t H) = \t(H^t A)$)
$$ D(\norm{\cdot}^2)(A)H = 2 \t(A^t H) $$
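The formula $D(\norm{\cdot}^2)(A)H = 2\t(A^t H)$ can be checked against a central finite difference (a NumPy sketch with arbitrary test matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))   # base point
H = rng.standard_normal((3, 4))   # direction
eps = 1e-6

f = lambda M: np.linalg.norm(M, 'fro')**2
fd = (f(A + eps * H) - f(A - eps * H)) / (2 * eps)  # central difference
exact = 2 * np.trace(A.T @ H)                       # 2 tr(A^t H)
assert np.isclose(fd, exact, atol=1e-6)
```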
If we write $\psi(A) = \norm{A - I}^2$ and denote, for given $x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_m \in \mathbf R^n$, the map $x_i \mapsto [x_1, \ldots, x_m] \in \mathrm{Mat}_{n,m}(\mathbf R)$ by $A^{\hat x}$, then by the chain rule the derivative of $x_i \mapsto \psi\bigl(A^{\hat x}(x_i)\bigr)$ is given by
$$ D\psi(A^{\hat x}(x_i))DA^{\hat x}(x_i) $$
Now $A^{\hat x}$ is affine, hence $DA^{\hat x}(x_i)$ is the linear part $h \mapsto [0, \ldots, 0, h, 0, \ldots, 0] \in \mathrm{Mat}_{n,m}(\mathbf R)$ and $D\psi$ is given by
$$ D\psi(A)H = 2\t\bigl((A-I)^t H\bigr) $$
Hence,
$$ \frac{\partial \psi}{\partial x_i}(h)
= D\psi(A)DA^{\hat x}(x_i)h = 2\t\bigl((A-I)^t[0,\ldots, 0, h, 0,\ldots, 0]\bigr) $$
Now $(A - I)^t = A^t - I^t$ has rows $x_j^t - e_j^t$, and as
$$ (A-I)^t[0,\ldots, 0, h,0,\ldots, 0] = [0, \ldots, 0, (A^t - I^t)h, 0,\ldots, 0] $$
taking the trace leaves us with
$$ \frac{\partial \psi}{\partial x_i}(h)
= 2(x_i^t - e_i^t)h $$
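This columnwise gradient is easy to test numerically (a NumPy sketch with $\psi(A) = \|A - I\|_F^2$ on a square matrix, perturbing a single column):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))   # columns are x_1, ..., x_n
h = rng.standard_normal(n)        # perturbation direction for one column
i = 2                             # which column to perturb
eps = 1e-6

psi = lambda M: np.linalg.norm(M - np.eye(n), 'fro')**2

Ap, Am = A.copy(), A.copy()
Ap[:, i] += eps * h               # A with x_i replaced by x_i + eps*h
Am[:, i] -= eps * h
fd = (psi(Ap) - psi(Am)) / (2 * eps)

e_i = np.eye(n)[:, i]
exact = 2 * (A[:, i] - e_i) @ h   # 2 (x_i^t - e_i^t) h
assert np.isclose(fd, exact, atol=1e-6)
```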
Best Answer
$$\begin{aligned}
\frac{\mathrm d\,\| Y - XBX^\top \|_F^2}{\mathrm d X}
&=\frac{\mathrm d\,\| Y - XBX^\top \|_F^2}{\mathrm d(Y - XBX^\top)} \circ \frac{\mathrm d\,(Y - XBX^\top)}{\mathrm d X} \\
&= \bigl[\Delta Z \mapsto 2\langle Y - XBX^\top \mid \Delta Z\rangle_F\bigr] \circ \bigl[\Delta X \mapsto -\bigl(XB\,\Delta X^\top + \Delta X\,BX^\top\bigr)\bigr] \\
&= \bigl[\Delta X \mapsto -2\langle Y - XBX^\top \mid XB\,\Delta X^\top + \Delta X\,BX^\top\rangle_F\bigr]
\end{aligned}$$
(Note the minus sign: the derivative of $Y - XBX^\top$ in the direction $\Delta X$ is $-(XB\,\Delta X^\top + \Delta X\,BX^\top)$.) To get the derivative in tensorial form, we need to move everything except $\Delta X$ to the left:
$$\begin{aligned}
&= \bigl[\Delta X \mapsto -2\langle Y - XBX^\top \mid XB\,\Delta X^\top\rangle_F - 2\langle Y - XBX^\top \mid \Delta X\,BX^\top\rangle_F\bigr] \\
&= \bigl[\Delta X \mapsto -\langle 2B^\top X^\top(Y - XBX^\top) \mid \Delta X^\top\rangle_F - \langle 2(Y - XBX^\top)XB^\top \mid \Delta X\rangle_F\bigr] \\
&= \bigl[\Delta X \mapsto -\langle 2(Y - XBX^\top)^\top XB \mid \Delta X\rangle_F - \langle 2(Y - XBX^\top)XB^\top \mid \Delta X\rangle_F\bigr] \\
&= \bigl[\Delta X \mapsto -\langle 4(Y - XBX^\top)XB \mid \Delta X\rangle_F\bigr]
\end{aligned}$$
The last step combines the two terms, which requires $Y$ and $B$ to be symmetric, so that $Y - XBX^\top$ is symmetric and $B^\top = B$.
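A numeric sketch of this result (NumPy, with $Y$ and $B$ taken symmetric as the final simplification requires): since the derivative of $Y - XBX^\top$ in direction $\Delta X$ carries a minus sign, the gradient is $-4(Y - XBX^\top)XB$, which we compare entrywise against a central finite difference.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 5, 3
X = rng.standard_normal((n, k))
B = rng.standard_normal((k, k)); B = B + B.T   # symmetric B
Y = rng.standard_normal((n, n)); Y = Y + Y.T   # symmetric Y

f = lambda X: np.linalg.norm(Y - X @ B @ X.T, 'fro')**2
grad = -4 * (Y - X @ B @ X.T) @ X @ B          # closed-form gradient

# entrywise central finite differences
eps = 1e-6
fd = np.zeros_like(X)
for idx in np.ndindex(X.shape):
    E = np.zeros_like(X)
    E[idx] = 1.0
    fd[idx] = (f(X + eps * E) - f(X - eps * E)) / (2 * eps)

assert np.allclose(fd, grad, atol=1e-5)
```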