Define the vector $y = Ls-\theta$. Using this vector and the Frobenius product, rewrite the function and compute its differential:
$$\eqalign{
f &= D:yy^T + \lambda L:L \cr
df &= D:2\,{\rm sym}(dy\,y^T) + 2\,\lambda L:dL \cr
&= 2\,D:dy\,y^T + 2\,\lambda L:dL \qquad\big({\rm assuming\ }D=D^T\big) \cr
&= 2\,Dy:dy + 2\,\lambda L:dL \cr
&= 2\,Dy:dL\,s + 2\,\lambda L:dL \cr
&= 2\,(Dys^T + \lambda L):dL \cr
}$$
Since $df=\frac{\partial f}{\partial L}:dL,\,$ the gradient is
$$\eqalign{
\frac{\partial f}{\partial L} &= 2\,(Dys^T + \lambda L) \cr
}$$
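As a sanity check, this gradient formula can be verified numerically. Here is a minimal NumPy sketch; the dimensions, the random data, and the symmetry of $D$ are my own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
D = rng.standard_normal((n, n))
D = D + D.T                      # the derivation assumes D is symmetric
s = rng.standard_normal(n)
theta = rng.standard_normal(n)
lam = 0.7
L = rng.standard_normal((n, n))

def f(L):
    y = L @ s - theta
    # f = D : yy^T + lam * (L : L)   (Frobenius products)
    return np.sum(D * np.outer(y, y)) + lam * np.sum(L * L)

# analytic gradient: df/dL = 2*(D y s^T + lam L)
y = L @ s - theta
G = 2 * (np.outer(D @ y, s) + lam * L)

# central-difference check
eps = 1e-6
G_num = np.zeros_like(L)
for i in range(n):
    for j in range(n):
        E = np.zeros_like(L)
        E[i, j] = eps
        G_num[i, j] = (f(L + E) - f(L - E)) / (2 * eps)

print(np.allclose(G, G_num, atol=1e-5))  # True
```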
Setting the gradient to zero yields
$$\eqalign{
\lambda L &= D(\theta-Ls)s^T \cr
D\theta s^T &= \lambda L+DLss^T \cr
}$$
At this point we can't solve for $L$ because it is sandwiched between two matrices, so we use a Kronecker-vec trick to solve for $L$ in vectorized form.
Applying vec to both sides of the above equation yields
$$\eqalign{
(\lambda I + ss^T\otimes D)\,{\rm vec}(L) &= {\rm vec}(D\theta s^T)\cr
}$$
This is a standard matrix-vector equation which can be solved for ${\rm vec}(L)$. Since ${\rm vec}$ stacks columns, the matrix $L$ is recovered by un-stacking the columns of the solution vector.
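The solve-and-unstack procedure can be sketched in NumPy (column-stacking vec convention throughout; the sizes and random data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
D = A @ A.T + n * np.eye(n)     # a symmetric positive-definite D (assumption)
s = rng.standard_normal(n)
theta = rng.standard_normal(n)
lam = 0.5

# (lam*I + ss^T (x) D) vec(L) = vec(D theta s^T)
K = lam * np.eye(n * n) + np.kron(np.outer(s, s), D)
rhs = (D @ np.outer(theta, s)).reshape(-1, order='F')   # column-stacking vec
vecL = np.linalg.solve(K, rhs)
L = vecL.reshape(n, n, order='F')    # un-stack columns to recover L

# the recovered L should make the gradient 2*(D y s^T + lam L) vanish
y = L @ s - theta
grad = 2 * (np.outer(D @ y, s) + lam * L)
print(np.allclose(grad, 0, atol=1e-8))  # True
```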
For ease of typing, define
$$\eqalign{
&M = \Sigma,\quad
&N = \Sigma^{-1} \\
&R = M\otimes I,\quad
&S = N\otimes I = R^{-1},\quad
&f = f(S) \\
&h = {\rm vech}(M),\quad
&v = {\rm vec}(M) \\
&D = D_2\quad \big({\rm Duplication\,Matrix}\big),\quad &v = Dh \\
}$$
You don't tell us anything about the function $f(S)$, so I'll assume you don't need help calculating its gradient $G=\left(\frac{\partial f}{\partial S}\right)$.
Before we begin, we need a few results from Wikipedia and this post, which can be summarized as
$$\eqalign{
&A\in{\mathbb R}^{m\times n},\quad B\in{\mathbb R}^{p\times q} \\ &I_k\in{\mathbb R}^{k\times k}\qquad \big({\rm Identity\,Matrix}\big) \\
&a = {\rm vec}(A),\quad b={\rm vec}(B)\\
&x={\rm vec}(A^T) = K_{m,n}\,a\quad \big({\rm Commutation\,Matrix}\big) \\
&{\rm vec}(A\otimes B)
= \left(I_n\otimes K_{q,m}\otimes I_p\right)(I_m\otimes I_n\otimes b)\,a
\\
}$$
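A quick numerical check of the vec-of-Kronecker identity (a sketch; the commutation-matrix helper and the test dimensions are my own):

```python
import numpy as np

def comm(m, n):
    # commutation matrix K_{m,n}: K @ vec(A) = vec(A^T) for A in R^{m x n}
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[i * n + j, j * m + i] = 1.0
    return K

rng = np.random.default_rng(2)
m, n, p, q = 2, 3, 4, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
a = A.reshape(-1, order='F')    # vec(A), column stacking
b = B.reshape(-1, order='F')    # vec(B)

# note that (I_m (x) I_n (x) b) a  =  a (x) b
lhs = np.kron(A, B).reshape(-1, order='F')                        # vec(A (x) B)
rhs = np.kron(np.kron(np.eye(n), comm(q, m)), np.eye(p)) @ np.kron(a, b)
print(np.allclose(lhs, rhs))  # True
```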
Using this, we can write
$$\eqalign{
{\rm vec}(R)
&= {\rm vec}\big(M\otimes I_n\big) \\
&= \Big(I_2\otimes K_{n,2}\otimes I_n\Big)
\Big(I_2\otimes I_2\otimes{\rm vec}(I_n)\Big)\,v \\
&= Qv \\
}$$
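The construction of $Q$ can be checked the same way (a NumPy sketch assuming $M$ is $2\times 2$, consistent with the $D_2$ duplication matrix; `comm` builds the commutation matrix):

```python
import numpy as np

def comm(m, n):
    # commutation matrix K_{m,n}: K @ vec(A) = vec(A^T) for A in R^{m x n}
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[i * n + j, j * m + i] = 1.0
    return K

rng = np.random.default_rng(2)
n = 3
M = rng.standard_normal((2, 2))
M = M + M.T                               # M = Sigma is symmetric
v = M.reshape(-1, order='F')              # v = vec(M), column stacking
vecI = np.eye(n).reshape(-1, order='F')   # vec(I_n)

# Q = (I_2 (x) K_{n,2} (x) I_n)(I_2 (x) I_2 (x) vec(I_n))
Q = np.kron(np.kron(np.eye(2), comm(n, 2)), np.eye(n)) \
    @ np.kron(np.eye(4), vecI[:, None])

lhs = np.kron(M, np.eye(n)).reshape(-1, order='F')   # vec(M (x) I_n)
print(np.allclose(lhs, Q @ v))  # True
```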
Start by writing the differential of the function, then perform a sequence of changes of variables
from $S\to R\to v\to h$.
$$\eqalign{
df
&= G:dS
\\&= G:(-S\,dR\,S) \\&= -SGS:dR \\
&= -\operatorname{vec}\left(SGS\right):Q\,dv \\
&= -Q^T\operatorname{vec}\left(SGS\right):dv \\
&= -Q^T\operatorname{vec}\left(SGS\right):D\,dh \\
&= -D^TQ^T\operatorname{vec}\left(SGS\right):dh \\
\frac{\partial f}{\partial h}
&= -D^TQ^T\operatorname{vec}\left(SGS\right) \\
}$$
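To check the whole chain, pick a concrete test function, say $f(S)=C:S$ for a fixed matrix $C$, so that $G=C$ (the choice of $f$, the sizes, and the helper functions below are my own assumptions), and compare the formula against finite differences in $h$:

```python
import numpy as np

def comm(m, n):
    # commutation matrix K_{m,n}: K @ vec(A) = vec(A^T) for A in R^{m x n}
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[i * n + j, j * m + i] = 1.0
    return K

rng = np.random.default_rng(4)
n = 3

def M_of_h(h):
    # M = Sigma: a 2x2 symmetric positive-definite matrix with h = vech(M)
    return np.array([[h[0], h[1]], [h[1], h[2]]])

h = np.array([2.0, 0.3, 1.5])

# duplication matrix D_2: vec(M) = Dup @ vech(M)
Dup = np.array([[1., 0., 0.],
                [0., 1., 0.],
                [0., 1., 0.],
                [0., 0., 1.]])

# Q from vec(M (x) I_n) = Q @ vec(M)
vecI = np.eye(n).reshape(-1, order='F')
Q = np.kron(np.kron(np.eye(2), comm(n, 2)), np.eye(n)) \
    @ np.kron(np.eye(4), vecI[:, None])

C = rng.standard_normal((2 * n, 2 * n))   # fixed matrix defining the test function

def f_of_h(h):
    S = np.linalg.inv(np.kron(M_of_h(h), np.eye(n)))   # S = R^{-1}
    return np.sum(C * S)                               # f(S) = C : S, so G = C

# analytic gradient:  df/dh = -Dup^T Q^T vec(S G S)
S = np.linalg.inv(np.kron(M_of_h(h), np.eye(n)))
g_analytic = -Dup.T @ Q.T @ (S @ C @ S).reshape(-1, order='F')

# central-difference check
eps = 1e-6
g_num = np.array([(f_of_h(h + eps * e) - f_of_h(h - eps * e)) / (2 * eps)
                  for e in np.eye(3)])
print(np.allclose(g_analytic, g_num, atol=1e-6))  # True
```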
The trace/Frobenius product
$\;A:B = {\rm Tr}\big(A^TB\big)\;$
is used in several steps.
The trace's cyclic property allows terms in such products to be rearranged in many ways, e.g.
$$\eqalign{
A:B &= A^T:B^T &= B:A \\
A:BC &= B^TA:C &= AC^T:B \\
}$$
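These rearrangement rules are easy to confirm numerically (a small sketch; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 5))

def frob(X, Y):
    # Frobenius product  X : Y = Tr(X^T Y)
    return np.trace(X.T @ Y)

print(np.isclose(frob(A, B @ C), frob(B.T @ A, C)))   # True
print(np.isclose(frob(A, B @ C), frob(A @ C.T, B)))   # True
```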
Several steps also made use of the fact that $(M,N)$ and therefore $(R,S)$ are symmetric matrices.
Best Answer
When vectorizing the $m\times n$ matrix $X$, you obtain a vector $v =\mathrm{vec}(X)$ whose $i$th element (with 1-based indexing) is given by $v_i = X_{(i-1)\%m+1,\ (i-1)//m+1},$ where $//$ denotes integer division and $\%$ the remainder of the integer division.
Now consider what you mean by $$\frac{\partial AXB}{\partial X}.$$ You are taking the derivative of an object with two indices with respect to an object with two indices, so you are looking at all terms of the form $$\frac{\partial [AXB]_{i,j}} {\partial X_{k,l}}.$$ The matrix derivative vectorizes this indexing along two indices: the row is the position along $(i,j)$, the column is the position along $(k,l)$. Writing $[AXB]_{i,j}=\sum_{k,l}A_{i,k}X_{k,l}B_{l,j}$ gives $$ \frac{\partial [AXB]_{i,j}} {\partial X_{k,l}}= A_{i,k}B_{l,j} = (B^T\otimes A)_{p(j-1)+i,\ m(l-1)+k},$$ where $A\in{\mathbb R}^{p\times m}$, $B\in{\mathbb R}^{n\times q}$, and the indexing reverses the vectorization operation.
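This is the familiar identity $\partial\,\mathrm{vec}(AXB)/\partial\,\mathrm{vec}(X)=B^T\otimes A$, which can be confirmed numerically (a sketch; the sizes and data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
p, m, n, q = 3, 2, 4, 2
A = rng.standard_normal((p, m))
X = rng.standard_normal((m, n))
B = rng.standard_normal((n, q))

# Jacobian of vec(AXB) w.r.t. vec(X): the identity says it equals B^T (x) A
J = np.kron(B.T, A)

# central-difference check, column-stacking vec throughout
eps = 1e-6
J_num = np.zeros((p * q, m * n))
for col in range(m * n):
    E = np.zeros(m * n)
    E[col] = eps
    dX = E.reshape(m, n, order='F')
    J_num[:, col] = ((A @ (X + dX) @ B - A @ (X - dX) @ B)
                     / (2 * eps)).reshape(-1, order='F')

print(np.allclose(J, J_num))  # True
```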