For the first question alone (without context), I'm going to prove something more general first (see the $\boxed{\textbf{EDIT}}$ below for what was actually asked):
Suppose we have three matrices $A,X,B$ that are $n\times p$, $p\times r$, and $r\times m$ respectively. Any element $w_{ij}$ of their product $W=AXB$ is expressed by:
$$w_{ij}=\sum_{h=1}^r\sum_{t=1}^pa_{it}x_{th}b_{hj}$$
Then we can show that: $$s=\frac {\partial w_{ij}}{\partial x_{dc}}=a_{id}b_{cj}$$
(because all terms, except the one multiplied by $x_{dc}$, vanish)
One can then deduce (in an almost straightforward way) that the matrix $S$ collecting all such partial derivatives is the Kronecker product of $B^T$ and $A$, so that: $$\frac {\partial AXB}{\partial X}=B^T\otimes A$$
Replacing either $A$ or $B$ with the appropriate identity matrix gives you the derivative you want.
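Since this identity is easy to misstate (the order of the Kronecker factors depends on the $\operatorname{vec}$ convention), here is a quick numerical sanity check. This is a minimal sketch of my own in NumPy, not part of the question; it assumes the column-major convention, under which $\operatorname{vec}(AXB)=(B^T\otimes A)\operatorname{vec}(X)$:

```python
# Numerical check of vec(AXB) = (B^T ⊗ A) vec(X) for random matrices.
# The shapes (n, p, r, m) are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, p, r, m = 3, 4, 5, 2
A = rng.standard_normal((n, p))
X = rng.standard_normal((p, r))
B = rng.standard_normal((r, m))

# vec(.) stacks columns (Fortran order), the convention under which
# the identity vec(AXB) = (B^T ⊗ A) vec(X) holds.
def vec(M):
    return M.flatten(order="F")

lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
print(np.allclose(lhs, rhs))  # True
```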
$$\boxed{\textbf{EDIT}}$$
Upon reading the article you added (and after some sleep!), I've noticed that $dD$ is not $\partial D$ in their notation, but rather $\dfrac {\partial f}{\partial D}$ where $f$ is a certain function of $W$ and $X$ while $D=WX$. This means that the first expression you're having problems with is $$\frac{\partial f}{\partial W}=\frac{\partial f}{\partial D}X^T$$
This matches the convention the author stated at the beginning: he uses the (strictly speaking incorrect) expression "gradient on" something to mean "partial derivative with respect to" that same thing.
So any element of $\partial f/\partial W$ can be written as $\partial f/\partial W_{ij}$, and any element of $D$ is:
$$D_{ij}=\sum_{k=1}^qW_{ik}X_{kj}$$
By the chain rule, expanding the total differential of $f$ in terms of the entries of $D$, we can write $$df=\sum_i\sum_j \frac{\partial f}{\partial D_{ij}}dD_{ij}$$ so that
$$\frac{\partial f}{\partial W_{dc}}=\sum_{i,j} \frac{\partial f}{\partial D_{ij}}\frac{\partial D_{ij}}{\partial W_{dc}}=\sum_j \frac{\partial f}{\partial D_{dj}}\frac{\partial D_{dj}}{\partial W_{dc}}$$
This last equality is true since all terms with $i\neq d$ drop off.
Due to the product $D=WX$, we have $$\frac{\partial D_{dj}}{\partial W_{dc}}=X_{cj}$$ and so $$\frac{\partial f}{\partial W_{dc}}=\sum_j \frac{\partial f}{\partial D_{dj}}X_{cj}$$ Writing $X_{cj}$ as $X_{jc}^T$, this becomes $$\frac{\partial f}{\partial W_{dc}}=\sum_j \frac{\partial f}{\partial D_{dj}}X_{jc}^T$$
This means that the matrix $\partial f/\partial W$ is the product of $\partial f/\partial D$ and $X^T$. I believe this is what you're trying to grasp, and what's asked of you in the last paragraph of the screenshot. Also, as the paragraph after the screenshot hints, you could have started with small matrices, worked this out by hand, noticed the pattern, and then generalized, as I attempted to do directly in the proof above. The same reasoning proves the second expression as well.
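If it helps, here is a minimal numerical sketch of this result (my own addition, not from the article). I pick an arbitrary test function $f=\sum_{ij}\sin(D_{ij})$ with $D=WX$, so that $\partial f/\partial D=\cos(D)$ elementwise, and compare $(\partial f/\partial D)\,X^T$ against central finite differences:

```python
# Numerical check of ∂f/∂W = (∂f/∂D) X^T for D = WX, with the arbitrary
# test function f = sum(sin(D)), so ∂f/∂D = cos(D) elementwise.
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))

def f(W):
    return np.sum(np.sin(W @ X))

# Chain-rule result derived above: (∂f/∂D) X^T with ∂f/∂D = cos(WX).
grad_formula = np.cos(W @ X) @ X.T

# Central finite-difference approximation of ∂f/∂W_{dc}, entry by entry.
eps = 1e-6
grad_fd = np.zeros_like(W)
for d in range(W.shape[0]):
    for c in range(W.shape[1]):
        E = np.zeros_like(W)
        E[d, c] = eps
        grad_fd[d, c] = (f(W + E) - f(W - E)) / (2 * eps)

print(np.allclose(grad_formula, grad_fd, atol=1e-5))  # True
```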
For $f=x^TAx$, i.e. $f=A_{ij}x_ix_j$, you can write in index notation
$$\frac{\partial f}{\partial x_k}=A_{kj}x_j + A_{ik}x_i$$
where a summation is implied by the presence of a repeated index.
You can also rename a summation index (a.k.a. a dummy index) without altering the result, e.g. $x_iy_i = x_ky_k$.
So let's change both dummy indices to $p$, yielding
$$\begin{aligned}
\frac{\partial f}{\partial x_k} &= A_{kp}x_p + A_{pk}x_p \\
&= (A_{kp}+A_{pk})\,x_p
\end{aligned}$$
which in matrix notation is written as $\,(A+A^T)\,x$.
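A short numerical check of this result (a sketch of mine, using an arbitrary size and a deliberately non-symmetric $A$ so that the two terms differ):

```python
# Numerical check that the gradient of f(x) = x^T A x is (A + A^T) x.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))   # deliberately non-symmetric
x = rng.standard_normal(4)

def f(x):
    return x @ A @ x

grad_formula = (A + A.T) @ x

# Central finite differences along each coordinate direction.
eps = 1e-6
grad_fd = np.array([
    (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
    for e in np.eye(4)
])

print(np.allclose(grad_formula, grad_fd, atol=1e-5))  # True
```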
Perhaps some help computing the partial derivatives would be appreciated; it is always best to be explicit when one is a bit confused by heavy notation. Write $A = (a_{ij})$, $x = (x_1,\dots,x_n)^{\top}$ and $b = (b_1,\dots, b_m)^{\top}$, assuming $A$ is an $m \times n$ matrix. Then the $i^{\text{th}}$ component of $Ax-b$ is $$ (Ax-b)_i = \left( \sum_{j=1}^n a_{ij} x_j \right) - b_i $$ so that $$ (Ax-b)^{\top} (Ax-b) = \sum_{i=1}^m (Ax-b)_i^2 = \sum_{i=1}^m \left( \left( \sum_{j=1}^n a_{ij} x_j \right) - b_i \right)^2. $$

Suppose you want to compute the derivative with respect to $x_k$, $1 \le k \le n$ (I choose $k$ because $i$ and $j$ are already in use as subscripts). Then $$ \frac{\partial}{\partial x_k} (Ax-b)^{\top} (Ax-b) = \sum_{i=1}^m \frac{\partial}{\partial x_k} \left( \left( \sum_{j=1}^n a_{ij} x_j \right) - b_i \right)^2 = \sum_{i=1}^m 2 \left( \left( \sum_{j=1}^n a_{ij}x_j \right) - b_i \right) a_{ik}. $$

In particular, letting $A_k = (a_{1k},a_{2k},\dots,a_{mk})$ denote the $k^{\text{th}}$ column of $A$, we get $$ \frac{\partial}{\partial x_k} (Ax-b)^{\top} (Ax-b) = 2 \langle A_k, Ax-b \rangle $$ where $\langle - , - \rangle$ denotes the inner product. You can use convexity arguments to show that any critical point is a minimizer in this case, although the minimizer will not always be unique, even when $m=n$; it suffices for $A$ to be singular for this to happen.
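To make the column-by-column formula concrete, here is a small numerical sketch (my addition, not part of the original answer): stacking $2\langle A_k, Ax-b\rangle$ over $k$ gives the full gradient $2A^{\top}(Ax-b)$, which we can compare against finite differences.

```python
# Numerical check that the gradient of f(x) = (Ax-b)^T (Ax-b) is
# 2 A^T (Ax - b), i.e. its k-th entry is 2 <A_k, Ax-b>.
import numpy as np

rng = np.random.default_rng(3)
m, n = 5, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x = rng.standard_normal(n)

def f(x):
    return (A @ x - b) @ (A @ x - b)

grad_formula = 2 * A.T @ (A @ x - b)   # stacks 2<A_k, Ax-b> over k

# Central finite differences along each coordinate direction.
eps = 1e-6
grad_fd = np.array([
    (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
    for e in np.eye(n)
])

print(np.allclose(grad_formula, grad_fd, atol=1e-5))  # True
```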
Hope that helps,