$\delta^i_j$ is 0 if $i\ne j$ and 1 if $i = j$. So if you "contract" it with an indexed quantity $v_i$, i.e. evaluate the sum $\sum_{i=1}^N \delta^i_j v_i$, all of the terms vanish except the one where $i=j$, and the result is $v_j$. The summation convention simplifies the notation by implicitly summing over repeated indices, so the equality
$$ \sum_{i=1}^N \delta^i_j v_i = v_j$$
can be written as
$$ \delta^i_j v_i = v_j$$
where the summation over $i$ is implied. In effect, multiplying $v_i$ by $\delta^i_j$ and summing over $i$ replaces the index $i$ in $v_i$ by the index $j$.
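This index-replacement rule is easy to verify numerically. Here is a small sketch using NumPy's `einsum` (the vector values are arbitrary examples):

```python
import numpy as np

N = 4
delta = np.eye(N)              # Kronecker delta as the N x N identity matrix
v = np.array([2.0, -1.0, 5.0, 3.0])

# Contract delta with v over the index i: sum_i delta[i, j] * v[i]
contracted = np.einsum("ij,i->j", delta, v)

# The contraction just relabels the index: the result is v itself.
assert np.allclose(contracted, v)
```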
In the expression
$$\delta^i_j v_i u^j$$
both indices are repeated; that is, there are two implied summations, one over $i$ and one over $j$. What the abbreviation really means is
$$ \sum_{i=1}^N \sum_{j=1}^N \delta^i_j v_i u^j $$
You can do the sums in either order, so let's use the result above, $\delta^i_j v_i = v_j$, to do the summation over $i$:
$$ \sum_{i=1}^N \sum_{j=1}^N \delta^i_j v_i u^j = \sum_{j=1}^N v_j u^j $$
But using the summation convention, the RHS can be abbreviated as $v_j u^j$, the sum of the products of the corresponding components of the two vectors, i.e. the dot product.
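As a numerical sanity check, the double sum really does collapse to the dot product. A sketch with arbitrary example vectors:

```python
import numpy as np

N = 3
delta = np.eye(N)              # Kronecker delta
v = np.array([1.0, 2.0, 3.0])
u = np.array([4.0, 5.0, 6.0])

# The double sum: sum_i sum_j delta[i, j] * v[i] * u[j]
double_sum = np.einsum("ij,i,j->", delta, v, u)

# ...collapses to the single sum: sum_j v[j] * u[j], the dot product.
assert np.isclose(double_sum, np.dot(v, u))
```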
The summation convention is convenient but, like any abbreviation, it sweeps things under the rug, so to speak. So if you find yourself confused, just expand it: write out the explicit sums. For small $N$ (say $N=3$), you can even write each term out explicitly. That should clarify things completely.
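Following that advice, here is what the explicit term-by-term expansion of $\sum_{i=1}^N \delta^i_j v_i$ looks like for $N=3$ (plain Python, arbitrary example values):

```python
# Expand sum_i delta[i][j] * v[i] for N = 3, writing every term out.
N = 3
delta = [[1 if i == j else 0 for j in range(N)] for i in range(N)]
v = [7, 8, 9]

for j in range(N):
    terms = [delta[i][j] * v[i] for i in range(N)]
    # For each j, only the i = j term survives, e.g. j = 0 gives [7, 0, 0].
    print(f"j={j}: terms={terms}, sum={sum(terms)}")

# The full contraction reproduces v component by component.
assert all(sum(delta[i][j] * v[i] for i in range(N)) == v[j] for j in range(N))
```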
What you're doing when calculating the value of $(AD)_{lj}$ is equivalent to doing this:
$$
D_{ij} = \delta_{ij}d_j \color{red}{\stackrel{!!}{=}} d_i
$$
which shows the problem much earlier than where you noticed it: contracting over the repeated index in $\delta_{ij}d_j$, even though $j$ is a free index, is the issue here. Einstein's notation is useful, but that doesn't mean you need to use it everywhere; here's an option:
\begin{eqnarray}
(A D)_{lj} &=& \sum_{i}A_{li}\color{blue}{D_{ij}} = \sum_iA_{li}\color{blue}{\delta_{ij}d_j} = A_{lj}d_j ~~~\mbox{(sum not implied)}
\end{eqnarray}
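That computation checks out numerically: with $D_{ij} = \delta_{ij}d_j$ (a diagonal matrix, no sum), the product $(AD)_{lj}$ equals $A_{lj}d_j$ with no summation over $j$. A sketch with arbitrary example values for `A` and `d`:

```python
import numpy as np

N = 3
A = np.arange(1.0, N * N + 1).reshape(N, N)   # arbitrary 3 x 3 matrix
d = np.array([10.0, 20.0, 30.0])
D = np.diag(d)                 # D[i, j] = delta[i, j] * d[j]  (no sum over j)

# (A D)[l, j] = sum_i A[l, i] * D[i, j] = A[l, j] * d[j]  (sum over j NOT implied)
AD = A @ D

# Broadcasting A * d multiplies column j of A by d[j], matching the formula.
assert np.allclose(AD, A * d)
```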
Mainly, the Kronecker delta makes sums collapse, making the two indices equal everywhere else in the expression. For example: $$\delta_j^i \delta^j_i = \delta_i^i = n,$$ and $$\delta^{\color{red}{a}}_{\color{blue}{b}}g_{c\color{red}{a}}g^{bd}\delta^{c}_{d} = g_{c\color{blue}{b}}g^{bd}\delta^c_d.$$

I'll use colors again to illustrate how this computation proceeds: $$g_{\color{red}{c}b}g^{bd}\delta_{\color{blue}{d}}^{\color{red}{c}} = g_{\color{blue}{d}b}g^{bd} \stackrel{(\ast)}{=} \delta_d^d = n,$$ where in $(\ast)$ I used the definition of the inverse metric tensor.
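Both identities can be checked numerically. A sketch where an arbitrary symmetric invertible matrix stands in for the metric $g_{ab}$ and its matrix inverse for $g^{ab}$:

```python
import numpy as np

n = 3
# Arbitrary symmetric, invertible matrix standing in for the metric g_{ab}.
g = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
g_inv = np.linalg.inv(g)       # stands in for the inverse metric g^{ab}

# delta^i_j delta^j_i = delta^i_i = n:
delta = np.eye(n)
assert np.isclose(np.einsum("ij,ji->", delta, delta), n)

# g_{db} g^{bd} = delta_d^d = n, since g g^{-1} is the identity
# and contracting both index pairs takes its trace.
assert np.isclose(np.einsum("db,bd->", g, g_inv), n)
```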