I'm no physicist so I can't speak exactly to what they do or how they think of things, but here is some related math. As you suggested, a $(1,1)$ tensor is an element of $V \otimes V^*$ for some finite-dimensional vector space $V$. So let's talk about this space $V \otimes V^*$ and some of the structure it has; I will address the different actions you mentioned in a somewhat different order.
-First and foremost, the definition of a dual vector space means that we have a bilinear map from $V \times V^*$ to the ground field $K$. If $\vec{e}$ is a vector in $V$ and $f \in V^*$ is a linear map from $V$ to $K$, this bilinear map just sends $(\vec{e},f)$ to $f(\vec{e})$. Bilinear maps from a pair of vector spaces are the same thing as linear maps from the tensor product; this is how tensor products are (usually) defined by mathematicians. Hence we can view this pairing as a linear map $V \otimes V^* \rightarrow K$. This is the trace or contraction map; it is also often called the evaluation map by mathematicians.
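As a quick coordinate sketch (using a dual basis, which is added notation rather than part of the original answer): if $\vec{e}_1,\dots,\vec{e}_n$ is a basis of $V$ with dual basis $f^1,\dots,f^n$ of $V^*$ (so $f^j(\vec{e}_i)=\delta^j_i$), then a general element of $V \otimes V^*$ can be written as $T = T^i_j\,\vec{e}_i \otimes f^j$ (summing over repeated indices), and evaluation sends it to
$$ T \;\longmapsto\; T^i_j\, f^j(\vec{e}_i) \;=\; T^i_j\,\delta^j_i \;=\; T^i_i, $$
which is exactly the trace of the component matrix $(T^i_j)$.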
-Next, there is a natural map from $V \otimes V^*$ into $End(V)$, the space of linear maps from $V$ to itself. In order to define this map it is enough to say how a $(1,1)$ tensor acts on a vector $\vec{v}$ and extend linearly. This is just given by $(\vec{e}\otimes f)(\vec{v}) = f(\vec{v})\vec{e}$. More coordinate-freely, this map is given by the evaluation map from above applied to the $V^*$ component along with the copy of $V$ corresponding to the input vector $\vec{v}$, followed by the scalar multiplication map from $V \otimes K$ to $V$. This map is defined and injective for any vector space $V$, but for infinite-dimensional vector spaces it is not surjective; its image is the space of finite-rank linear maps from $V$ to $V$. So this is how we get a linear map from $V$ to $V$, and since $V$ and $V^*$ play essentially the same role (for finite-dimensional vector spaces) we can also get a linear map from $V^*$ to itself in essentially the same way. Also note that if we choose a basis $\vec{e}_1,\dots,\vec{e}_n$ for $V$ then $End(V)$ can be identified with $n \times n$ matrices, and the evaluation map from above is just the trace.
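To make this concrete (a small example in $V = K^2$ with the standard basis, not part of the original answer): if $\vec{e} = \binom{a}{b}$ and $f$ is the row covector $(c \;\; d)$, then $\vec{e}\otimes f$ acts as the rank-one matrix
$$ \begin{pmatrix} ac & ad \\ bc & bd \end{pmatrix}, \qquad \operatorname{tr}\begin{pmatrix} ac & ad \\ bc & bd \end{pmatrix} = ac + bd = f(\vec{e}), $$
illustrating both that a $(1,1)$ tensor gives an honest linear map and that, under this identification, the evaluation map becomes the matrix trace.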
-Finally, we get to the scalar multiplication map, which is often called coevaluation by mathematicians. From the previous part, if $V$ is finite dimensional we have an isomorphism between $V\otimes V^*$ and $End(V)$. Inside here there is a distinguished element: the identity map $\vec{v} \mapsto \vec{v}$. Coevaluation is just the map $K \rightarrow V \otimes V^*$ that sends $1$ to the identity map under the identification of $V \otimes V^*$ with $End(V)$. Again this part relies heavily on $V$ being finite dimensional, as the identity map only has finite rank (and is hence in the image of $V \otimes V^*$) in that case.
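In coordinates (again using a basis $\vec{e}_1,\dots,\vec{e}_n$ and dual basis $f^1,\dots,f^n$, notation added here for illustration), coevaluation sends
$$ 1 \;\longmapsto\; \sum_{i=1}^n \vec{e}_i \otimes f^i, $$
the element whose component matrix is the identity matrix $\delta^i_j$. As a consistency check, applying the evaluation map to it gives $\sum_i f^i(\vec{e}_i) = n = \dim V$, i.e. the trace of the identity.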
Note that all of these maps are canonically defined (i.e. independent of the basis you choose), which means that if you do any calculation involving only these maps, the answer you get won't depend on how you choose coordinates.
Best Answer
$\delta^i_j$ is 0 if $i\ne j$ and it is 1 if $i = j$. So if you "contract" it with an indexed quantity $v_i$, i.e. evaluate the sum $\sum_{i=1}^N \delta^i_j v_i$, all of the terms disappear except the one where $i=j$, and the result is $v_j$. The summation convention simplifies the notation by implicitly summing over repeated indices, so the equality
$$ \sum_{i=1}^N \delta^i_j v_i = v_j$$
can be written as
$$ \delta^i_j v_i = v_j$$
where the summation over $i$ is implied. In effect, multiplying $v_i$ by $\delta^i_j$ and summing over $i$ replaces the index $i$ in $v_i$ by the index $j$.
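For instance, with $N = 3$ and the free index fixed at $j = 2$:
$$ \delta^i_2 v_i = \delta^1_2 v_1 + \delta^2_2 v_2 + \delta^3_2 v_3 = 0\cdot v_1 + 1\cdot v_2 + 0\cdot v_3 = v_2. $$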
In the expression
$$\delta^i_j v_i u^j$$
both indices are repeated, that is, there are two implied summations: one over $i$ and one over $j$. What the abbreviation really means is
$$ \sum_{i=1}^N \sum_{j=1}^N \delta^i_j v_i u^j $$
You can do the sums in either order, so let's use the first part of the answer to do the summation over $i$:
$$ \sum_{i=1}^N \sum_{j=1}^N \delta^i_j v_i u^j = \sum_{j=1}^N v_j u^j $$
But using the summation convention, the RHS can be abbreviated as $v_j u^j$, the sum of the products of the corresponding components of the two vectors, i.e. the dot product.
The summation convention is convenient, but like any abbreviation, it hides things under the rug, so to speak. So if you find yourself confused, just expand it: write out the explicit sums; for small $N$ (say $N=3$), you can even write each term out explicitly, as in the example below. That should clarify things completely.
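To illustrate, here is that expansion carried out for $N = 2$:
$$ \delta^i_j v_i u^j = \delta^1_1 v_1 u^1 + \delta^1_2 v_1 u^2 + \delta^2_1 v_2 u^1 + \delta^2_2 v_2 u^2 = v_1 u^1 + v_2 u^2 = v_j u^j, $$
where only the terms with $i = j$ survive.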