My favorite way to interpret the trace is as the average value of an associated quadratic form. Here's how that works.
Let $V$ be an $n$-dimensional vector space, and let $T$ be a tensor on $V$. First let's consider the case in which $T$ is a tensor of type $(1,1)$, which we can also interpret as a linear map from $V$ to itself. Choose an inner product $\left< \cdot,\cdot\right>$ on $V$, and define the associated quadratic form $Q\colon V\to\mathbb R$ by
$$Q(x) = \left< x, Tx \right>.$$
Then a computation shows that the trace of $T$ is $n$ times the average value of $Q$ over the unit sphere in $V$.
(Here's a sketch of how this computation is done: Choose an orthonormal basis for $V$ and express $x$ in terms of that basis as an $n$-tuple $(x^1,\dots,x^n)$, with $(x^1)^2 + \dots + (x^n)^2 = 1$. Then
$$\int_{\mathbb S^{n-1}} Q(x)\,dA =
\sum_{i,j}T_i^j\int_{\mathbb S^{n-1}} x^ix^j\,dA.
$$
The integrals on the right with $i\ne j$ are all zero by symmetry (the integrand is odd under $x^i \mapsto -x^i$), while the ones with $i=j$ are all equal, as can be seen by renaming the variables; since $\sum_i (x^i)^2 = 1$ on the sphere, adding them all up yields the total area of $\mathbb S^{n-1}$, so each integral is $1/n$ of that area.)
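The identity can be checked numerically. Here is a minimal Monte Carlo sketch in Python with NumPy, assuming the standard inner product on $\mathbb R^n$ (the names and the sample size are illustrative):

```python
import numpy as np

# Monte Carlo check of: trace(T) = n * (average of Q(x) = <x, T x> over the unit sphere).
rng = np.random.default_rng(0)
n = 4
T = rng.standard_normal((n, n))          # an arbitrary linear map V -> V

# Uniform samples on S^{n-1}: normalize standard Gaussian vectors.
g = rng.standard_normal((200_000, n))
x = g / np.linalg.norm(g, axis=1, keepdims=True)

Q = np.einsum('ki,ij,kj->k', x, T, x)    # Q(x) = <x, T x> for each sample
estimate = n * Q.mean()

print(estimate, np.trace(T))             # the two agree up to sampling error
```

With a couple hundred thousand samples the estimate typically lands within a few hundredths of the exact trace.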
It's interesting to note that, because the trace is independent of basis, this result doesn't depend on the inner product chosen, even though the quadratic form will change depending on the inner product.
The quadratic form may seem to capture only part of the information encoded in $T$. But note that once an inner product is chosen, there's a one-to-one correspondence between linear maps $T\colon V\to V$ and bilinear forms $B_T\colon V\times V\to\mathbb R$, given by $B_T(x,y) = \left<x,Ty\right>$. Each such bilinear form decomposes into a symmetric part and a skew-symmetric part: $B_T = B_T^{\text{sym}}+B_T^{\text{skew}}$. The trace of the skew part is zero, so the trace only "sees" the symmetric part; and the symmetric part can be reconstructed from the quadratic form by the polarization identity $B_T^{\text{sym}}(x,y) = \tfrac14(Q(x+y)-Q(x-y))$.
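The polarization identity is easy to verify in coordinates. A small sketch, again assuming the standard inner product so that $B_T(x,y) = x^\top T y$ (all names illustrative):

```python
import numpy as np

# Polarization recovers the symmetric part of B_T from the quadratic form Q alone.
rng = np.random.default_rng(1)
n = 5
T = rng.standard_normal((n, n))
Q = lambda v: v @ T @ v                 # the quadratic form Q(v) = <v, T v>

x, y = rng.standard_normal(n), rng.standard_normal(n)
B_sym = 0.5 * (x @ T @ y + y @ T @ x)   # symmetric part of B_T at (x, y)
polar = 0.25 * (Q(x + y) - Q(x - y))    # polarization identity
```

Expanding $Q(x+y)-Q(x-y)$ by bilinearity shows the two quantities agree exactly, and the skew part of $B_T$ drops out.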
Now if $T$ is a tensor of type $(k,l)$, the contraction on any pair of indices yields a tensor of type $(k-1,l-1)$, whose value on any set of arguments $x_1,\dots,x_{k-1}, x_1^*,\dots,x_{l-1}^*$ is just $n$ times the average value of the quadratic form determined by the $(1,1)$-tensor $T(x_1,\dots,x_{k-1},\ \cdot\ , x_1^*,\dots,x_{l-1}^*,\ \cdot\ )$.
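In components, this says that contracting one upper index against one lower index is, slot by slot, the trace of the induced $(1,1)$-tensor. A sketch with a $(2,2)$-type tensor (the index layout is an illustrative convention):

```python
import numpy as np

# Contracting the first upper index of T^{ij}_{kl} with the first lower index
# gives S^j_l; for each fixed (j, l) this is the trace of a (1,1)-tensor.
rng = np.random.default_rng(2)
n = 3
T = rng.standard_normal((n, n, n, n))   # components T^{i j}_{k l}

S = np.einsum('ijil->jl', T)            # contract i (1st upper) with k (1st lower)

# For each choice of the remaining arguments (j, l), the induced (1,1)-tensor
# is the matrix T[:, j, :, l]; its trace matches S[j, l].
S_check = np.array([[np.trace(T[:, j, :, l]) for l in range(n)]
                    for j in range(n)])
```

So the "trace as an average over the sphere" picture applies verbatim to each of these induced $(1,1)$-tensors.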
In the literature the contraction maps are often denoted by $C$ with various additional decorations, the amount of which depends on the context. See, e.g., A. Gray, Tubes, p. 56, where these maps are used for somewhat special tensors, so the notation there is simpler.
In general, there is a whole family of uniquely defined maps
$$
C^{(r,s)}_{p,q} \colon \otimes^{r}_{s} V \to \otimes^{r-1}_{s-1} V
$$
which are collectively called tensor contractions ($1 \le p \le r, 1 \le q \le s$).
These maps are uniquely characterized by making the following diagrams commutative:
$$
\require{AMScd}
\begin{CD}
\times^{r}_{s} V @> {P^{(r,s)}_{p,q}} >> \times^{r-1}_{s-1} V\\
@V{\otimes^{r}_{s}}VV @VV{\otimes^{r-1}_{s-1}}V \\
\otimes^{r}_{s} V @>{C^{(r,s)}_{p,q}}>> \otimes^{r-1}_{s-1} V
\end{CD}
$$
Explanations are in order.
Recall that the tensor products $\otimes^{r}_{s} V$ are equipped with the universal maps
$$
\otimes^{r}_{s} \colon \times^{r}_{s} V \to \otimes^{r}_{s} V
$$
where $\times^{r}_{s} V := ( \times^r V) \times (\times^s V^*)$.
Besides that, there is a canonical pairing $P$ between a vector space $V$ and its dual:
$$
P \colon V \times V^* \to \mathbb{R} \colon (v, \omega) \mapsto \omega(v)
$$
Notice that the map $P$ is bilinear; it can be extended to a family of maps
$$
P^{(r,s)}_{p,q} \colon \times^{r}_{s} V \to \times^{r-1}_{s-1} V
$$
by the formula:
$$
P^{(r,s)}_{p,q} (v_1, \dots, v_p, \dots, v_r, \omega_1, \dots, \omega_q, \dots, \omega_s) = \omega_q (v_p) (v_1, \dots, \widehat{v_p}, \dots, v_r, \omega_1, \dots, \widehat{\omega_q}, \dots, \omega_s)
$$
where a hat means omission.
Since the composite maps $\otimes^{r-1}_{s-1} \circ P^{(r,s)}_{p,q} \colon \times^{r}_{s} V \to \otimes^{r-1}_{s-1} V$ are multilinear, the universal property of the maps $\otimes^{r}_{s}$ implies that each of them factors through a uniquely determined linear map
$$
C^{(r,s)}_{p,q} \colon \otimes^{r}_{s} V \to \otimes^{r-1}_{s-1} V
$$
satisfying
$$
C^{(r,s)}_{p,q} \circ \otimes^{r}_{s} = \otimes^{r-1}_{s-1} \circ P^{(r,s)}_{p,q},
$$
which is precisely the commutativity of the diagram above.
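On decomposable tensors the defining property is concrete: $C^{(2,1)}_{1,1}(v_1 \otimes v_2 \otimes \omega) = \omega(v_1)\, v_2$. A coordinate sketch, identifying covectors with coefficient vectors in the dual basis (names illustrative):

```python
import numpy as np

# Verify C^{(2,1)}_{1,1}(v1 (x) v2 (x) omega) = omega(v1) * v2 on a simple tensor,
# i.e. the commutativity C o (x) = (x) o P of the diagram on decomposables.
rng = np.random.default_rng(3)
n = 3
v1, v2 = rng.standard_normal(n), rng.standard_normal(n)
omega = rng.standard_normal(n)          # a covector, via the dual basis

tensor = np.einsum('i,j,k->ijk', v1, v2, omega)   # v1 (x) v2 (x) omega
left = np.einsum('iji->j', tensor)                # apply C^{(2,1)}_{1,1}
right = (omega @ v1) * v2                         # omega(v1) * v2
```

Since simple tensors span $\otimes^{2}_{1} V$ and $C^{(2,1)}_{1,1}$ is linear, checking the identity on decomposables determines the map completely.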
Calculating the component values of a tensor contraction simply involves pairing one contravariant (upper) index of one tensor with one covariant (lower) index of the other, setting the two equal, and summing over the shared index.
So the component values of the contracted tensor $C^{abc\dots}_{\dots xyz}$ are given in terms of the component values $A^{abc \dots n}$ and $B_{n \dots xyz}$ by
$C^{abc\dots}_{\dots xyz} = \sum_{n=1}^{k}A^{abc\dots n}B_{n\dots xyz}$
where $k$ is the dimension of the underlying vector space. With the Einstein summation convention, the sum operator is omitted (the summation being implied by the repeated index label $n$), so the same expression is written as
$C^{abc\dots}_{\dots xyz} = A^{abc\dots n}B_{n\dots xyz}$
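For the simplest case, $A$ with components $A^{a n}$ and $B$ with components $B_{n x}$, the contraction $C^{a}_{\ x} = A^{a n} B_{n x}$ is just the ordinary matrix product, which is easy to check (names illustrative):

```python
import numpy as np

# The contraction C^a_x = A^{an} B_{nx} is ordinary matrix multiplication.
rng = np.random.default_rng(4)
k = 3                                       # dimension of the vector space
A = rng.standard_normal((k, k))
B = rng.standard_normal((k, k))

C_explicit = np.einsum('an,nx->ax', A, B)   # repeated index n is summed
```

`np.einsum` implements exactly the Einstein convention: any index label appearing twice is summed over.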