The point is that using the same index as both a free index and a dummy index is ambiguous:
$$ A_iB_{ii} \stackrel{?}{=} A_iB_{jj} = A_i \sum_j B_{jj}$$
or
$$ A_iB_{ii} \stackrel{?}{=} A_jB_{ji} = \left(\sum_j A_jB_j\right)_i ? $$
This is a problem: we can't have both interpretations at once.
Added after Trapu's edit. So, to be clear, I will use non-Einstein notation on the r.h.s. of the equations below: the point here is that $A_iB_{ii}$ cannot be interpreted meaningfully by treating one pair of $i$'s as dummies (summed over) and the other as free (not summed).
$$ A_iB_{ii} = \sum_{j=1}^n A_jB_{ji} = A_1B_{1i}+A_2B_{2i}+ \cdots +A_nB_{ni} \qquad (I.) $$
versus:
$$ A_iB_{ii} = \sum_{j=1}^n A_iB_{jj} = A_i\sum_{j=1}^n B_{jj} = A_i\left(B_{11}+B_{22}+ \cdots +B_{nn} \right) \qquad (II.) $$
Expressions (I.) and (II.) are the two reasonable interpretations of $A_iB_{ii}$ if just one index is taken to be free. But these are not equal.
For example, take $B_{11}=1$, $B_{22}=-1$, $B_{12}=B_{21}=0$ and $A_1=A_2=1$; then
$$ (I.) \qquad A_1B_{1i}+A_2B_{2i} = B_{1i}+B_{2i} = \begin{cases} 1 & i=1 \\ -1 & i=2 \end{cases} $$
versus
$$ (II.) \qquad A_i(B_{11}+B_{22}) = 0 = \begin{cases} 0 & i=1 \\ 0 & i=2 \end{cases} $$
As you can see, these expressions do not agree. Therefore, we cannot use the same index as both a dummy index and a free index.
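The disagreement between (I.) and (II.) can be checked directly. Here is a small pure-Python sketch of the worked example above (my own illustration, with indices shifted to be 0-based):

```python
# A and B from the example above (0-based indexing)
A = [1, 1]
B = [[1, 0],
     [0, -1]]
n = len(A)

# Interpretation (I.): treat the first index of B_{ii} as the dummy j
interp_I = [sum(A[j] * B[j][i] for j in range(n)) for i in range(n)]

# Interpretation (II.): treat both indices of B_{jj} as the dummy j
trace_B = sum(B[j][j] for j in range(n))
interp_II = [A[i] * trace_B for i in range(n)]

print(interp_I)   # [1, -1]
print(interp_II)  # [0, 0]
```

The two candidate readings produce different component lists, confirming that $A_iB_{ii}$ has no single meaning.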
I would liken this problem to the one I have with my calculus I students who insist they need not change the bounds in a $u$-substitution since they're just going to write the answer back in terms of $x$ at the end. If one follows that practice, some of the intermediate steps are wrong: what we write is insufficient to capture the precise mathematical intent of the expression. This should be avoided, since good notation ought to be unambiguous. Or, at a minimum, the ambiguity should reflect a deeper mathematical structure, as in the case of quotient spaces and the non-uniqueness of the representative. This is not that; this is just bad notation. It does lead to errors; trust me, I've made them.
What is Einstein's summation notation?
While Einstein may have taken it to be simply a convention to sum over "any repeated indices", as Zev Chonoles alluded to in a comment, such a summation convention would not satisfy the "makes it impossible to write down anything that is not coordinate-independent" property that proponents of the convention often claim.
In modern geometric language, one should think of Einstein's summation convention as a very precise way to express the natural duality pairings/contractions when looking at a multilinear object.
More precisely: let $V$ be some vector space and $V^*$ its dual. There is a natural bilinear operation taking $v\in V$ and $\omega\in V^*$ to obtain a scalar value $\omega(v)$; this could alternatively be denoted as $\omega\cdot v$ or $\langle \omega,v\rangle$. This duality pairing can also be called contraction and sometimes denoted by $\mathfrak{c}: V\otimes V^* \to \mathbb{R}$ (or different scalar field if your vector space is over some other field).
Now, letting $\eta$ be an arbitrary element of $V^{p,q}:= (\otimes^p V)\otimes (\otimes^q V^*)$, as long as $p,q$ are both positive, we can take a contraction between any one factor of $V$ and any one factor of $V^*$. Each one of these contractions gives a mapping $V^{p,q} \to V^{p-1,q-1}$, and it is tedious to name every one of them (you can index each one by calling $\mathfrak{c}_{i,j}$ the contraction between the $i$th factor of $V$ and the $j$th factor of $V^*$).
The Einstein convention gets around this by being an index convention, where $\eta$ is written as $\eta^{i_1\cdots i_p}_{j_1\cdots j_q}$, an indexed object in which each index corresponds to one of the $V$ or $V^*$ factors. Then instead of $\mathfrak{c}_{i,j}$, we just single out the relevant factors in the index and trace over them. For example
$$ \mathfrak{c}_{1,1}(\eta)^{i_1\cdots i_{p-1}}_{j_1 \cdots j_{q-1}} = \eta^{k i_1\cdots i_{p-1}}_{k j_1 \cdots j_{q-1}} $$
where the summation symbol over $k$ is suppressed. For a single contraction the advantage of this notation is not obvious, but with multiple contractions it becomes clear:
$$ \mathfrak{c}_{1,1} \mathfrak{c}_{p,q} \eta = \mathfrak{c}_{p-1,q-1} \mathfrak{c}_{1,1} \eta $$
if $\eta \in V^{p,q}$. Basically, if you perform multiple contractions on one expression, you have to keep careful track of the level of contractions to put the correct indices in the contraction symbols; in particular, the symbols do not commute. The same expression above in Einstein notation would simply be
$$ \eta^{k i_1\cdots i_{p-2} \ell}_{k j_1\cdots j_{q-2} \ell} $$
and it is immediately clear which slots are contracted together. Furthermore, it is manifest that the "formulae" obtained thus are independent of the choice of basis of $V$ and $V^*$ (with respect to which we can write down the actual components of $\eta$).
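As a concrete sketch (my own illustration, not part of the answer): take a type-$(2,2)$ tensor stored as a nested list, contract the first upper index against the first lower index, then contract what remains. Doing the two contractions in the opposite order gives the same full contraction $\eta^{k\ell}_{k\ell}$, which is exactly what the Einstein-notation expression makes manifest:

```python
import random

random.seed(0)
n = 3
# eta[i][j][k][l] stands in for the components eta^{ij}_{kl}
eta = [[[[random.randint(-5, 5) for _ in range(n)]
         for _ in range(n)] for _ in range(n)] for _ in range(n)]

# contract first upper slot with first lower slot: (c_{1,1} eta)^i_l = eta^{ki}_{kl}
c11 = [[sum(eta[k][i][k][l] for k in range(n)) for l in range(n)]
       for i in range(n)]
full_a = sum(c11[i][i] for i in range(n))  # then contract the remaining pair

# same double contraction done in the other order: eta^{ik}_{lk} first
c22 = [[sum(eta[i][k][l][k] for k in range(n)) for l in range(n)]
       for i in range(n)]
full_b = sum(c22[i][i] for i in range(n))

assert full_a == full_b  # the scalar eta^{kl}_{kl} is order-independent
```

Both orders sum the same set of components $\eta^{k\ell}_{k\ell}$, so the results agree.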
What is the correct use of Einstein's notation?
- Einstein's notation should only be used to denote contraction of one contravariant slot with one covariant slot. That's it. Don't sum over two covariant slots. Don't use triply-repeated indices. If you limit it to these kinds of contractions, you are using it to denote a "natural operation", and therefore it will never produce expressions that are coordinate-dependent/non-geometric.
- This is especially an issue in Lorentzian or other pseudo-Riemannian geometric set-ups, or in situations where you don't have a metric at all. The reason that in Riemannian geometry we can often get away with contracting a pair of covariant indices or a pair of contravariant indices is that there is a natural isomorphism (given by the metric) between $V$ and $V^*$; furthermore, with the usual conventions this isomorphism doesn't "change sign". Without a metric there is no preferred isomorphism between $V$ and $V^*$, and so any bilinear map $V\otimes V\to \mathbb{R}$ would necessarily be coordinate-dependent. In the Lorentzian case there can be sign issues if you are not careful.
- Einstein's summation convention takes advantage of the fact that the dual pairing $\omega(v)$ can be expressed as first taking the tensor product $\omega\otimes v$ then taking the contraction. So you should only use it when this procedure makes sense: don't use it to do elementwise division, for example.
- Einstein's summation convention should be used when there are no "coordinate dependent manipulations". In particular, if you ever find the need to speak of one particular component of a tensor when expressed in one particular coordinate system, then you should not use Einstein notation. Alternatively, you should find an invariant way of expressing that particular component (for example, fixing a distinguished one-form/vector field and write the component as the contraction of your tensor against that one-form or vector field).
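The first two bullets can be illustrated numerically. The following pure-Python sketch (my own, not from the answer) uses the fact that under a change of basis $P$, a $(1,1)$-tensor's component matrix transforms as $P^{-1}MP$, so its mixed contraction (the trace) is invariant, while a $(0,2)$-tensor's component matrix transforms as $P^{T}MP$, so the naive "sum over two covariant indices" $B_{ii}$ is not invariant:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(X):
    return sum(X[i][i] for i in range(2))

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

M = [[1, 2],
     [3, 4]]
P = [[1, 1],
     [0, 1]]
P_inv = [[1, -1],
         [0, 1]]  # inverse of P

# (1,1)-tensor: components become P^{-1} M P; the mixed contraction survives
M_11 = matmul(matmul(P_inv, M), P)
assert trace(M_11) == trace(M)   # both equal 5

# (0,2)-tensor: components become P^T M P; the naive "B_{ii}" sum changes
M_02 = matmul(matmul(transpose(P), M), P)
assert trace(M_02) != trace(M)   # 11 versus 5
```

Summing a repeated lower pair is only safe when a metric supplies the missing isomorphism $V \cong V^*$, which is exactly the point of the bullets above.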
Alternatives
Einstein's summation notation is ultimately about pairings between $V$ and $V^*$, so (in spite of its likely origin) you should not think of it primarily as a notation used for decluttering computations of tensor components in local coordinates, but rather as a way to efficiently solve the problem of "which two slots are we contracting again?"
From this point of view the alternatives to Einstein's notation are "invariant notation" (don't use any index; write everything in coordinate free manner) and the "Penrose diagrammatic notation" (see e.g. https://en.wikipedia.org/wiki/Penrose_graphical_notation).
Best Answer
Your solution is equivalent to
$$\sum_{i,j,k=1}^2g^i_{jk} $$
and no summation convention (that is, the Einstein convention on repeated indices) is used, as pointed out by @Raskolnikov.
The aim of the exercise is to arrive at an expression with "repeated indices", i.e. an expression in which you use the summation convention. To do so, one needs to have no free indices (like your $i$, $j$ and $k$) and to contract, i.e. produce pairs of, all indices.
It is clear that the starting expression has 3 free indices, so 3 summations, or contractions, are needed. The textbook begins by producing a summation over the upper index, called $i$. This is done by introducing the vector
$$c=(c_1,c_2)=(1,1)$$
and realizing the sum $\sum_{i=1}^2 g^i_{j,k}$ (which, once again, uses no summation convention) as
$$g^1_{j,k}+g^2_{j,k}=\sum_{i=1}^2 g^i_{j,k}=g^i_{j,k}c_i,$$
for any $j,k$. On the rightmost side of the above expression we use the Einstein convention, summing over $i$, the only repeated index. Please note that the length of $c$ equals $2$, i.e. the cardinality of the set $\{1,2\}$ of all possible values for $i$.
We are left with 2 free indices, so 2 more summations are needed. We repeat the above construction to produce the summation w.r.t. $j$ through
$$g^i_{1,k}c_i+g^i_{2,k}c_i=\sum_{j=1}^2 g^i_{j,k}c_i=g^i_{j,k}c_ic_j,$$
for any $k$. Repeating the same trick with $k$, the only remaining free index, we arrive at
$$\sum_{k=1}^2g^i_{j,k}c_ic_j=g^i_{j,k}c_ic_jc_k,$$
and
$$\sum_{i,j,k=1}^2g^i_{jk}=g^i_{j,k}c_ic_jc_k.~~~(1) $$
On the r.h.s. of $(1)$ all indices are repeated, and we are done.
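Identity $(1)$ can be verified numerically. In this sketch (my own check; the array `g` is arbitrary, with 0-based indices) the triple sum over all components equals the triple contraction against $c=(1,1)$:

```python
import random

random.seed(1)
n = 2
# arbitrary components g[i][j][k] standing in for g^i_{jk}
g = [[[random.randint(-9, 9) for _ in range(n)] for _ in range(n)]
     for _ in range(n)]
c = [1, 1]  # the vector c = (1, 1) from the answer

# left-hand side of (1): the plain triple sum over all index values
full_sum = sum(g[i][j][k]
               for i in range(n) for j in range(n) for k in range(n))

# right-hand side of (1): g^i_{jk} c_i c_j c_k with every index repeated
contracted = sum(g[i][j][k] * c[i] * c[j] * c[k]
                 for i in range(n) for j in range(n) for k in range(n))

assert full_sum == contracted
```

Because every entry of $c$ is $1$, each factor $c_ic_jc_k$ is $1$, so the contraction reproduces the unweighted triple sum exactly.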