[Math] Einstein Summation with multiple terms

Tags: notation, summation

I know the basics of Einstein summation, but I've got an equation here that is a little more complex than the easy examples I keep finding on this subject:

$C = (p-nT) \partial_\gamma u_\gamma + A_{\alpha\beta} \partial_\alpha u_\beta - (\zeta \partial_\gamma u_\gamma)\partial_\alpha u_\alpha + n^2\partial_\gamma u_\gamma + K \left( -\frac12 (\partial_\gamma n) (\partial_\gamma n) (\partial_\alpha u_\alpha) - n(\partial_\gamma n)(\partial_\gamma \partial_\alpha u_\alpha) - (\partial_\gamma n)(\partial_\gamma u_\alpha)(\partial_\alpha n) \right)$

where all indices take the values $x=1$ or $y=2$.

The problem I have with this is that I'm unsure where the scopes of the summations are.

It could be either this:

$C=\sum_{\alpha,\beta,\gamma}\left( (p-nT) \partial_\gamma u_\gamma + A_{\alpha\beta} \partial_\alpha u_\beta - (\zeta \partial_\gamma u_\gamma)\partial_\alpha u_\alpha + n^2\partial_\gamma u_\gamma + K \left( -\frac12 (\partial_\gamma n) (\partial_\gamma n) (\partial_\alpha u_\alpha) - n(\partial_\gamma n)(\partial_\gamma \partial_\alpha u_\alpha) - (\partial_\gamma n)(\partial_\gamma u_\alpha)(\partial_\alpha n) \right) \right)$

or the other extreme:

$C = (p-nT) \sum_\gamma (\partial_\gamma u_\gamma) + \sum_{\alpha,\beta} (A_{\alpha\beta} \partial_\alpha u_\beta) - (\zeta \sum_\gamma \partial_\gamma u_\gamma)\sum_\alpha(\partial_\alpha u_\alpha) + n^2 \sum_\gamma (\partial_\gamma u_\gamma) + K \left( -\frac12 (\sum_\gamma \partial_\gamma n) (\sum_\gamma \partial_\gamma n) (\sum_\alpha \partial_\alpha u_\alpha) - n(\sum_\gamma \partial_\gamma n)(\sum_{\alpha,\gamma} \partial_\gamma \partial_\alpha u_\alpha) - (\sum_\gamma \partial_\gamma n)(\sum_{\alpha,\gamma} \partial_\gamma u_\alpha)(\sum_\alpha \partial_\alpha n) \right)$

or anything in between the two extremes.
How are you supposed to decode this?

In case you are wondering, the equation comes from a CFD code, where $p$ is the pressure, $n$ the density, $T$ the temperature, $u$ the velocity, and $A$ the pressure tensor. Everything else is a constant.
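To make it concrete why the scope matters, here is a quick numeric check in numpy (all values are made up, and `du[g, b]` stands in for $\partial_\gamma u_\beta$ at a single grid point), comparing the two readings on just the first term:

```python
import numpy as np

# Made-up values at a single grid point: scalars p, n, T and a
# velocity-gradient matrix du[g, b] standing in for d_gamma u_beta.
p, n, T = 2.0, 1.5, 0.8
du = np.array([[0.1, 0.3],
               [0.2, 0.4]])

div_u = np.trace(du)  # sum_gamma d_gamma u_gamma = 0.5

# Second reading (per-term sums): the first term is just (p - nT) div u.
term_per_term = (p - n * T) * div_u

# First reading (one global sum over alpha, beta, gamma): alpha and beta
# do not appear in this term, so summing over them multiplies it by 2*2.
term_global = sum((p - n * T) * du[g, g]
                  for a in range(2)
                  for b in range(2)
                  for g in range(2))

print(term_per_term, term_global)  # 0.4 vs 1.6, a factor of 4 apart
```

So the two readings really do give different numbers, which is why I need to know the intended scope.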

Any help is appreciated.

Best Answer

When you use the Einstein summation convention, you sum over the repeated indices within each term. Strictly speaking, the index to be summed over must appear once upstairs and once downstairs (I'll explain the reason in a moment).

If we insist on summing over repeated indices even though they are all downstairs, as in your equation, then the second way of thinking about it is correct: you must read each term independently of the others. For example, the relation $A_i B_i + A_j C_j B_i D_i$ means the summation is carried out within each term separately: $\sum_i A_i B_i + \sum_j \sum _i A_jC_jB_iD_i$.
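If a sanity check in code helps: here is a minimal numpy sketch of that toy expression (the arrays and their values are made up). A single `np.einsum` call contracts the repeated indices within one term, and separate terms are simply added, which is exactly the per-term reading:

```python
import numpy as np

# Made-up 2-component arrays standing in for A, B, C, D above.
A = np.array([1.0, 2.0])
B = np.array([3.0, 4.0])
C = np.array([5.0, 6.0])
D = np.array([7.0, 8.0])

# sum_i A_i B_i : the repeated index i is contracted within the term.
term1 = np.einsum('i,i->', A, B)            # = 11.0

# sum_j sum_i A_j C_j B_i D_i : j and i are summed independently, so
# the term factors into (A . C) * (B . D) = 17.0 * 53.0.
term2 = np.einsum('j,j,i,i->', A, C, B, D)  # = 901.0

print(term1 + term2)  # 912.0
```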

When I learned this notation, I found it pretty confusing, so even though the question has already been addressed, I'll give a quick overview of it below.

In my opinion, this notation only becomes really good when we sum over one index upstairs paired with one index downstairs. The point is: given a set of vectors (and I mean the vectors themselves, not their components), we can index the vectors. The convention is:

Notation: If we are given a set of $n$ vectors to index, we index the vectors downstairs, i.e.: we write the set as $\{v_1, \dots, v_n\}$

The second convention is that when we index the coefficients of a linear combination, they should be written upstairs:

Notation: If we are given a set of $n$ scalars and $n$ vectors and we are to form the linear combination of the vectors with the scalars, we write each scalar with index upstairs, so the set of scalars will be $\{a^1, \dots, a^n\}$.

With these in place, the main convention is:

Einstein Summation Convention: if an index appears in an expression once upstairs and once downstairs, it is summed over, without the summation sign being written explicitly.

In this case, note that the linear combination of the vectors $v_i$ with the coefficients $a^i$ is denoted simply $a^iv_i$, the summation being understood.
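Written out explicitly, this is just

$a^iv_i = \sum_{i=1}^n a^iv_i = a^1v_1 + a^2v_2 + \cdots + a^nv_n.$

Now there's just one more thing: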

Notation: If we are given a set of $n$ linear functionals, i.e. linear functions defined on a vector space with values in the scalar field, then the functionals are indexed upstairs and the set becomes $\{\omega^1, \dots, \omega^n\}.$

And the convention for writing the scalars when linearly combining functionals is just the opposite again:

Notation: If we are given a set of $n$ scalars and $n$ linear functionals and we are to form the linear combination of the linear functionals with the scalars, we write each scalar with index downstairs, so the set of scalars will be $\{a_1, \dots, a_n\}$.

The consequence is that a linear combination of linear functionals will be written $a_i\omega^i$. The convention extends to tensors directly from this: covariant tensors have indices downstairs and contravariant tensors have indices upstairs, and writing linear combinations and so on becomes natural.
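As a small illustration (again with made-up components; numpy of course does not distinguish upstairs from downstairs indices, so the placement lives only in our bookkeeping), contracting a mixed tensor $T^i{}_j$ with a vector $v^j$ sums the paired index $j$ and leaves the free index $i$:

```python
import numpy as np

# Made-up components of a (1,1)-tensor T^i_j and a vector v^j.
T = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([5.0, 6.0])

# T^i_j v^j : j appears once upstairs (on v) and once downstairs (on T),
# so it is summed over; the free index i survives in the result w^i.
w = np.einsum('ij,j->i', T, v)
print(w)  # [17. 39.]
```

Just to finish: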

Notation: When giving coordinates to a point or indexing the components of a function, we write the indices upstairs. So if $a \in \mathbb{R}^n$ is a point we write $a = (a^1, \dots, a^n)$, and if $f : \mathbb{R}^n \to \mathbb{R}^m$ we write its components as $f^i$. These notations extend to manifolds in general.

If you understand all of this and can get along without being confused, go ahead and use the notation; it makes life easier (especially in differential geometry).
