Write the dot product of a 3rd-order tensor (piezoelectric constant) with a 1st-order tensor (electric field vector) in matrix form

Tags: piezoelectric, tensor-calculus

In the Stress-Charge form of Piezoelectric constitutive equations:

$$T = c : S - e^T \cdot E$$

(The symbols are explained in the pictures below.)

My question is, why do we take the transpose of $e$ when multiplying with $E$, and how can this multiplication be shown in matrix form ? It's easy to think about the dot product of two vectors where we transpose one of them so we can multiply a row vector with a column vector, but I don't know how to represent a 3rd order tensor as a matrix.

[Two slides from the reference below, defining the symbols in the constitutive equations.]

Reference for the pictures: http://bluebox.ippt.pan.pl/~tzielins/doc/ICMM_TGZielinski_Piezoelectricity.Slides.pdf

Best Answer

The single-dot product in your notes appears to be defined as a contraction over "neighbouring" indices, i.e. the last index of the left object and the first index of the right object, $$(A\cdot B)_{i\dots j}\equiv A_{i\dots n} B_{n\dots j},$$ where the dots indicate an arbitrary number of additional indices. Written out in Einstein notation, the contraction in your expression for $T$ is not over neighbouring indices in the term $e_{kij}E_k$. The dot product requires contraction over the last index of the left argument $e$, and we do not get the correct expression without transposing,

$$ e\cdot E = e_{jik}E_k \neq e_{kij}E_k, $$ while $$ e^T\cdot E = e^T_{jik}E_k = e_{kij}E_k, $$ which is the desired contraction as given in the definition $T_{ij}=c_{ijkl}S_{kl}-e_{kij}E_k$. (Here the transpose reverses the order of the indices, so $e^T_{jik}=e_{kij}$.)
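For concreteness, here is a quick NumPy check of this index bookkeeping (my own sketch with random placeholder arrays, not values from the notes); the only physical input is the symmetry $e_{kij}=e_{kji}$ of the piezoelectric tensor in its stress indices, which is what makes the index-reversing transpose work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-in numbers (no physical units), with the symmetry
# e_{kij} = e_{kji} imposed, since the stress indices i, j are symmetric.
e = rng.normal(size=(3, 3, 3))
e = 0.5 * (e + e.transpose(0, 2, 1))   # symmetrize in the last two indices
E = rng.normal(size=3)                 # electric field vector E_k

# The contraction we actually want, as in T_ij = c_ijkl S_kl - e_kij E_k:
desired = np.einsum('kij,k->ij', e, E)

# Single-dot product: contract the LAST index of the left object with E.
# Plugging e in directly gives e_ijk E_k, which is not the same thing:
print(np.allclose(np.einsum('ijk,k->ij', e, E), desired))   # False (in general)

# Transpose e first (reverse the order of its indices), then take the dot:
eT = e.transpose(2, 1, 0)              # (e^T)_{ijk} = e_{kji} = e_{kij}
print(np.allclose(np.einsum('ijk,k->ij', eT, E), desired))  # True
```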

I would recommend using Einstein notation instead of matrix notation, but we can represent higher-indexed objects as matrices by combining indices and enumerating all possible combinations with a new single index.

Let's take $e_{kij}$ as an example. We must reduce the number of indices to two; then we can use normal matrix multiplication. To do so we keep the index $k$ as one of our matrix indices and define the second matrix index by enumerating all possible tuples $(i,j)$ with a new index $n$, which allows us to replace the pair $ij$ with the single index $n$. A minimal example makes clear how this works: say $i$ and $j$ run from 1 to 2. Then the possible combinations are $(11)=1$, $(12)=2$, $(21)=3$, $(22)=4$, and these four enumerated tuples can be described by the single index $n$ running from 1 to 4.

This enables me to write $$ e_{kij} = e_{kn} $$ defining the "matrix" $e$. Now I can take the matrix product $$ e^T E = e_{nk}^T E_k = e_{kn} E_k = e_{kij}E_k $$ as desired.
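In NumPy this flattening is literally a reshape. The following sketch (my own, using the toy sizes above with random numbers) confirms that the ordinary matrix product $e^T E$ then reproduces the tensor contraction $e_{kij}E_k$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes from above: k, i, j each run over 2 values, so n = (i,j) runs over 4.
e = rng.normal(size=(2, 2, 2))   # e_{kij}
E = rng.normal(size=2)           # E_k

# Flatten the index pair (i,j) into a single index n (row-major order),
# i.e. (1,1)->1, (1,2)->2, (2,1)->3, (2,2)->4 in the notation above.
e_mat = e.reshape(2, 4)          # the "matrix" e_{kn}

# Ordinary matrix algebra now reproduces the tensor contraction:
via_matrix = e_mat.T @ E                              # (e^T E)_n = e_{kn} E_k
via_tensor = np.einsum('kij,k->ij', e, E).reshape(4)  # e_{kij} E_k, flattened
print(np.allclose(via_matrix, via_tensor))            # True
```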

Some additional remarks:

The "matrix" style notation is not clear cut and I would ask the lecturer for the definitions in Einstein notation. The transpose is also not obviously defined and I am aware of at last two different definitions when it comes to objects with 4 indices. Some conventions switch groups of 2 indices while others reverse the order of indices. The choice adopted typically depends on the definition of the double dot product $:$ which can either be defined as consecutive dot product or as contraction of the last pair of indices of the left argument and the first two indices of the right argument. This seems to be the case in these notes since we are given the following $$ c:S = c_{ijkl}S_{kl} $$ with this definition I would assume that $$c_{ijkl}^T=c_{klij}$$

Answer to comment:

The problem is the inequality $e\cdot E = e_{jik}E_k \neq e_{kij}E_k$. You want to reproduce the right-hand term using the scalar product, since that is the term given in the definition of $T_{ij}$. Simply plugging $e$ into the scalar product with $E$ does not yield this summation. To obtain the proper summation when using the scalar product, you need to transpose $e$; only then do you get the same result as $e_{kij}E_k$. The definition of the scalar product does not change at all, and what you wrote down is the scalar product.

Let's take a look at a simpler example. Take two matrices and define a new matrix via Einstein summation, $$ C_{jl}=A_{ij}B_{il}. $$ Note that the summation is over the first index of both $A$ and $B$. This is not a matrix multiplication and $C\neq AB$, since matrix multiplication would be $$ (AB)_{jl} = A_{ji}B_{il} \neq A_{ij}B_{il}. $$ But we can "fix" this by transposing $A$ before plugging it into normal matrix multiplication, $$ (A^TB)_{jl} = A^T_{ji}B_{il} = A_{ij}B_{il}, $$ which allows me to write $C=A^TB$. The transposing of $e$ serves a similar purpose.
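The same check in NumPy (my own sketch, random matrices):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 4))
B = rng.normal(size=(3, 5))

# C_{jl} = A_{ij} B_{il}: the sum runs over the FIRST index of both A and B.
C = np.einsum('ij,il->jl', A, B)

# Plain matrix multiplication A @ B would sum over A's SECOND index (and the
# shapes (3, 4) and (3, 5) do not even chain).  Transposing A is what matches:
print(np.allclose(A.T @ B, C))   # True
```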
