Two questions on tensor (wedge) product

differential-geometry, exterior-algebra, multivariable-calculus

I am new to exterior algebra and am very confused by the new concepts. I would appreciate a little help.

  1. I saw in the textbook that if $\alpha$ is a $k$-covector with $k$ odd, then $\alpha \wedge \alpha = 0$. If I only assume that $\beta$ is an arbitrary $k$-tensor, does the same result hold?

I tried
$$\alpha \wedge \alpha = \frac{1}{2k!}\sum_{\sigma \in S_{2k}} \operatorname{sgn}(\sigma)\, \sigma(\alpha \otimes \alpha)$$
but without any assumption of symmetry or the alternating property, I can't really say anything about the evaluation of this product. (Perhaps this is why the wedge product is defined only on alternating tensors.)

  2. Also, I saw this post: Wedge product of $\beta \wedge dx$, where
    $\alpha = dx + dy + dz$ and $\beta = 2dx - dy + dz$. Then:

i) Is $\alpha$ even a tensor? It doesn't take any arguments. If it is not a tensor, how can I take the wedge product $\alpha \wedge \beta$?

ii) If $\alpha$ is a tensor, is it a $3$-tensor or a $1$-tensor? $\alpha$ is a linear combination of three $1$-tensors, but it takes three arguments, namely $dx, dy, dz$. If it is a $3$-tensor, then $\alpha$ should be expressed in the basis $\{\hat e^i \otimes \hat e^j \otimes \hat e^k \}$, but it is not clear to me how $\alpha$ can be written in terms of these basis elements.

iii) I do not think $\alpha$ or $\beta$ is alternating; if they are not alternating, how can I even take the wedge product? (Note that the post linked above computes $(dx+dy)\wedge(2dx-dy)=dx\wedge(2dx-dy)+dy\wedge(2dx-dy)=-dx\wedge dy+2dy\wedge dx=-3\,dx\wedge dy$.) I am confused since the wedge product is defined only on alternating tensors.

(Perhaps I am mixing up differential forms and the wedge product in the second question…)

Thank you in advance.

Best Answer

  1. I think you have misunderstood the use of the wedge product. It is useful for alternating tensors, as it makes the graded vector space $\Lambda^*V = \sum_{k\geqslant 0}\Lambda^k V$ a graded algebra. Of course you can extend the wedge product to arbitrary tensors, but you lose most of its meaning. Moreover, your formula for $\alpha \wedge \alpha$ is not correct. If $(e_1,\ldots,e_n)$ is a basis of $V$, then a basis of $\Lambda^k V$ is $\left(e_{i_1}\wedge\cdots\wedge e_{i_k}\right)_{i_1<\cdots < i_k}$, where \begin{align} e_{i_1}\wedge \cdots \wedge e_{i_k} =\sum_{\sigma \in \mathfrak{S}_k} \varepsilon(\sigma)\, e_{\sigma(i_1)}\otimes\cdots\otimes e_{\sigma(i_k)}. \end{align} This is a definition. The wedge product of two alternating tensors is then defined so that $\left(e_{i_1}\wedge\cdots\wedge e_{i_k}\right)\wedge \left(e_{j_1}\wedge \cdots \wedge e_{j_l} \right)$ equals $e_{i_1}\wedge\cdots\wedge e_{i_k}\wedge e_{j_1}\wedge \cdots \wedge e_{j_l}$ and so that it is bilinear; this forces the definition in the general case. Using the definition on the basis and bilinearity, one can show that if $\alpha$ is a $k$-alternating tensor and $\beta$ is an $l$-alternating tensor, then $\alpha\wedge \beta$ is a $(k+l)$-alternating tensor and \begin{align} \alpha \wedge \beta = (-1)^{kl}\beta \wedge \alpha \end{align} (prove it on basis elements; bilinearity gives the general result). Thus, if $\alpha$ is a $(2k+1)$-alternating tensor, \begin{align} \alpha\wedge \alpha = (-1)^{(2k+1)^2}\alpha\wedge \alpha = -\alpha\wedge \alpha, \end{align} and it follows that $\alpha \wedge \alpha = 0$. (A small numerical check of this sign rule is sketched just after item 2 below.)
  2. In $\mathbb{R}^n$ with canonical basis $(e_1,\ldots,e_n)$, one defines the dual space $\Lambda^1 \mathbb{R}^n = \left(\mathbb{R}^n\right)^* = L\left(\mathbb{R}^n,\mathbb{R}\right)$ with the dual basis $({e_1}^*,\ldots,{e_n}^*)$, defined by ${e_i}^*(e_j) = \delta_{i,j}$. We write this basis $(\mathrm{d}x^1,\ldots,\mathrm{d}x^n)$; this is just notation. A vector of $\mathbb{R}^n$ is given in coordinates by $V = V_1 e_1 + \cdots + V_n e_n$. A $1$-tensor on $\mathbb{R}^n$ is of the form $\alpha = \sum_{i=1}^n \alpha_i \mathrm{d}x^i$, where the $\alpha_i$ are scalars. By the very definition of the dual basis, \begin{align} \alpha(V) = \sum_{i=1}^n \alpha_i V_i. \end{align} It is an alternating $1$-tensor, as it takes only one argument (the vector $V$).
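Before turning to $\mathbb{R}^3$, here is the numerical check mentioned in item 1. It is my own sketch, not part of the original answer: an alternating $k$-tensor is stored as a dictionary from strictly increasing index tuples to coefficients, and the wedge simply concatenates index tuples and sorts them while tracking the sign of the permutation, matching the basis formula above (no normalising factor).

```python
# Sketch (not from the original answer): an alternating k-tensor on R^n stored as
# {strictly increasing index tuple: coefficient}; the wedge concatenates index
# tuples and sorts them, tracking the sign of the permutation.

def sort_with_sign(indices):
    """Bubble-sort an index tuple, tracking the sign of the permutation.
    Returns (sign, sorted_tuple); the sign is 0 if an index repeats."""
    idx, sign = list(indices), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return (0 if len(set(idx)) < len(idx) else sign), tuple(idx)

def wedge(a, b):
    """Wedge product of two alternating tensors in this representation."""
    out = {}
    for I, x in a.items():
        for J, y in b.items():
            sign, K = sort_with_sign(I + J)
            if sign:
                out[K] = out.get(K, 0) + sign * x * y
    return {K: c for K, c in out.items() if c != 0}

alpha = {(0,): 1, (1,): 2, (3,): -1}            # a 1-tensor on R^4
beta  = {(0, 1): 3, (1, 2): 1, (2, 3): -2}      # a 2-tensor on R^4
k, l = 1, 2

ab, ba = wedge(alpha, beta), wedge(beta, alpha)
assert ab == {K: (-1) ** (k * l) * c for K, c in ba.items()}  # alpha^beta = (-1)^{kl} beta^alpha
assert wedge(alpha, alpha) == {}                              # odd degree: alpha^alpha = 0
print(ab)  # a 3-alternating tensor: {(0, 1, 2): 1, (0, 2, 3): -2, (1, 2, 3): -5, (0, 1, 3): -3}
```

Running it confirms $\alpha\wedge\beta = (-1)^{kl}\beta\wedge\alpha$ for a $1$- and a $2$-tensor, and that a tensor of odd degree wedged with itself vanishes.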

In $\mathbb{R}^3$, we prefer to use $(x,y,z)$ as coordinates, $(\partial_x,\partial_y,\partial_z)$ as the canonical basis, and $\mathrm{d}x$, $\mathrm{d}y$, $\mathrm{d}z$ as the basis of alternating $1$-tensors.

i) $\alpha = \mathrm{d}x + \mathrm{d}y + \mathrm{d}z$ is an alternating $1$-tensor, being a linear combination of the basis alternating $1$-tensors; the same holds for $\beta = 2\mathrm{d}x - \mathrm{d}y + \mathrm{d}z$. If $V$ is a vector field over $\mathbb{R}^3$, say $V = V_x \partial_x + V_y\partial_y + V_z \partial_z$, then \begin{align} \alpha(V) &= V_x + V_y + V_z\\ \beta(V) &= 2V_x - V_y + V_z. \end{align} As they are alternating tensors, their wedge product is well defined, and by bilinearity one has \begin{align} \alpha\wedge\beta &= \left(\mathrm{d}x + \mathrm{d}y + \mathrm{d}z \right) \wedge \left(2\mathrm{d}x - \mathrm{d}y + \mathrm{d}z \right)\\ &= \mathrm{d}x \wedge (2\mathrm{d}x) + \mathrm{d}x \wedge (-\mathrm{d}y) + \mathrm{d}x \wedge \mathrm{d}z \\ &~~~+ \mathrm{d}y \wedge (2\mathrm{d}x) + \mathrm{d}y \wedge (-\mathrm{d}y) + \mathrm{d}y \wedge \mathrm{d}z \\ &~~~+\mathrm{d}z \wedge (2\mathrm{d}x) + \mathrm{d}z \wedge (-\mathrm{d}y) + \mathrm{d}z \wedge \mathrm{d}z \\ &= -3\,\mathrm{d}x\wedge\mathrm{d}y + 2\, \mathrm{d}y \wedge \mathrm{d}z - \mathrm{d}x \wedge \mathrm{d}z \end{align} (recall that, since these are $1$-tensors, $\mathrm{d}x\wedge\mathrm{d}x = 0$, $\mathrm{d}y \wedge \mathrm{d}x = - \mathrm{d}x \wedge \mathrm{d}y$, and so on).
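As a quick sanity check (my own addition, using the same convention as above, under which $(\alpha\wedge\beta)(u,v) = \alpha(u)\beta(v) - \alpha(v)\beta(u)$ for $1$-tensors), the coefficients just computed can be recovered by evaluating $\alpha\wedge\beta$ on pairs of basis vectors:

```python
# Sketch (not from the original answer): recover the coefficients of alpha^beta
# by evaluating it on pairs of basis vectors, with the determinant convention
# (alpha^beta)(u, v) = alpha(u)*beta(v) - alpha(v)*beta(u) for 1-tensors.

from itertools import combinations

alpha = (1, 1, 1)    # dx + dy + dz, as coefficients in the basis (dx, dy, dz)
beta  = (2, -1, 1)   # 2dx - dy + dz

def one_form(coeffs):
    """The linear functional V -> sum_i coeffs[i] * V[i]."""
    return lambda V: sum(c * v for c, v in zip(coeffs, V))

def wedge_1forms(a, b):
    """(a^b)(u, v) = a(u) b(v) - a(v) b(u)."""
    fa, fb = one_form(a), one_form(b)
    return lambda u, v: fa(u) * fb(v) - fa(v) * fb(u)

e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]      # the basis vectors e_x, e_y, e_z
w = wedge_1forms(alpha, beta)
for (i, j), name in zip(combinations(range(3), 2), ["dx^dy", "dx^dz", "dy^dz"]):
    print(name, w(e[i], e[j]))             # prints -3, -1, 2 respectively
```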

For ii) and iii), I have already answered: every $1$-tensor is an alternating tensor. This is because there is only one permutation of a single index, the identity, which has signature $1$!

Commentary: In the case of a manifold $M$, we are not looking at a fixed vector space $V$ and its exterior algebra, but at a vector bundle $\Lambda^*(T^*M)$. For each $p \in M$, a coordinate chart gives the tangent space $T_pM$ the basis $(\partial_{x^1},\ldots,\partial_{x^n})$ and the cotangent space $T_p^*M$ the dual basis $(\mathrm{d}x^1,\ldots,\mathrm{d}x^n)$, so we can define the exterior algebra pointwise, the way we did for vector spaces. We define the exterior bundle of $M$ to be their union, and we then consider sections of this vector bundle. A differential $k$-form is defined to be a section of $\Lambda^k (T^*M)$, that is, a smooth map $\alpha : p \mapsto \alpha_p \in \Lambda^k(T_p^*M)$. In a local coordinate system, every differential $k$-form can be written $\alpha(p) = \sum_{i_1< \cdots< i_k} \alpha_{i_1,\cdots,i_k}(p)\,\mathrm{d}x^{i_1}\wedge\cdots\wedge\mathrm{d}x^{i_k}$, where the $\alpha_{i_1,\cdots,i_k}$ are smooth functions on $M$. For a fixed $p$, $\alpha(p)$ is a $k$-alternating tensor (an element of the vector space $\Lambda^k(T_p^*M)$). The definition of the wedge product is to be understood pointwise: \begin{align} \alpha\wedge\beta : p \mapsto \alpha(p)\wedge \beta(p), \end{align} and if $\alpha$ and $\beta$ are differential $k$- and $l$-forms, then $\alpha\wedge \beta$ is a differential $(k+l)$-form.
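To make the pointwise picture concrete, here is a small sketch (my own illustration, using only plain sympy symbols, with made-up coefficient functions): a $1$-form on $\mathbb{R}^3$ is a triple of coefficient functions, and wedging the forms and then evaluating at a point agrees with evaluating the coefficients first and then wedging.

```python
# Sketch (not from the original answer): a 1-form on R^3 as a triple of
# coefficient functions; the wedge of two 1-forms has coefficient functions
#   (alpha^beta)_{ij} = a_i b_j - a_j b_i   on dx^i ^ dx^j (i < j),
# and "pointwise" means evaluating the coefficients at p commutes with wedging.

import sympy as sp
from itertools import combinations

x, y, z = sp.symbols('x y z')
a = [x, y * z, sp.Integer(1)]   # alpha = x dx + yz dy + dz   (hypothetical example)
b = [y, x, z]                   # beta  = y dx + x dy + z dz  (hypothetical example)

def wedge_coeffs(u, v):
    """Coefficients of u^v on dx^dy, dx^dz, dy^dz (determinant convention)."""
    return {(i, j): sp.expand(u[i] * v[j] - u[j] * v[i])
            for i, j in combinations(range(3), 2)}

ab = wedge_coeffs(a, b)
print(ab)   # the three coefficient functions of alpha^beta

# Evaluate-then-wedge equals wedge-then-evaluate at a sample point p:
p = {x: 1, y: 2, z: -1}
assert wedge_coeffs([c.subs(p) for c in a], [c.subs(p) for c in b]) \
       == {K: c.subs(p) for K, c in ab.items()}
```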
