# Prove that Weil Algebras (in synthetic differential geometry) are Finitely Presented

Tags: finitely-generated, synthetic-differential-geometry

I'm working through *An Introduction to Synthetic Differential Geometry* and I have found myself a bit stuck.

## Context

Recall that we are working without the law of excluded middle, and that there is a distinguished $$\mathbb{Q}$$-algebra $$R$$ known as the "geometric line".

Presumably, all of these axioms are meant to apply in the internal logic of a topos (or at the very least a locally cartesian closed category).

From the book:

Definition 2.7 Let $$R[X_1, \ldots, X_n]$$ be a commutative ring with $$n$$ generators. Let $$p_1(X_1, \ldots, X_n), \ldots, p_m(X_1, \ldots, X_n)$$ be polynomials with coefficients from $$R$$, and let $$I$$ be the ideal generated by these polynomials. A finitely presented $$R$$-algebra is an $$R$$-algebra $$R[X_1, \ldots, X_n] / I = R[X_1, \ldots, X_n] / (p_1(X_1, \ldots, X_n), \ldots, p_m(X_1, \ldots, X_n))$$.

So far so good. This is a familiar definition. I wouldn't have phrased it in exactly this way, but it seems clear enough.

Definition 2.9 A Weil algebra over $$R$$ is an $$R$$-algebra $$W$$ (sometimes denoted $$R \otimes W$$) such that:

1. There is an $$R$$-bilinear multiplication map $$\mu : R^n \times R^n \to R^n$$, making $$R^n$$ an $$R$$-algebra with $$(1, 0, \ldots, 0)$$ as a multiplication unit.
2. The object ('set') $$I$$ of elements of $$R^n$$ whose first coordinate equals zero is a nilpotent ideal.
3. There is an $$R$$-algebra map $$\pi : W \to R$$ given by $$(x_1, \ldots, x_n) \mapsto x_1$$, called the augmentation (its kernel is $$I$$, and it's called the ideal of augmentation).

The notation here is somewhat vague. I am making a few assumptions here.

The first is that $$R^n$$ is to be given the obvious structure as an $$R$$-module. We then equip it with $$\mu$$ which should be compatible with the aforesaid $$R$$-module structure and make $$R^n$$ into an $$R$$-algebra.

The second is that $$(1, 0,\ldots, 0)$$ is supposed to be the multiplicative identity and not merely a unit.

The third is that $$W$$ is supposed to consist of the triple $$(R^n, \mu, \pi)$$. This was never explicitly stated, but it's the only thing that makes sense in context.
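To make Definition 2.9 concrete, here is the smallest interesting example (my own, not from the book): the dual numbers, i.e. $$R^2$$ with $$\varepsilon = (0, 1)$$ and $$\varepsilon^2 = 0$$. A minimal numerical sketch, using the integers as a stand-in for $$R$$:

```python
# Dual numbers: W = R^2 with basis (1, eps) and eps^2 = 0.
# A sanity check of Definition 2.9 (my own example), using integers for R.

def mu(u, v):
    """Bilinear multiplication on R^2: (a, b) * (c, d) = (a*c, a*d + b*c)."""
    a, b = u
    c, d = v
    return (a * c, a * d + b * c)

one = (1, 0)   # (1, 0, ..., 0) should be the multiplicative identity
eps = (0, 1)   # spans the augmentation ideal I = {(0, b)}

# Clause 1: (1, 0) is a two-sided identity for mu.
assert mu(one, eps) == eps and mu(eps, one) == eps

# Clause 2: I is nilpotent; here any product of two elements of I is zero.
assert mu((0, 3), (0, 5)) == (0, 0)

# Clause 3: the augmentation pi (first coordinate) is multiplicative.
pi = lambda u: u[0]
u, v = (2, 7), (3, -1)
assert pi(mu(u, v)) == pi(u) * pi(v)
```

Here $$\mu$$, $$I$$, and $$\pi$$ play exactly the roles in clauses 1, 2, and 3 of the definition.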

## Problem

The following is a quote from immediately after Definition 2.9:

Moreover, it is easy to see, [sic] that each Weil algebra is a finitely presented $$R$$-algebra.

My question is: how does one prove this?

It's definitely clear that each Weil algebra is finitely generated. Indeed, a Weil algebra on $$R^n$$ is clearly generated by $$n - 1$$ elements; that is, the $$R$$-algebra homomorphism $$R[X_1, \ldots, X_{n - 1}] \to R^n$$ sending $$X_i$$ to the unit vector $$e_{i + 1}$$ is surjective. But I don't see how the kernel of this map is a finitely generated ideal.

Perhaps I am missing something obvious, but it's really bugging me.

## Solution

Ah, I figured it out. The following theorem holds:

Let $$R$$ be a ring, and let $$W$$ be an $$R$$-algebra which is finitely presented as an $$R$$-module. Then $$W$$ is finitely presented as an $$R$$-algebra.

Proof: Suppose that $$x_1, \ldots, x_n$$ are generators of $$W$$ as an $$R$$-module, which come with a finite presentation.

Then in particular, consider the unique $$R$$-algebra homomorphism $$\phi : R[X_1, \ldots, X_n] \to W$$ such that for all $$1 \leq i \leq n$$, we have $$\phi(X_i) = x_i$$. The image of such a map is a sub-$$R$$-algebra, hence a sub-$$R$$-module, which contains all the $$x_i$$, and thus is all of $$W$$. So $$\phi$$ is surjective; by the isomorphism theorem, $$\phi$$ thus gives rise to an isomorphism $$\tilde{\phi} : R[X_1, \ldots, X_n] / \ker \phi \to W$$. It therefore suffices to show that $$\ker \phi$$ is finitely generated.

Choose values $$w_{i, j, k}$$ such that for all $$1 \leq i, j \leq n$$, we have $$x_i x_j = \sum\limits_{k = 1}^n w_{i, j, k} x_k$$. Define $$P_{i, j} = X_i X_j - \sum\limits_{k = 1}^n w_{i, j, k} X_k$$. Note that $$\phi(P_{i, j}) = 0$$.

Finally, we write $$1 \in W$$ as $$\sum\limits_{i = 1}^n \omega_i x_i$$ for $$\omega_1, \ldots, \omega_n \in R$$. Let $$Q = 1 - \sum\limits_{i = 1}^n \omega_i X_i$$; then $$\phi(Q) = 0$$.
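For the dual-number example, these generators are easy to write down explicitly (again my own example, not the book's): with $$x_1 = 1$$ and $$x_2 = \varepsilon$$, the structure constants give $$P_{1,1} = X_1^2 - X_1$$, $$P_{1,2} = X_1 X_2 - X_2$$, $$P_{2,2} = X_2^2$$, and $$Q = 1 - X_1$$. A sketch verifying that $$\phi$$ kills each $$P_{i,j}$$ and $$Q$$:

```python
# For the dual numbers (x1, x2) = (1, eps), check that phi kills each
# P_{i,j} = X_i X_j - sum_k w_{i,j,k} X_k and Q = 1 - sum_i omega_i X_i.
# My own example; not code from the book.

def mu(u, v):
    """Multiplication on W = R^2 (dual numbers)."""
    a, b = u
    c, d = v
    return (a * c, a * d + b * c)

x = [(1, 0), (0, 1)]   # x1 = 1 and x2 = eps, as vectors in R^2

# Structure constants: x_i x_j = sum_k w[i][j][k] x_k. Since the x_k happen
# to be the standard basis here, w[i][j] is just the vector x_i x_j.
w = [[mu(xi, xj) for xj in x] for xi in x]

# phi(P_{i,j}) = x_i x_j - sum_k w_{i,j,k} x_k should vanish in W.
for i in range(2):
    for j in range(2):
        lin = tuple(sum(w[i][j][k] * x[k][t] for k in range(2)) for t in range(2))
        assert mu(x[i], x[j]) == lin

# 1 = 1*x1 + 0*x2, so Q = 1 - X1 and phi(Q) = 1 - x1 = 0.
omega = (1, 0)
recon = tuple(sum(omega[i] * x[i][t] for i in range(2)) for t in range(2))
assert recon == (1, 0)
```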

Let $$I_1$$ be the ideal generated by the $$P_{i, j}$$ and by $$Q$$. Note that $$I_1 \subseteq \ker \phi$$.

We will use $$I_1$$ to "kill the non-linear terms". I claim the following lemma:

Lemma: Every $$P \in R[X_1, \ldots, X_n]$$ can be written as $$T + \sum\limits_{i = 1}^n p_i X_i$$ for some $$T \in I_1$$ and some $$p_1, \ldots, p_n \in R$$.

Proof: Let $$S$$ be the set of all $$P$$ which can be written in such a way. Note that for all $$1 \leq i \leq n$$, we have $$X_i = 0 + \sum\limits_{j = 1}^n \delta_{ij} X_j \in S$$. Therefore, to show that $$S = R[X_1, \ldots, X_n]$$, it suffices to show that $$S$$ is a sub-algebra.

We first note that $$S$$ is closed under scalar multiplication and addition, and that $$0 \in S$$; therefore, $$S$$ is a sub-module. We further note that $$1 = Q + \sum\limits_{i = 1}^n \omega_i X_i \in S$$. It therefore suffices to show that $$S$$ is closed under multiplication.

Further note that $$I_1 \subseteq S$$, since given $$T \in I_1$$, we have $$T = T + \sum\limits_{i = 1}^n 0 X_i \in S$$.

We see that

$$(T_1 + \sum\limits_{i = 1}^n p_{1, i} X_i)(T_2 + \sum\limits_{i = 1}^n p_{2, i} X_i) = T_1 (T_2 + \sum\limits_{i = 1}^n p_{2, i} X_i) + T_2 (\sum\limits_{i = 1}^n p_{1, i} X_i) + (\sum\limits_{i = 1}^n p_{1, i} X_i)(\sum\limits_{i = 1}^n p_{2, i} X_i)$$

The terms $$T_1 (T_2 + \sum\limits_{i = 1}^n p_{2, i} X_i)$$ and $$T_2 (\sum\limits_{i = 1}^n p_{1, i} X_i)$$ are in $$I_1$$ (which is an ideal containing $$T_1$$ and $$T_2$$), hence in $$S$$. Since $$S$$ is a sub-module, it suffices to show that $$(\sum\limits_{i = 1}^n p_{1, i} X_i)(\sum\limits_{i = 1}^n p_{2, i} X_i) \in S$$.

To do this, we apply double distribution:

$$(\sum\limits_{i = 1}^n p_{1, i} X_i)(\sum\limits_{i = 1}^n p_{2, i} X_i) = \sum\limits_{i = 1}^n \sum\limits_{j = 1}^n p_{1, i} p_{2, j} X_i X_j$$

Since $$S$$ is a sub-module, it suffices to show that $$X_i X_j \in S$$ for all $$i, j$$. This is straightforward, since $$X_i X_j = P_{i, j} + \sum\limits_{k = 1}^n w_{i, j, k} X_k$$. The lemma is thus proved. $$\square$$
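The lemma's proof is effectively an algorithm: repeatedly rewrite any quadratic factor $$X_i X_j$$ as $$\sum_k w_{i, j, k} X_k$$ and any constant term via $$Q$$, each time at the cost of an element of $$I_1$$, until only a linear form remains. A sketch for the dual-number case (my own code; `reduce_linear` is a hypothetical helper name):

```python
# A computational sketch of the lemma's rewriting procedure for the dual
# numbers (my own code; `reduce_linear` is a hypothetical helper name).
# Polynomials are dicts mapping exponent tuples to coefficients.

n = 2
w = [[(1, 0), (0, 1)], [(0, 1), (0, 0)]]   # x1*x1 = x1, x1*x2 = x2, x2*x2 = 0
omega = (1, 0)                              # 1 = 1*x1 + 0*x2

def reduce_linear(poly):
    """Return the linear part (p_1, ..., p_n) of a decomposition
    P = T + sum_i p_i X_i with T in I_1."""
    linear = [0] * n
    work = dict(poly)
    while work:
        exps, c = work.popitem()
        if c == 0:
            continue
        deg = sum(exps)
        if deg == 0:
            # Rewrite the constant term via Q: 1 = sum_i omega_i X_i mod I_1.
            for i in range(n):
                e = tuple(1 if t == i else 0 for t in range(n))
                work[e] = work.get(e, 0) + c * omega[i]
        elif deg == 1:
            linear[exps.index(1)] += c
        else:
            # Rewrite one quadratic factor: X_i X_j = sum_k w_{i,j,k} X_k mod I_1.
            i = next(t for t in range(n) if exps[t] > 0)
            rest = list(exps)
            rest[i] -= 1
            j = next(t for t in range(n) if rest[t] > 0)
            rest[j] -= 1
            for k in range(n):
                e = tuple(rest[t] + (1 if t == k else 0) for t in range(n))
                work[e] = work.get(e, 0) + c * w[i][j][k]
    return tuple(linear)

# Example: X1*X2 + X2^2 + 3 reduces to 3*X1 + 1*X2 modulo I_1.
assert reduce_linear({(1, 1): 1, (0, 2): 1, (0, 0): 3}) == (3, 1)
```

Each rewrite strictly lowers degree (or sends a constant to a linear form), so the loop terminates, mirroring the induction hidden in the subalgebra argument.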

Let us now back up. We have a surjective map of $$R$$-modules $$\kappa : R^n \to W$$ sending $$e_i$$ to $$x_i$$. Take $$w_1, \ldots, w_m \in R^n$$ which generate the submodule $$\ker \kappa$$ (these exist because $$W$$ is finitely presented as an $$R$$-module).

Take the $$R$$-module map $$\gamma : R^n \to R[X_1, \ldots, X_n]$$ sending $$e_i$$ to $$X_i$$. Let $$W_i = \gamma(w_i)$$. Note that $$\phi \circ \gamma = \kappa$$ since for all $$i$$, we have $$\phi(\gamma(e_i)) = \phi(X_i) = x_i = \kappa(e_i)$$ and $$\phi \circ \gamma$$ is $$R$$-linear. Therefore, we have $$\phi(W_i) = \phi(\gamma(w_i)) = \kappa(w_i) = 0$$, so $$W_i \in \ker \phi$$.

Now let $$I_2$$ be the ideal generated by the $$P_{i, j}$$, $$Q$$, and also the $$W_k$$. Note that $$\ker \kappa \subseteq \gamma^{-1}(I_2)$$, since $$\gamma(w_i) \in I_2$$ for all $$i$$. I claim that

Theorem: $$I_2 = \ker \phi$$. Therefore, $$\ker \phi$$ is finitely generated.

Clearly, $$I_2 \subseteq \ker \phi$$, since we've established that all the $$P_{i, j}$$, $$Q$$, and $$W_k$$ are in the kernel. So it suffices to show that $$\ker \phi \subseteq I_2$$.

Indeed, suppose $$P \in \ker \phi$$. Write $$P = T + \sum\limits_{i = 1}^n p_i X_i$$ where $$T \in I_1$$ and $$p_1, \ldots, p_n \in R$$ (this can be done by the lemma). Let $$x = (p_1, \ldots, p_n) \in R^n$$; then $$P = T + \gamma(x)$$.

Then we see that $$0 = \phi(P) = \phi(T) + \phi(\gamma(x)) = \kappa(x)$$, using $$\phi(T) = 0$$ (as $$T \in I_1 \subseteq \ker \phi$$) and $$\phi \circ \gamma = \kappa$$. So $$\kappa(x) = 0$$. Then $$x \in \ker \kappa \subseteq \gamma^{-1}(I_2)$$. So $$\gamma(x) \in I_2$$. Since $$T \in I_1 \subseteq I_2$$, we have $$T \in I_2$$. So $$P = T + \gamma(x) \in I_2$$. We have thus proved that $$I_2 = \ker(\phi)$$. $$\square$$
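In the dual-number case the theorem can even be checked with a computer algebra system (my own check, using SymPy's Gröbner basis routines; note that here $$W = R^2$$ is free as an $$R$$-module, so $$\ker \kappa = 0$$, there are no $$W_k$$, and $$I_2 = I_1$$):

```python
# A mechanical check of the theorem for the dual numbers, using SymPy
# (my own check; here W = R^2 is free, so ker kappa = 0 and I_2 = I_1).
from sympy import symbols, groebner

X1, X2 = symbols("X1 X2")

# Generators of I_1 for x1 = 1, x2 = eps: the P_{i,j} and Q.
gens = [X1**2 - X1, X1*X2 - X2, X2**2, 1 - X1]
G = groebner(gens, X1, X2)

# X1 - 1 and X2**2 are visibly in ker phi; they reduce to 0 modulo the ideal.
assert G.reduce(X1 - 1)[1] == 0
assert G.reduce(X2**2)[1] == 0

# X2 is not in ker phi (phi(X2) = eps != 0), and indeed does not reduce to 0.
assert G.reduce(X2)[1] != 0
```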

We have proved a rather general and convenient result which is substantially more powerful than needed here. In fact, for this problem, we only need the case where $$W$$ is free of finite rank as an $$R$$-module (so that $$\ker \kappa = 0$$), and we can dispense with $$I_2$$ altogether and show that $$I_1 = \ker \phi$$. But there's no reason not to go as general as possible.