Could you give me an example of a ring A without multiplicative identity in which the only ideals are (0) and the whole ring A?
The ring $A$ may be either commutative or non-commutative.
From some of the comments above you seem a little confused. Since you said you are not familiar with proofs, I will try to write this out in a way that you can understand.
You are trying to prove the equivalence of the following statements:
$P:$ A commutative ring $R$ with $1$ is a field.
$Q:$ The only ideals of $R$ are $(0)$ and $(1)$.
Let us look at statement $Q$ closely. It is saying that the only ideals of $R$ are the zero ideal (which has only one element, zero) and $(1)$. What is $(1)$? Well, by the definition of an ideal, if you multiply anything in the ideal $(1)$ by anything in $R$, you get something back in $(1)$ again. Since $1$ itself lies in $(1)$, every element $r$ of $R$ can be written as $r = r \cdot 1$, which therefore lies in $(1)$. This means that $(1)$ must be the whole ring $R$.
Now suppose we want to prove $P \implies Q$. Let $I$ be an ideal of a ring $R$. Here "ring" means "commutative ring with a unit". Now here are some things you should know:
(1) $I$ is non-empty
(2) $I$ must at least contain the element $0$ (Why?)
(3) If $I$ has more than one element this means that at least one non-zero element $a$ of the ring must be in $I$. (Why?)
Therefore if $I$ contains only $0$, then $I = (0)$. If $I$ does not only contain zero, then by (3) above it contains at least one non-zero element $a$ of $R$. Now recall that we are trying to prove $P \implies Q$, so we are assuming $P$: the ring $R$ is a field. By the definition of a field, $a^{-1}$ exists in $R$.
But then, since $a \in I$ and ideals absorb multiplication by elements of $R$, the product $a^{-1} a = 1$ must be in $I$. Therefore $1 \in I$, so that $I$ must be the whole ring $R$. Hence $I = (1)$. This establishes $P \implies Q$.
Now for the converse:
To show $Q \implies P$ it suffices to show that every non-zero element $a \in R$ has a multiplicative inverse. So let $a$ be a non-zero element of $R$. The trick now is to consider the principal ideal generated by $a$ (which we denote by $(a)$).
Now by assumption of $Q$, since the only ideals of $R$ are $(0)$ and $(1)$, the ideal $(a)$ must be either $(0)$ or $(1)$. But $(a)$ cannot be $(0)$, since $a \neq 0$. So $(a) = (1)$. This means that $1$ is a multiple of $a$, i.e. there exists $c \in R$ such that
$$ac = 1.$$
However this is precisely saying that $a$ has a multiplicative inverse. Since $a$ was an arbitrary non-zero element of $R$, we are done. Q.E.D.
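If it helps to see this equivalence in action, here is a minimal Python sketch (the helper names are mine, purely for illustration) that checks it in the rings $\mathbb{Z}/n\mathbb{Z}$. Enumerating the principal ideals suffices, since every ideal of $\mathbb{Z}/n\mathbb{Z}$ is the image of an ideal of the PID $\mathbb{Z}$ and hence principal.

```python
# Sanity check of "R is a field <=> the only ideals are (0) and (1)"
# in the rings Z/nZ. Helper names are illustrative.

def principal_ideal(a, n):
    """The ideal (a) in Z/nZ: all multiples of a modulo n."""
    return frozenset(a * r % n for r in range(n))

def is_field(n):
    """Z/nZ is a field iff every nonzero element has an inverse."""
    return all(any(a * b % n == 1 for b in range(n)) for a in range(1, n))

for n in range(2, 30):
    ideals = {principal_ideal(a, n) for a in range(n)}
    only_trivial = ideals == {frozenset({0}), frozenset(range(n))}
    assert is_field(n) == only_trivial
```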
Does this help you? I can discuss more if you need help.
These rings are called simple rings. As Torsten says in the comments, a nice class of examples which are not division rings is the matrix algebras $M_n(k)$ for $n \ge 2$. More generally, by the Artin–Wedderburn theorem the artinian simple rings are precisely the rings of the form $M_n(D)$ where $D$ is a division algebra. If the center of $D$ is $k$, then these are central simple algebras over $k$ and are classified by the Brauer group of $k$.
To prove that $M_n(D)$ is simple it's cleaner to prove a more general result:
Claim: The two-sided ideals of $M_n(R)$, for any ring $R$, are of the form $M_n(I)$ where $I$ is a two-sided ideal of $R$.
Corollary: $M_n(R)$ is simple iff $R$ is simple.
Proof. Let $X \in M_n(R)$ be any element. The ideal generated by $X$ consists of linear combinations of elements of the form
$$e_{ij} X e_{kl}$$
where $1 \le i, j, k, l \le n$; this matrix has at most one nonzero entry, namely the $(i, l)$ entry, whose value is $X_{jk}$. So by picking $i, j, k, l$ appropriately we can move any particular entry of $X$ into any position; in other words, the ideal generated by $X$ is $M_n(I)$ where $I$ is the ideal of $R$ generated by the entries of $X$. The desired result follows upon taking sums of ideals, which gives more generally that the ideal generated by any collection of matrices is $M_n(I)$ where $I$ is the ideal of $R$ generated by their entries. $\Box$
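For a concrete check of the key computation, here is a small NumPy sketch (the random integer matrix is just an arbitrary test case) verifying that $e_{ij} X e_{kl}$ has the single entry $X_{jk}$ in position $(i, l)$:

```python
# Verify e_ij @ X @ e_kl = X[j, k] * e_il for a sample integer matrix X.
import numpy as np

n = 4
rng = np.random.default_rng(0)
X = rng.integers(-5, 5, size=(n, n))

def e(i, j):
    """Matrix unit: 1 in position (i, j), zeros elsewhere."""
    m = np.zeros((n, n), dtype=int)
    m[i, j] = 1
    return m

for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                assert np.array_equal(e(i, j) @ X @ e(k, l), X[j, k] * e(i, l))
```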
I said above that the artinian simple rings are the rings of the form $M_n(D)$, so let's close with a non-artinian example. Maybe the most famous non-artinian simple rings are the Weyl algebras, of which the one-variable version can be written
$$k\langle x, \partial \rangle/(\partial x - x \partial - 1)$$
where $k$ is a field of characteristic zero; this can be thought of as the algebra of differential operators on $k[x]$, with $x$ acting by multiplication and $\partial$ acting by differentiation with respect to $x$.
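To see the operator realization concretely, here is a short SymPy sketch (the generic cubic stands in for an arbitrary element of $k[x]$) verifying that "multiply by $x$" and "differentiate" satisfy the defining relation $\partial x - x \partial = 1$:

```python
# Check [d, x] = 1 as operators on polynomials, using SymPy.
import sympy as sp

x, a0, a1, a2, a3 = sp.symbols('x a0 a1 a2 a3')
p = a0 + a1*x + a2*x**2 + a3*x**3   # a generic cubic stands in for k[x]

d = lambda q: sp.diff(q, x)          # the operator "differentiate"
mx = lambda q: x * q                 # the operator "multiply by x"

# d(x*p) - x*d(p) should equal p for every polynomial p.
assert sp.expand(d(mx(p)) - mx(d(p)) - p) == 0
```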
Claim: With the above hypotheses, the Weyl algebra is simple.
Proof. Let $f = \sum f_{ij} x^i \partial^j$ be an element of the Weyl algebra; we will show directly that if $f$ is nonzero then the ideal it generates is the entire Weyl algebra. First, observe that the monomials $x^i \partial^j$ form a basis of the Weyl algebra; there are various ways to prove this, and it is a PBW-type result. So $f = 0$ iff $f_{ij} = 0$ for all $i, j$.
The defining relation of the Weyl algebra can be written $[\partial, x] = 1$, where $[a, b] = ab - ba$ is the commutator bracket. Now, the commutator bracket $[\partial, -]$ is always a derivation, so it follows that
$$[\partial, x^i] = \partial x^i - x^i \partial = ix^{i-1}$$
while $[\partial, \partial] = 0$ and hence $[\partial, \partial^j] = 0$. This gives
$$[\partial, x^i \partial^j] = ix^{i-1} \partial^j$$
and hence
$$[\partial, f] = \sum f_{ij} ix^{i-1} \partial^j.$$
That is, taking the commutator with $\partial$ has the effect of differentiating the polynomial part of $f$ (and note that $[\partial, f] = \partial f - f \partial$ lies in the ideal generated by $f$). So we can repeatedly apply $[\partial, -]$ to $f$ until all of its polynomial parts vanish except the contribution of the highest degree; hence we can assume WLOG that $f$ in fact has the form
$$f = \sum f_j \partial^j$$
where at least one $f_j$ is nonzero (this is where we need both the assumption that the $x^i \partial^j$ form a basis and that $k$ has characteristic zero). At this point we can instead apply the derivation $[-, x]$, which satisfies $[\partial, x] = 1$ and hence by induction
$$[\partial^j, x] = j\partial^{j-1}$$
which gives
$$[f, x] = \sum f_j j \partial^{j-1}.$$
So we can again repeatedly "differentiate" until $f$ is a nonzero constant, which clearly generates the entire Weyl algebra. $\Box$
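The reduction in this proof is mechanical enough to run as code. Here is a minimal Python sketch (the sparse dictionary encoding and helper names are my own) that stores $f = \sum f_{ij} x^i \partial^j$ as a dictionary on the basis monomials and applies the two commutator formulas until only a nonzero constant is left:

```python
# Reduce a nonzero element of the Weyl algebra to a nonzero constant by
# repeated commutators, following the proof. Elements are stored as
# {(i, j): coefficient} for the basis monomials x^i d^j; integer
# coefficients stand in for a field of characteristic zero.

def bracket_d(f):
    """[d, f] = sum of f_ij * i * x^(i-1) d^j: differentiates the x-part."""
    out = {}
    for (i, j), c in f.items():
        if i > 0:
            out[(i - 1, j)] = out.get((i - 1, j), 0) + i * c
    return {m: c for m, c in out.items() if c}

def bracket_x(f):
    """[f, x] = sum of f_ij * j * x^i d^(j-1): differentiates the d-part."""
    out = {}
    for (i, j), c in f.items():
        if j > 0:
            out[(i, j - 1)] = out.get((i, j - 1), 0) + j * c
    return {m: c for m, c in out.items() if c}

f = {(2, 1): 2, (1, 2): 3, (0, 0): 5}   # f = 2x^2 d + 3x d^2 + 5
while set(f) != {(0, 0)}:
    # In characteristic zero neither step can kill a nonzero f outright.
    f = bracket_d(f) if any(i > 0 for i, _ in f) else bracket_x(f)
print(f)   # {(0, 0): 4}: a nonzero constant, which generates everything
```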
Best Answer
Proffering the following. Let $V$ be the space of finitely supported sequences of real numbers (any field would work the same) with basis $e_i$, $i\in\Bbb{N}$. Let $R$ be the set of linear transformations $T$ of $V$ such that $T(e_i)=0$ for all but finitely many $i$. So basically $R$ consists of $\infty\times\infty$ matrices with only finitely many non-zero rows and columns. Put yet another way, $R$ is the linear span of the transformations $T_{i,j}$ determined by $T_{i,j}(e_k)=\delta_{ik}e_j$ (imagine a matrix with a single entry equal to one and the rest equal to zero).
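Here is a minimal Python sketch of this example (the sparse dictionary encoding is just one convenient way to store finitely supported matrices): the $T_{i,j}$ multiply like matrix units (up to the choice of row/column convention), and no element of $R$ can act as a two-sided identity, because its support misses almost every index.

```python
# Finitely supported infinite matrices, stored as {(row, col): coefficient}.

def mul(A, B):
    """Sparse matrix product; the result is again finitely supported."""
    out = {}
    for (i, k), a in A.items():
        for (kk, j), b in B.items():
            if k == kk:
                out[(i, j)] = out.get((i, j), 0) + a * b
    return {p: c for p, c in out.items() if c}

def T(i, j):
    """A single-entry matrix: 1 in position (i, j), zeros elsewhere."""
    return {(i, j): 1}

# Matrix units compose as T(i,j) * T(k,l) = delta_{jk} T(i,l):
assert mul(T(0, 1), T(1, 2)) == T(0, 2)
assert mul(T(0, 1), T(2, 3)) == {}

# No two-sided identity: any candidate E misses some index n, and then
# E * T(n, n) = 0 instead of T(n, n).
E = {(i, j): 1 for i in range(5) for j in range(5)}   # arbitrary candidate
n = 1 + max(max(i, j) for i, j in E)                  # an index E misses
assert mul(E, T(n, n)) == {}
```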