The isomorphism $i$ sends each element $a\in R$ to the set of those homomorphisms $h:R\to\mathbb Z/2$ that send $a$ to $1$. In other "words",
$$
h\in i(a)\iff h(a)=1.
$$
As for how one might have guessed something like this, I think the most plausible approach is to begin with an $R$ that you already know to be a power set Boolean algebra, say $R=\mathcal P(X)$ for some finite set $X$, and to ask how you could recover $X$ if you were just given $R$ as an abstract Boolean algebra. The easy way to do the recovery is that the elements of $X$ correspond to the minimal non-zero elements (also called the atoms) of $R$. In other words: The smallest element of $R$ is the empty subset of $X$, and just above this are the singletons $\{x\}$, one for each $x\in X$. Having recovered $X$ as (in canonical bijection with) the set of atoms of $R$, you get an isomorphism $R\cong\mathcal P(\text{Atoms}(R))$.
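As a sanity check, here is a small Python sketch of that recovery (the helper name `is_atom` and the choice $X=\{1,2,3\}$ are mine, for illustration): it models $R=\mathcal P(X)$ with symmetric difference as addition and intersection as multiplication, and picks out the atoms.

```python
from itertools import combinations

# Illustrative model: R = P(X) for a small finite X, with symmetric
# difference as addition and intersection as multiplication.
X = {1, 2, 3}
R = [frozenset(s) for r in range(len(X) + 1)
     for s in combinations(sorted(X), r)]

def is_atom(a):
    # a is an atom iff a != 0 and the only b <= a (i.e. b & a == b)
    # are 0 and a itself.
    return a != frozenset() and all(
        b in (frozenset(), a) for b in R if b & a == b)

atoms = [a for a in R if is_atom(a)]
# The atoms are exactly the singletons {x}, recovering X.
```

Running this on $X=\{1,2,3\}$ returns the three singletons, in canonical bijection with $X$.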
That argument presupposed that $R$ is already known to be a power set algebra, but once you have this idea, $R\cong\mathcal P(\text{Atoms}(R))$, you could verify that it works for any finite Boolean algebra.
In the question, in place of $\text{Atoms}(R)$, you had the set of Boolean homomorphisms $R\to\mathbb Z/2$. For finite Boolean algebras, those homomorphisms are in canonical bijection with the atoms (an atom $a$ corresponds to the homomorphism that sends everything $\geq a$ to $1$ and everything else to $0$). This switch from atoms to homomorphisms is hard to motivate in the case of finite Boolean algebras. The real reason for using homomorphisms is that they still work (with some extra caution) in the case of infinite Boolean algebras; that's Stone duality.
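That bijection can be verified by brute force on a tiny example. The sketch below (with $X=\{1,2\}$, chosen only to keep the search space small, and helper names of my own) enumerates every map $\mathcal P(X)\to\mathbb Z/2$, filters for ring homomorphisms, and counts them against the atoms:

```python
from itertools import combinations, product

# Brute force on X = {1, 2}: the ring homomorphisms P(X) -> Z/2
# turn out to be exactly the maps h_a(b) = 1 iff atom a <= b.
X = frozenset({1, 2})
R = [frozenset(s) for r in range(3) for s in combinations(sorted(X), r)]

def is_hom(h):
    # Check h(1) = 1, additivity (symmetric difference -> XOR),
    # and multiplicativity (intersection -> product).
    return (h[X] == 1
            and all(h[b ^ c] == (h[b] + h[c]) % 2 for b in R for c in R)
            and all(h[b & c] == h[b] * h[c] for b in R for c in R))

homs = [h for vals in product([0, 1], repeat=len(R))
        for h in [dict(zip(R, vals))] if is_hom(h)]
atoms = [frozenset({x}) for x in X]
assert len(homs) == len(atoms)   # one homomorphism per atom
```

Here there are exactly two homomorphisms, one for each singleton atom, as the correspondence predicts.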
This answer assumes that you already know that Boolean rings are commutative; that proof appears elsewhere on the site and is well known.
I'm also assuming that you take the ring to have a multiplicative identity, which can likewise be proven in various ways, but is usually assumed.
I'll also try to write this so that you can quit reading at any point and pursue the idea on your own, once you've decided you've read enough for a hint. Good luck.
I think the easiest angle to pursue is to use idempotents. In a Boolean ring, every element is idempotent, that is, $x^2=x$.
The thing to notice is that if $e$ is idempotent,
- $eR$ is a commutative ring with identity $e$;
- $1-e$ is also an idempotent; and
- $R=eR\oplus (1-e)R$.
None of these involves anything more than basics of ring theory that you mentioned.
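These three facts can be checked on a concrete example, say $R=(\mathbb Z/2)^3$ with componentwise operations and the idempotent $e=(1,1,0)$ (both choices are mine, purely for illustration):

```python
from itertools import product

# Concrete finite Boolean ring: R = (Z/2)^3 with componentwise
# operations, and a nontrivial idempotent e = (1, 1, 0).
R = list(product([0, 1], repeat=3))
one = (1, 1, 1)

def add(x, y): return tuple((a + b) % 2 for a, b in zip(x, y))
def mul(x, y): return tuple(a * b for a, b in zip(x, y))

e = (1, 1, 0)
f = add(one, e)               # 1 - e = 1 + e in characteristic 2
eR = {mul(e, x) for x in R}
fR = {mul(f, x) for x in R}

# e is the identity of eR, every x splits as ex + (1-e)x,
# and the two pieces meet only in 0.
assert all(mul(e, y) == y for y in eR)
assert all(add(mul(e, x), mul(f, x)) == x for x in R)
assert eR & fR == {(0, 0, 0)}
```

Here $eR$ has four elements and $(1-e)R$ has two, so the splitting already exhibits $R$ as a direct sum of strictly smaller Boolean rings.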
Now, of course you see that the idempotents $\{0,1\}$ give trivial splittings of $R$ into $R\oplus\{0\}$ or $\{0\}\oplus R$, so the interesting cases are when you have an idempotent $e\notin\{0,1\}$.
Take your finite Boolean ring and start splitting it into smaller pieces this way. Clearly, if $e\notin\{0,1\}$, the pieces $eR$ and $(1-e)R$ have strictly fewer elements than $R$. Each piece you get is another finite Boolean ring (obvious, right?).
This splitting can't go on forever. The question becomes: when do I hit bottom? Obviously, if your ring had two elements (just the additive and multiplicative identities) you would be down to $\mathbb Z/2\mathbb Z$ and done.
So what if your ring has more than two elements? Then it has an element $x$ that is neither the additive nor the multiplicative identity, and $x^2=x$, so you can again split using the idempotents $x$ and $1-x$ into strictly smaller pieces. This establishes that you can't split any further precisely when the piece is a copy of $\mathbb Z/2\mathbb Z$.
So there you have it: you can refine $R$ over and over again into smaller pieces until it is a direct sum of copies of $\mathbb Z/2\mathbb Z$.
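The whole refinement can be sketched as a short recursion. In the sketch below the helper names are mine, and $R=(\mathbb Z/2)^3$ (represented as a set of tuples with componentwise operations) is just a convenient test ring:

```python
from itertools import product

def add(x, y): return tuple((a + b) % 2 for a, b in zip(x, y))
def mul(x, y): return tuple(a * b for a, b in zip(x, y))

def split(R):
    """Recursively split a finite Boolean ring, given as a set of
    tuples closed under + and *, into two-element pieces (copies
    of Z/2)."""
    if len(R) == 2:
        return [R]
    zero = (0,) * len(next(iter(R)))
    # The identity of this piece is the u acting trivially on it.
    one = next(u for u in R if all(mul(u, x) == x for x in R))
    e = next(x for x in R if x not in (zero, one))  # nontrivial idempotent
    f = add(one, e)                                 # 1 - e = 1 + e
    return split({mul(e, x) for x in R}) + split({mul(f, x) for x in R})

pieces = split(set(product([0, 1], repeat=3)))
assert len(pieces) == 3 and all(len(p) == 2 for p in pieces)
```

As expected, the 8-element ring bottoms out in three two-element pieces, matching $R\cong(\mathbb Z/2\mathbb Z)^3$.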
Once you know that $R$ has characteristic $2$, it will be a vector space (an algebra, really) over the field $F$ with $2$ elements, of dimension $d$, say. So $R$ has $2^d$ elements.
For each $a \in R$, consider the linear map $T_a : R \to R$ given by $u \mapsto au$. Note that $a \mapsto T_a$ gives a morphism of rings from $R$ to $\operatorname{End}(R)$ (the endomorphisms of $R$ as a vector space over $F$). This morphism is injective: if $T_a = 0$, then $a = a^2 = T_a(a) = 0$, so its kernel is $0$. If $S$ is its image, then $R$ is isomorphic to $S$.
Since $S$ is commutative, and all of its elements are roots of $x^2-x$, the elements of $S$ can be simultaneously diagonalized, with $0$ and $1$ on the diagonal. There are $2^d$ matrices like this, and $R \cong S$ has $2^d$ elements, so $S$ is the whole ring of diagonal matrices. Choose the (canonical) basis $T_{a_{1}}, \dots, T_{a_{d}}$ such that all $T_{a_{i}}$ have only one $1$ on the diagonal (which corresponds to $T_{a_i}(a_i) = a_i^2=a_i$).
Then with respect to the basis $a_1, \dots, a_d$, the ring $R$ has the form you require, because $a_i^2 = a_i$ and $a_i a_j = 0$ for $i \ne j$.
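For a concrete instance, here is a sketch with $d=2$ and the standard basis of $R=F^2$ (my choices, for illustration), checking that every $T_a$ is diagonal in that basis and that the basis elements are orthogonal idempotents:

```python
from itertools import product

# R = F^2 with F = Z/2 and componentwise multiplication; in the
# basis a_1 = (1,0), a_2 = (0,1), each T_a : u -> a*u is diagonal.
R = list(product([0, 1], repeat=2))
basis = [(1, 0), (0, 1)]

def mul(x, y): return tuple(a * b for a, b in zip(x, y))

def matrix(a):
    # Column j of T_a in the chosen basis is T_a(a_j) = a * a_j.
    cols = [mul(a, b) for b in basis]
    return [[cols[j][i] for j in range(2)] for i in range(2)]

# Every T_a is diagonal, and the basis consists of orthogonal
# idempotents: a_i^2 = a_i and a_1 * a_2 = 0.
assert all(matrix(a)[0][1] == matrix(a)[1][0] == 0 for a in R)
assert all(mul(b, b) == b for b in basis)
assert mul(basis[0], basis[1]) == (0, 0)
```

The diagonal entries of $T_a$ are just the coordinates of $a$, which is exactly the simultaneous diagonalization described above.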
I guess this is just a version of Stone's representation theorem for this finite case.