Always remember that the names we pick for things are a matter of convenience, and there are not really any rules to follow. (But it helps when people pick predictable things!)
Here's a fast answer that is probably not historically accurate, but will probably put your mind at ease: in universal algebra, an algebra is just some set with various operations and rules acting on it. In that sense, groups, rings, rings of sets, etc. are all just generic "algebras". So you can see some people (at least) don't mind using "algebra" very flexibly.
As you have found out, the set versions of rings and algebras (rings of sets, algebras of sets) are a little different from the purely algebraic ones. Let's focus on the similarities for a moment, to see why the names are kind of parallel to each other:
Ring and ring-of-sets: Both involve a set closed under two operations.
Field and field-of-sets: Both involve a set closed under two operations, plus a unary operation (multiplicative inverse/complementation).
The case of a "Boolean algebra" is interesting, because it kind of lies at the intersection of these two notions. While they are usually presented lattice-theoretically, it is also important to remember that they really are honest-to-goodness rings, too (with symmetric difference as addition and intersection as multiplication).
The prefix "semi-" is used pretty consistently: it signals a structure that is not quite as strong as the usual version. This is true for both a semi-ring-of-sets and a semiring.
To find an analogue of $\sigma$-algebras in ring theory, we would have to think of a field with infinitary operations; however, I don't know if anything like that exists. I do have an easy example of a semiring with infinitary operations, and that is the semiring of ideals of a ring. (That is, the set is the set of ideals of a fixed ring, along with the operations of ideal addition and ideal multiplication.)
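To make the ideal example concrete, here is a tiny Python sketch (my own illustration, not any standard API; the names `ideal_add` and `ideal_mul` are made up). In $\Bbb Z$ every ideal is $n\Bbb Z$ for a unique $n\ge 0$, so we can represent an ideal by its nonnegative generator; ideal addition becomes gcd and ideal multiplication becomes ordinary multiplication.

```python
from math import gcd

# Represent the ideal nZ of the integers by its nonnegative generator n.
def ideal_add(a, b):
    """aZ + bZ = gcd(a, b)Z"""
    return gcd(a, b)

def ideal_mul(a, b):
    """(aZ)(bZ) = abZ"""
    return a * b

# Spot-check the semiring behavior on generators:
assert ideal_add(4, 6) == 2        # 4Z + 6Z = 2Z
assert ideal_mul(4, 6) == 24       # (4Z)(6Z) = 24Z
# Multiplication distributes over addition: a(b + c) = ab + ac
a, b, c = 6, 4, 10
assert ideal_mul(a, ideal_add(b, c)) == ideal_add(ideal_mul(a, b), ideal_mul(a, c))
```

Of course, in a general ring ideals are genuine sets and the operations are setwise; the generator encoding only works because $\Bbb Z$ is a principal ideal domain.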
For your example of a $\pi$-system, I think the best analogue is a semigroup, since there are no "inverses" provided by the complement. If you took a $\pi$-system and required it to be closed under complements, then I would be more inclined to analogize that to a group.
Let me take a sidestep. Trying to understand $\sigma$ algebra through Vitali sets and Lebesgue measure, etc. is, in my opinion, the wrong approach. Let me offer a much, much simpler example.
A probability space is a measure space $(X, \sigma, \mu)$ where $\mu(X)=1$.
What's the simplest thing you can model with probability theory? Well, the flip of a fair coin. Let's write out all the measure theoretic details.
What are the possible outcomes? Well you can get a heads, $H$, or a tail, $T$. Measure theoretically, this is the set $X$. That is, $X=\{H,T\}$.
What are the possible events? Well first of all nothing can happen or something can happen. This is a given in ANY probability space (or measure space). Also, you can get a heads or you can get a tail. This is the $\sigma$ algebra. That is, $\sigma=\{\emptyset, X, \{H\}, \{T\}\}$.
What is the (probability) measure? Well, what are the probabilities? First, what is the probability that NOTHING happens? Well, $0$. That is, $\mu(\emptyset)=0$. What is the probability that SOMETHING happens? Well, $1$. That is, $\mu(X)=1$ (this is what makes it a probability space). What is the probability of a heads, or of a tails? Well, $\frac12$ each. That is, $\mu(\{H\})=\mu(\{T\})=\frac12$.
This is just filling out all the measure theoretic details in a very simple situation.
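The fair-coin space above is small enough to write out as plain Python data. This is a minimal sketch of my own (the names `X`, `sigma`, `mu` mirror the text; nothing here is a standard probability API):

```python
from itertools import chain, combinations

# The fair-coin probability space (X, sigma, mu), written out explicitly.
X = frozenset({"H", "T"})

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return {frozenset(c)
            for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))}

sigma = powerset(X)  # for the fair coin, every subset is an event

mu = {
    frozenset(): 0.0,       # probability that nothing happens
    frozenset({"H"}): 0.5,  # heads
    frozenset({"T"}): 0.5,  # tails
    X: 1.0,                 # probability that something happens
}

# Spot-check the probability-space requirements.
assert set(mu) == sigma                                      # mu is defined exactly on sigma
assert mu[X] == 1.0                                          # total mass 1
assert mu[frozenset({"H"})] + mu[frozenset({"T"})] == mu[X]  # additivity on disjoint events
```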
Okay. In this situation EVERY subset of $X$ is measurable. So still, who gives a damn about $\sigma$ algebras? Why can't you just say "everything is measurable" (which is a perfectly fine $\sigma$ algebra for every set, including $\Bbb{R}$!!) and totally forget this business about $\sigma$ algebras? Well, let's very slightly modify the previous example.
Let's say you find a coin, and you're not sure if it's fair or not. Let's talk about the flip of a possibly unfair coin and fill in all the measure theoretic details.
Again, what are the possible outcomes? Again $X=\{H,T\}$.
Now here is where things get interesting. What is the $\sigma$ algebra for this coin flip (the events)? This is where the two situations differ. The $\sigma$ algebra is just $\{\emptyset, X\}$. Remember, a $\sigma$ algebra is the DOMAIN of the measure $\mu$, and we don't know whether the coin is fair. So we don't know what the measure (probability) of a heads is! That is, $\{H\}$ is not a measurable set!
What is the measure? $\mu(\emptyset)=0$ and $\mu(X)=1$. That is, something will happen for sure and nothing won't happen. What a remarkably uninformational measure.
Here is a situation where the $\sigma$ algebras ARE different because they describe two different situations. A good way to think about $\sigma$ algebras is "information", especially with probability spaces.
Let's connect this back to Vitali sets and Lebesgue measure.
AFAIK, I can have a sigma-algebra that contains a Vitali set. Anything can be a sigma-algebra, as long as it satisfies its 3 simple axioms. So if there is something you could add to clarify the quoted explanation, I'd be very grateful.
You are quite right. You CAN have a $\sigma$ algebra on $\Bbb{R}$ that contains every Vitali set. Just like you can have a $\sigma$ algebra on $\{H,T\}$ that contains $\{H\}$. But the point is that you might not be able to assign it a measure in a satisfactory way! Meaning that if you have a list of properties you want Lebesgue measure to satisfy, a Vitali set necessarily can't be assigned a measure consistent with them. So you "don't know" what measure to assign to it. It is not measurable with respect to a specified measure. You can create all sorts of measures where Vitali sets are measurable (for example, the measure that assigns $0$ to everything).
Let me motivate the axioms of a $\sigma$ algebra in terms of probability.
The first axiom is that $\emptyset, X \in \sigma$. Well you ALWAYS know the probability of nothing happening ($0$) or something happening ($1$).
The second axiom is closure under complements. Let me offer a stupid example. Again, consider a coin flip, with $X=\{H, T\}$. Pretend I tell you that the $\sigma$ algebra for this flip is $\{\emptyset, X, \{H\}\}$. That is, I know the probability of NOTHING happening, of SOMETHING happening, and of a heads, but I DON'T know the probability of a tails. You would rightly call me a moron. Because if you know the probability of a heads, you automatically know the probability of a tails! If you know the probability of something happening, you know the probability of it NOT happening (the complement)!
The last axiom is closure under countable unions. Let me give you another stupid example. Consider the roll of a die, so $X=\{1,2,3,4,5,6\}$. What if I were to tell you the $\sigma$ algebra for this is $\{\emptyset, X, \{1\}, \{2\}\}$? That is, I know the probability of rolling a $1$ and the probability of rolling a $2$, but I don't know the probability of rolling a $1$ or a $2$ (the union $\{1,2\}$). Again, you would justifiably call me an idiot (I hope the reason is clear). What happens when the sets are not disjoint, and what happens with uncountable unions, is a little messier, but I hope you can try to think of some examples.
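Both of these "stupid examples" can be checked mechanically. Here is a small Python sketch of my own (the helper name `is_sigma_algebra` is made up, not a library function); on a finite set, closure under countable unions reduces to closure under pairwise unions, which is what the check uses.

```python
from itertools import combinations

def is_sigma_algebra(X, F):
    """Check the sigma-algebra axioms for a family F over a FINITE set X.
    On a finite set, countable unions reduce to finite (pairwise) unions."""
    F = {frozenset(A) for A in F}
    X = frozenset(X)
    if frozenset() not in F or X not in F:
        return False                       # axiom 1: contains empty set and X
    if any(X - A not in F for A in F):
        return False                       # axiom 2: closed under complements
    if any(A | B not in F for A, B in combinations(F, 2)):
        return False                       # axiom 3: closed under (pairwise) unions
    return True

X = {"H", "T"}
assert is_sigma_algebra(X, [set(), X, {"H"}, {"T"}])  # full power set: OK
assert is_sigma_algebra(X, [set(), X])                # trivial sigma algebra: OK
assert not is_sigma_algebra(X, [set(), X, {"H"}])     # missing the complement {T}

D = {1, 2, 3, 4, 5, 6}
# Fails: {1} | {2} = {1,2} is missing (and so are the complements of {1}, {2}).
assert not is_sigma_algebra(D, [set(), D, {1}, {2}])
```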
I hope this cleared things up.
Best Answer
Closure properties can be formulated in terms of concepts from universal algebra. Let $X$ be the underlying set (in our examples, $X$ is a family of sets itself). Let $I$ be an index set, $(\kappa_i)_{i\in I}$ a family of cardinal numbers, and $(f_i)_{i\in I}$ a family of functions satisfying $f_i:X^{\kappa_i}\to X$ for all $i$. We say that $C\subseteq X$ is closed under $(f_i)_{i\in I}$ if for all $i\in I$ we have $f_i(x)\in C$ for all $x\in C^{\kappa_i}$. One can show that the family of subsets of $X$ closed under $(f_i)_{i\in I}$ forms a Moore collection.
Let's look at an example: Let $U$ be a set; the underlying set is $2^U$, and we check closure of a family $X\subseteq 2^U$. We let $I=\{s,c,u\}$, $\kappa_s=0$, $\kappa_c=1$, and $\kappa_u=\omega$. We identify constants and nullary functions, so we can let $f_s=U$. We let $f_c(A)=A^C$ for all $A\in 2^U$, and we let $f_u(A_0,A_1,\ldots)=\bigcup_n A_n$. That $X$ is closed under these three functions means simply that it contains $U$ and is closed under complements and countable unions: it is a $\sigma$-algebra.
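Since the families closed under such operations form a Moore collection, every seed family sits inside a smallest closed family. On a finite universe this Moore closure can be computed by simply iterating the operations to a fixed point. A hedged Python sketch (the name `generated_sigma_algebra` is my own, not a library function; for a finite universe, countable unions reduce to pairwise unions):

```python
def generated_sigma_algebra(U, seed):
    """Smallest family of subsets of the FINITE set U that contains `seed`
    and is closed under the constant f_s = U, complements, and unions.
    Computed as a fixed point, i.e. the Moore closure of `seed`."""
    U = frozenset(U)
    F = {frozenset(A) for A in seed} | {U, frozenset()}  # seed plus the constants
    while True:
        new = {U - A for A in F} | {A | B for A in F for B in F}
        if new <= F:          # nothing new produced: F is closed
            return F
        F |= new

F = generated_sigma_algebra({0, 1, 2}, [{0}])
# The seed {0} forces its complement {1,2}; with the constants that gives
# exactly four sets.
assert F == {frozenset(), frozenset({0}), frozenset({1, 2}), frozenset({0, 1, 2})}
```

The intersection of all closed families containing the seed would give the same answer; the fixed-point loop is just the computable version of that Moore-collection fact for finite $U$.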
Now, one cannot write down semi-algebras this way, since there is no unique decomposition of the complement into disjoint sets. If $\mathcal{S}$ is a semi-algebra and $A\in\mathcal{S}$, then there exist a number $n$ and disjoint sets $B_1,\ldots,B_n\in\mathcal{S}$ such that $A^C=B_1\cup\ldots\cup B_n$. If such a family were unique and depended only on $A$, we could express this property as closure under functions in the following way: we let $f_{c_1}(A)=B_1,\ldots, f_{c_n}(A)=B_n$, and for $m>n$ we let $f_{c_m}(A)=f_{c_n}(A)$. We use the last condition because we have no a priori bound on how many sets are needed. But the sets $B_1,\ldots,B_n$ are not functions of $A$, so this property cannot be viewed as a closure property.
Here is an explicit example (taken from Aliprantis & Border) showing that the intersection of semi-algebras may fail to be a semi-algebra: Let $X=\{0,1,2\}$, $\mathcal{S}_1=\big\{\emptyset, X,\{0\},\{1\},\{2\}\big\}$, $\mathcal{S}_2=\big\{\emptyset, X,\{0\},\{1,2\}\big\}$, and $A=\{0\}$. We have $\mathcal{S}_1\cap\mathcal{S}_2=\big\{\emptyset, X,\{0\}\big\}$, and $A^C=\{0\}^C=\{1,2\}$ is not the disjoint union of elements of this intersection.
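This counterexample can be verified mechanically. Below is a rough Python sketch, assuming one common definition of a semi-algebra on a finite set (contains $\emptyset$, closed under pairwise intersections, and the complement of each member is a finite disjoint union of members); the helper names `is_semialgebra` and `has_disjoint_cover` are made up for this illustration.

```python
from itertools import combinations

def has_disjoint_cover(target, S):
    """Can `target` be written as a disjoint union of members of S?"""
    if not target:
        return True
    return any(B and B <= target and has_disjoint_cover(target - B, S)
               for B in S)

def is_semialgebra(U, S):
    """Finite-set check of the semi-algebra properties assumed above."""
    U = frozenset(U)
    S = {frozenset(A) for A in S}
    if frozenset() not in S:
        return False                                        # contains empty set
    if any(A & B not in S for A, B in combinations(S, 2)):
        return False                                        # closed under intersections
    return all(has_disjoint_cover(U - A, S) for A in S)     # complements decompose

U = {0, 1, 2}
S1 = [set(), U, {0}, {1}, {2}]
S2 = [set(), U, {0}, {1, 2}]
assert is_semialgebra(U, S1)
assert is_semialgebra(U, S2)

inter = {frozenset(A) for A in S1} & {frozenset(B) for B in S2}
assert inter == {frozenset(), frozenset(U), frozenset({0})}
# The complement {1,2} of {0} has no disjoint cover in the intersection.
assert not is_semialgebra(U, inter)
```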