As mentioned in a comment, $R_0$ being a subring of $R$ follows from the grading axiom $R_0R_0\subseteq R_{0+0}=R_0$ (together with the fact that $R_0$ is by definition a subgroup under addition).
As for your question on direct sums, it might be helpful to look at the finite analogue.
Given two abelian groups $R$ and $S$, the direct sum $R\oplus S$ will coincide with (as in, it is isomorphic to) the direct product $R\times S$.
The latter is the Cartesian product of abelian groups, so the elements of $R\times S$ are of the form $(r,s)$ for $r\in R$ and $s\in S$, with addition and multiplication done component-wise.
With this notation, saying that an element of $R\times S$ is "in $R$" means that the $S$ component is zero. In other words, $R$ can be identified with the subgroup $\{(r,0)\mid r\in R\}$ of $R\times S$, and similarly for $S$.
Now, saying that an element of $R\times S$ can be written uniquely as a sum of an element of $R$ with an element of $S$ is to say that $(x,y)=(r,0)+(0,s)$ forces $x=r$ and $y=s$, which follows by computing the sum on the right-hand side.
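Indeed, computing the right-hand side gives
$$(x,y)=(r,0)+(0,s)=(r+0,\,0+s)=(r,s),$$
which forces $x=r$ and $y=s$.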
In the infinite setting, this fails for products precisely because infinite sums don't make sense in abelian groups. For example, you can't decompose $(1,1,1,\dots)\in\prod_{i=1}^\infty\mathbb Z$ into a sum in which each summand lies in one of the $\mathbb Z$'s, because such a sum would have to be infinite.
If we restrict our attention to the subgroup of $\prod_{i=1}^\infty\mathbb Z$ consisting of those $(x_1,x_2,x_3,\dots)$ in which only finitely many of the $x_i$ are nonzero, the decomposition works again: letting $I$ be the (finite) set of indices $i$ for which $x_i\neq0$, we have $(x_1,x_2,x_3,\dots)=\sum_{i\in I}\vec x_i$, where $\vec x_i=(0,\dots,0,x_i,0,\dots)$ is nonzero only in the $i$th index.
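For instance,
$$(3,0,5,0,0,\dots)=(3,0,0,0,\dots)+(0,0,5,0,\dots),$$
a finite sum with one summand from each index where the entry is nonzero, here $I=\{1,3\}$.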
In general, the subgroup of $\prod_iR_i$ consisting of those sequences in which only finitely many terms are nonzero is precisely the direct sum, denoted $\bigoplus_iR_i$. Intuitively, this is the abelian group you get by allowing elements of the different rings to be added together "freely": if $r_i\in R_i$ and $r_j\in R_j$, then $r_i+r_j$ only really makes sense if $i=j$; if $i\neq j$, we don't know how to combine them, so we simply declare their sum to be a new formal element $r_i+r_j$. In this way, it is essentially by definition that elements of $\bigoplus_iR_i$ are uniquely given by finite sums of elements of the individual $R_i$'s.
In fact, the disjointness of direct summands you know from linear algebra is still valid for direct sums of abelian groups: if you identify $R_j$ with the subgroup of $\bigoplus_iR_i$ consisting of those elements whose only nonzero term is at index $j$, then $R_i\cap R_j=\{0\}$ whenever $i\neq j$, since an element of the intersection is zero at every index other than $i$ and also at every index other than $j$, hence zero everywhere.
In response to your question about the $M_i$ being generated by those elements ($\ast$): we already have by assumption that $M$ as a whole is generated as an $R$-module by $m_1,\dots,m_s$. This gives the first decomposition you wrote down (before "Let $d = \min\dots$"). Now, since $R$ is finitely generated as an $R_0$-algebra, say by $r_1,\dots,r_p$, which can be taken to be homogeneous as you mentioned, any $r$ appearing in that first decomposition can be written as an $R_0$-linear combination of monomials $r_1^{\alpha_1}\cdots r_p^{\alpha_p}$. Substituting this into the first decomposition should give you what you want.
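Spelled out (using the question's notation as I understand it, for a generic element $x\in M$): since $M=Rm_1+\dots+Rm_s$, we can write
$$x=\sum_{k=1}^s r^{(k)}m_k,\qquad r^{(k)}\in R,$$
and expanding each $r^{(k)}$ as an $R_0$-linear combination of monomials in the $r_j$'s gives
$$x=\sum_{k=1}^s\sum_{\alpha}c^{(k)}_{\alpha}\,r_1^{\alpha_1}\cdots r_p^{\alpha_p}\,m_k,\qquad c^{(k)}_{\alpha}\in R_0,$$
so $M$ is generated as an $R_0$-module by the elements $r_1^{\alpha_1}\cdots r_p^{\alpha_p}m_k$.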
Basically the idea is that if $a$ has some non-zero part in a degree other than $0$, then taking powers will shift this away from degree $0$ and so a non-zero polynomial cannot have $a$ as a zero. More precisely:
Let $0\ne g\in R_0[X]$ be a polynomial with coefficients in $R_0$, say $g=b_nX^n+b_{n-1}X^{n-1}+\dots+b_0$ with $b_n\ne0$. Let $a\in R$ and let $a_i$ denote the $i$-th homogeneous component of $a$ (so that $a=\sum_{i\in \Bbb Z}a_i$). Assume that $a\notin R_0$.

If $a$ has a non-zero component in positive degree, let $m$ be the largest integer for which $a_m\ne0$ (so $m>0$); if not, let $m$ be the smallest integer with $a_m\ne0$ (so $m<0$, since $a\notin R_0$). Now plug $a$ into $g$ and write out all the homogeneous terms. In the expression for $g(a)$ there is exactly one term of degree $nm$, namely $b_na_m^n$: every other non-zero $a_i$ has $i<m$ (resp. $i>m$ in the case $m<0$), so any other product of at most $n$ homogeneous components has degree strictly smaller (resp. strictly larger) than $nm$.

By assumption $R$ is an integral domain and $b_n,a_m\ne0$, so $b_na_m^n\ne0$. In particular $g(a)$ cannot be zero, as its component in degree $nm$ is non-zero. We have shown that no $a\in R\setminus R_0$ satisfies a non-trivial polynomial equation over $R_0$, i.e. $R_0$ is algebraically closed in $R$.
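For a concrete illustration, take $R=k[t]$ with its usual grading (so $R_0=k$), $a=1+t$ and $g=X^2-X$. Here $n=2$, $b_2=1$ and $m=1$, and
$$g(a)=(1+t)^2-(1+t)=t+t^2,$$
whose component in degree $nm=2$ is exactly $b_2a_1^2=t^2\ne0$, so $g(a)\ne0$, as the argument predicts.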
In fact, the integral-domain hypothesis can be weakened: it suffices that $R$ is reduced (so that $a_m\ne0$ implies $a_m^n\ne0$) and torsion-free as an $R_0$-module (so that $b_na_m^n\ne0$).