We can use downward induction on $\dim V_i$.
The case $\dim V_i=n$ is trivial (take $S=\{0\}$). Now suppose $\dim V_i<n$. Then
$\bigcup_iV_i\ne V$, so there is an $s\in V$ with $s\notin \bigcup_iV_i$.
Now apply the induction hypothesis to the subspaces $V_i\oplus\Bbb F s$.
This answer develops ideas from the comments of amsmath. The decisive step is the derivation of equation $(7)$ below.
Given a set of $n$ pairs of complex numbers
$$\{(x_1,y_1),(x_2,y_2),\dots,(x_n,y_n)\}$$ such that
$$
\forall\; i\ne j:\; x_i\ne x_j,
$$
define its interpolating polynomial as
$$
y(x)=\sum_{i}y_i\prod_{k\ne i}\frac{x-x_k}{x_i-x_k}.\tag1
$$
Differentiating this expression with respect to $x$ and evaluating the result at $x=x_i$, one obtains:
$$
y'_i\equiv y'(x_i)=\sum_{j\ne i}\frac1{x_i-x_j}
\left[y_i-y_j\prod_{k\ne i,j}\frac{x_i-x_k}{x_j-x_k}\right].\tag2
$$
or
$$
\frac{y'_i}{\prod\limits_{k\ne i}(x_i-x_k)}
=\sum_{j\ne i}\frac1{x_i-x_j}\left[\frac{y_i}{\prod\limits_{k\ne i}(x_i-x_k)}
+\frac{y_j}{\prod\limits_{k\ne j}(x_j-x_k)}\right].\tag{2a}
$$
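As a sanity check on $(2)$, one can compare it numerically against the derivative of the interpolating polynomial computed directly; a minimal sketch with numpy (the sample points and the helper name `dyi` are my own choices):

```python
import numpy as np

# Sample nodes and values (arbitrary choices for the check).
x = np.array([0.0, 0.7, 1.3, 2.1])
y = np.array([1.0, -2.0, 0.5, 3.0])
n = len(x)

def dyi(i):
    """Right-hand side of (2): y'(x_i) via the pairwise formula."""
    total = 0.0
    for j in range(n):
        if j == i:
            continue
        prod = 1.0
        for k in range(n):
            if k in (i, j):
                continue
            prod *= (x[i] - x[k]) / (x[j] - x[k])
        total += (y[i] - y[j] * prod) / (x[i] - x[j])
    return total

# Reference: the interpolating polynomial (1) and its derivative.
coeffs = np.polyfit(x, y, n - 1)
dcoeffs = np.polyder(coeffs)
for i in range(n):
    assert abs(dyi(i) - np.polyval(dcoeffs, x[i])) < 1e-9
```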
Introducing $f_i=\frac{y_i}{\prod\limits_{k\ne i}(x_i-x_k)}$, $f'_i=\frac{y'_i}{\prod\limits_{k\ne i}(x_i-x_k)}$, equation $(\text{2a})$ can be rewritten in matrix notation as:
$$
\begin{pmatrix}
\sum\limits_{i\ne1}\frac{1}{x_1-x_i}& \frac1{x_1-x_2}&\cdots&\frac1{x_1-x_n}\\
\frac1{x_2-x_1}& \sum\limits_{i\ne2}\frac{1}{x_2-x_i}&\cdots&\frac1{x_2-x_n}\\
\vdots& \vdots& \ddots&\vdots\\
\frac1{x_n-x_1}&\frac1{x_n-x_2}&\cdots&\sum\limits_{i\ne n}\frac{1}{x_n-x_i}\\
\end{pmatrix}
\begin{pmatrix}
\vphantom{\sum\limits_{i\ne1}\frac{1}{x_1-x_i}}f_1\\
\vphantom{\sum\limits_{i\ne1}\frac{1}{x_1-x_i}}f_2\\
\vdots\\
\vphantom{\sum\limits_{i\ne1}\frac{1}{x_1-x_i}}f_n\\
\end{pmatrix}=
\begin{pmatrix}
\vphantom{\sum\limits_{i\ne1}\frac{1}{x_1-x_i}}f'_1\\
\vphantom{\sum\limits_{i\ne1}\frac{1}{x_1-x_i}}f'_2\\
\vdots\\
\vphantom{\sum\limits_{i\ne1}\frac{1}{x_1-x_i}}f'_n\\
\end{pmatrix}.\tag3
$$
or
$$
A f=f'.\tag4
$$
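The matrix identity $(4)$ is easy to test numerically: build $A$ as in $(3)$, take $f$ and $f'$ from an arbitrary data set, and compare. A minimal sketch (sample data my own):

```python
import numpy as np

x = np.array([0.0, 0.7, 1.3, 2.1])
y = np.array([1.0, -2.0, 0.5, 3.0])
n = len(x)

# w_i = prod_{k != i} (x_i - x_k), the denominators of f_i and f'_i.
w = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i])
              for i in range(n)])

# Matrix A of (3): off-diagonal 1/(x_i - x_j), diagonal sum_{j != i} 1/(x_i - x_j).
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            A[i, j] = 1.0 / (x[i] - x[j])
    A[i, i] = sum(1.0 / (x[i] - x[j]) for j in range(n) if j != i)

# f and f' from the interpolating polynomial (1) and its derivative.
coeffs = np.polyfit(x, y, n - 1)
yprime = np.polyval(np.polyder(coeffs), x)
f, fprime = y / w, yprime / w

assert np.allclose(A @ f, fprime)   # equation (4): A f = f'
```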
Assume now a special form of the interpolating polynomial:
$$
y_l(x)=(\lambda x+\beta)^l,\quad
\beta,\lambda\in\mathbb C,\,\lambda\ne 0;\; l\in\mathbb Z,\,0\le l<n,
$$
with corresponding $f$-vector components
$$
f_{li}=\frac{(\lambda x_i+\beta)^l}{\prod\limits_{k\ne i}(x_i-x_k)},\quad
f'_{li}=\frac{\lambda l(\lambda x_i+\beta)^{l-1}}{\prod\limits_{k\ne i}(x_i-x_k)}.\tag5
$$
Substituting the vectors $f_l$ and $f'_l$ into $(4)$ and multiplying both sides of the resulting equation from the left by the diagonal matrix $D$ with elements
$$
D_{ii}=\lambda x_i+\beta,\tag6
$$
one obtains (note that, by $(5)$, $(\lambda x_i+\beta)\,f'_{li}=\lambda l\,f_{li}$):
$$
DA f_l=\lambda l f_l.\tag7
$$
From this one concludes that the $f_l$ are eigenvectors of the matrix $DA$ with corresponding eigenvalues $\epsilon_l=\lambda l$. Since $\lambda\ne0$, these are $n$ distinct eigenvalues, so we have found the complete spectrum of the matrix.
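This spectral claim can be checked numerically: the eigenvalues of $DA$ should be exactly $0,\lambda,2\lambda,\dots,(n-1)\lambda$. A minimal sketch (the sample nodes and the values of $\lambda$, $\beta$ are my own choices):

```python
import numpy as np

x = np.array([0.3, 0.9, 1.6, 2.4, 3.1])
n = len(x)
lam, beta = 1.7, -0.4               # arbitrary lambda != 0 and beta

# Matrix A of (3).
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            A[i, j] = 1.0 / (x[i] - x[j])
    A[i, i] = sum(1.0 / (x[i] - x[j]) for j in range(n) if j != i)

# Diagonal matrix D of (6).
D = np.diag(lam * x + beta)

# Spectrum of DA should be {lam * l : l = 0, ..., n-1}.
eigs = np.sort(np.linalg.eigvals(D @ A).real)
assert np.allclose(eigs, lam * np.arange(n))
```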
Since the determinant of the matrix $DA+Iz$ is the characteristic polynomial of $-DA$, one obtains:
$$\begin{align}
&\det (DA+I z)=\prod_{l=0}^{n-1} (z+\lambda l)\tag8\\
&\implies
\det (A+D^{-1}z)=\det D^{-1}\,\det (DA+Iz)
=\prod_{l=0}^{n-1}\frac{z+\lambda l}{\lambda x_{l+1}+\beta}.\tag9
\end{align}
$$
It remains only to observe that $A+D^{-1}z$ with $\lambda=1$, $\beta=-b$, $z=c$, $x_i=a_i$ is exactly your matrix $M$.
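With that substitution, $(9)$ predicts $\det M=\prod_{l=0}^{n-1}\frac{c+l}{a_{l+1}-b}$. A minimal numerical check (the sample values of $a_i$, $b$, $c$ are my own):

```python
import numpy as np

a = np.array([0.2, 1.1, 1.9, 3.0])   # the a_i (distinct)
b, c = -0.5, 0.8                     # arbitrary b and c
n = len(a)

# M = A + D^{-1} z with lambda = 1, beta = -b, z = c, x_i = a_i:
# off-diagonal 1/(a_i - a_j), diagonal sum_{j != i} 1/(a_i - a_j) + c/(a_i - b).
M = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            M[i, j] = 1.0 / (a[i] - a[j])
    M[i, i] = (sum(1.0 / (a[i] - a[j]) for j in range(n) if j != i)
               + c / (a[i] - b))

# Right-hand side of (9) under the same substitution.
predicted = np.prod([(c + l) / (a[l] - b) for l in range(n)])
assert np.isclose(np.linalg.det(M), predicted)
```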
Note that $B$ must have both primitive cube roots of unity as eigenvalues, as $B$ is conjugate to its inverse and has order $3$. Then note that $A$ and $B$ can have no common eigenvector, since if $Av = \lambda v$ and $Bv = \omega v,$ then $\lambda^{2} = 1$, so $ABAv = \lambda^{2}\omega v = \omega v,$ while $ABAv = \omega^{-1}v,$ a contradiction.
This means that there is no $1$-dimensional subspace which is left invariant by both $A$ and $B.$ But if $\mu$ is an eigenvalue of $C,$ and $A$ and $B$ commute with $C,$ then the $\mu$-eigenspace of $C$ is invariant under both $A$ and $B,$ so must be two-dimensional, and $C = \mu I.$
This is really an instance of Schur's Lemma from representation theory.
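The argument can be illustrated with an explicit realization. The matrices below are my own choice satisfying the stated relations ($A^2=I$, $B^3=I$, $ABA=B^{-1}$), not necessarily those of the question; the sketch computes the joint commutant of $A$ and $B$ and confirms it consists only of scalar matrices:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)                       # primitive cube root of unity
A = np.array([[0, 1], [1, 0]], dtype=complex)    # A^2 = I
B = np.diag([w, w.conj()])                       # B^3 = I and ABA = B^{-1}

assert np.allclose(A @ A, np.eye(2))
assert np.allclose(B @ B @ B, np.eye(2))
assert np.allclose(A @ B @ A, np.linalg.inv(B))

# Solve CA = AC and CB = BC for the 4 entries of C (column-stacked),
# using vec(XC - CX) = (I kron X - X^T kron I) vec(C).
def comm_op(X):
    return np.kron(np.eye(2), X) - np.kron(X.T, np.eye(2))

K = np.vstack([comm_op(A), comm_op(B)])
_, s, Vh = np.linalg.svd(K)
assert np.sum(s < 1e-10) == 1                    # commutant is 1-dimensional
C = Vh[-1].conj().reshape(2, 2)                  # the (unique) commuting direction
assert np.allclose(C, C[0, 0] * np.eye(2))       # and it is a scalar matrix
```

The last two assertions are exactly Schur's Lemma in action: since $A$ and $B$ admit no common invariant line, the representation is irreducible, and the only matrices commuting with both are the scalars.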