Eureka! I figured it out! It's quite silly because in fact I was so close.
So yes, my hypothesis is true.
Actually, since $W_1=\{c_1x_1+c_2x_2+\cdots+c_kx_k \mid c_i\in\mathbb{C}\}$ is obviously a subspace of $W_0=\mathbb{C}^n$, and we can easily deduce (by the same approach as in the first part of my proof) that $Bx\in W_1$ whenever $x\in W_1$, we obtain that $B$ is $W_1$-invariant; in other words, $B$ restricts to a linear transformation on $W_1$, so $B$ has an eigenvalue with an eigenvector in $W_1$.
Now, pick one of its eigenvalues, say $\lambda_B$. As you can see, it's the first part of my proof all over again: for any $C\in\mathscr{F}$, we get a new subspace $W_2\subseteq W_1\subseteq W_0$ such that $C$ is $W_2$-invariant, and continuing in this way we obtain $W_2\supseteq W_3\supseteq W_4\supseteq\cdots$
Note that $W_0$ has finite dimension $n$, and at every step $1\le\dim W_{i+1}\le\dim W_i$, so the chain must stabilize at a subspace $$W_\infty=\bigcap_i{W_i}$$ with $\dim W_\infty\ge1$, such that every nonzero $x\in W_\infty$ is a common eigenvector of all the matrices in $\mathscr{F}$.
As inspired by Matt.E's answer to that question, we know that for every $A\in\mathscr{F}$, if $A$ has $k$ distinct eigenvalues, then the matrices in $\mathscr{F}$ have at least $k$ common eigenvectors, simply because we can run the same process for each of those eigenvalues.
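The argument can be sanity-checked numerically. Here is a minimal sketch with NumPy, using a hypothetical commuting pair $A$, $B$ standing in for the family $\mathscr{F}$ (these matrices are my own illustration, not from the question):

```python
import numpy as np

# A hypothetical commuting pair, standing in for the family F.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
B = np.array([[3.0, 1.0],
              [1.0, 3.0]])
assert np.allclose(A @ B, B @ A)  # they commute

# v = (1, 1) spans a one-dimensional invariant subspace (a W_infty):
# it is an eigenvector of A (eigenvalue 3) and of B (eigenvalue 4).
v = np.array([1.0, 1.0])
print(A @ v)  # [3. 3.]
print(B @ v)  # [4. 4.]
```

Since $A$ here has two distinct eigenvalues, $(1,-1)$ is a second common eigenvector, matching the "at least $k$ common eigenvectors" remark.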
Remark: You can consider the following answer as a comment, rather than an actual answer.
Let me give you a simple example of how PageRank (in its initial form) works. Suppose that we have a web consisting of $3$ webpages, with links between them. For our example, let's say that we have the adjacency matrix
$$L = \begin{bmatrix} 0 & 1 & 0\\ 1& 0& 1 \\ 1 & 1 & 1 \end{bmatrix}.$$ Notice that $L_{ij} = \begin{cases} 1, &i\rightarrow j \\ 0, & \text{otherwise}. \end{cases}$
We create the stochastic matrix $P$ from $L$ by dividing the elements of each row by the number of outlinks of the corresponding page (its row sum). Thus, in our example, we have:
$$P =\begin{bmatrix} 0 & 1 & 0\\ \frac 12 & 0 & \frac 12 \\ \frac 13 & \frac 13 & \frac 13 \end{bmatrix}.$$
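This row-normalization step can be sketched in a few lines of NumPy (a minimal illustration of the construction above; it assumes every page has at least one outlink, as the example does):

```python
import numpy as np

# Adjacency matrix L from the 3-page example.
L = np.array([[0, 1, 0],
              [1, 0, 1],
              [1, 1, 1]], dtype=float)

# Divide each row by its number of outlinks (its row sum) to get P.
P = L / L.sum(axis=1, keepdims=True)
print(P)  # rows: [0, 1, 0], [1/2, 0, 1/2], [1/3, 1/3, 1/3]
```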
Matrix $P$ is a stochastic matrix, which in our case is irreducible and aperiodic (definitions of irreducibility and aperiodicity can be found here). Thus, there is a normalized stationary distribution $\pi$ satisfying the condition $\pi \cdot P =\pi$, which means $\pi$ is a left eigenvector associated with the dominant eigenvalue $\lambda_1=1$ of the matrix $P$. This dominant eigenvector is the PageRank vector. By the Perron–Frobenius theorem for non-negative, irreducible matrices, we know that $\lambda_1 =1$ is a simple eigenvalue and that the associated eigenvector is, up to scaling, the only one with positive elements.
Now, how can we approximate that $\pi$? One way to do so is through power iteration. How does power iteration work? It computes the dominant eigenvector of a matrix (under some assumptions), starting from (almost) any vector $x^{(0)}$ and proceeding through successive iterations. When $$\|x^{(k+1)T} - x^{(k)T}\|_1 \le \tau =10^{-\ell},$$ we can say that we have approximated our dominant eigenvector $\pi$ with very good accuracy (taking $\ell =8,9,\ldots$).
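As a rough sketch of this iteration for the $3\times 3$ example above (variable names are my own, not from any PageRank library):

```python
import numpy as np

# Stochastic matrix P from the 3-page example.
P = np.array([[0,   1,   0  ],
              [1/2, 0,   1/2],
              [1/3, 1/3, 1/3]])

pi = np.full(3, 1/3)   # x^(0): start from the uniform distribution
tau = 1e-10            # tolerance tau = 10^(-l)
for _ in range(10_000):
    pi_next = pi @ P   # one power-iteration step on the left
    if np.abs(pi_next - pi).sum() <= tau:  # 1-norm stopping rule
        pi = pi_next
        break
    pi = pi_next

print(pi)  # ≈ [0.3, 0.4, 0.3], the PageRank vector: pi @ P == pi
```

For this small example the iteration converges in a few dozen steps; page 2, which every other page links to, ends up with the highest rank.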
Certainly, there are many other ways to compute the dominant eigenvector. So, why the power method? The answer is simple: it is the fastest algorithm when we have to deal with very large-scale webs, where the adjacency matrix is extremely sparse.
If you are interested in PageRank, I can recommend a very good book (also available as an e-book):
A. Langville & C. Meyer, "Google's PageRank and Beyond: The Science of Search Engine Rankings", Princeton University Press.
Best Answer
Suppose $A$ and $B$ did not have any common eigenvectors. Let $\lambda$ be any eigenvalue of $A$ and let $v\neq 0$ be such that $Av=\lambda v$. Then $$A(Bv)=(AB)v=(BA+\mu B)v=B(Av+\mu v)=(\lambda+\mu)(Bv).$$ Since $v$ is not an eigenvector of $B$, we must have $Bv \neq 0$ (otherwise $Bv=0\cdot v$), and so $\lambda+\mu$ is an eigenvalue of $A$ with eigenvector $Bv$. Repeating the argument with $Bv$ in place of $v$ shows that $\lambda+k\mu$ is an eigenvalue of $A$ for every $k\ge 1$. Since $\mu \neq 0$, these values are all distinct, so $A$ would have infinitely many eigenvalues, which is a contradiction.
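The eigenvalue-shifting identity can be checked on a concrete pair with $AB-BA=\mu B$ (a hypothetical example of my own, not from the answer):

```python
import numpy as np

# A = diag(0, 1) and the nilpotent shift B satisfy AB - BA = mu*B
# with mu = -1.
A = np.array([[0.0, 0.0],
              [0.0, 1.0]])
B = np.array([[0.0, 1.0],
              [0.0, 0.0]])
mu = -1.0
assert np.allclose(A @ B - B @ A, mu * B)

# v is an eigenvector of A with eigenvalue lam = 1, and Bv != 0,
# so A(Bv) = (lam + mu)(Bv): Bv is an eigenvector with eigenvalue 0.
v = np.array([0.0, 1.0])
lam = 1.0
Bv = B @ v
print(np.allclose(A @ Bv, (lam + mu) * Bv))  # True
```

Here the ladder terminates because $B(Bv)=0$, and indeed $Bv=e_1$ is a common eigenvector of $A$ and $B$, exactly as the contradiction argument predicts.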