This argument works, but in a sense it's overkill. Each observation has finite variance $\sigma^2$, so (for independent observations) $\operatorname{var}\left(\overline{X}_n\right)=\sigma^2/n$. Chebyshev's inequality tells you that
$$
\Pr\left(\left|\overline{X}_n - \mu\right|>\varepsilon\right) \le \frac{\sigma^2}{\varepsilon^2 n} \to 0\text{ as }n\to\infty.
$$
And Chebyshev's inequality follows quickly from Markov's inequality, which is quite easy to prove.
But the proof of the central limit theorem takes a lot more work than that.
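To see that bound in action, here is a minimal simulation (my own illustration, not part of the original answer; the Exponential(1) observations, so $\mu=\sigma^2=1$, and the sample sizes are arbitrary choices) comparing the empirical tail probability with the Chebyshev bound $\sigma^2/(\varepsilon^2 n)$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, eps = 1.0, 1.0, 0.2     # Exponential(1): mean 1, variance 1
trials = 10_000                     # replications per sample size

for n in (10, 100, 1000):
    # empirical P(|Xbar_n - mu| > eps), estimated over many replications
    xbar = rng.exponential(scale=1.0, size=(trials, n)).mean(axis=1)
    empirical = np.mean(np.abs(xbar - mu) > eps)
    chebyshev = min(sigma2 / (eps**2 * n), 1.0)
    print(f"n={n:5d}  empirical={empirical:.4f}  Chebyshev bound={chebyshev:.4f}")
```

The empirical tail probability sits well below the bound and both shrink as $n$ grows, which is all the weak law needs.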
There is likely a proof somewhere on this site but I could not find it. Here I give a quick proof of my comment (since I originally mis-stated the result by forgetting the "lower bounded" restriction):
Let $\{X_i\}_{i=1}^{\infty}$ be a sequence of random variables, not necessarily identically distributed and not necessarily independent, that satisfy:
i) $E[X_i]=m_i$, where $m_i \in \mathbb{R}$ for all $i\in\{1, 2, 3, ...\}$.
ii) There is a constant $\sigma^2_{bound}$ such that $Var(X_i) \leq \sigma^2_{bound}$ for all $i \in \{1, 2, 3, ...\}$.
iii) The variables are pairwise uncorrelated, so $E[(X_i-m_i)(X_j-m_j)]=0$ for all $i \neq j$.
iv) There is a value $b \in \mathbb{R}$ such that, with probability 1, $X_i-m_i\geq b$ for all $i \in \{1, 2, 3, ...\}$.
Define $L_n = \frac{1}{n}\sum_{i=1}^n (X_i-m_i)$. Then $L_n\rightarrow 0$ with probability 1.
Proof: Write $\sigma_i^2 = Var(X_i)$. Since the variables are pairwise uncorrelated with bounded variance, we easily find for all $n$:
$$ E[L_n^2] = \frac{1}{n^2}\sum_{i=1}^n \sigma_i^2 \leq \frac{\sigma_{bound}^2}{n} $$
Fix $\epsilon>0$. By Markov's inequality applied to $L_n^2$:
$$ P[|L_n|>\epsilon] = P[L_n^2 > \epsilon^2] \leq \frac{E[L_n^2]}{\epsilon^2} \leq \frac{\sigma_{bound}^2}{n\epsilon^2} $$
Hence:
$$ \sum_{n=1}^{\infty} P[|L_{n^2}|>\epsilon] \leq \sum_{n=1}^{\infty}\frac{\sigma_{bound}^2}{n^2\epsilon^2} < \infty $$
and so $L_{n^2}\rightarrow 0$ with probability 1 by the Borel-Cantelli Lemma. That is, the $L_n$ values converge to $0$ along the sparse subsequence $n\in\{1, 4, 9, 16, ...\}$.
Since each term satisfies $X_i-m_i\geq b$ with probability 1 and $L_{n^2}\rightarrow 0$ with probability 1, it can be shown that $L_n\rightarrow 0$ with probability 1 (a sketch of this interpolation step is given below). $\Box$
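For completeness, here is one way that last step can be carried out (my sketch under the stated assumptions, not part of the original answer). For $n^2 \le k < (n+1)^2$, split the sums defining $kL_k$ and $(n+1)^2 L_{(n+1)^2}$ at $n^2$ and at $k$ respectively, and bound the leftover terms below by $b$ using condition (iv):
$$ \frac{n^2}{k}\,L_{n^2} + \frac{k-n^2}{k}\,b \;\le\; L_k \;\le\; \frac{(n+1)^2}{k}\,L_{(n+1)^2} - \frac{(n+1)^2-k}{k}\,b. $$
Since $0 \le k-n^2 \le 2n$ and $0 \le (n+1)^2-k \le 2n+1$, the coefficients of $b$ tend to $0$ and the coefficients of $L_{n^2}$ and $L_{(n+1)^2}$ tend to $1$, so on the probability-one event where $L_{m^2}\to 0$ both bounds converge to $0$, giving $L_k\to 0$.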
The lower-bounded condition is typically treated by writing $X_n = X_n^+ - X_n^-$, where $X_n^+$ and $X_n^-$ are nonnegative and defined by $X_n^+=\max[X_n,0]$, $X_n^-=-\min[X_n,0]$. If $X_n$ and $X_i$ are independent, then $X_n^+$ and $X_i^+$ are also independent. So the lower-bounded condition can be removed in the case when the variables are independent. However, if $X_n$ and $X_i$ are uncorrelated, that does not mean $X_n^+$ and $X_i^+$ are uncorrelated. So it is not clear to me if the lower-bounded condition can be removed when "independence" is replaced by the weaker condition "pairwise uncorrelated."
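As a concrete illustration of that last point (my own toy example, not from the answer above): take $X$ uniform on $\{-1,0,1\}$ and $Y=1-|X|$. Then $E[XY]=0=E[X]E[Y]$, so $X$ and $Y$ are uncorrelated, yet $Cov(X^+,Y^+)=-1/9$. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.choice([-1, 0, 1], size=1_000_000)   # X uniform on {-1, 0, 1}
y = 1 - np.abs(x)                            # Y = 1 - |X|
xp, yp = np.maximum(x, 0), np.maximum(y, 0)  # positive parts X+, Y+

print("Cov(X , Y ) =", np.cov(x, y)[0, 1])    # close to 0
print("Cov(X+, Y+) =", np.cov(xp, yp)[0, 1])  # close to -1/9 = -0.111...
```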
Best Answer
The statement that you cite as the weak law of large numbers is wrong. Without additional assumptions it is not true that $\lim_{n\to\infty}\operatorname {P}\bigl(\bigl|\overline {X}_{n}\bigr|>\varepsilon \bigr)=0$ for arbitrary random variables $X_1,\ldots,X_n$ with finite variances $\operatorname{Var}[X_i]=\sigma_i^2$ and expectations $\mathbb{E}[X_i]=\mu_i$ for all $i=1,\ldots,n$.
For example, you can take $X_1=\ldots=X_n=X$ with $\mathbb E[X]=\mu$, $\operatorname{Var}[X]=\sigma^2>0$ and get $\overline X_n=X-\mu$. This quantity does not depend on $n$ and does not converge in probability to $0$.
When the r.v.'s are i.i.d., the statement is true. There are a number of other sufficient conditions for the convergence ${\frac 1n}\textstyle \sum_{i=1}^{n}(X_{i}-E[{X}_{i}])\xrightarrow{p} 0$. For example, if $X_i$, $i=1,2,\ldots$ are pairwise independent and $\frac{\sigma_1^2+\ldots+\sigma_n^2}{n^2}\to 0$, it is valid. This is a consequence of Chebyshev's inequality.
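Spelling out that Chebyshev step (added here for clarity): pairwise independence makes the variances add, so for any $\varepsilon>0$
$$ \operatorname{P}\Bigl(\Bigl|\tfrac 1n\textstyle\sum_{i=1}^{n}\bigl(X_{i}-E[X_{i}]\bigr)\Bigr|>\varepsilon\Bigr) \;\le\; \frac{\sigma_1^2+\ldots+\sigma_n^2}{n^2\varepsilon^2}\;\to\;0. $$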
For the i.i.d. case, it is sufficient that the first moment of $X_1$ exists. So for $${\tfrac 1n}\textstyle \sum_{i=1}^{n}(X_{i}-\mu)^2{\xrightarrow {p}}\ \sigma^2$$ in the i.i.d. case it is sufficient that $\mathbb E(X_1-\mu)^2=\sigma^2<\infty$ (apply the weak law to the i.i.d. sequence $(X_i-\mu)^2$).
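For instance, a minimal simulation (an arbitrary standard-normal choice, just to illustrate the statement) applying the i.i.d. law of large numbers to the variables $(X_i-\mu)^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma2 = 0.0, 1.0                      # standard normal: mean 0, variance 1

for n in (100, 10_000, 1_000_000):
    x = rng.normal(mu, np.sqrt(sigma2), size=n)
    # sample second central moment about the true mean mu; should approach sigma2
    print(n, np.mean((x - mu) ** 2))
```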