To show $$\bigcup_{n\geq 1}\{T_n=\infty\}\subseteq\{\lim_{t\to\infty}M_t\ \text{exists and is finite}\},$$ use that the stopped process satisfies $|M^{T_n}_t|\leq n$. By Doob's optional stopping theorem, $Z^n_t:=M^{T_n}_t$ is still a continuous local martingale. It is even a true martingale, since $E\sup_{s\leq t}|Z^n_s|<\infty$, and by the submartingale convergence theorem $Z^n$ converges a.s. Now look at the paths on which $T_n=\infty$, and the statement follows. Furthermore, the process $Z^n$ is in $L^2$, so your argument works with $\langle Z^n,Z^n\rangle=\langle M,M\rangle^{T_n}$. And since $n\in\mathbb{N}$ ranges over a countable set, you can find a set of $\omega$ of measure $1$, independent of $n$, on which $$\langle M,M\rangle^{T_n}_\infty<\infty$$ holds for all $n\in\mathbb{N}$.
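A minimal numerical sketch of the key boundedness fact, assuming (as is standard for reducing sequences) that $T_n=\inf\{t:|M_t|\geq n\}$ and taking $M$ to be a simulated Brownian path; the discretization and level are my choices, not part of the argument:

```python
# Sketch (assumption: T_n is the first time |M| reaches n, the usual
# reducing sequence for a continuous local martingale). Simulate a
# Brownian path, freeze it at T_n, and check that |M^{T_n}| <= n.
import random

random.seed(0)
n_level = 2.0      # the level n
dt = 1e-4
steps = 200_000    # time horizon 20

m = 0.0
path = []
stopped = False
for _ in range(steps):
    if not stopped:
        m += random.gauss(0.0, dt ** 0.5)
        if abs(m) >= n_level:           # T_n reached: stop the process
            m = n_level if m > 0 else -n_level
            stopped = True
    path.append(m)

# The stopped process is uniformly bounded by n, hence a true martingale.
assert max(abs(x) for x in path) <= n_level
```

The uniform bound is exactly what upgrades the local martingale $Z^n$ to a true, uniformly integrable martingale.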
For the second part, showing $$Q:=\{\langle M,M\rangle_\infty<\infty\}\subseteq\{\lim_{t\to\infty}M_t\ \text{exists and is finite}\},$$ simply consider the process $Q^n:=M^{S_n}$. Since $Q^n$ is an $L^2$-bounded martingale, $\sup_{t}E|Q^n_t|<\infty$ holds, and thus $Q^n$ converges a.s.
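The $L^1$ bound can be made explicit. Assuming $S_n:=\inf\{t:\langle M,M\rangle_t\geq n\}$ and $M_0=0$, Cauchy-Schwarz together with $E[(Q^n_t)^2]=E[\langle M,M\rangle_{t\wedge S_n}]$ gives
$$\sup_{t\geq 0}E|Q^n_t|\leq\sup_{t\geq 0}\left(E\left[(Q^n_t)^2\right]\right)^{1/2}=\sup_{t\geq 0}\left(E\left[\langle M,M\rangle_{t\wedge S_n}\right]\right)^{1/2}\leq\sqrt{n}.$$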
I think you are trying to prove Meyer's theorem without its integrability assumptions. (Rogers & Williams give a proof in Vol. 2, "Diffusions, Markov Processes and Martingales: Itô Calculus".) If so, operationally/symbolically/heuristically, your second approach is correct: assume $M_0=0$ a.s. Then for any partition $0=t_0^{(n)}<\dots< t_{k-1}^{(n)}< t_k^{(n)}<\dots<t_n^{(n)}=t$ of $[0,t]$ into $n$ sub-intervals,
\begin{align}
M^2_t - \langle M\rangle_t &= M_t^2 - \lim_{n\to\infty}\sum_{k=1}^{n}\left(M_{t_k^{(n)}} - M_{t_{k-1}^{(n)}}\right)^2\\
&= M_t^2 - \lim_{n\to\infty}\left[\sum_{k=1}^{n}M_{t_{k}^{(n)}}^2 - 2\sum_{k=1}^{n}M_{t_{k}^{(n)}}M_{t_{k-1}^{(n)}} + \sum_{k=1}^{n}M_{t_{k-1}^{(n)}}^2\right]\\
&= M_t^2 - \lim_{n\to\infty}\left[M^2_t + \sum_{k=1}^{n-1}M_{t_{k}^{(n)}}^2 - 2\sum_{k=1}^{n}M_{t_{k}^{(n)}}M_{t_{k-1}^{(n)}} + \sum_{k=1}^{n}M_{t_{k-1}^{(n)}}^2\right]\\
&= -\lim_{n\to\infty}\left[\sum_{k=1}^{n-1}M_{t_{k}^{(n)}}^2 - 2\sum_{k=1}^{n}M_{t_{k}^{(n)}}M_{t_{k-1}^{(n)}} + \sum_{k=1}^{n}M_{t_{k-1}^{(n)}}^2\right]\\
&= -\lim_{n\to\infty}\left[\sum_{k=2}^{n}M_{t_{k-1}^{(n)}}^2 - 2\sum_{k=1}^{n}M_{t_{k}^{(n)}}M_{t_{k-1}^{(n)}} + \sum_{k=1}^{n}M_{t_{k-1}^{(n)}}^2\right]\\
&= -\lim_{n\to\infty}\left[\sum_{k=1}^{n}M_{t_{k-1}^{(n)}}^2 - M_0^2 - 2\sum_{k=1}^{n}M_{t_{k}^{(n)}}M_{t_{k-1}^{(n)}} + \sum_{k=1}^{n}M_{t_{k-1}^{(n)}}^2\right]\\
&= -\lim_{n\to\infty}\left[2\sum_{k=1}^{n}M_{t_{k-1}^{(n)}}^2 - 2\sum_{k=1}^{n}M_{t_{k}^{(n)}}M_{t_{k-1}^{(n)}} - M_0^2\right]\\
&= 2\lim_{n\to\infty}\left[\sum_{k=1}^{n}M_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)\right] + M_0^2.\\
\end{align}
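Before taking limits, note that the algebra above is exact for any discrete path. A quick numerical sketch with a simulated Brownian path (where $\langle B,B\rangle_t=t$; the choice $t=1$ and the uniform grid are mine) confirms both the telescoping identity and that the squared increments sum to roughly $t$:

```python
# Sanity check of the telescoping identity behind the derivation, with
# M_0 = 0: for ANY discrete path,
#   M_t^2 = sum (M_{t_k} - M_{t_{k-1}})^2 + 2 sum M_{t_{k-1}}(M_{t_k} - M_{t_{k-1}}),
# i.e. M_t^2 - [M]_t equals twice the discrete stochastic integral, exactly.
import random

random.seed(1)
n = 1000
dt = 1.0 / n
dB = [random.gauss(0.0, dt ** 0.5) for _ in range(n)]  # Brownian increments
B = [0.0]
for x in dB:
    B.append(B[-1] + x)

quad_var = sum(x * x for x in dB)              # discrete [B]_1
ito_sum = sum(B[k] * dB[k] for k in range(n))  # discrete  int B dB (left endpoints)

# Exact algebraic identity, up to floating-point rounding:
assert abs(B[-1] ** 2 - (quad_var + 2 * ito_sum)) < 1e-9
# For Brownian motion <B,B>_t = t, so quad_var should be close to 1:
print(quad_var)
```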
Assume $M_0=0$ a.s. to get rid of the last term; meanwhile the first term (with $X=M$) tends to an Itô integral,
$$\lim_{n\to\infty}\sum_{k=1}^{n}X_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)=\int_0^t X_u dM_u,
$$
which is a martingale. Heuristically, by computing the conditional expectations for $s\le t$, say $s=t_{m}^{(n)}$,
\begin{align}
\mathbb{E}_s\left[\sum_{k=1}^{n}X_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)\right] &=
\mathbb{E}_s\left[\sum_{k=1}^{m}X_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)\right] +\\
&\qquad\mathbb{E}_s\left[\sum_{k=m+1}^{n}X_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)\right] \\
&=
\sum_{k=1}^{m}X_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)+\\
&\qquad\mathbb{E}_s\left[\sum_{k=m+1}^{n}X_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)\right].
\end{align}
Since $M$ is a martingale, the last term is $0$:
\begin{align}
\mathbb{E}_s\left[\sum_{k=m+1}^{n}X_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)\right] &= \sum_{k=m+1}^{n}\mathbb{E}_s\left[\mathbb{E}_{t_{k-1}^{(n)}}\left[X_{t_{k-1}^{(n)}}\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)\right]\right]\\
&= \sum_{k=m+1}^{n}\mathbb{E}_s\left[X_{t_{k-1}^{(n)}}
\mathbb{E}_{t_{k-1}^{(n)}}\left[\left(M_{t_{k}^{(n)}} - M_{t_{k-1}^{(n)}}\right)\right]\right]\\
&= \sum_{k=m+1}^{n}\mathbb{E}_s\left[X_{t_{k-1}^{(n)}}\cdot 0\right] \\
&=0.
\end{align}
Operationally: $\mathbb E_s[\int_0^t X_u dM_u] = \mathbb E_s[\int_0^s X_u dM_u]+\mathbb E_s[\int_s^t X_u dM_u] = \int_0^s X_u dM_u + \int_s^t \mathbb E_s[ X_u dM_u] = \int_0^s X_u dM_u + \int_s^t \mathbb E_s[ \mathbb E_u[ X_u dM_u]] = \int_0^s X_u dM_u + \int_s^t \mathbb E_s[X_u \underbrace{\mathbb E_u[ dM_u]}_{=0}] = \int_0^s X_u dM_u.$
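The same point can be sketched by Monte Carlo, averaging the discrete integral $\sum_k X_{t_{k-1}^{(n)}}(M_{t_k^{(n)}}-M_{t_{k-1}^{(n)}})$ over many paths; taking $X=M=$ Brownian motion on $[0,1]$ with a uniform grid (my choices, purely for illustration), the mean should be near $0$:

```python
# Monte Carlo sketch that the discrete stochastic integral has mean zero:
# sum X_{t_{k-1}} (M_{t_k} - M_{t_{k-1}}) with X = M = Brownian motion.
# The integrand is evaluated at the LEFT endpoint, which is what makes
# each summand conditionally mean-zero.
import random

random.seed(2)
n_paths, n_steps = 20_000, 50
dt = 1.0 / n_steps
total = 0.0
for _ in range(n_paths):
    b = 0.0   # current value of the Brownian path
    s = 0.0   # running discrete integral  sum B_{t_{k-1}} dB_k
    for _ in range(n_steps):
        db = random.gauss(0.0, dt ** 0.5)
        s += b * db   # left endpoint: b is F_{t_{k-1}}-measurable
        b += db
    total += s
mean = total / n_paths
print(mean)  # should be close to 0
```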
Best Answer
It is the quadratic variation of the martingale. Also sometimes written with square brackets.
https://en.wikipedia.org/wiki/Quadratic_variation
Greg