Without any additional assumptions (e.g. on the speed of convergence), the assertion is wrong.
(Counter)Example Let $Z \sim N(0,1)$ be a standard Gaussian random variable. If we set $$X_n := \frac{1}{\sqrt{n}} Z \qquad Y_n:= \frac{1}{n^{1/4}} Z$$ then clearly $X_n \to 0$ and $Y_n \to 0$ in probability. The density $p$ of $Z$,
$$p(y) = \frac{1}{\sqrt{2\pi}} \exp \left( - \frac{y^2}{2} \right)$$
is continuous and satisfies $p(0)=1/\sqrt{2\pi}$; therefore we can choose $r>0$ such that
$$p(y) \geq \frac{3}{4} \frac{1}{\sqrt{2\pi}} \quad \text{for all $|y| \leq r$}.$$
This implies that the density $f_n$ of $X_n$ satisfies
$$f_n(y) = \sqrt{n} p \left( \sqrt{n} y \right) \geq \frac{3}{4} \frac{\sqrt{n}}{\sqrt{2\pi}} \quad \text{for all $|y| \leq r/\sqrt{n}$.}$$
Since
$$\int_{\mathbb{R}} |f_n(y)-g_n(y)| \, dy \geq \int_{|y| \leq r/\sqrt{n}} (f_n(y)-g_n(y)) \, dy$$
and the density $g_n(y) = n^{1/4}\, p(n^{1/4} y)$ of $Y_n$ satisfies $\|g_n\|_{\infty} \leq n^{1/4}/\sqrt{2\pi}$, we get
$$\int_{\mathbb{R}} |f_n(y)-g_n(y)| \, dy \geq \left( \frac{3}{4} \frac{\sqrt{n}}{\sqrt{2\pi}} - \frac{n^{1/4}}{\sqrt{2 \pi}} \right) \frac{2r}{\sqrt{n}} \xrightarrow[]{n \to \infty} \frac{3}{2} \frac{r}{\sqrt{2\pi}}>0$$
and so
$$\int_{\mathbb{R}} |f_n(y)-g_n(y)| \, dy \to 0$$
does not hold true.
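As a quick sanity check (a numerical sketch, not part of the argument): approximating $\int |f_n-g_n|$ by a Riemann sum for the densities of $X_n \sim N(0,1/n)$ and $Y_n \sim N(0,1/\sqrt{n})$ shows the integral staying bounded away from $0$; in fact it increases towards $2$, since the ratio of the two standard deviations tends to $0$.

```python
# Numerical sketch: approximate \int |f_n - g_n| dy on a grid, where
# f_n is the density of X_n = Z/sqrt(n)   ~ N(0, 1/n) and
# g_n is the density of Y_n = Z/n^(1/4)   ~ N(0, 1/sqrt(n)).
import numpy as np
from scipy.stats import norm

for n in [10, 100, 1000, 10000]:
    y = np.linspace(-3, 3, 600001)        # fine grid around the common mean 0
    dy = y[1] - y[0]
    f = norm.pdf(y, scale=n ** -0.5)      # density f_n of X_n
    g = norm.pdf(y, scale=n ** -0.25)     # density g_n of Y_n
    print(n, np.abs(f - g).sum() * dy)    # Riemann sum; stays away from 0
```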
Remark For random variables $X$ and $Y$ with densities $f$ and $g$, respectively, the total variation distance is defined as
$$d_{\text{TV}}(X,Y) = \frac{1}{2} \int |f-g|.$$
It is possible to show that
$$d_{\text{TV}}(X,Y) = \sup_{A \in \mathcal{B}(\mathbb{R})} |\mathbb{P}(X \in A)-\mathbb{P}(Y \in A)|.$$
The above (counter)example shows that $X_n \to \mu$, $Y_n \to \mu$ in probability does not, in general, imply $d_{\text{TV}}(X_n,Y_n) \to 0$.
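As a numerical illustration of this identity (again only a sketch, with two arbitrarily chosen Gaussians): the supremum is attained on the Borel set $A = \{f > g\}$, so both expressions can be compared directly.

```python
# Numerical sketch: for X ~ N(0,1) and Y ~ N(0,4), compare the L^1 formula
# (1/2) \int |f - g| with P(X in A) - P(Y in A) for the set A = {f > g}.
import numpy as np
from scipy.stats import norm

y = np.linspace(-20, 20, 400001)
dy = y[1] - y[0]
f = norm.pdf(y, scale=1.0)                 # density f of X ~ N(0,1)
g = norm.pdf(y, scale=2.0)                 # density g of Y ~ N(0,4)

half_l1 = 0.5 * np.abs(f - g).sum() * dy   # d_TV via (1/2) * \int |f - g|
A = f > g                                  # maximizing set {f > g}
sup_diff = (f - g)[A].sum() * dy           # P(X in A) - P(Y in A)
print(half_l1, sup_diff)                   # both approx. 0.32, and equal
```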
The idea is sound, although there is a minor typo towards the end: it should be
$$\mathbb{P}(X_n+Y_n \text{ does not converge})\le\mathbb{P}(S_1 \cup S_2 \cup S_3)$$
instead of the intersections you have used.
In any case, you've chosen to prove that the set of divergence has measure zero, which is fine. There is a slightly different approach that may be a little shorter. If we let $S_1=\{X_n \text{ converges}\}$ and $S_2=\{Y_n \text{ converges}\}$, then $\mathbb{P}(S_1)=\mathbb{P}(S_2)=1$. Thus $\mathbb{P}(S_1\cap S_2)=1$, and on $S_1\cap S_2$ we know from elementary real analysis that $X_n+Y_n$ converges.
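Spelled out, the step $\mathbb{P}(S_1\cap S_2)=1$ is just a union bound applied to the complements:
$$\mathbb{P}\big((S_1 \cap S_2)^c\big) = \mathbb{P}(S_1^c \cup S_2^c) \leq \mathbb{P}(S_1^c) + \mathbb{P}(S_2^c) = 0.$$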
Best Answer
One way I like is to use a subsequential argument. We can apply it in the following way.
Let $X_{n_{k}}Y_{n_{k}}$ be an arbitrary subsequence of $X_{n}Y_{n}$. Now $X_{n_{k}}\xrightarrow{P} X$ and $Y_{n_{k}}\xrightarrow{P} Y$, and hence there exist subsequences of $X_{n_{k}}$ and $Y_{n_{k}}$ which converge almost surely to $X$ and $Y$, respectively. So choose a common subsequence $n_{k_{m}}$ of $n_{k}$ such that $X_{n_{k_{m}}}\xrightarrow{a.s.}X$ and $Y_{n_{k_{m}}}\xrightarrow{a.s.} Y$: first pass to a subsequence along which $X_{n_k} \to X$ almost surely, then extract from it a further subsequence along which $Y_{n_k} \to Y$ almost surely as well.
Hence $X_{n_{k_{m}}}Y_{n_{k_{m}}}\xrightarrow{a.s.}XY$, and therefore $X_{n_{k_{m}}}Y_{n_{k_{m}}}\xrightarrow{P}XY$.
Thus each subsequence has a further subsequence that converges in probability to $XY$, and so the whole sequence converges in probability to $XY$.
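For reference, here is the subsequence criterion in the form it is used above (my phrasing of the standard fact):
$$X_{n}\xrightarrow{P}X \quad\Longleftrightarrow\quad \text{every subsequence of } (X_{n}) \text{ has a further subsequence converging in probability to } X.$$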
Proof of the claim I am using.
One direction is clear: convergence in probability implies convergence in probability along every subsequence.
For the converse, assume that $X_{n}$ does not converge in probability to $X$. Then there exist $\epsilon>0$, $\delta_{0}>0$ and a subsequence $X_{n_{k}}$ such that $P(|X_{n_{k}}-X|>\epsilon)\geq \delta_{0}$ for all $k\geq 1$.
By assumption, this subsequence has a further subsequence with $X_{n_{k_{l}}}\xrightarrow{P}X$, and hence $P(|X_{n_{k_{l}}}-X|>\epsilon)\xrightarrow{l\to\infty} 0$. But this is impossible, since $P(|X_{n_{k}}-X|>\epsilon)\geq \delta_{0}$ for all $k\geq 1$.
Hence, by contradiction, $X_{n}\xrightarrow{P}X$.