The approach based on decomposing $S_1$ and $S_2$ as linear combinations of unitary operators does prove that $S_1\otimes S_2$ is
bounded but, as far as I can remember, does not directly give the sharp estimate
$$
\|S_1\otimes S_2\|\leq \|S_1\|\|S_2\|,
$$
since the coefficients in the aforementioned linear combinations have little relationship with the norm of the
operators involved.
Here is a different approach, which avoids unitaries entirely and gives the desired norm estimate.
Lemma 1. Let $T:H_1\to H_2$ be a bounded linear operator between Hilbert spaces $H_1$ and $H_2$, and, given $x_1,x_2,\ldots ,x_n\in H_1$, let $A$ and $B$ be the $n\times n$
matrices defined by
$$
a_{ij} = \langle T(x_i),T(x_j)\rangle ,
\quad \text{and} \quad
b_{ij} = \langle x_i,x_j\rangle .
$$
Then $A\leq \|T\|^2 B$.
Proof. Given scalars $\lambda _1,\lambda _2,\ldots ,\lambda _n$, we have
$$
\begin{aligned}
\sum_{i,j}\lambda _i\bar\lambda _ja_{ij}
&= \sum_{i,j}\lambda _i\bar\lambda _j\langle T(x_i),T(x_j)\rangle
 = \sum_{i,j}\langle T(\lambda _ix_i),T(\lambda _jx_j)\rangle \\
&= \left\langle \sum_iT(\lambda _ix_i),\sum_jT(\lambda _jx_j)\right\rangle
 = \left\|T\left(\sum_i \lambda _ix_i\right)\right\|^2 \\
&\leq \|T\|^2\left\|\sum_i \lambda _ix_i\right\|^2
 = \|T\|^2 \sum_{i,j}\langle \lambda _ix_i,\lambda _jx_j\rangle \\
&= \|T\|^2 \sum_{i,j}\lambda _i\bar\lambda _j\langle x_i,x_j\rangle
 = \|T\|^2 \sum_{i,j}\lambda _i\bar\lambda _jb_{ij}.
\end{aligned}
$$
QED
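For readers who like to sanity-check such statements numerically, here is a minimal sketch (not part of the proof) verifying the conclusion of Lemma 1 for a random finite-dimensional $T$ and random vectors $x_i$; the dimensions and the random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 4
# random complex "operator" T and vectors x_1, ..., x_n (the columns of X)
T = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
X = rng.standard_normal((d, n)) + 1j * rng.standard_normal((d, n))

TX = T @ X
A = TX.conj().T @ TX            # A[i, j] = <T(x_i), T(x_j)>  (numpy's conjugate-first convention;
B = X.conj().T @ X              # B[i, j] = <x_i, x_j>          the convention does not affect positivity)
op_norm = np.linalg.norm(T, 2)  # operator norm = largest singular value

# Lemma 1 says ||T||^2 B - A is positive semidefinite, so its eigenvalues
# should be nonnegative up to rounding error.
print(np.linalg.eigvalsh(op_norm**2 * B - A))
```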
Lemma 2. If $T:H_1\to H_2$ is a bounded linear operator between Hilbert spaces $H_1$ and $H_2$, and $K$ is another Hilbert
space, then there is a unique bounded linear operator
$$
T\otimes I:H_1\otimes K\to H_2\otimes K,
$$
such that
$$
(T\otimes I)(x\otimes y) = T(x)\otimes y, \quad \forall x\in H_1, \quad \forall y\in K.
$$
Moreover $\|T\otimes I\|\leq \|T\|$.
Proof. The case $T=0$ being trivial, we may replace $T$ with $T/\|T\|$ and assume that $\|T\|=1$.
The crucial step in the proof is then showing the inequality
$$
\left\| \sum_i T(x_i)\otimes y_i \right\| \leq \left\| \sum_i x_i\otimes y_i \right\|,
$$
whenever $x_1,x_2,\ldots ,x_n\in H_1$, and $y_1,y_2,\ldots ,y_n\in K$.
To see this,
observe that, since $\|T\|=1$, the matrices $A$ and $B$ defined in Lemma 1 satisfy $A\leq B$, whence $B-A\geq 0$,
so we may find an $n\times n$ scalar matrix $C$ (for instance the positive square root of $B-A$) such that
$B-A=CC^*$. Therefore, for every $i$ and $j$ one has that
$$
\langle x_i, x_j\rangle = \langle T(x_i), T(x_j)\rangle + \sum_k c_{ik}\bar c_{j k}.
$$
Multiplying by $\langle y_i, y_j\rangle $ and summing on $i$ and $j$, we get
$$
\sum_{i, j}\langle x_i, x_j\rangle \langle y_i, y_j\rangle = \sum_{i, j}\langle T(x_i), T(x_j)\rangle \langle y_i, y_j\rangle + \sum_{i, j}\sum_k c_{ik}\bar c_{jk}\langle y_i, y_j\rangle ,
$$
which means that
$$
\left\|\sum_i x_i\otimes y_i \right\|^2 = \left\|\sum_i T(x_i)\otimes y_i \right\|^2 + \sum_{i, j}\sum_k c_{ik}\bar c_{j k}\langle y_i, y_j\rangle ,
$$
so it suffices to show that the last sum above is nonnegative. But this is easy because
$$
\sum_{i, j}\sum_k c_{ik}\bar c_{j k}\langle y_i, y_j\rangle =
\sum_k \left\langle \sum_i c_{ik}y_i,\sum_jc_{j k}y_j\right\rangle =
\sum_k \left\|\sum_i c_{ik}y_i\right\|^2 \geq 0.
$$
This inequality shows that $T\otimes I$ is well defined and contractive on the algebraic tensor product of $H_1$ and $K$, so it extends uniquely by continuity to a bounded operator on $H_1\otimes K$ with $\|T\otimes I\|\leq 1$. QED
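Purely as an illustration, in finite dimensions $T\otimes I$ is just the Kronecker product with an identity matrix, and the conclusion of Lemma 2 can be checked directly (with equality, in fact); the matrix sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 5))   # T : H1 -> H2, with dim H1 = 5 and dim H2 = 3
I = np.eye(4)                     # K of dimension 4

# In finite dimensions, T tensor I is the Kronecker product np.kron(T, I),
# and the operator norm is the largest singular value.
print(np.linalg.norm(np.kron(T, I), 2), np.linalg.norm(T, 2))
# The two numbers coincide, consistent with ||T tensor I|| <= ||T||.
```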
Adopting the notation used by the OP, and noting that $I\otimes S_2$ is handled by the obvious analogue of Lemma 2, we may now easily prove that $S_1\otimes S_2$ is bounded, and indeed that
$\|S_1\otimes S_2\|\leq \|S_1\|\|S_2\|$, as follows:
$$
\|S_1\otimes S_2\| = \|(S_1\otimes I)(I\otimes S_2)\| \leq \|S_1\otimes I\|\|I\otimes S_2\| \leq \|S_1\|\|S_2\|.
$$
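Here is a quick numerical check of this final estimate, with random matrices standing in for $S_1$ and $S_2$ (for matrices the inequality is in fact an equality).

```python
import numpy as np

rng = np.random.default_rng(2)
S1 = rng.standard_normal((4, 4))
S2 = rng.standard_normal((3, 3))

lhs = np.linalg.norm(np.kron(S1, S2), 2)                  # ||S1 tensor S2||
rhs = np.linalg.norm(S1, 2) * np.linalg.norm(S2, 2)       # ||S1|| ||S2||
print(lhs, rhs)   # lhs <= rhs, and for matrices equality holds
```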
Best Answer
Let me give the following counterexample. For $H$ we choose the Sobolev space $H^1(\mathbb{R})$ (sometimes denoted by $W^{1,2}(\mathbb{R})$), the subspace of $L^2(\mathbb{R})$ consisting of those functions whose weak derivative is again in $L^2(\mathbb{R})$. The norm is given by
$$
\Vert f \Vert_{H^1(\mathbb{R})} = \sqrt{\Vert f \Vert_{L^2(\mathbb{R})}^2 + \Vert f' \Vert_{L^2(\mathbb{R})}^2},
$$
and the corresponding scalar product by
$$
\langle f, g \rangle_{H^1(\mathbb{R})} = \int_\mathbb{R} \overline{f} g + \int_\mathbb{R} \overline{f'} g'.
$$
Now consider the inclusion map
$$
i: (H^1(\mathbb{R}), \Vert \cdot \Vert_{H^1(\mathbb{R})}) \rightarrow (L^2(\mathbb{R}), \Vert \cdot \Vert_{L^2(\mathbb{R})}), \quad f \mapsto f.
$$
This map is clearly injective, and it is also bounded, since
$$
\Vert i(f) \Vert_{L^2(\mathbb{R})} = \Vert f \Vert_{L^2(\mathbb{R})} = \sqrt{\Vert f \Vert_{L^2(\mathbb{R})}^2} \leq \sqrt{\Vert f \Vert_{L^2(\mathbb{R})}^2 + \Vert f' \Vert_{L^2(\mathbb{R})}^2} = \Vert f \Vert_{H^1(\mathbb{R})}.
$$
However, the map is not surjective. For this we need to exhibit an element of $L^2(\mathbb{R}) \setminus H^1(\mathbb{R})$. By Morrey's inequality, every function in $H^1(\mathbb{R})$ admits a Hölder continuous representative, so $1_{[0,1]}$ is such an example (of course we could check this in a more elementary fashion, but it is good to know Morrey's inequality).
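If one wants a rough numerical illustration of the two claims above (the inclusion being norm-decreasing, and $1_{[0,1]}$ failing to be in $H^1$), here is a sketch using difference quotients on a uniform grid; the grid and the Gaussian test function are arbitrary choices, and of course this is no substitute for the argument via Morrey's inequality.

```python
import numpy as np

def discrete_norms(f_vals, h):
    """Discrete L^2 norm and an H^1-type norm built from forward difference quotients."""
    l2 = np.sqrt(h * np.sum(f_vals**2))
    dq = np.diff(f_vals) / h
    h1 = np.sqrt(l2**2 + h * np.sum(dq**2))
    return l2, h1

for h in [1e-2, 1e-3, 1e-4]:
    x = np.arange(-5.0, 5.0, h)
    gauss = np.exp(-x**2)                            # a smooth H^1 function
    indicator = ((x >= 0) & (x <= 1)).astype(float)  # 1_{[0,1]}
    print(h, discrete_norms(gauss, h), discrete_norms(indicator, h))
    # Gaussian: both norms stabilise as h -> 0, with L^2 <= H^1.
    # Indicator: the H^1 column blows up like 1/sqrt(h).
```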
As mentioned in the comments above, the inclusion map is not even an isomorphism of normed spaces onto its image. If it were, there would exist a constant $C>0$ such that
$$
\Vert f \Vert_{H^1(\mathbb{R})} \leq C \Vert f \Vert_{L^2(\mathbb{R})} \quad \text{for all } f\in H^1(\mathbb{R}).
$$
To rule this out, start with your favourite function $f\in C_c^\infty(\mathbb{R})\setminus \{0\}$ (note that $C_c^\infty(\mathbb{R}) \subseteq H^1(\mathbb{R})$) and, for any $\varepsilon>0$, define $f_\varepsilon(x) = \sqrt{\varepsilon}\, f(\varepsilon x)$. A change of variables shows that
$$
\Vert f_\varepsilon \Vert_{L^2(\mathbb{R})} = \Vert f \Vert_{L^2(\mathbb{R})}.
$$
However, the derivative picks up an additional factor of $\varepsilon$, and hence
$$
\Vert f_\varepsilon \Vert_{H^1(\mathbb{R})}^2 = \Vert f_\varepsilon \Vert_{L^2(\mathbb{R})}^2 + \Vert f_\varepsilon' \Vert_{L^2(\mathbb{R})}^2 = \Vert f \Vert_{L^2(\mathbb{R})}^2 + \varepsilon^2 \Vert f' \Vert_{L^2(\mathbb{R})}^2.
$$
Thus, if the inverse of $i$ restricted to its image were continuous, we would have, for every $\varepsilon>0$,
$$
\Vert f \Vert_{L^2(\mathbb{R})}^2 + \varepsilon^2 \Vert f' \Vert_{L^2(\mathbb{R})}^2 \leq C^2 \Vert f \Vert_{L^2(\mathbb{R})}^2.
$$
If $\Vert f' \Vert_{L^2(\mathbb{R})} \neq 0$, we can send $\varepsilon \rightarrow \infty$ (an admittedly unusual direction for an $\varepsilon$ to travel) and obtain a contradiction. On the other hand, $f'$ is continuous, so if its $L^2$-norm vanished, then $f'$ would vanish identically and $f$ would be constant, which is impossible since $f$ has compact support and is not identically zero.
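Here is the scaling computation checked numerically for one concrete choice of bump (a Gaussian standing in for the compactly supported $f$; the particular function is irrelevant to the scaling).

```python
import numpy as np

dx = 1e-5
x = np.arange(-10.0, 10.0, dx)             # fine grid; the bumps below live well inside it
f  = lambda t: np.exp(-t**2)               # stands in for the compactly supported bump
df = lambda t: -2.0 * t * np.exp(-t**2)    # its derivative

for eps in [1.0, 10.0, 100.0]:
    f_eps  = np.sqrt(eps) * f(eps * x)     # f_eps(x)  = sqrt(eps) f(eps x)
    df_eps = eps**1.5 * df(eps * x)        # f_eps'(x) = eps^(3/2) f'(eps x)
    l2  = np.sqrt(dx * np.sum(f_eps**2))   # ||f_eps||_{L^2}
    dl2 = np.sqrt(dx * np.sum(df_eps**2))  # ||f_eps'||_{L^2}
    print(eps, l2, dl2)
    # l2 stays constant while dl2 grows like eps, so the ratio
    # ||f_eps||_{H^1} / ||f_eps||_{L^2} is unbounded and no constant C can exist.
```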