Okay, after spending far too much time studying polarization identities and trying to make sense of an exercise that I presume was adapted from an entry in the International Obfuscated Math Notation Competition, I believe I finally understand what this question is asking and how to get the answer they provided. First of all, to show that an operator $A \in \mathsf{Lin}(\mathcal{H})$ is completely determined by its expectations, it suffices to prove that $\langle \lambda | A | \lambda \rangle = 0 \ \forall | \lambda \rangle \in \mathsf S_s(\mathcal H)$ iff $A = 0$. To see this, refer to this similar proof for a dense subspace, and apply linearity to extend the result from a non-zero spherical shell to the whole space.
The authors of this book, in their answer, seem to be making a much stronger statement: that we can express any matrix-element $\langle y | A | x \rangle$ in terms of the diagonal elements. This is true, and can be shown simply enough using polarization identities of inner products. However, for some reason that I cannot for the life of me comprehend, the book expresses the polarization identity in terms of outer products; while this is mathematically valid, it does not seem especially useful. I spent quite a lot of time fiddling with products of two matrix-elements to see if I could exploit the outer product that appears in the middle, and while I did eventually reach a solution by that method, I do not care to reproduce the several pages of algebra it required here. Here, then, is a much simpler proof that directly applies the complex polarization identity to a matrix-element calculation. If anyone would like to attempt a proof using the specific equation the book provides, be my guest. This proof takes heavy inspiration from the discussion on this question and makes the derivation a bit more explicit.
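For reference, the standard complex polarization identity (stated here with the physics convention that the inner product is antilinear in its first argument) reads

$$\langle y | x \rangle = \frac{1}{4} \sum_{k=0}^3 i^k \left\| \, | x \rangle + i^k | y \rangle \, \right\|^2$$

and the proof below is essentially this identity with the operator $A$ inserted between bra and ket.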
Lemma 1. For any $x$ in any complex linear space, we have $$\sum_{k=0}^3 i^kx = 0$$
Proof. Comes from cancellation. $$\sum_{k=0}^3 i^kx = x + ix - x - ix = 0 \ \Box$$
Lemma 2. For any Hilbert space $\mathcal H$, any $| \mu \rangle \in \mathcal H \setminus \{0\}$, and any $s > 0$, there exist a real number $r > 0$ and $| \lambda \rangle \in \mathsf S_s(\mathcal H)$ such that $$| \mu \rangle = r | \lambda \rangle$$
Proof. Let $r = \frac{\| \mu \|}{s}$ and $| \lambda \rangle = \frac{| \mu \rangle}{r}$. Then $\| \lambda \| = \frac{\| \mu \|}{r} = s$, so $| \lambda \rangle \in \mathsf S_s(\mathcal H)$, and $r | \lambda \rangle = | \mu \rangle$ by construction. $\Box$
Theorem. For any $| x \rangle, |y \rangle \in \mathcal H$, any $s > 0$, and any operator $A \in \mathsf{Lin}(\mathcal H)$, it is possible to express the matrix-element $\langle y | A | x \rangle$ entirely in terms of expectations $\langle \lambda | A | \lambda \rangle$ with $| \lambda \rangle \in \mathsf S_s(\mathcal H)$.
Proof. Let $| x \rangle, |y \rangle \in \mathcal H$, $s > 0$, and $A \in \mathsf{Lin}(\mathcal H)$. Define $| \mu_k \rangle = | x \rangle + i^k | y \rangle$. Then $\langle \mu_k | = | \mu_k \rangle^\dagger = \langle x | + (-i)^k \langle y |$. By Lemma 2, if $| \mu_k \rangle \ne 0$, then there exist $r_k > 0$ and $| \lambda_k \rangle \in \mathsf S_s(\mathcal H)$ such that $| \mu_k \rangle = r_k | \lambda_k \rangle$. If $| \mu_k \rangle = 0$, then we instead set $r_k = 0$ and let $| \lambda_k \rangle$ be any element of $\mathsf S_s(\mathcal H)$. In either case, $\langle \mu_k | A | \mu_k \rangle = r_k^2 \langle \lambda_k | A | \lambda_k \rangle$. We can then take the sum $$\begin{aligned} \frac{1}{4} \sum_{k=0}^3 i^k r_k^2 \langle \lambda_k | A | \lambda_k \rangle &= \frac{1}{4} \sum_{k=0}^3 i^k \langle \mu_k | A | \mu_k \rangle \\ &= \frac{1}{4} \sum_{k=0}^3 i^k \left( \langle x | + (-i)^k \langle y | \right) A \left( | x \rangle + i^k | y \rangle \right) \\ &= \frac{1}{4} \sum_{k=0}^3 \left[ i^k \langle x | A | x \rangle + (-1)^k \langle x | A | y \rangle + \langle y | A | x \rangle + i^k \langle y | A | y \rangle \right] \end{aligned}$$
By Lemma 1 (together with the analogous cancellation $\sum_{k=0}^3 (-1)^k = 0$), all of the terms of this sum vanish except for $\langle y | A | x \rangle$, leaving $$\langle y | A | x \rangle = \frac{1}{4} \sum_{k=0}^3 i^k r_k^2 \langle \lambda_k | A | \lambda_k \rangle \ \Box$$
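As a sanity check, here is a small numerical verification of the theorem in a finite-dimensional $\mathcal H = \mathbb C^4$. The dimension, the shell radius $s = 2$, and the random seed are arbitrary choices for illustration, not anything from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 4, 2.0  # dimension of H and shell radius (arbitrary choices)

# Random complex operator A and random vectors |x>, |y>.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

total = 0.0
for k in range(4):
    mu = x + 1j**k * y             # |mu_k> = |x> + i^k |y>
    r = np.linalg.norm(mu) / s     # Lemma 2: rescale onto the shell S_s
    lam = mu / r                   # generic random vectors give mu != 0
    # Accumulate i^k r_k^2 <lam_k|A|lam_k>; np.vdot conjugates its first argument.
    total += 1j**k * r**2 * np.vdot(lam, A @ lam)

lhs = np.vdot(y, A @ x)            # the matrix-element <y|A|x>
assert np.allclose(total / 4, lhs)
```

The `assert` passing for random data is of course not a proof, just a check that the signs and conjugations in the derivation are consistent.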
Since a linear operator is completely determined by its matrix-elements (if $\langle y | A | x \rangle = 0$ for all $| x \rangle, | y \rangle \in \mathcal H$, then $A | x \rangle = 0$ for every $| x \rangle$, i.e. $A = 0$), this suffices to show that it is also completely determined by its expectations $\langle \lambda | A | \lambda \rangle$ over $| \lambda \rangle \in \mathsf S_s(\mathcal H)$ for any $s > 0$.
Best Answer
The canonical commutation relations are not well-defined on finite-dimensional Hilbert spaces. The canonical prescription is
$$ [x,p] = \mathrm{i}\hbar\mathbf{1}$$
Recall that the trace of a commutator must vanish, while the trace of the identity equals the dimension of the space whenever that dimension is finite. Taking the trace of both sides would therefore give $0 = \mathrm{i}\hbar \dim(\mathcal H)$, a contradiction. We conclude that any Hilbert space carrying this relation must be one on which the trace of the identity is not well-defined, and is therefore necessarily infinite-dimensional.
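The two facts that the trace argument rests on can be checked numerically for matrices of any finite size; the dimension $n = 5$ and the choice $\hbar = 1$ below are arbitrary:

```python
import numpy as np

# Trace argument: for any finite n x n matrices X, P, tr([X, P]) = 0,
# while tr(i*hbar*1) = i*hbar*n != 0, so [X, P] = i*hbar*1 has no
# finite-dimensional solutions. (hbar set to 1, n = 5 chosen arbitrarily.)
rng = np.random.default_rng(1)
n = 5
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
P = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

comm = X @ P - P @ X
assert np.isclose(np.trace(comm), 0)           # trace of a commutator vanishes
assert np.isclose(np.trace(1j * np.eye(n)), 1j * n)  # trace of i*1 is i*n, never 0
```

This only rules out the relation in finite dimensions; in infinite dimensions the trace of the identity is simply undefined and the argument no longer applies.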