I don't have the book with me but I'm guessing that there are v observables $\xi_{1}$...$\xi_{v}$ which have discrete eigenvalues and u observables $\xi_{v+1}$...$\xi_{v+u}$ which have continuous eigenvalues. Assuming that they're talking about a complete set of commuting observables, the eigenkets are labelled $|\xi_{1}..\xi_{v},\xi_{v+1}..\xi_{v+u}\rangle$. Just restricting to the first 2 discrete observables for simplicity:
We have observables $\xi_{1}$ and $\xi_{2}$. Suppose $\xi_{1}$ has just 2 distinct eigenvalues $E_{10}$ and $E_{11}$ and $\xi_{2}$ has just 2 distinct eigenvalues $E_{20}$ and $E_{21}$. Then, since $\xi_{1}$ and $\xi_{2}$ are complete and commuting, the state space is spanned by the 4 vectors $|E_{10}E_{20}\rangle$, $|E_{11}E_{20}\rangle$, $|E_{10}E_{21}\rangle$ and $|E_{11}E_{21}\rangle$.
If I have understood it correctly, your worry is that there may be a case where you have just three independent eigenkets. Suppose this is the case, so we have $$|E_{11}E_{21}\rangle = \alpha|E_{10}E_{20}\rangle + \beta|E_{11}E_{20}\rangle + \gamma|E_{10}E_{21}\rangle$$
for some $\alpha, \beta, \gamma$.
Looking at the relationship of $|E_{11}E_{21}\rangle$ with the other 3 vectors in turn:
$$\xi_{1}|E_{11}E_{21}\rangle = E_{11}|E_{11}E_{21}\rangle; \xi_{1}|E_{10}E_{20}\rangle = E_{10}|E_{10}E_{20}\rangle$$
so $|E_{11}E_{21}\rangle$ and $|E_{10}E_{20}\rangle$ belong to different eigenvalues of $\xi_{1}$, so they must be orthogonal.
$$\xi_{2}|E_{11}E_{21}\rangle = E_{21}|E_{11}E_{21}\rangle; \xi_{2}|E_{11}E_{20}\rangle = E_{20}|E_{11}E_{20}\rangle$$
so $|E_{11}E_{21}\rangle$ and $|E_{11}E_{20}\rangle$ belong to different eigenvalues of $\xi_{2}$, so they must be orthogonal.
$$\xi_{1}|E_{11}E_{21}\rangle = E_{11}|E_{11}E_{21}\rangle; \xi_{1}|E_{10}E_{21}\rangle = E_{10}|E_{10}E_{21}\rangle$$
so $|E_{11}E_{21}\rangle$ and $|E_{10}E_{21}\rangle$ belong to different eigenvalues of $\xi_{1}$, so they must be orthogonal.
Taking the inner product of the supposed expansion with $|E_{11}E_{21}\rangle$ would then give $\langle E_{11}E_{21}|E_{11}E_{21}\rangle = 0$, which is impossible for a non-zero vector. So $|E_{11}E_{21}\rangle$ can't be expressed as a linear combination of the other three vectors: it must be an independent vector.
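This orthogonality argument is easy to check numerically. Here is a minimal numpy sketch (all eigenvalues hypothetical): build two commuting observables sharing a joint eigenbasis, then verify that eigenkets belonging to different eigenvalue pairs are orthogonal, hence independent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-dimensional sketch: two commuting Hermitian observables
# xi1, xi2 sharing a joint eigenbasis Q (hidden from us by construction).
E10, E11, E20, E21 = 1.0, 2.0, 5.0, 7.0
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))   # random orthogonal basis
xi1 = Q @ np.diag([E10, E11, E10, E11]) @ Q.T
xi2 = Q @ np.diag([E20, E20, E21, E21]) @ Q.T
assert np.allclose(xi1 @ xi2, xi2 @ xi1)       # [xi1, xi2] = 0

# Recover the joint eigenvectors by diagonalising xi1 + xi2; its four
# eigenvalues E1i + E2j (6, 7, 8, 9) are all distinct for these numbers.
vals, vecs = np.linalg.eigh(xi1 + xi2)

# Kets belonging to different eigenvalue pairs are orthogonal, so the
# four joint eigenkets are automatically linearly independent.
assert np.allclose(vecs.T @ vecs, np.eye(4))
print(np.linalg.matrix_rank(vecs))             # prints 4
```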
Analogously in the continuous eigenvalue case, the eigenvalues of the different observables can be treated as independent, and hence integrated over independently.
There are at least three notions of basis, depending on the mathematical structure you are considering. I will quickly discuss the three cases most relevant in physics (topological vector spaces are relevant too, but I will not consider them for the sake of brevity).
(1) Pure algebraic structure (i.e. vector space structure over the field $\mathbb K=$ $\mathbb R$ or $\mathbb C$; actually the definition applies to modules as well).
Basis in the sense of Hamel.
Given a vector space $V$ over the field $\mathbb K$, a set $B \subset V$ is called an algebraic basis, or Hamel basis, if its elements are linearly independent and every $v \in V$ can be decomposed as: $$v = \sum_{b \in B} c_b b$$
for a finite set of non-vanishing numbers $c_b$ in $\mathbb K$ depending on $v$.
Completeness of $B$ means here that the set of finite linear combinations of elements in $B$ includes (in fact coincides with) the whole space $V$.
Remarks.
This definition applies to infinite dimensional vector spaces, too. Existence of algebraic bases arises from Zorn's lemma.
It is possible to prove that all algebraic bases have the same cardinality.
Decomposition of $v$ over the basis $B$ turns out to be unique.
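A finite-dimensional sketch of the (unique) decomposition over a basis, using a hypothetical non-orthogonal basis of $\mathbb R^3$:

```python
import numpy as np

# Hypothetical non-orthogonal basis of R^3 (columns of B); every v
# decomposes uniquely as v = sum_b c_b b, i.e. by solving B c = v.
B = np.column_stack([[1.0, 0.0, 1.0],
                     [1.0, 1.0, 0.0],
                     [0.0, 1.0, 1.0]])
v = np.array([2.0, 3.0, 4.0])

c = np.linalg.solve(B, v)        # unique, since B has full rank
assert np.allclose(B @ c, v)     # v is recovered exactly from the c_b
```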
(2) Banach space structure (i.e. the vector space over $\mathbb K$ admits a norm $||\:\:|| : V \to \mathbb R$ and it is complete with respect to the metric topology induced by that norm).
Basis in the sense of Schauder.
Given an infinite dimensional Banach space $V$ over the field $\mathbb K = \mathbb C$ or $\mathbb R$, a countable ordered set $B := \{b_n\}_{n\in \mathbb N} \subset V$ is called a Schauder basis if every $v \in V$ can be uniquely decomposed as: $$v = \sum_{n \in \mathbb N} c_n b_n\quad (2)$$
for a set, generally infinite, of numbers $c_n \in \mathbb K$ depending on $v$, where the convergence of the sum is understood both with respect to the Banach space topology and with respect to the order used in labelling $B$. Identity (2) means:
$$\lim_{N \to +\infty} \left|\left|v - \sum_{n=1}^N c_{n} b_n\right|\right| =0$$
Completeness of $B$ means here that the set of countably infinite linear combinations of elements in $B$ includes (in fact coincides with) the whole space $V$.
Remarks.
The elements of a Schauder basis are linearly independent (both for finite and infinite linear combinations).
An infinite dimensional Banach space also admits Hamel bases, since it is a vector space too. However, it is possible to prove that Hamel bases of such a space are always uncountable, unlike Schauder bases.
Not all infinite dimensional Banach spaces admit Schauder bases. A necessary, but not sufficient, condition is that the space be separable (namely, that it contain a dense countable subset).
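As a concrete sketch, the canonical vectors $e_n$ form a Schauder basis of $\ell^2$, and the convergence of the partial sums can be watched numerically (the sequence is truncated only to make the computation finite):

```python
import numpy as np

# The canonical vectors e_n are a Schauder basis of l^2. For
# v = (1, 1/2, 1/3, ...) the partial sums sum_{n<=N} v_n e_n converge
# to v in the l^2 norm; the sequence is truncated at length M purely
# to make the computation finite.
M = 10_000
v = 1.0 / np.arange(1, M + 1)

def tail_norm(N):
    # ||v - sum_{n=1}^N v_n e_n||_2 = l^2 norm of the discarded tail
    return np.linalg.norm(v[N:])

assert tail_norm(10) > tail_norm(100) > tail_norm(1000)  # -> 0 as N grows
```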
(3) Hilbert space structure (i.e. the vector space over $\mathbb K$ admits a scalar product $\langle \:\:| \:\:\rangle : V \times V \to \mathbb K$ and it is complete with respect to the metric topology induced by the norm
$||x||:= \sqrt{\langle x| x\rangle }$).
Basis in the sense of Hilbert (Riesz- von Neumann).
Given an infinite dimensional Hilbert space $V$ over the field $\mathbb K = \mathbb C$ or $\mathbb R$, a set $B \subset V$ is called a Hilbert basis if the following conditions are true:
(1) $\langle z | z \rangle =1$ and $\langle z | z' \rangle =0$
if $z,z' \in B$ and $z\neq z'$, i.e. $B$ is an orthonormal system;
(2) if $\langle x | z \rangle =0$ for all $z\in B$ then $x=0$ (i.e. $B$ is maximal with respect to the orthogonality requirement).
Hilbert bases are also called complete orthonormal systems (of vectors).
The relevant properties of Hilbert bases are fully encompassed within the following pair of propositions.
Proposition. If $H$ is a (complex or real) Hilbert space and $B\subset H$ is an orthonormal system (not necessarily complete) then, for every $x \in H$, the set of elements $z \in B$ with $\langle x| z \rangle \neq 0$ is at most countable.
Theorem. If $H$ is a (complex or real) Hilbert space and $B\subset H$ is a Hilbert basis, then the following identities hold, where the order employed in computing the infinite sums (in fact countable sums due to the previous proposition) does not matter:
$$||x||^2 = \sum_{z\in B} |\langle x| z\rangle|^2\:, \qquad \forall x \in H\:,\qquad (3)$$
$$\langle x| y \rangle = \sum_{z\in B} \langle x|z \rangle \langle z| y\rangle\:, \qquad \forall x,y \in H\:,\qquad (4)$$
$$\lim_{N \to +\infty} \left|\left| x - \sum_{n=0}^N z_n \langle z_n|x \rangle \right|\right| =0\:, \qquad \forall x \in H \:,\qquad (5)$$
where the $z_n$ are the elements of $B$ with $\langle z_n|x\rangle \neq 0$.
If an orthonormal system satisfies any one of the three identities above, then it is a Hilbert basis.
Completeness of $B$ means here that the set of infinite linear combinations of elements in $B$ includes (in fact coincides with) the whole space $H$.
Remarks.
The elements of a Hilbert basis are linearly independent (both for finite and infinite linear combinations).
All Hilbert spaces admit Hilbert bases. In a fixed Hilbert space, all Hilbert bases have the same cardinality.
An infinite dimensional Hilbert space is separable (i.e. it contains a dense countable subset) if and only if it admits a countable Hilbert basis.
An infinite dimensional Hilbert space also admits Hamel bases, since it is a vector space as well.
In a separable infinite dimensional Hilbert space a Hilbert basis is also a Schauder basis (the converse is generally false).
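Identities (3) and (4) can be checked numerically in a finite-dimensional stand-in, taking the orthonormal columns of the discrete Fourier matrix as the "Hilbert basis" (a numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64

# The normalised DFT columns are an orthonormal basis of C^N, a
# finite-dimensional stand-in for a Hilbert basis {z}.
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
assert np.allclose(F.conj().T @ F, np.eye(N))

x = rng.normal(size=N) + 1j * rng.normal(size=N)
y = rng.normal(size=N) + 1j * rng.normal(size=N)
cx = F.conj().T @ x     # the coefficients <z|x>, one per basis vector z
cy = F.conj().T @ y

# Identity (3), Parseval:  ||x||^2 = sum_z |<x|z>|^2
assert np.isclose(np.linalg.norm(x) ** 2, np.sum(np.abs(cx) ** 2))
# Identity (4):  <x|y> = sum_z <x|z><z|y>
assert np.isclose(np.vdot(x, y), np.vdot(cx, cy))
```

(`np.vdot` conjugates its first argument, matching the physicists' convention for $\langle x|y\rangle$.)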
FINAL COMMENTS.
Identities like this:
$$ \sum_n \phi_n(x) \phi_n^*(x') = \frac{1}{w(x)}\delta(x-x') \,\qquad (6)$$
express the completeness property of a Hilbert basis in $L^2(X, w(x)dx)$: identity (6) is nothing but a formal version of equation (4) above.
However such an identity is completely formal and, in general, it does not hold if $\{\phi_n\}$ is a Hilbert basis of $L^2(X, w(x)dx)$ (also because the value of $\phi_n$ at $x$ does not make any sense in $L^2$ spaces, as their elements are defined up to zero-measure sets and $\{x\}$ has zero measure). That identity sometimes holds rigorously if (1) the functions $\phi_n$ are sufficiently regular and (2) the identity is understood in the distributional sense, working with suitably smooth test functions such as ${\cal S}(\mathbb R)$ on $\mathbb R$.
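As a sketch of how (6) is to be understood distributionally, take the Fourier basis $\phi_n(x)=e^{inx}/\sqrt{2\pi}$ on $L^2(-\pi,\pi)$ (so $w\equiv 1$): the truncated sum is the Dirichlet kernel, and smearing it against a smooth periodic test function reproduces the value of the function at the centre only in the limit $N\to\infty$:

```python
import numpy as np

# Fourier basis phi_n(x) = e^{inx}/sqrt(2 pi) on L^2(-pi, pi), w = 1.
# The truncated completeness sum over |n| <= N is the Dirichlet kernel;
# it acts like delta(x - x') only when smeared against a smooth test
# function, and only in the limit N -> infinity.
def dirichlet_kernel(x, N):
    n = np.arange(-N, N + 1)
    return np.sum(np.exp(1j * np.outer(n, x)), axis=0).real / (2 * np.pi)

x = np.linspace(-np.pi, np.pi, 4000, endpoint=False)
dx = x[1] - x[0]
f = np.exp(np.cos(x))    # a smooth periodic test function, f(0) = e

# Smearing the kernel centred at x' = 0 against f approximates f(0).
approx = np.sum(dirichlet_kernel(x, 50) * f) * dx
assert abs(approx - np.e) < 1e-6
```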
In $L^2(\mathbb R^n, d^nx)$ spaces all Hilbert bases are countable. Think of the basis of eigenvectors of the Hamiltonian operator of a harmonic oscillator in $L^2(\mathbb R)$ (in $\mathbb R^n$ one may use an $n$-dimensional harmonic oscillator). However, essentially for practical computations, it is convenient to speak also of formal eigenvectors of, for example, the position operator: $|x\rangle$. In this case $x \in \mathbb R$, so it could seem that $L^2(\mathbb R)$ also admits uncountable bases. This is false! $\{|x\rangle\}_{x\in \mathbb R}$ is not an orthonormal basis; it is just a formal object, (very) useful in computations.
If you want to make these objects rigorous, you should picture the space of states as a direct integral over $\mathbb R$ of finite dimensional spaces $\mathbb C$, or as a rigged Hilbert space. In both cases, however, $\{|x\rangle\}_{x\in \mathbb R}$ is not an orthonormal Hilbert basis, and $|x\rangle$ does not belong to $L^2(\mathbb R)$.
Hilbert bases are not enough to state and prove the spectral decomposition theorem for normal operators in a complex Hilbert space. Normal operators $A$ are those verifying $AA^\dagger= A^\dagger A$, unitary and self-adjoint ones are particular cases.
The notion of Hilbert basis is, however, enough to state the said theorem for normal compact operators, or for normal operators whose resolvent is compact. In that case the spectrum is a pure point spectrum (with at most one point belonging to the continuous part of the spectrum). This happens, for example, for the Hamiltonian operator of the harmonic oscillator. In general, one has to introduce the notion of spectral measure, or PVM (projector-valued measure), to treat the general case.
The set is complete if there is only one basis of common eigenvectors, that is, only one basis in which all the matrices are diagonal.
Let's start with only 2: operators $A$ and $B$. If $[A,B]=0$, there is at least one orthonormal basis of common eigenvectors.
If the eigenvalues of $A$ are non-degenerate, then the basis is unique (up to global phase factors), and hence the set is complete.
If $A$ has degenerate eigenvalues, the corresponding eigenvectors span eigenspaces (the matrix has blocks along the diagonal). $B$ maps each such eigenspace into itself, without mixing different ones.
Inside every eigenspace you can find an eigenbasis of $B$, which splits it into sub-subspaces (sub-blocks).
If any of those sub-subspaces is more than 1-dimensional, then the set is not complete, but there may be a third commuting observable $C$ that makes the matrices fully diagonal.
There might be more than one CSCO, with different eigenvalues.
For a given CSCO, the eigenvalues of all the operators specify one and only one common eigenvector.
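A small numerical sketch of the block argument (matrices hand-picked for illustration): $A$ has a degenerate eigenvalue, the commuting $B$ splits the degenerate block, and the pair $(a,b)$ labels each common eigenvector uniquely:

```python
import numpy as np

# Hand-picked toy matrices: A has the twice-degenerate eigenvalue 1,
# and B commutes with A, acting inside that 2-dimensional eigenspace.
A = np.diag([1.0, 1.0, 2.0])
B = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 3.0]])
assert np.allclose(A @ B, B @ A)        # [A, B] = 0

# A generic combination A + 0.1 B has non-degenerate eigenvalues, and
# its eigenvectors diagonalise A and B simultaneously.
_, V = np.linalg.eigh(A + 0.1 * B)
a = np.diag(V.T @ A @ V).round(6)       # eigenvalue of A for each column
b = np.diag(V.T @ B @ V).round(6)       # eigenvalue of B for each column

# Each common eigenvector carries a distinct pair (a_i, b_i), so the
# pair of eigenvalues labels it uniquely: {A, B} is a CSCO here.
assert len(set(zip(a, b))) == 3
```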
As for the practical question, you can show it in the particular case.
But one can show that any spherically symmetric setup satisfies $[H,\vec{L}]=0$, and therefore $[H, L^2]=0$ and $[H, L_z]=0$.
This is because any system which is invariant under rotations has a Hamiltonian $H$ that commutes with rotations, and rotations are generated by $\vec{L}$, so $[H, \vec{L}]=0$. It is lengthy to prove, but it is a very beautiful topic.
Note: "invariant under rotations" means that you get the same result whether you (a) let the system evolve in time and then rotate it, or (b) rotate it first and then let it evolve.
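For a concrete check one can use the standard $l=1$ angular-momentum matrices (with $\hbar=1$) and a toy rotation-invariant Hamiltonian $H = L^2/2$ (my choice, purely for illustration):

```python
import numpy as np

# Standard l = 1 angular-momentum matrices (hbar = 1).
s = 1 / np.sqrt(2)
Lx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Ly = np.array([[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]])
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz

assert np.allclose(Lx @ Ly - Ly @ Lx, 1j * Lz)   # [Lx, Ly] = i Lz

# L^2 commutes with every component, so any rotation-invariant
# H = f(L^2) automatically satisfies [H, L^2] = [H, Lz] = 0.
for L in (Lx, Ly, Lz):
    assert np.allclose(L2 @ L, L @ L2)           # [L^2, L_i] = 0

H = 0.5 * L2     # toy rotation-invariant Hamiltonian (my choice)
assert np.allclose(H @ Lz, Lz @ H)               # [H, Lz] = 0
```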