I think the author was not well-advised to omit the proof.
We begin by recalling some facts from linear algebra:
Let $V$ be a vector space over a field $K$. Each matrix $A = (a_{ij}) \in M_n(K)$ induces a linear map
$$A : V^n \to V^n, \quad A(v_1,\dots,v_n) = \Big(\sum_{j=1}^n a_{1j}v_j,\dots,\sum_{j=1}^n a_{nj}v_j\Big) .$$
Note that in this definition $\dim(V)$ is arbitrary. If $\mathbf{v} = (v_1,\dots,v_n) \in V^n$ forms a basis of $V$ (which requires $\dim(V) = n$), then for each $\mathbf{w} = (w_1,\dots,w_n) \in V^n$ there is a unique matrix $A(\mathbf{v},\mathbf{w}) \in M_n(K)$ such that $A(\mathbf{v},\mathbf{w})(\mathbf{v}) = \mathbf{w}$. We have $A(\mathbf{v},\mathbf{w}) \in GL_n(K)$ if and only if $(w_1,\dots,w_n)$ forms a basis of $V$; in that case $A(\mathbf{v},\mathbf{w})$ realizes the change of basis from $\mathbf{v}$ to $\mathbf{w}$.
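To see the matrix $A(\mathbf{v},\mathbf{w})$ concretely: if the $v_j$ are stored as the rows of a matrix `V`, the defining condition $A(\mathbf{v},\mathbf{w})(\mathbf{v}) = \mathbf{w}$ becomes `A @ V == W`, so $A(\mathbf{v},\mathbf{w}) = W V^{-1}$. A minimal numpy sketch with illustrative bases of $\mathbb{R}^2$ (the data below is made up, not from the text):

```python
import numpy as np

# Rows of V and W are the tuples v = (v_1, ..., v_n) and w = (w_1, ..., w_n).
V = np.array([[1.0, 0.0],
              [1.0, 1.0]])   # a basis of R^2
W = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # another basis of R^2

# A(v, w) is the unique matrix with sum_j A_ij v_j = w_i, i.e. A @ V == W.
A = W @ np.linalg.inv(V)
assert np.allclose(A @ V, W)

# A(v, w) is invertible exactly because the rows of W form a basis.
assert abs(np.linalg.det(A)) > 1e-12
```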
Each linear map $f : V \to W$ induces a linear map
$$f^n : V^n \to W^n, f^n(v_1,\dots,v_n) = (f(v_1),\dots,f(v_n)) .$$
It is readily verified that
$$f^n(A(\mathbf{v})) = A(f^n(\mathbf{v})) .$$
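In matrix terms this identity is just associativity of matrix multiplication: if the $v_i$ are the rows of a matrix `Y` and $f$ is given by a matrix `F`, then $f^n$ acts row-wise as `Y @ F.T`, and the claim reads `(A @ Y) @ F.T == A @ (Y @ F.T)`. A quick numeric sanity check with random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # A in M_3(R)
F = rng.standard_normal((2, 5))   # a linear map f : R^5 -> R^2
Y = rng.standard_normal((3, 5))   # rows are v_1, v_2, v_3 in R^5

# f^n(A(v)) = A(f^n(v)) becomes (A @ Y) @ F.T == A @ (Y @ F.T).
assert np.allclose((A @ Y) @ F.T, A @ (Y @ F.T))
```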
We now come to the proof of continuity. Given a fixed basis $\mathbf{x} = (x_1,\dots,x_n)$ of $X$, the orthogonal projection $\pi : \mathbb{R}^q \to X$ can be written as $\pi(y) = \sum_{j=1}^n \xi_j(y) x_j$ with linear maps $\xi_j :\mathbb{R}^q \to \mathbb{R}$. This gives us a linear map
$$\phi : (\mathbb{R}^q)^n \to M_n(\mathbb{R}), \phi(y_1,\dots,y_n)_{ij} = \xi_j(y_i) .$$
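Concretely, if the $x_j$ are the columns of a $q \times n$ matrix, the functionals $\xi_j$ are the rows of $(X^T X)^{-1} X^T$ (the normal equations for orthogonal projection). A hedged numpy sketch of $\phi$ under this assumption (all data below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
q, n = 5, 2
Xb = rng.standard_normal((q, n))        # columns x_1, ..., x_n: a basis of X <= R^q

# pi(y) = Xb @ xi(y) with xi(y) = (Xb^T Xb)^{-1} Xb^T y, so the rows of Xi
# are the coordinate functionals xi_1, ..., xi_n.
Xi = np.linalg.solve(Xb.T @ Xb, Xb.T)   # n x q
P = Xb @ Xi                             # q x q projection matrix, P @ y = pi(y)

def phi(Y):
    """phi(y_1, ..., y_n)_{ij} = xi_j(y_i); the y_i are the rows of Y (n x q)."""
    return Y @ Xi.T

# phi(y) realizes the change of basis: row i of phi(Y) @ Xb.T equals pi(y_i).
Y = rng.standard_normal((n, q))
assert np.allclose(phi(Y) @ Xb.T, Y @ P.T)
```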
For each $\mathbf{y} = (y_1,\dots,y_n) \in q^{-1}(U)$ the span $q(\mathbf{y})$ is mapped by $\pi$ isomorphically onto $X$. Hence $\pi^n(\mathbf{y}) = (\pi(y_1),\dots,\pi(y_n)) = (\sum_{j=1}^n \xi_j(y_1) x_j,\dots,\sum_{j=1}^n \xi_j(y_n) x_j)$ is a basis of $X$. Thus the matrix $\phi(\mathbf{y})$ realizes the change of basis from $\mathbf{x}$ to $\pi^n(\mathbf{y})$, i.e. we have $\phi(\mathbf{y})(\mathbf{x}) = \pi^n(\mathbf{y})$. We conclude
$$\phi(q^{-1}(U)) \subset GL_n(\mathbb{R}) .$$
Since linear maps between finite-dimensional vector spaces (endowed with any norm) are continuous, and since inverting matrices in $GL_n(\mathbb{R})$ is continuous (by Cramer's rule, the entries of $A^{-1}$ are rational functions of the entries of $A$), we see that
$$\psi : q^{-1}(U) \to GL_n(\mathbb{R}), \psi(\mathbf{y}) = \phi(\mathbf{y})^{-1}$$
is continuous. The matrix $\psi(\mathbf{y})$ realizes the change of basis from $\pi^n(\mathbf{y})$ to $\mathbf{x}$, i.e. we have $\psi(\mathbf{y})(\pi^n(\mathbf{y})) = \mathbf{x}$.
For $\mathbf{y} \in q^{-1}(U)$ define
$$D(\mathbf{y}) = \psi(\mathbf{y})(\mathbf{y})$$
where we regard $\psi(\mathbf{y})$ as a linear map $q(\mathbf{y})^n \to q(\mathbf{y})^n$. Since $\mathbf{y}$ is a basis of $q(\mathbf{y})$ and $\psi(\mathbf{y})$ is invertible, $D(\mathbf{y})$ is again a basis of $q(\mathbf{y})$, and $q(\mathbf{y}) \in U$. Hence $D(\mathbf{y}) \in q^{-1}(U)$, i.e. we have defined a function
$$D : q^{-1}(U) \to q^{-1}(U) .$$
We have
$$\pi^n(D(\mathbf{y})) = \pi^n(\psi(\mathbf{y})(\mathbf{y})) = \psi(\mathbf{y})(\pi^n(\mathbf{y})) = \mathbf{x}$$
which shows that our $D$ is the same as the author's.
To see that $D$ is continuous, note that the coordinate functions $D_i : q^{-1}(U) \to \mathbb{R}^q$ are given by $D_i(\mathbf{y}) = \sum_{j=1}^n \psi(\mathbf{y})_{ij}y_j = \sum_{j=1}^n \psi(\mathbf{y})_{ij}p_j(\mathbf{y})$ with (continuous!) coordinate projections $p_j : (\mathbb{R}^q)^n \to \mathbb{R}^q$; each $D_i$ is therefore a sum of products of continuous functions.
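Putting the pieces together, here is a minimal numpy sketch of the whole normalization $D$ (the basis `Xb` and the frame `Y` below are illustrative data): it checks both $\pi^n(D(\mathbf{y})) = \mathbf{x}$ and that $D(\mathbf{y})$ spans the same subspace as $\mathbf{y}$.

```python
import numpy as np

rng = np.random.default_rng(2)
q, n = 5, 2
Xb = rng.standard_normal((q, n))        # columns x_1, ..., x_n: a basis of X <= R^q
Xi = np.linalg.solve(Xb.T @ Xb, Xb.T)   # rows are the projection coordinates xi_j

def D(Y):
    """Normalize a frame (rows y_1, ..., y_n) whose span projects isomorphically onto X."""
    Phi = Y @ Xi.T                      # phi(y)_{ij} = xi_j(y_i)
    return np.linalg.solve(Phi, Y)      # psi(y)(y) with psi(y) = phi(y)^{-1}

# A frame in q^{-1}(U), e.g. a small perturbation of x itself (rows of Xb.T):
Y = Xb.T + 0.1 * rng.standard_normal((n, q))
Z = D(Y)

# pi^n(D(y)) = x: phi(Z) is the identity, so projecting each z_i returns x_i.
assert np.allclose(Z @ Xi.T @ Xb.T, Xb.T)
# D(y) and y span the same subspace (their rows are related by psi(y) in GL_n).
assert np.linalg.matrix_rank(np.vstack([Y, Z])) == n
```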
The finite-dimensional Grassmannians do classify a subclass of vector bundles. $\text{Gr}_n(F^m)$ classifies exactly the rank $n$ subbundles of the trivial bundle $F^m$; it follows that the classifying map of a vector bundle $V$ of rank $n$ factors through $\text{Gr}_n(F^m)$ iff there exists another vector bundle $W$ of rank $m-n$ such that $V \oplus W \cong F^m$ is trivial(izable).
It's known that if $X$ is a $d$-dimensional CW complex, then such a $W$ of rank at most $d$ always exists, so the classifying map of a vector bundle of rank $n$ over $X$ factors through $\text{Gr}_n(F^{n+d})$.
Finding the smallest $W$ is delicate; there are obstructions coming from characteristic classes. For $F = \mathbb{R}$, $X$ a smooth manifold of dimension $n$, and $V$ the tangent bundle of $X$, it's known that in the worst case ($X$ a suitable product of real projective spaces) the smallest $W$ has rank $n - \alpha(n)$, where $\alpha(n)$ is the number of $1$s in the binary expansion of $n$. This is closely related to the question of the minimal $m$ for which $X$ admits a smooth immersion into $\mathbb{R}^m$; see, for example, these notes (which contain a proof of the claim in the second paragraph).
Consider the collection $M$ of rank $n$ matrices inside the space of all $n\times (n+k)$ matrices. There is a map $M \to G(n,n+k)$ that sends a matrix to its row span. This map is onto and induces the quotient topology on $G(n,n+k)$. Since row operations do not change the row span, the reduced row echelon form gives you canonical representatives for the class of a matrix.
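As a small sympy illustration of these canonical representatives (the matrices below are made-up examples): two rank $n$ matrices have the same row span, hence define the same point of $G(n,n+k)$, exactly when their reduced row echelon forms agree.

```python
import sympy as sp

# Two 2 x 4 matrices of rank 2 with the same row span: the rows of M2 are
# invertible combinations of the rows of M1, so both define the same point
# of G(2, 4).
M1 = sp.Matrix([[1, 2, 0, 3],
                [0, 1, 1, 1]])
M2 = sp.Matrix([[1, 3, 1, 4],    # row 1 + row 2 of M1
                [0, 2, 2, 2]])   # 2 * (row 2 of M1)

# rref() returns (rref_matrix, pivot_columns); the RREF is the canonical
# representative of the row-equivalence class.
assert M1.rref()[0] == M2.rref()[0]
```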