For part (a), this is just the development (Laplace expansion) of the determinant along the first row. Actually, the $\det()$ factors should have alternating signs. Since the only occurrences of $x$ are in that first row, all the $\det()$ expressions are constants, and one gets a polynomial in $x$ of degree at most $n-1$ (from the final term).
For part (b), the fact that $P(a_i)=0$ for $i=2,3,\ldots,n$ just expresses that $P(a_i)$ equals the determinant of the matrix obtained by substituting $a_i$ for $x$: relative to the original matrix, $a_1$ has been replaced by $a_i$, and as this matrix has its rows $1$ and $i$ identical, its determinant vanishes. All this uses is that the Laplace expansion commutes with such substitution. Furthermore, a polynomial of degree at most $n-1$ with $n-1$ specified roots $a_2,\ldots,a_n$ can only be a scalar multiple of $(x-a_2)\cdots(x-a_n)$.
For part (c), this is just remarking that the $\det()$ in question is $(-1)^{n-1}$ times the determinant of the lower-left $(n-1)\times(n-1)$ submatrix, which determinant precisely matches the definition of $V_{n-1}(a_2,\ldots,a_n)$.
For part (d) write $(-1)^{n-1}\prod_{i=2}^n(x-a_i)=\prod_{i=2}^n(a_i-x)$ to get
$$
V(x,a_2,\ldots,a_n)=
(-1)^{n-1}V_{n-1}(a_2,\ldots,a_n)\prod_{i=2}^n(x-a_i)
=V_{n-1}(a_2,\ldots,a_n)\prod_{i=2}^n(a_i-x),
$$
and then set $x=a_1$ to get
$$
V(a_1,a_2,\ldots,a_n)
=V_{n-1}(a_2,\ldots,a_n)\prod_{i=2}^n(a_i-a_1).
$$
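This recursion is easy to sanity-check numerically (a NumPy sketch; the sample points and the helper name `V` are mine, not from the question):

```python
import numpy as np

def V(a):
    """Vandermonde determinant on the points a; det(M) = det(M^T), so the
    orientation (points along rows or columns) does not matter."""
    return np.linalg.det(np.vander(a, len(a), increasing=True))

a = [2.0, 3.0, 5.0, 7.0]
lhs = V(a)                                             # V(a_1, a_2, ..., a_n)
rhs = V(a[1:]) * np.prod([ai - a[0] for ai in a[1:]])  # V_{n-1} * prod(a_i - a_1)
assert abs(lhs - rhs) < 1e-8
```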
Part (e) applies induction on $n$ to $V_{n-1}(a_2,\ldots,a_n)$ (the base case is $V_0()=1=\prod_{1\leq i<j\leq 0}(a_j-a_i)$, an empty product; or if you fear $n=0$, take $V_1(a)=1=\prod_{1\leq i<j\leq 1}(a_j-a_i)$, still an empty product), to get
$$
V(a_1,a_2,\ldots,a_n)
=\left(\prod_{2\leq i<j\leq n}(a_j-a_i)\right)\prod_{j=2}^n(a_j-a_1)
=\prod_{1\leq i<j\leq n}(a_j-a_i).
$$
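The closed product formula can likewise be checked numerically (a sketch; the sample points are arbitrary):

```python
import numpy as np
from itertools import combinations

a = [1.0, 2.0, 4.0, 8.0, 9.0]
n = len(a)
# Determinant of the matrix with rows (1, a_i, a_i^2, ..., a_i^{n-1})
det = np.linalg.det(np.vander(a, n, increasing=True))
# prod_{1 <= i < j <= n} (a_j - a_i)
prod = np.prod([a[j] - a[i] for i, j in combinations(range(n), 2)])
assert abs(det - prod) < 1e-6 * abs(prod)
```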
Call that matrix $A$ and notice that it is a column permutation of a circulant matrix $C$,
$$
A = CP
$$
where $P$ is the permutation matrix with ones on the anti-diagonal and zeros elsewhere. Then
$$
\det[A] = \det[CP] = \det[C]\det[P]
$$
The determinant of the permutation factor depends only on the size $n$: reversing the order of $n$ columns is a product of $\left\lfloor\frac{n}{2}\right\rfloor$ transpositions, so it can be written as
$$
\det[P] = (-1)^{\left\lfloor\frac{n}{2}\right\rfloor}
$$
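This sign is easy to confirm numerically, since reversing $n$ columns takes $\lfloor n/2\rfloor$ swaps (a NumPy sketch):

```python
import numpy as np

def reversal_sign(n):
    """det of the n x n permutation matrix with ones on the anti-diagonal."""
    return round(np.linalg.det(np.fliplr(np.eye(n))))

# Reversing n columns is a product of floor(n/2) transpositions.
assert all(reversal_sign(n) == (-1) ** (n // 2) for n in range(1, 9))
```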
Now $C$ is
$$
\begin{bmatrix}
a_{n} & a_{n-1} & a_{n-2} & \cdots & a_1\\
a_{1} & a_{n} & a_{n-1} & \cdots & a_2\\
a_{2} & a_{1} & a_n & \cdots & a_3\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
a_{n-1} & a_{n-2} & a_{n-3} & \cdots & a_{n}\\
\end{bmatrix}
$$
$C$ is a circulant matrix with first column $(a_n, a_1, a_2, \ldots, a_{n-1})^T$. Define the associated polynomial
$$
f(\omega) = a_n + \sum_{k=1}^{n-1} a_k\omega^k
$$
Then using the product formula on the Wikipedia page for circulant matrices,
$$
\det[C] = \prod_{j=0}^{n-1}f(\omega_j),
$$
where $\omega_j=e^{\frac{2\pi i j}{n}}$ and $i=\sqrt{-1}$. Then the final formula is
$$
\det[A] = (-1)^{\left\lfloor\frac{n}{2}\right\rfloor}\prod_{j=0}^{n-1}\left(a_n + \sum_{k=1}^{n-1} a_k\omega_j^k\right)
$$
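As a sanity check, here is a NumPy sketch comparing this closed form against a direct determinant; $A$ is built from its description above as $C$ with its columns reversed, so row $i$ of $A$ is $(a_i, a_{i+1}, \dots, a_n, a_1, \dots, a_{i-1})$, and the test values are random:

```python
import numpy as np

rng = np.random.default_rng(0)

for n in range(2, 7):
    a = rng.standard_normal(n)  # a[k-1] plays the role of a_k
    # Row i of A is (a_i, a_{i+1}, ..., a_n, a_1, ..., a_{i-1}).
    A = np.array([[a[(i + j) % n] for j in range(n)] for i in range(n)])
    w = np.exp(2j * np.pi * np.arange(n) / n)  # n-th roots of unity
    # f(w) = a_n + sum_{k=1}^{n-1} a_k w^k
    f = lambda wj: a[-1] + sum(a[k - 1] * wj**k for k in range(1, n))
    rhs = (-1) ** (n // 2) * np.prod([f(wj) for wj in w])
    assert abs(np.linalg.det(A) - rhs) < 1e-8
```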
Best Answer
Here is a proof by induction on $n$: For $n=1$ this is trivial (why?). Now write $$ V_n = \begin{bmatrix} 1 & 1 & \cdots & 1\\ a_1 & a_2 & \cdots & a_n\\ a_1^2 & a_2^2 & \cdots & a_n^2\\ \vdots & \vdots & & \vdots\\ a_1^{n-1} & a_2^{n-1} & \cdots & a_n^{n-1}\\ \end{bmatrix} $$ and assume that $V_n$ is invertible; consider the matrix $V_{n+1}$. Now we apply a sequence of elementary row operations: starting from row $n+1$ and finishing with row $2$, subtract from row $j$ row $j-1$ multiplied by $a_{n+1}$. In terms of elementary matrices this amounts to multiplying $V_{n+1}$ on the left by a sequence of $n$ elementary matrices, hence by an invertible matrix $A_1$. Thus we obtain $$ A_1V_{n+1} = \begin{bmatrix} 1 & 1 & \cdots & 1 & 1\\ a_1-a_{n+1} & a_2-a_{n+1} & \cdots & a_n-a_{n+1} & 0\\ a_1(a_1-a_{n+1}) & a_2(a_2-a_{n+1}) & \cdots & a_n(a_n-a_{n+1}) & 0\\ a_1^2(a_1-a_{n+1}) & a_2^2(a_2-a_{n+1}) & \cdots & a_n^2(a_n-a_{n+1}) & 0\\ \vdots & \vdots & & \vdots & \vdots\\ a_1^{n-1}(a_1-a_{n+1}) & a_2^{n-1}(a_2-a_{n+1}) & \cdots & a_n^{n-1}(a_n-a_{n+1}) & 0 \end{bmatrix}. $$ Now consider the transpose matrix $$ V_{n+1}^TA_1^T = (A_1V_{n+1})^T = \begin{bmatrix} 1 & a_1-a_{n+1} & a_1(a_1-a_{n+1}) & a_1^2(a_1-a_{n+1}) & \cdots & a_1^{n-1}(a_1-a_{n+1})\\ 1 & a_2-a_{n+1} & a_2(a_2-a_{n+1}) & a_2^2(a_2-a_{n+1}) & \cdots & a_2^{n-1}(a_2-a_{n+1})\\ \vdots & \vdots & \vdots & \vdots & & \vdots\\ 1 & a_n-a_{n+1} & a_n(a_n-a_{n+1}) & a_n^2(a_n-a_{n+1}) & \cdots & a_n^{n-1}(a_n-a_{n+1})\\ 1 & 0 & 0 & 0 & \cdots & 0 \end{bmatrix}. $$ Since $a_j-a_{n+1}\neq 0$ for $j=1,\dots,n$ we can divide row $j$ ($1\leq j \leq n$) by the scalar $a_j-a_{n+1}$ by multiplying on the left by elementary matrices.
Thus there is an invertible matrix $A_2$ such that $$ A_2 V_{n+1}^T A_1^T = \begin{bmatrix} \frac{1}{a_1-a_{n+1}} & 1 & a_1 & a_1^2 & \cdots & a_1^{n-1}\\ \frac{1}{a_2-a_{n+1}} & 1 & a_2 & a_2^2 & \cdots & a_2^{n-1}\\ \vdots & \vdots & \vdots & \vdots & & \vdots\\ \frac{1}{a_n-a_{n+1}} & 1 & a_n & a_n^2 & \cdots & a_n^{n-1}\\ 1 & 0 & 0 & 0 & \cdots & 0 \end{bmatrix}. $$ Now we interchange rows to put row $n+1$ on top. Thus there is an invertible matrix $A_3$ such that $$ A_3A_2V_{n+1}^T A_1^T = \begin{bmatrix} 1 & 0 & 0 & 0 & \cdots & 0\\ \frac{1}{a_1-a_{n+1}} & 1 & a_1 & a_1^2 & \cdots & a_1^{n-1}\\ \frac{1}{a_2-a_{n+1}} & 1 & a_2 & a_2^2 & \cdots & a_2^{n-1}\\ \vdots & \vdots & \vdots & \vdots & & \vdots\\ \frac{1}{a_n-a_{n+1}} & 1 & a_n & a_n^2 & \cdots & a_n^{n-1} \end{bmatrix}. $$ We now subtract from row $j$ row $1$ multiplied by $\frac{1}{a_{j-1}-a_{n+1}}$, for $j=2,\dots,n+1$, so there is an invertible matrix $A_4$ such that $$ A_4A_3A_2V_{n+1}^TA_1^T = \begin{bmatrix} 1 & 0 & 0 & 0 & \cdots & 0\\ 0 & 1 & a_1 & a_1^2 & \cdots & a_1^{n-1}\\ 0 & 1 & a_2 & a_2^2 & \cdots & a_2^{n-1}\\ \vdots & \vdots & \vdots & \vdots & & \vdots\\ 0 & 1 & a_n & a_n^2 & \cdots & a_n^{n-1} \end{bmatrix}. $$ Taking transposes, we can write, in block notation, $$ A_1V_{n+1}B = \begin{bmatrix} 1 & \mathbf{0}_{1\times n}\\ \mathbf{0}_{n\times 1} & V_n \end{bmatrix}, $$ where $\mathbf{0}_{i\times j}$ is an $i\times j$ matrix of zeros and $B=(A_4A_3A_2)^T$ is an invertible matrix. By induction hypothesis $V_n$ is invertible. Thus consider the block matrix $$ C = \begin{bmatrix} 1 & \mathbf{0}_{1\times n}\\ \mathbf{0}_{n\times 1} & V_n^{-1} \end{bmatrix} $$ and a simple multiplication gives $$ A_1V_{n+1}BC=CA_1V_{n+1}B = I_{n+1} $$ where $I_{n+1}$ is the identity matrix of size $(n+1)\times(n+1)$. This shows that $V_{n+1}$ is invertible and that it equals $A_1^{-1}C^{-1}B^{-1}$; equivalently, $V_{n+1}^{-1}=BCA_1$.
Note. For those familiar with the computation of Vandermonde's determinant, it is clear that I just followed that computation, so there is nothing new here except turning it into a computation of the inverse. Also, taking transposes avoids the use of column operations, for the OP's comfort.
ADDED
As I said in the comments, here is a proof using linear maps. Let $F_n[x]$ be the vector space of polynomials of degree strictly less than $n$. We know that $$ \dim_F F_n[x]=n. $$ Consider the linear map $$ T:F_n[x]\to F^{n}, \quad f(x)\mapsto (f(x_1),f(x_2),\dots,f(x_n)). $$ This map is surjective. Indeed, let $(a_1,\dots,a_n)\in F^n$. For each $j\in\{1,\dots,n\}$ let $$ L_j(x) = \frac{(x-x_1)\cdots (x-x_{j-1})(x-x_{j+1})\cdots (x-x_n)}{(x_j-x_1)\cdots (x_j-x_{j-1})(x_j-x_{j+1})\cdots (x_j-x_n)}\in F_n[x]. $$ Note that $L_j(x_k)=\delta_{jk}$, where $\delta_{jk}$ is the Kronecker symbol. Now let $$ f(x) = \sum_{j=1}^n a_j L_j(x)\in F_n[x], $$ then $T(f)=(a_1,\dots,a_n)$. This shows that $T$ is surjective. Since both $F_n[x]$ and $F^n$ have dimension $n$, $T$ is a vector space isomorphism; in particular the matrix of $T$ in any choice of bases is invertible. Now let $B_1=(1,x,x^2,\dots,x^{n-1})$, which is a basis of $F_n[x]$, and let $B_2=(e_1,\dots,e_n)$ be the canonical basis of $F^n$. Then $$ T(1)=e_1+e_2+\cdots+e_n $$ and for $j\in\{1,\dots,n-1\}$, $$ T(x^j) = x_1^j e_1 + x_2^je_2+\cdots + x_n^j e_n. $$ This means that the matrix of $T$ in these bases is $$ [T] = \begin{bmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1}\\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1}\\ \vdots & \vdots & \vdots & & \vdots\\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \end{bmatrix} $$ and thus $[T]$ is invertible. But $[T]$ is the transpose of the Vandermonde matrix, so the latter is invertible.
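The surjectivity argument translates directly into a numerical check (a NumPy sketch; the nodes $x_j$ are arbitrary distinct values):

```python
import numpy as np

x = [0.0, 1.0, 3.0, 4.0]  # distinct nodes x_1, ..., x_n (arbitrary)
n = len(x)

def L(j, t):
    """Lagrange basis polynomial L_j evaluated at t."""
    num = np.prod([t - x[k] for k in range(n) if k != j])
    den = np.prod([x[j] - x[k] for k in range(n) if k != j])
    return num / den

# L_j(x_k) = delta_{jk}, so T sends sum_j a_j L_j to (a_1, ..., a_n).
assert all(abs(L(j, x[k]) - (j == k)) < 1e-12 for j in range(n) for k in range(n))

# Hence [T], with rows (1, x_i, ..., x_i^{n-1}), is invertible.
T = np.vander(np.array(x), n, increasing=True)
assert abs(np.linalg.det(T)) > 1e-12
```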