This is not entirely dissimilar to the answer already posted by Chris Godsil, but I'll post it anyway; maybe it provides a slightly different angle for someone trying to understand this.
We want to show that the matrix
$$\begin{pmatrix}
x_0^0 & \cdots & x_0^n \\
\vdots & \ddots & \vdots \\
x_n^0 & \cdots & x_n^n
\end{pmatrix}$$
is invertible.
It suffices to show that the columns of this matrix are linearly independent.
So let us assume that $c_0v_0+c_1v_1+\dots+c_nv_n=\vec 0=(0,0,\dots,0)$, where $v_j=(x_0^j,x_1^j,\dots,x_n^j)$ is the $j$-th column written as a vector and $c_0,\dots,c_n\in\mathbb R$.
Then we get on the $k$-th coordinate
$$c_0+c_1x_k+c_2x_k^2+\dots+c_nx_k^n=0,$$
which means that $x_k$ is a root of the polynomial $p(x)=c_0+c_1x+c_2x^2+\dots+c_nx^n$.
Now the polynomial $p(x)$ has degree at most $n$, but it has the $n+1$ distinct roots $x_0,x_1,\dots,x_n$; hence it must be the zero polynomial, and we get $c_0=c_1=\dots=c_n=0$.
This proves that the vectors $v_0,v_1,\dots,v_n$ are linearly independent. (And, in turn, we get that the given matrix is invertible.)
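As a quick numerical sanity check of the argument above (a sketch using NumPy; the sample points below are an arbitrary choice of distinct values), the determinant of such a matrix is nonzero whenever the $x_k$ are distinct:

```python
import numpy as np

# Distinct sample points x_0, ..., x_n (arbitrary values for illustration)
x = np.array([0.0, 1.0, 2.5, -3.0])

# Vandermonde matrix with rows (x_k^0, x_k^1, ..., x_k^n),
# matching the matrix in the argument above
V = np.vander(x, increasing=True)

# Distinct points => linearly independent columns => nonzero determinant
print(np.linalg.det(V))
```

Changing one entry of `x` to duplicate another makes two rows equal and the determinant drops to $0$, in line with the root-counting argument.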
The first practical application that comes to mind is handling matrix equations and solving systems of linear equations. If you have an equation of the form $$Ax=y$$ with $A\in\mathbb R^{n\times n}$ and $x,y\in\mathbb R^{n\times 1}$, and you know that $A$ is invertible, you can find the solution via multiplication: $$Ax=y \iff A^{-1}Ax=A^{-1}y \iff I_nx=A^{-1}y \iff x=A^{-1}y.$$
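A small illustration of this equivalence chain (the matrix and right-hand side are hypothetical example values, using NumPy):

```python
import numpy as np

# An invertible matrix and right-hand side (arbitrary example values)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
y = np.array([5.0, 10.0])

# x = A^{-1} y, exactly as in the equivalence chain above
x = np.linalg.inv(A) @ y

# Equivalent but numerically preferable: solve without forming A^{-1}
x2 = np.linalg.solve(A, y)

print(x, np.allclose(A @ x, y))  # the computed x indeed satisfies Ax = y
```

In numerical practice one uses `np.linalg.solve` (or a factorization) rather than forming $A^{-1}$ explicitly, but the mathematical content is the same.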
But this is only one application; there is much more theory about matrices to discover, e.g. in linear algebra. If $\mathcal V$ and $\mathcal W$ are finite-dimensional vector spaces over the same field $F$, then every linear map from $\mathcal V$ to $\mathcal W$ can be represented by a matrix $A\in F^{m\times n}$, where $\dim(\mathcal V)=n$ and $\dim(\mathcal W)=m$.
Now let $\dim(\mathcal V)=\dim(\mathcal W)$, so that $A\in F^{n\times n}$. If we know that $A$ is invertible, we immediately know that the corresponding linear map $\varphi: \mathcal V\rightarrow\mathcal W$ is bijective, and that the linear map corresponding to $A^{-1}$ is $\varphi^{-1}$. Using properties of linear maps we also know, for example, that $0$ is not an eigenvalue of $\varphi$ or of $\varphi^{-1}$. So just from $A$ being invertible we learn a lot about the corresponding linear map.
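The eigenvalue claim can be checked numerically (a sketch with an arbitrary invertible example matrix): an invertible matrix has no zero eigenvalue, and the eigenvalues of $A^{-1}$ are the reciprocals of those of $A$.

```python
import numpy as np

# An invertible matrix (arbitrary example); trace 7, determinant 10,
# so its eigenvalues are 2 and 5 -- both nonzero
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eig_A = np.linalg.eigvals(A)
eig_Ainv = np.linalg.eigvals(np.linalg.inv(A))

print(np.sort(eig_A))     # no eigenvalue is 0
print(np.sort(eig_Ainv))  # reciprocals of the eigenvalues of A
```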
Best Answer
I think the simplest way to see it is to consider the dimensions of the matrices $A$ and $A^{-1}$ and apply simple multiplication.
So assume, without loss of generality, that $A$ is $m \times n$ with $n\neq m$. Then $A^{-1}$ has to be $n\times m$, because that is the only way the product $AA^{-1}=I_m$ is defined.
But the definition of an inverse also requires $A^{-1}A$ to be an identity matrix, and this product is $n\times n$, so you get $I_n$ instead of $I_m$, which contradicts the definition of an inverse (see ZettaSuro).
Hence $m$ must be equal to $n$.
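Since a rectangular matrix has no true inverse, the dimension clash can be illustrated with the Moore–Penrose pseudoinverse as a stand-in (a sketch with an arbitrary full-row-rank example): the two products have different sizes, and at most one of them can be an identity matrix.

```python
import numpy as np

# A rectangular 2x3 matrix (arbitrary full-row-rank example values).
# It has no true inverse; the pseudoinverse is the closest stand-in.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])
P = np.linalg.pinv(A)   # shape 3x2, analogous to the n x m "inverse" above

left = A @ P    # 2x2: equals I_2 here, since A has full row rank
right = P @ A   # 3x3: has rank at most 2, so it cannot be I_3

print(left.shape, right.shape)
print(np.allclose(left, np.eye(2)), np.allclose(right, np.eye(3)))
```

The mismatch between $I_2$ on one side and a rank-deficient $3\times 3$ product on the other is exactly the contradiction in the argument above.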