Notice that
$$
D_\beta(\ldots, \underbrace{w_i+w_j}_{k\text{-th place}},\ldots, \underbrace{w_i+w_j}_{m\text{-th place}},\ldots)=0
$$
because there are two equal vectors. On the other hand, by linearity:
$$
D_\beta(\ldots, {w_i+w_j},\ldots, {w_i+w_j},\ldots)=
D_\beta(\ldots, w_i,\ldots, w_j,\ldots)+
D_\beta(\ldots, w_j,\ldots, w_i,\ldots),
$$
because, in the four-term expansion, the other two terms vanish: $D_\beta(\ldots, w_i,\ldots, w_i,\ldots)=D_\beta(\ldots, w_j,\ldots, w_j,\ldots)=0$, each having two equal arguments.
You then get:
$$
D_\beta(\ldots, w_i,\ldots, w_j,\ldots)=
-D_\beta(\ldots, w_j,\ldots, w_i,\ldots),
$$
that is, your function, applied to basis vectors, changes sign whenever you exchange two of its arguments. Together with the normalization $D_\beta(w_1, \ldots, w_n)=1$ this completely determines the values of $D_\beta$ when its arguments are basis vectors. It follows by multilinearity that $D_\beta$ is uniquely determined.
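To make the uniqueness concrete: expanding every argument in the basis and discarding terms with a repeated basis vector leaves exactly one signed term per permutation, i.e. the Leibniz formula. A minimal Python sketch of that formula (the names `perm_sign` and `D` are illustrative, not from the answer):

```python
from itertools import permutations
from math import prod

def perm_sign(p):
    """Sign of a permutation, computed from its inversion count."""
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def D(rows):
    """The unique alternating multilinear form with value 1 on the
    basis, evaluated via the Leibniz expansion: rows[i] holds the
    coordinates of the i-th argument in the basis w_1, ..., w_n."""
    n = len(rows)
    return sum(perm_sign(p) * prod(rows[i][p[i]] for i in range(n))
               for p in permutations(range(n)))
```

As a sanity check, `D` vanishes on two equal arguments and returns $1$ on the basis itself, exactly the two properties used above.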
To show that applying $D_\beta$ to some vectors gives the same result as taking the determinant of the coordinate representation of those vectors, one would need a definition of determinant to compare against; my favourite definition of the determinant is in fact exactly the definition of $D_\beta$.
That seems quite opaque: It's a way of computing a quantity rather than telling what exactly it is or even motivating it. It also leaves completely open the question of why such a function exists and is well-defined. The properties you give are sufficient if you're trying to put a matrix in upper-triangular form, but what about other computations? It also gives no justification for one of the most important properties of the determinant, that $\det(ab) = \det a \det b$.
I think the best way to define the determinant is to introduce the exterior algebra $\Lambda^* V$ of a finite-dimensional space $V$. Given that, any linear map $f:V \to V$ induces a map $\bar{f}:\Lambda^n V \to \Lambda^n V$, where $n = \dim V$. But $\Lambda^n V$ is a $1$-dimensional space, so $\bar{f}$ is just multiplication by a scalar (independent of a choice of basis); that scalar is by definition exactly $\det f$. Then, for example, we get the condition that $\det f\not = 0$ iff $f$ is an isomorphism for free: For a basis $v_1, \dots, v_n$ of $V$, we have $\det f\not = 0$ iff $\bar{f}(v_1\wedge \cdots \wedge v_n) = f(v_1) \wedge \cdots \wedge f(v_n) \not = 0$; that is, iff the $f(v_i)$ are linearly independent. Furthermore, since $h = fg$ has $\bar{h} = \bar{f}\bar{g}$, we have $\det(fg) = \det f \det g$. The other properties follow similarly. It requires a bit more sophistication than is usually assumed in a linear algebra class, but it's the first construction of $\det$ I've seen that's motivated and transparently explains what's otherwise a list of arbitrary properties.
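For instance, in dimension $2$, writing $f(v_1) = a v_1 + c v_2$ and $f(v_2) = b v_1 + d v_2$, this definition recovers the familiar formula:
$$
\bar{f}(v_1 \wedge v_2) = f(v_1) \wedge f(v_2) = (a v_1 + c v_2)\wedge(b v_1 + d v_2) = (ad - bc)\, v_1 \wedge v_2,
$$
using $v_1 \wedge v_1 = v_2 \wedge v_2 = 0$ and $v_2 \wedge v_1 = -\,v_1 \wedge v_2$; hence $\det f = ad - bc$.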
For the $3\times 3$ determinant we can use the Rule of Sarrus. The $4\times 4$ determinant then reduces to $3\times 3$ determinants by cofactor expansion along the first row: $${\begin{vmatrix}a&b&c&d\\e&f&g&h\\i&j&k&l\\m&n&o&p\end{vmatrix}}=a\,{\begin{vmatrix}f&g&h\\j&k&l\\n&o&p\end{vmatrix}}-b\,{\begin{vmatrix}e&g&h\\i&k&l\\m&o&p\end{vmatrix}}+c\,{\begin{vmatrix}e&f&h\\i&j&l\\m&n&p\end{vmatrix}}-d\,{\begin{vmatrix}e&f&g\\i&j&k\\m&n&o\end{vmatrix}}.$$
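The displayed reduction applies at every size, so it can be coded recursively; a short Python sketch (the helper name `det` is mine, not from the answer):

```python
def det(m):
    """Determinant by cofactor (Laplace) expansion along the first
    row, applying the displayed reduction recursively down to 1x1."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for col in range(n):
        # Minor: delete row 0 and the current column.
        minor = [row[:col] + row[col + 1:] for row in m[1:]]
        # Alternating signs +, -, +, -, ... along the first row.
        total += (-1) ** col * m[0][col] * det(minor)
    return total
```

This is fine for small matrices like the $4\times 4$ above, though the expansion takes $O(n!)$ time, so for large matrices row reduction is preferable.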