So the short answer is that there is no such model structure. The difficulty arises in trying to show that the class of weak equivalences has all of the necessary properties; in particular, even two-out-of-three fails for the naive definition. In fact, a difficulty arises even before that: for ordinary simplicial sets we can replace every simplicial set by a model that is "minimal" at the $\pi_0$-level, meaning that every connected component has exactly one $0$-simplex. In simplicial commutative monoids we can no longer do this. However, we could stipulate that a weak equivalence is a map that is a $\pi_*$-isomorphism at every (coherently chosen) basepoint.
For the purposes of our discussion we are going to assume that $\pi_*$-equivalences are computed using the model $S^n = \Delta^n/\partial\Delta^n$. (This is the model that most closely mimics the boundary maps in the Dold–Kan correspondence.) Now let $X = S^2$, and let $Y$ be $S^2$ with an extra $0$-simplex connected by a $1$-simplex to the original basepoint. (So it looks like a balloon on a string.) We define a map $X\rightarrow Y$ to be the evident inclusion of $S^2$, and a map $Y\rightarrow X$ to be the map collapsing the extra $1$-simplex back down. The composite of these two maps is the identity on $X$, hence certainly a weak equivalence. The map $X\rightarrow Y$ is also a weak equivalence, because adding the "string" cannot change the homotopy groups of $X$. However, the map $Y\rightarrow X$ is not a weak equivalence: $\pi_2 Y$ based at the extra point is a one-point set, while $\pi_2 X$ at its image is a two-point set.
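Schematically, writing $i\colon X\to Y$ for the inclusion and $r\colon Y\to X$ for the collapse, we have
$$ X \xrightarrow{\;i\;} Y \xrightarrow{\;r\;} X, \qquad r\circ i = \mathrm{id}_X, $$
so if the naive weak equivalences satisfied two-out-of-three, then $\mathrm{id}_X$ and $i$ being weak equivalences would force $r$ to be one as well, contradicting the $\pi_2$ computation above.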
The problem arose because showing that $\pi_*$ is independent of the basepoint in the usual Kan complex model requires the ability to "pull back" simplices along paths in the simplicial set, which uses the Kan condition. The new model has no such condition, so we cannot necessarily pull things back.
Another observation along these lines: take any connected simplicial set $X$, and let $Y$ be $X$ with a "string" attached at any basepoint. Then $*\rightarrow Y$ (the inclusion of the new point) is a weak equivalence, and $X\rightarrow Y$ (the inclusion of $X$) is a weak equivalence. Thus in the homotopy category $X$ is isomorphic to a point (and hence the homotopy category is just the category of sets), which is presumably not desired.
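In symbols: both maps in
$$ {*} \xrightarrow{\;\sim\;} Y \xleftarrow{\;\sim\;} X $$
become invertible in the homotopy category, so $X \cong {*}$ there.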
-- The Bourbon seminar
I like the simple slogan: homotopical algebra is the nonlinear generalization
of homological algebra. Let me assume that you value and appreciate homological
algebra in the broadest sense as a fundamental, successful and highly applicable tool in many areas of math (otherwise I can't conceive of an argument that would be convincing for this question). At the coarsest level homological algebra is based on the idea of resolutions, i.e. that to perform algebraic operations on objects we should describe them in terms of objects that behave well for the given operations.
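To fix ideas, here is the basic linear instance (standard homological algebra, included only for orientation): to apply $M\otimes_R -$ to a module $N$ correctly, we first resolve $N$ by projectives,
$$ \cdots \to P_2 \to P_1 \to P_0 \to N \to 0, \qquad \operatorname{Tor}_i^R(M,N) = H_i\bigl(M\otimes_R P_\bullet\bigr), $$
and the derived functors $\operatorname{Tor}$ are what the naive tensor product "should have been".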
Now let's observe that homological algebra is a linear theory,
in the sense that it deals with things like vector spaces, modules over a ring,
and more generally objects of abelian categories. What if your interests
involve more complicated objects that are not linear, such as rings, algebras,
varieties, manifolds, or categories? Philosophically it still makes sense that
we have much to gain by resolving in some appropriate sense. Homotopical
algebra is the language and toolkit built for this explicit purpose, and with many explicit applications. The $\infty$-language in my mind is just a very convenient and relatively friendly apparatus to understand, navigate and apply this theory.
Some key examples:
$\bullet$ Hodge theory. For me (and I assume many other algebraic geometers) the first
instance of homotopical algebraic thinking I encountered was Deligne's construction
of mixed Hodge structures on the cohomology of complex algebraic varieties, one
of the most powerful tools in modern algebraic geometry. The idea is that the functor "de Rham cohomology" behaves wonderfully on smooth complex projective varieties, and
most importantly carries a rich extra structure, a pure Hodge structure. We can take advantage of this for say any singular projective variety if we use the idea of
resolution, in the form of a simplicial object (a convenient nonlinear version of a chain complex) --- we replace the variety by a simplicial smooth projective variety which
is equivalent in the appropriate sense, in particular will produce the same
measurement (cohomology). The existence of such a resolution is deep geometry (resolution of singularities), but its applications don't require explicit knowledge of this geometry. It then follows that the singular variety's cohomology carries the appropriate derived version of a pure Hodge structure, namely a mixed Hodge structure.
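One standard way this is made precise (a textbook fact, not spelled out above): an augmented simplicial smooth projective variety $X_\bullet \to X$ satisfying cohomological descent gives a spectral sequence
$$ E_1^{p,q} = H^q(X_p;\mathbb{Q}) \;\Longrightarrow\; H^{p+q}(X;\mathbb{Q}), $$
in which each $E_1$-term carries a pure Hodge structure of weight $q$; the induced filtrations on the abutment assemble into Deligne's mixed Hodge structure.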
$\bullet$ The tangent complex. Another seminal circa 1970 application is the
Quillen–Illusie theory of the tangent complex. Again we want to do basic geometry, this time calculus, on a singular variety, or perhaps let's say a commutative ring, so we resolve it in the sense that befits the problem. We like affine spaces for
taking derivatives etc, so if we want to calculate derivatives (tangent spaces) on a singular variety we should resolve it by such --- replace a ring by an appropriate free resolution (this time a COsimplicial variety). This gives us a way to extend the
basic tools of calculus to singular varieties, with many corresponding applications.
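Concretely, in the standard Quillen–Illusie formulation: for a ring map $A\to B$, choose a simplicial resolution $P_\bullet \to B$ by free (polynomial) $A$-algebras; the cotangent complex is then
$$ L_{B/A} = \Omega^1_{P_\bullet/A} \otimes_{P_\bullet} B, \qquad \pi_0\bigl(L_{B/A}\bigr) \cong \Omega^1_{B/A}, $$
so the classical Kähler differentials appear in degree zero while the higher homotopy groups record the singularities.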
$\bullet$ The virtual fundamental class. This is an elaboration on the previous point which is much more recent. We would like now to integrate on a class of singular varieties,
so need a version of the fundamental class. The varieties in question arise as moduli spaces (say in Gromov-Witten or Donaldson-Thomas theories), which means they are relatively
easy to resolve in a natural way (express as a derived moduli problem). As ordinary varieties they are very badly behaved (e.g. they are not even equidimensional), but the derived moduli problem naturally carries a fundamental class.
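In one standard implementation of this idea (Behrend–Fantechi, stated here only for orientation): a perfect obstruction theory $E^\bullet \to L_M$ on the moduli space $M$ produces a class
$$ [M]^{\mathrm{vir}} \in A_{\mathrm{vd}}(M), \qquad \mathrm{vd} = \operatorname{rk} E^\bullet, $$
living in the expected (virtual) dimension regardless of how badly behaved $M$ itself is.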
$\bullet$ In representation theory the key objects
of study are again nonlinear --- associative algebras (or equivalently their categories of
modules). Thus to perform algebraic operations on these algebras we gain much by
allowing ourselves to resolve them. As mentioned above, the geometric Langlands program
is one place where homotopical language is extremely useful, but one can find the same issues in studying, say, modular representations of finite groups (e.g. the theory of support varieties and stable module categories). More generally, Hochschild/cyclic theory (the "calculus" of associative algebras, the fundamental invariants of noncommutative geometry) is a natural application of homotopical algebra. There are many spectacular achievements in this area, one famous one being the Deligne conjecture/Kontsevich formality/deformation quantization circle of ideas. The cobordism hypothesis, in my view one of the pinnacles of homotopical algebra, has among its many facets a vast generalization of Hochschild theory.
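For orientation, the basic instance of this "calculus" (standard; for an algebra $A$ over a field): Hochschild homology is computed by resolving $A$ as a bimodule over itself,
$$ HH_*(A) = \operatorname{Tor}_*^{A\otimes A^{\mathrm{op}}}(A,A), $$
with the bar resolution supplying the usual explicit complex.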
As the commenters already argued, I would not regard this book as a self-contained introduction. For instance, from a brief browse through the introductory chapters:
The reader is assumed to be already familiar with CW-complexes and several of the major theorems about them which will be generalized (e.g. the Whitehead theorem).
The reader is assumed to be familiar with homotopy in the classical sense (e.g. they point out that "homotopy" isn't an equivalence relation on maps of simplicial sets as an implicit contrast to the case of spaces).
The reader is assumed to be familiar with other important tools: e.g. they say "Recall that the integral singular homology groups $H_*(X;\Bbb Z)$ of the space $X$ are defined to be ..." (this is on page 5) and they assume structural properties of it are known.
They describe the geometric realization of a simplicial set as $$ |X| = \varinjlim_{(\Delta^n \to X)\text{ in }\Delta \downarrow X} |\Delta^n| $$ which is certainly concise and categorically valid, but it assumes that the reader already has some familiarity with the point of this construction and some of its basic properties. For example, more introductory references would discuss how each point of the realization lies in the interior of exactly one $n$-cell, give a proof that the result is a CW-complex, etc.
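For comparison, an equivalent and more hands-on description of the realization (standard, and closer to what introductory references emphasize) is the quotient
$$ |X| = \Bigl(\coprod_{n\ge 0} X_n \times |\Delta^n|\Bigr)\Big/\sim, \qquad (\alpha^* x, u) \sim (x, \alpha_* u) \text{ for } \alpha\colon [m]\to[n], $$
where $x\in X_n$ and $u\in|\Delta^m|$; the nondegenerate simplices then give the cells of the CW structure.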
Simplicial sets are a fundamental tool used basically everywhere in modern homotopy theory. However, the reason for this is that there are concrete technical problems which they solve. I realize that it might be tempting to try to skip ahead to get to the more advanced material, but it can be very difficult for a student to "get the point" without first understanding the more basic material.