I don't know what Toën was talking about, but I suspect it was about finiteness conditions for Artin stacks: the problem is that the usual finiteness conditions we consider for schemes (like the notion of constructibility for l-adic sheaves) do not extend to stacks in a straightforward way, which causes trouble if one wants to count points
(i.e. to define things like Euler characteristics). Some notions of finiteness have been developed to define Grothendieck rings of Artin stacks (e.g. in Toën's paper arXiv:0509098 and in Ekedahl's paper arXiv:0903.3143), which can be realized through our favourite cohomologies (l-adic, Hodge, etc.), but the link with a good notion of finiteness for categories of coefficients over Artin stacks (l-adic sheaves, variations of mixed Hodge structures) does not seem to be fully understood yet, at least conceptually (and by myself).
As for finiteness conditions for sheaves (in some homotopical context), the kind of properties we might want to look at are of the following shape.
Consider a variety X of your favourite kind, and a derived category D(X) of sheaves over
some site S associated to X (e.g. the site of open subschemes of X, or of étale or smooth varieties over X, etc.).
For instance, D(X) might be the homotopy category of the model category of simplicial sheaves, or the derived category of sheaves of R-modules.
Important finiteness properties can be expressed by saying that, for any U in the site S,
we have

(1) $\mathrm{hocolim}_i\, R\Gamma(U,F_i) = R\Gamma(U,\mathrm{hocolim}_i\, F_i)$
where {Fᵢ} is a filtered diagram of coefficients. If you are in such a context, then
you can look at the compact objects in D(X), i.e. the objects A of D(X) such that

(2) $\mathrm{hocolim}_i\, R\mathrm{Hom}(A,F_i) = R\mathrm{Hom}(A,\mathrm{hocolim}_i\, F_i)$
for any filtered diagram $\{F_i\}$. In good situations, condition (1) implies that the category of compact objects coincides with the category of constructible objects (i.e. the smallest subcategory of D(X) stable under finite homotopy colimits (finite meaning: indexed by finite posets) which contains the representable objects).
Sufficient conditions to get (1) are the following:
a) For simplicial sheaves (as well as sheaves of spectra or of R-modules...), a sufficient condition is that the topology on S is defined by a cd-structure in the sense of Voevodsky (see arXiv:0805.4578). These include the Zariski topology, the Nisnevich topology, as well as the cdh topology (the latter being generated by Nisnevich coverings together with blow-ups in a suitable sense), at least if we work with noetherian schemes of finite dimension. Note also that topologies associated to cd-structures define what Morel and Voevodsky call a site of finite type (in the language of Lurie, this means that, for such sites, the notion of descent is the same as the notion of hyperdescent: descent for infinity-stacks over S can be tested using truncated hypercoverings alone; this is the issue discussed by David Ben Zvi above).
In practice, the existence of a cd-structure allows you to express (hyper)descent using only Mayer-Vietoris-like long exact sequences (the case of the Zariski topology was discovered in the 70's by Brown and Gersten, who used it to prove Zariski descent for algebraic K-theory).
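To make the Mayer-Vietoris shape explicit in the simplest case (a sketch for the Zariski cd-structure, with $F$ a sheaf of complexes): for an open cover $X = U \cup V$, descent for $F$ amounts to the square

```latex
% For X = U \cup V a Zariski open cover, descent for F says the square
\begin{array}{ccc}
R\Gamma(X,F) & \longrightarrow & R\Gamma(U,F) \\
\big\downarrow & & \big\downarrow \\
R\Gamma(V,F) & \longrightarrow & R\Gamma(U\cap V,F)
\end{array}
% being a homotopy pullback, whence the Mayer--Vietoris long exact sequence
\cdots \to H^n(X,F) \to H^n(U,F)\oplus H^n(V,F) \to H^n(U\cap V,F) \to H^{n+1}(X,F) \to \cdots
```

For the Nisnevich and cdh topologies the same pattern holds with the cover $\{U, V\}$ replaced by the distinguished squares of the cd-structure.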
b) For complexes of sheaves of R-modules, a sufficient set of conditions is:
i) the site S is coherent (in the sense of SGA4).
ii) any object of the site S is of finite cohomological dimension (with coefficients in R).
The idea of the proof of (1) under assumption b) is that one proves it first when all the $F_i$'s are concentrated in degree 0 (this is done in SGA4 under assumption b)i)).
This implies the result when the $F_i$'s are uniformly bounded. Then one uses the fact that, under condition b)ii), the Leray spectral sequence converges strongly, even for unbounded complexes (this is done at the beginning of the paper of Suslin and Voevodsky, "Bloch-Kato conjecture and motivic cohomology with finite coefficients").
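Concretely, the spectral sequence in play here is the hypercohomology (descent) spectral sequence; condition b)ii) confines it to finitely many columns, which is what forces strong convergence for unbounded $F$:

```latex
E_2^{p,q} = H^p\big(U, \mathcal{H}^q(F)\big) \;\Longrightarrow\; H^{p+q}(U,F),
\qquad E_2^{p,q} = 0 \ \text{for } p > \mathrm{cd}_R(U).
```

Each $H^n(U,F)$ is then computed from finitely many $E_2$-terms, and since each of those terms commutes with filtered colimits of the $F_i$, so does the abutment.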
This works, for instance, for étale sheaves of R-modules with R=Z/n, where n is prime to the residual characteristics. Note moreover that, in the derived category of R-modules, the compact objects (i.e. the complexes A satisfying (2)) are precisely the perfect complexes.
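As a basic sanity check of (2): the ring $R$ itself, placed in degree $0$, is compact, simply because $R\mathrm{Hom}_R(R,-)$ is the identity functor:

```latex
\mathrm{hocolim}_i\, R\mathrm{Hom}_R(R,F_i)
  \;\simeq\; \mathrm{hocolim}_i\, F_i
  \;\simeq\; R\mathrm{Hom}_R\big(R,\, \mathrm{hocolim}_i\, F_i\big).
```

A perfect complex is built from $R$ by finitely many shifts, cones and retracts, and each of these operations preserves condition (2), which is one half of the equivalence "compact = perfect".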
The fact that the six Grothendieck operations preserve constructibility can then be translated into the finiteness of cohomology groups (note however that the notion of constructibility is more complex than this in general: if we work with l-adic sheaves
(with Ekedahl's construction, for instance), then the notion of constructibility does not agree with compactness anymore). However, condition (1) is preserved after taking the Verdier quotient of D(X) by any thick subcategory T obtained as the smallest thick subcategory which is closed under small sums and which contains a given small set of compact objects of D(X) (this is Thomason's theorem). This is how such nice properties survive in the context of the homotopy theory of schemes, for instance.
Note also that, in a stable (triangulated) context, condition (2) for A implies the same property without the requirement that the diagrams $\{F_i\}$ be filtered.
For your second question, the extension of a cohomology theory to simplicial varieties is automatic (whenever the cohomology is given by a complex of presheaves), at least if we have enough room to take homotopy limits, which is usually the case (and not difficult to force if necessary). The only trouble is that you might lose the finiteness conditions, unless you prove that your favorite simplicial object A satisfies (2). The fact that Hironaka's resolution of singularities gives the good construction (i.e. gives nice objects for open and/or singular varieties) can be explained by finiteness properties related to descent by blow-ups (i.e. cdh descent), but the arguments needed for this rely strongly on the fact that we work in a stable context (I don't know any argument like this for simplicial sheaves). The fuzzy idea is that if a cohomology theory satisfies Nisnevich descent and homotopy invariance, then it satisfies cdh descent (there is a nice very general proof of this in Voevodsky's paper arXiv:0805.4576 (thm 4.2), where you will see that we need to be able to desuspend); then, thanks to Hironaka, locally for the cdh topology, any scheme is the complement of a strict normal crossing divisor in a projective and smooth variety. As the cdh topology has nice finiteness properties (namely a)), and as
any k-scheme of finite type is coherent in the cdh topos, this explains, roughly, why we get nice extensions of our cohomology theories (as long as you have a good knowledge of smooth and projective varieties). If we work with rational coefficients, the same principle applies to schemes over an excellent noetherian scheme S of dimension less than or equal to 2, using de Jong's results instead of Hironaka's, and replacing the cdh topology by the h topology (the latter being obtained from the cdh topology by adding finite surjective morphisms): it is then sufficient to have a good control of proper regular S-schemes.
Best Answer
The fact that various finiteness conditions lead to good theorems which are manifestly false in their absence seems like a good explanation of why they are important. (In fact, I am having trouble thinking of a wholly different kind of explanation for why anything in pure mathematics is important.)
I think you are on to something to the extent that we need to give nonexamples and counterexamples along with our theorems in order to give students even a fighting chance at appreciating them. In the realm of commutative algebra this was something that was notoriously underappreciated until relatively recently: I recall well Rota writing about the "hygienic theorems" [Rota, Indiscrete Thoughts, pp. 215-216] in algebra, e.g. things like "Every regular domain is normal". As he wrote, we have no chance of grasping results like this unless we see examples -- preferably several -- of domains which are not regular, not normal, and normal but not regular. In this particular example this is easily done, but unfortunately many of the core counterexamples in the subject have a reputation of being too difficult to show beginners. At this point I feel the need to quote directly from p. 136 of Reid's Undergraduate Commutative Algebra:
This is very well said (well, except that I honestly don't know what's wrong with $\mathfrak{m}$...): most of the standard texts in commutative algebra leave unanswered the natural questions an alert reader will have: Is this hypothesis necessary? Is the converse of this result true? What happens if we don't assume that $M$ is a finitely generated module over a Noetherian domain? And so forth.
By a coincidence I have just finished -- that is, within the last half hour -- teaching a first graduate course on commutative algebra. I tried to spend a lot of time on examples, and I was not afraid to make "technical" digressions about what happens when $M$ is not a finitely generated....Especially I spent an extra long amount of time on module-theoretic questions, which made me feel closer to the heart of the subject. It is easy to motivate the need for modules to be finitely generated: there is a structure theorem for finitely generated modules over a PID but there is no structure theorem for infinitely generated abelian groups. The example of $\mathbb{Q}_p$ as a $\mathbb{Z}_p$-module shows that even over a DVR infinitely generated modules can have a complicated structure. Then, when I got to Noetherian rings I motivated them in part by showing that the Noetherian condition was equivalent to many seemingly innocuous and desirable properties, like every submodule of a finitely generated module being finitely generated. At the same time I discussed plenty of examples of non-Noetherian rings, including rings which are very nice "except that they are non-Noetherian" like the ring of all algebraic integers. So I think I gave my students at least an opportunity to feel their way around finiteness conditions in the subject.
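To spell out why $\mathbb{Q}_p$ is not finitely generated over $\mathbb{Z}_p$ (a one-line argument via Nakayama's lemma):

```latex
p\,\mathbb{Q}_p = \mathbb{Q}_p,
\qquad \text{so if } \mathbb{Q}_p \text{ were finitely generated over the local ring }
(\mathbb{Z}_p, \mathfrak{m} = (p)), \text{ Nakayama would force } \mathbb{Q}_p = 0.
```

Since $\mathbb{Q}_p \neq 0$, it is not finitely generated; in fact $\mathbb{Q}_p = \mathrm{colim}\,(\mathbb{Z}_p \xrightarrow{p} \mathbb{Z}_p \xrightarrow{p} \cdots)$, a filtered colimit of free modules of rank one which is nonetheless divisible, hence very far from free.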
Let me add that there are some recent texts which do a much better job at this. Most of all I can enthusiastically recommend T.Y. Lam's Lectures on Modules and Rings. As with all of his books, his skill at balancing theory and examples is superior and makes for very pleasant, stimulating reading.
It goes much the same for compactness in elementary analysis, but it seems easier to me to supply the necessary counterexamples: every time you encounter a theorem which holds on a compact interval $[a,b]$, ask yourself whether it holds on noncompact intervals (and, if applicable, compact non-intervals!). In all the instances I can think of now, such counterexamples are well known and relatively easy to supply.
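For one concrete instance: the extreme value theorem ("a continuous function on $[a,b]$ is bounded and attains its bounds") fails in both possible ways once compactness is dropped:

```latex
f(x) = \tfrac{1}{x} \ \text{on } (0,1] \quad (\text{continuous, unbounded});
\qquad
g(x) = x \ \text{on } [0,1) \quad (\text{continuous, bounded, attains no maximum}).
```

The same $f$ also witnesses the failure of the Heine-Cantor theorem: it is not uniformly continuous on $(0,1]$, whereas every continuous function on a compact interval is.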