Personally I find this among the most awful terminology in existence. It starts with the ambiguity present in "increasing" and "decreasing" themselves: common sense suggests that they mean getting ever larger/smaller; yet (taking Wikipedia as reference) both the terms monotonically increasing function and monotonically increasing sequence allow for (local) constancy. (It seems unlikely that the purpose of "monotonically" is to weaken the notion following it; rather it seems to indicate that a formally defined rather than colloquial notion is meant.) So if there is doubt about what a bare "increasing" means, the proper remedy would be to always accompany it with a disambiguating "weakly" or "strictly"; this would settle the matter.
For some reason, however, many people seem to find that "nondecreasing" is preferable to "weakly increasing". I work a lot with integer partitions, which most authors introduce as nonincreasing sequences of integers (with finite sum). Clearly what is meant here is not the absence of "monotonic increase" between successive integers, since that would imply strict decrease. One might conclude that when using negative terminology, people implicitly revert to the colloquial rather than formal meaning of the base notion. For comparison, even here in France, where "négatif" is taken to include $0$ (as does "positif"), few people would be willing to interpret "entier non-négatif" as designating integers${}>0$.
However, even apart from the fact that negation does nothing to remove ambiguity from a notion, there are other drawbacks specific to this case:
- Nonincreasing is not the negation of (strictly) increasing for sequences of length${}>2$, and should therefore be carefully distinguished from "not increasing". The sequence $0,1,-1,2,-2,3,-3,\ldots$ is all of "not increasing", "not decreasing" and "not constant"; however, it is neither "nonincreasing" nor "nondecreasing", yet it is "nonconstant". A nice mess.
- In the presence of a partial ordering, having "nonincreasing" mean "weakly decreasing" is even less justified; here weak decrease is stronger than the absence of strict increase even for sequences of length $2$. I think what is needed in such contexts is almost never "nonincreasing", even between successive elements. For instance a "plane partition" could be defined as a weakly decreasing sequence of partitions (for the containment-of-diagrams partial ordering); saying "nonincreasing" here would be utterly confusing.
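The clash described in the first bullet can be checked mechanically. Here is a small Python sketch; the predicate names, which spell out the intended meanings, are my own choices:

```python
# Predicates for the four formal monotonicity notions on a finite sequence.
def weakly_increasing(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

def strictly_increasing(xs):
    return all(a < b for a, b in zip(xs, xs[1:]))

def weakly_decreasing(xs):
    return all(a >= b for a, b in zip(xs, xs[1:]))

def strictly_decreasing(xs):
    return all(a > b for a, b in zip(xs, xs[1:]))

def constant(xs):
    return all(a == b for a, b in zip(xs, xs[1:]))

# A finite prefix of 0, 1, -1, 2, -2, 3, -3, ...
xs = [0, 1, -1, 2, -2, 3, -3]

# "not increasing", "not decreasing", "not constant": all hold ...
assert not strictly_increasing(xs) and not weakly_increasing(xs)
assert not strictly_decreasing(xs)
assert not constant(xs)
# ... yet the sequence satisfies neither "nonincreasing" (= weakly
# decreasing) nor "nondecreasing" (= weakly increasing), while it
# is indeed "nonconstant".
assert not weakly_decreasing(xs)
```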
If one must absolutely use negative terminology, then it would have been much better to use "nowhere increasing" rather than "nonincreasing" (and even then only for total orderings).
In conclusion: if you want to be precise, it is better to say what you mean rather than to say what you don't mean (or even to not say what you are nonmeaning).
There is already an answer by @Chappers, but I wanted to remark on the word "compact" and why it can be considered suitable for its purpose. Perhaps you will find it relevant.
In my opinion, compact is a very good term: compact spaces really are the spaces that are closely and neatly packed together, though not in the common literal meaning of the phrase. Anybody who tries to apply the standard intuitions behind these words is likely to be confused; at least I know I was (but thanks to that I was able to arrive at my current intuition $\ddot\smile$).
For example, the real line is not compact, but we can adjoin two points to get $\mathbb{R}\cup \{-\infty, \infty\}$, and suddenly it is compact. But how can adjoining more points make a larger thing small? How can a large (i.e. non-compact) space ever be embedded in a small (i.e. compact) space? Weirder still, the open interval $(0,1)$ is non-compact, despite being apparently much smaller than all the spaces we have so far considered. But once again, we can add two points to get the compact space $[0,1].$ What's going on here?
The point is that despite the difference in apparent length or size (or whatever you wish to call it), the spaces $[0,1]$ and $\mathbb{R} \cup \{-\infty,\infty\}$ are topologically equivalent. Similarly, the spaces $(0,1)$ and $\mathbb{R}$ are topologically equivalent. Furthermore, each of these spaces can be embedded in any of the others. So we need a notion of closeness that takes this into account. In other words, no ordinary intuition (i.e. the common understanding of two points being close or neatly packed together) will suffice. Compactness solves this problem.
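The claimed equivalence of $(0,1)$ and $\mathbb{R}$ can be made concrete with an explicit homeomorphism; the particular map $x \mapsto \tan(\pi(x - \tfrac12))$ used in this quick numeric check is my choice:

```python
import math

# x -> tan(pi*(x - 1/2)) maps (0, 1) bijectively and continuously
# onto all of R, with continuous inverse y -> atan(y)/pi + 1/2.
def to_real_line(x):        # (0, 1) -> R
    return math.tan(math.pi * (x - 0.5))

def to_unit_interval(y):    # R -> (0, 1)
    return math.atan(y) / math.pi + 0.5

# Round-tripping sample points (including ones near the endpoints,
# which are sent far out toward -inf / +inf) recovers them.
for x in [0.001, 0.25, 0.5, 0.75, 0.999]:
    assert abs(to_unit_interval(to_real_line(x)) - x) < 1e-9
```

The inverse map also exhibits the embedding of the "large" space $\mathbb{R}$ inside the "small" compact space $[0,1]$, which is exactly the apparent paradox above.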
To make my point I will use two conditions equivalent to compactness:
- A space is compact if every open cover has a finite subcover.
- A space is compact if every net has a convergent subnet (nets are generalizations of sequences).
On covers:
Somebody could say: so a compact set has a finite open cover; big deal, $\mathbb{R}$ has one too! But the first condition says much more than that: *every* open cover has a finite subcover, and the fact that not a single cover is exempt is important. You can think of it in the following way:
Suppose that we have a compact connected space and that you were to tell me which points in your opinion are close to each other and which are far apart. You do this by covering the space with open sets small enough to satisfy your sense of closeness, so that points which are far apart do not belong to the same open set. Yet for any such cover I can pick a finite subcover, which means that the distance, in the units you care about (i.e. the open sets), between any two points of the space is smaller than some constant; hence they are close to each other (this is somewhat similar to how "almost all" may mean "all but a finite number" even when that finite number is big).
Consider the open interval $(0,1)$: it may seem small, but you can specify your open sets in such a way that the closer you get to $0$, the farther apart your points will be (in terms of the number of open sets needed to connect them). On the other hand, take the extended real line: you may go on with bigger and bigger numbers, but you finally have to fall into the open set containing $+\infty$, and you will do so in a finite number of steps.
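The failure for $(0,1)$ can be made explicit: the open cover by the sets $U_n = (\tfrac{1}{n+2}, 1)$ has no finite subcover, since any finite subfamily misses points near $0$. A small Python sketch under that choice of cover (which is mine, not forced):

```python
import math

def in_some(intervals, p):
    """Is p inside at least one of the given open intervals?"""
    return any(a < p < b for a, b in intervals)

# Sample points of (0, 1) approaching 0.
points = [10**-k for k in range(1, 8)]

# The full family U_n = (1/(n+2), 1), n = 0, 1, 2, ..., covers (0, 1):
# for each p we can exhibit an n with 1/(n+2) < p.
for p in points:
    n = math.ceil(1 / p)
    assert in_some([(1 / (n + 2), 1)], p)

# ... but no finite subfamily covers (0, 1): whichever finitely many
# U_n we keep, the points close enough to 0 are left uncovered.
finite_subfamily = [(1 / (n + 2), 1) for n in range(1000)]
assert not all(in_some(finite_subfamily, p) for p in points)
```

This is precisely the "farther and farther apart near $0$" phenomenon: escaping toward $0$ requires passing through infinitely many of the $U_n$.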
On nets:
Nets are a generalization of sequences, so to make this more approachable, let me describe it in terms of sequences; just remember that sequential compactness and compactness are not equivalent in general (although they are for metric spaces).
Consider a compact space and a sequence of elements of it. You could imagine walking around and visiting different places in that space. We know the sequence has a convergent subsequence; in other words, there has to be a place whose neighborhood you visit infinitely many times. That means that if you walk long enough, you will have to come back to some neighborhood you have already been to. Such a space has to be rather small, right?
On the other hand, if you were to consider a sequence that does not have any convergent subsequence, e.g. $1, 2, 3, 4, \ldots$ in $\mathbb{R}$, then you can go on and on; such a space is not compact, not "closely and neatly packed together". Similarly with $(0,1)$: you can pick the neighborhoods (a cover) and a specific pattern of walking (dependent on the neighborhoods) so that you won't visit any neighborhood twice. Such a space is not really small after all, and neither is it compact.
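The "you must revisit some neighborhood" step is, once a finite subcover is in hand, just the pigeonhole principle. A minimal Python sketch; the cover by ten intervals and the particular sequence are my choices:

```python
import math
from collections import Counter

# A sequence walking around the compact space [0, 1]: x_n = |sin n|.
seq = [abs(math.sin(n)) for n in range(1000)]

# Cover [0, 1] by the ten intervals [k/10, (k+1)/10). With 1000 steps
# and only 10 "rooms", pigeonhole forces some room to be visited at
# least 100 times.
visits = Counter(min(int(x * 10), 9) for x in seq)
assert max(visits.values()) >= 100
```

Walking forever through finitely many rooms forces infinitely many returns to one of them; extracting those return visits (and refining the cover) is the first step toward a convergent subsequence.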
Putting this in terms of nets instead of sequences (which means taking steps along an arbitrary directed set, not just the natural numbers; for example, you could make infinitely many infinitely small steps) might be confusing, but I think it still gives some intuition for why you can think of compact spaces as "closely and neatly packed together".
I hope this helps $\ddot\smile$
The ordinary (non-scientific) meaning of "regime" has to do with governments and the laws they impose. That meaning has been carried over to scientific contexts, to refer to those domains in which certain laws or theories are valid. Thus, I might say that a certain calculation in physics is valid in the classical regime, meaning that it relies on the laws of classical physics and would not be valid in the relativistic regime (meaning when velocities are so great or gravitational fields so strong that relativity theory must be used) or the quantum regime (where the entities are so small that quantum theory must be applied). Likewise, I might refer to some range of parameters in a partial differential equation as the elliptic regime, meaning that the equation is elliptic (and I can invoke nice facts like automatic smoothness of weak solutions) when the parameters are in that range.
I hope that, if I've ever used "regime" in my own writing, I've used it in accordance with this meaning. I admit, though, that some people (possibly including me) have used "regime" just because it sounds cool.