Do you by any chance have a computer science background? Your ideal of reducing everything (even operations like limits) to functions and sets has the flavor of wanting mathematics to work more or less like a programming language -- a flavor that I (being a computer scientist) quite approve of, but you should be aware that this ideal is not *quite* aligned with how real mathematicians write mathematics.

First, even though everything *can* be reduced to sets and functions -- indeed, everything can be reduced to sets alone, with functions just being sets of a particular shape -- doing so is not necessarily a good way to *think* about everything all of the time. Reducing everything to set theory is the "assembly language" of mathematics, and while it will certainly make you a better mathematician to *know* how this reduction works, it is not the level of abstraction you'll want to do most of your daily work at.

In contrast to the "untyped" assembly-level set theory, the day-to-day symbolic language of mathematics is a highly *typed* language. The "types" are mostly left implicit in writing (which can be frustrating for students whose temperament leans more towards the explicit typing of most typed computer languages), but they are supremely important in practice -- almost every notation in mathematics has dozens or hundreds of *different meanings*, between which the reader must choose based on the types of its various sub-expressions. (Think "rampant use of overloading" from a programming-language perspective.) Mostly, we're all trained to do this disambiguation unconsciously.
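To illustrate the overloading point with a minimal Python sketch (my example, not part of the original answer): even in a programming language, one symbol already names several different operations, and the operand types pick out which one is meant.

```python
# The same "+" symbol dispatches on the types of its operands --
# three different operations sharing one notation.
print(2 + 3)        # integer addition
print("2" + "3")    # string concatenation
print([2] + [3])    # list concatenation
```

Mathematical notation works the same way, except that the "type checking" happens in the reader's head rather than in a compiler.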

In most cases, of course, the various meanings of a symbol are generalizations of each other to various degrees. This makes it a particularly bad idea to train oneself to think of the symbol as denoting *this* or *that* particular function with such-and-such particular arguments and result. A fuzzier understanding of the *intention* behind the symbol will often make it easier to *guess* which definition it's being used with in a new setting, which makes learning new material easier (even though actual proof work of course needs to be based on exact, explicit definitions).

In particular, even restricting our attention to real analysis, the various kinds of limits (for $x\to a$, $x\to \infty$, one-sided limits and so forth) are all notated with the same $\lim$ symbol, but they are *technically* different things. Viewing $\lim_{x\to 5}f(x)$ and $\lim_{x\to\infty} f(x)$ as instances of the same joint "limit" function is technically possible, but also clumsy and (more importantly) not even particularly enlightening. It is better to think of the various limits as a loose grouping of *intuitively* similar but *technically* separate concepts.
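Concretely, the two limits unpack to structurally different definitions -- one quantifies over a $\delta$-neighborhood of the point, the other over a threshold $M$:

$$\lim_{x\to 5} f(x) = L \iff \forall \varepsilon>0\ \exists \delta>0\ \forall x:\ 0<|x-5|<\delta \implies |f(x)-L|<\varepsilon$$

$$\lim_{x\to\infty} f(x) = L \iff \forall \varepsilon>0\ \exists M\ \forall x:\ x>M \implies |f(x)-L|<\varepsilon$$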

This is not to say that there's no interesting mathematics to be made from studying ways in which the intuitive similarity between the different kinds of limits can be formalized, producing some general notion of limit that has the ordinary limits as special cases. (One solution here is to say that the "$x\to \cdots$" subscript names a variable to bind while *also* denoting a net to take the limit over.) All I'm saying is that such a general super-limit concept is not something one ought to think of when doing ordinary real analysis.
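To sketch one such unification (my formulation, using filter bases, which is equivalent to the net picture): given a filter base $\mathcal{B}$ of subsets of the domain, define

$$\lim_{\mathcal{B}} f = L \iff \forall \varepsilon>0\ \exists B\in\mathcal{B}\ \forall x\in B:\ |f(x)-L|<\varepsilon.$$

Taking $\mathcal{B}$ to be the punctured neighborhoods of $5$ recovers $\lim_{x\to 5}$, and taking $\mathcal{B}=\{(M,\infty) : M\in\mathbb{R}\}$ recovers $\lim_{x\to\infty}$ -- but again, this machinery is overkill for ordinary real analysis.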

Finally (not related to your question about limits), note that the usual mathematical language makes extensive use of *abstract types*. The reals themselves are a good example: it is *possible* to give an explicit construction of the real numbers in terms of sets and functions (and every student of mathematics deserves to know how), but in actual mathematical reasoning numbers such as $\pi$ or $2.6$ are *not sets or functions*, but a separate sort of thing that can only be used in the ways explicitly allowed for real numbers. "Under the hood" one might consider $\pi$ to "really be" a certain set of functions between various other sets, but that is an *implementation detail* that is relevant only at the untyped set-theory level.
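The programming analogue is a class whose representation is hidden behind its permitted operations. Here is a minimal Python sketch (hypothetical names, my example): clients of `Rational` use arithmetic and equality, never the internal pair of integers, just as one uses $\pi$ without ever caring which set it "really is".

```python
from math import gcd

class Rational:
    """Hypothetical abstract type: clients use the allowed operations,
    never the internal (numerator, denominator) representation."""

    def __init__(self, num, den):
        if den == 0:
            raise ValueError("zero denominator")
        # Normalize so equal rationals share one representation.
        g = gcd(num, den)
        if den < 0:
            g = -g
        self._num, self._den = num // g, den // g

    def __add__(self, other):
        return Rational(self._num * other._den + other._num * self._den,
                        self._den * other._den)

    def __eq__(self, other):
        return (self._num, self._den) == (other._num, other._den)

half = Rational(1, 2)
third = Rational(2, 6)   # same value, different surface description
print(half + third == Rational(5, 6))  # True
```

The normalization in `__init__` is exactly an "implementation detail": nothing a client can observe depends on which pair of integers was originally supplied.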

(Of course, the various similarities between math and programming languages I go on about here are not coincidences. They arose from programming-language design as deliberate attempts to create formal machine-readable notations that would "look and feel" as much like ordinary mathematical symbolism as they could be made to. Mathematics had all of these things first; computer science was just first to need to *name* them).

## Best Answer

Unfortunately, checking my typical Python and Java packages (the only languages I work in now), I find that I have to exception-handle NaN cases. So my original post wouldn't work unless you also handle them.

If you're looking for a good way to code this, then you might try either

`if x > 0: return 1`

`return 0`

or, if you're in a language that evaluates `True` to 1 and `False` to 0, something like

`return int(x>0)`
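Putting the two remarks together -- the one-liner plus the NaN handling mentioned above -- a hedged Python sketch (hypothetical function name) might look like:

```python
import math

def step(x):
    # int(x > 0) would silently map NaN to 0, since any comparison
    # with NaN is False; reject NaN explicitly instead.
    if math.isnan(x):
        raise ValueError("step(x) is undefined for NaN")
    return int(x > 0)

print(step(0.0))   # 0
print(step(3.5))   # 1
```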

But as a mathematical function, your description alone already suffices: there is no ambiguity about $f:[0,\infty) \to \mathbb{R}$ with $f(x) = 0$ if $x = 0$ and $f(x) = 1$ otherwise.