You could make that definition I suppose, but what use would it have, and how would it relate to the usual notion of limit?
Let's look at what a limit of a function $f$ at a point $x$ should mean (let's say for a real-valued function on a metric space). You want the limit of $f$ at $x$ to be a real number $L$ such that for all $\varepsilon>0$ there exists a $\delta>0$ such that $0<d(x,y)<\delta$ implies that $|f(y)-L|<\varepsilon$. Now if $x$ is an isolated point, then you could take any $L$ you want, because whatever $\varepsilon$ is, you can choose $\delta$ sufficiently small so that $|f(y)-L|<\varepsilon$ is vacuously true for all $y$ with $0<d(x,y)<\delta$; that is, just make sure that there are no $y$ satisfying the latter condition. For non-isolated points, limits are unique. For isolated points, trying to extend the usual definition leads to making every real number a limit.
For one of the applications of limits, namely continuity, not defining the limit at an isolated point causes no problem. You can say that $f$ is continuous at a point $x$ in the domain if for all $\varepsilon>0$ there exists a $\delta>0$ such that $d(x,y)<\delta$ implies $|f(y)-f(x)|<\varepsilon$. If $x$ is an isolated point, then this will always be true, because for sufficiently small $\delta$ the only $y$ with $d(x,y)<\delta$ is $x$.
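To make the contrast concrete, here is a small worked example (my own illustration, not from the original argument), written out in LaTeX:

```latex
Let $X = \{0\} \cup [1,2]$ with the metric inherited from $\mathbb{R}$, and let
$f\colon X \to \mathbb{R}$ be arbitrary. The point $0$ is isolated, since
$B(0,\tfrac12) \cap X = \{0\}$.
\begin{itemize}
  \item \emph{Continuity at $0$:} given $\varepsilon > 0$, take $\delta = \tfrac12$.
        Then $d(0,y) < \delta$ forces $y = 0$, so $|f(y)-f(0)| = 0 < \varepsilon$;
        every $f$ is continuous at the isolated point $0$.
  \item \emph{Limit at $0$:} with the same $\delta$, there is no $y$ with
        $0 < d(0,y) < \delta$, so $|f(y)-L| < \varepsilon$ holds vacuously for
        \emph{every} $L \in \mathbb{R}$: the extended definition would make every
        real number a limit of $f$ at $0$.
\end{itemize}
```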
Added: I was writing when Alex posted, and part of my post makes a similar point to his. Qiaochu's comment on Alex's post gives an answer to my question at the beginning of my post. Making this definition allows continuity to be defined in terms of respecting limits without making isolated points a special case, something I had overlooked.
Nonetheless, continuity can be defined in terms of respecting limits without actually defining the limit of a function at a point. A function $f$ between metric spaces [resp. topological spaces] is continuous at $x$ if for every sequence [resp. net] $(x_n)_n$ in the domain converging to $x$, $\lim_n f(x_n)=f(x)$. In case $x$ is an isolated point, a sequence converging to $x$ is eventually constantly equal to $x$, so this will be satisfied.
If you're willing to accept that $\sum_{n=1}^\infty \frac{x^{n-1}}{n!}$ converges uniformly on a neighborhood of $0$ (so that the limit can be taken term by term), then you can use it to evaluate the first limit:
$$
\lim_{x \to 0} \frac{e^x - 1}{x} = \lim_{x \to 0} \sum_{n = 1}^\infty \frac{x^{n-1}}{n!} = \sum_{n=1}^\infty \frac{0^{n-1}}{n!} = 1
$$
In general for problems like these you can use power series to evaluate the limits. In other words if you know the power series representation of a function you can plug it into the limit and do some manipulation there.
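As a quick numerical sanity check of this manipulation (my own sketch, using only the standard library; the name `series_partial_sum` is made up for illustration):

```python
import math

def series_partial_sum(x, terms=20):
    """Partial sum of sum_{n>=1} x^(n-1)/n!, the power series for (e^x - 1)/x."""
    return sum(x ** (n - 1) / math.factorial(n) for n in range(1, terms + 1))

# Away from 0 the series agrees with (e^x - 1)/x ...
for x in (0.5, 0.1, 0.01):
    print(x, (math.exp(x) - 1) / x, series_partial_sum(x))

# ... and at x = 0 it can be evaluated term by term: only the n = 1 term survives.
print(series_partial_sum(0.0))  # -> 1.0
```

(Python evaluates `0.0 ** 0` as `1.0`, matching the convention $0^0 = 1$ used in the term-by-term evaluation above.)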
For the second limit I would rewrite $\sqrt[x]{e}$ as $e^{1/x}$ and prove that
$$
\lim_{x \to \infty} x\left(\sqrt[x]{e} - 1\right) = 1
$$
which we can see as follows:
$$
\lim_{x \to \infty} x\left(e^{1/x} - 1\right) = \lim_{x \to \infty} x \sum_{n=1}^\infty \frac{x^{-n}}{n!} = \lim_{x \to \infty} \left(1 + \sum_{n=2}^\infty \frac{x^{1-n}}{n!}\right) = 1
$$
since for $x \ge 1$ the remaining sum is at most $\frac{1}{x}\sum_{n=2}^\infty \frac{1}{n!}$, which tends to $0$.
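A quick numerical sanity check (my own addition) that $x\left(\sqrt[x]{e} - 1\right) = x\left(e^{1/x} - 1\right)$ indeed tends to $1$:

```python
import math

# x * (e^(1/x) - 1) approaches 1 from above as x grows,
# with error roughly 1/(2x) coming from the n = 2 term of the series.
for x in (10.0, 100.0, 1000.0):
    print(x, x * (math.exp(1 / x) - 1))
```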
Generally I like looking at power series to solve these limits.
Just thought of the somewhat traditional way to prove your first limit. Define $e = \lim_{n \to \infty} (1 + 1/n)^{n}$ and notice
$$
\lim_{n \to \infty} (1 + 1/n)^n = \lim_{h \to 0} (1 + h)^{1/h}
$$
so then
$$
\lim_{s \to 0} \frac{e^s - 1}{s} = \lim_{s \to 0} \frac{\lim_{h \to 0} (1+h)^{s/h}-1}{s}
$$
Now take advantage of the continuity of $e^x$ to combine the limits, taking $h = s$:
$$
\lim_{s \to 0} \frac{(1+s)^{s/s}-1}{s} = \lim_{s \to 0} \frac{(1+s)-1}{s} = 1
$$
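Numerically, both steps check out (my own sketch): $(1+h)^{1/h}$ approaches $e$, and with $h = s$ the quotient collapses to $\frac{(1+s)-1}{s} = 1$:

```python
import math

# (1 + h)^(1/h) -> e as h -> 0
for h in (0.1, 1e-3, 1e-6):
    print(h, (1 + h) ** (1 / h))

# with h = s, ((1+s)^(s/s) - 1)/s is just ((1+s) - 1)/s, which is 1
# up to floating-point rounding
for s in (0.1, 1e-3, 1e-6):
    print(s, ((1 + s) ** (s / s) - 1) / s)
```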
Best Answer
Do you by any chance have a computer science background? Your ideal of reducing everything (even operations like limits) to functions and sets has a flavor of wanting mathematics to work more or less like a programming language -- this is a flavor that I (being a computer scientist) quite approve of, but you should be aware that the ideal is not quite aligned with how real mathematicians write mathematics.
First, even though everything can be reduced to sets and functions -- indeed, everything can be reduced to sets alone, with functions just being sets of a particular shape -- doing so is not necessarily a good way to think about everything all of the time. Reducing everything to set theory is the "assembly language" of mathematics, and while it will certainly make you a better mathematician to know how this reduction works, it is not the level of abstraction you'll want to do most of your daily work at.
In contrast to the "untyped" assembly-level set theory, the day-to-day symbolic language of mathematics is a highly typed language. The "types" are mostly left implicit in writing (which can be frustrating for students whose temperaments lean more toward the explicit typing of most typed computer languages), but they are supremely important in practice -- almost every notation in mathematics has dozens or hundreds of different meanings, between which the reader must choose based on what the types of its various sub-expressions are. (Think "rampant use of overloading" from a programming-language perspective.) Mostly, we're all trained to do this disambiguation unconsciously.
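The programming analogue, sketched in Python: one symbol, `*`, is dispatched to entirely different operations purely on the basis of the types of its operands:

```python
# The reader (here, the interpreter) disambiguates `*`
# by the types of the sub-expressions.
print(3 * 4)        # int * int:  multiplication -> 12
print("ab" * 3)     # str * int:  repetition     -> "ababab"
print([0, 1] * 2)   # list * int: repetition     -> [0, 1, 0, 1]
```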
In most cases, of course, the various meanings of a symbol are generalizations of each other to various degrees. This makes it a particularly bad idea to train oneself to think of the symbol as denoting this or that particular function with such-and-such particular arguments and result. A fuzzier understanding of the intention behind the symbol will often make it easier to guess which definition it is being used with in a new setting, which makes learning new material easier (even though actual proof work of course needs to be based on exact, explicit definitions).
In particular, even restricting our attention to real analysis, the various kinds of limits (for $x\to a$, $x\to \infty$, one-sided limits and so forth) are all notated with the same $\lim$ symbols, but they are technically different things. Viewing $\lim_{x\to 5}f(x)$ and $\lim_{x\to\infty} f(x)$ as instances of the same joint "limit" function is technically possible, but also clumsy and (more importantly) not even particularly enlightening. It is better to think of the various limits as a loose grouping of intuitively similar but technically separate concepts.
This is not to say that there isn't interesting mathematics to be made from studying ways in which the intuitive similarity between the different kinds of limits can be formalized, producing some general notion of limit that has the ordinary limits as special cases. (One solution here is to say that the "$x\to \cdots$" subscript names a variable to bind while also denoting a net to take the limit over.) All I'm saying is that such a general super-limit concept is not something one ought to think of when doing ordinary real analysis.
Finally (not related to your question about limits), note that the usual mathematical language makes extensive use of abstract types. The reals themselves are a good example: it is possible to give an explicit construction of the real numbers in terms of sets and functions (and every student of mathematics deserves to know how), but in actual mathematical reasoning numbers such as $\pi$ or $2.6$ are not sets or functions, but a separate sort of thing that can only be used in the ways explicitly allowed for real numbers. "Under the hood" one might consider $\pi$ to "really be" a certain set of functions between various other sets, but that is an implementation detail that is relevant only at the untyped set-theory level.
(Of course, the various similarities between math and programming languages I go on about here are not coincidences. They arose in programming-language design as deliberate attempts to create formal, machine-readable notations that would "look and feel" as much like ordinary mathematical symbolism as they could be made to. Mathematics had all of these things first; computer science was just the first to need to name them.)