Traditional arithmetic reasoning often breaks down when you try to think about infinity, and percentages are no exception.
Examples:
You agree to pay me $B(n)/n$ dollars per year, where $B(n)$ is a function giving your total wealth after $n$ years (thanks!). So the fraction of your wealth that you pay me each year is $1/n$ (that is, $100/n$ percent). As the years pass, this dwindles down to an "infinitely small percentage."
In the first experiment, let's say you're a hard worker and immortal, and you earn more and more, say $B(n) = n$. Then every year you're paying me a dollar. The percentage of your income that you're paying me gets smaller each year, but you're still giving me a dollar a year. So in this case "an infinitely small percentage of infinity" is one.
In experiment two, you're still immortal, but you don't work as hard. Now $B(n) = \sqrt{n}$. Then every year you're paying me $\frac{1}{\sqrt{n}}$ dollars. As time goes by, your income is still going to infinity, but what you're paying me is going to dwindle down to zero. So "an infinitely small percentage of infinity" is zero.
In our final experiment, you're immortal and work really hard, and $B(n) = n^2$. Now you're paying me $n$ dollars a year, and an infinitely small percentage of infinity is infinity.
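The three experiments above can be sketched numerically. This is an illustrative sketch of my own (the labels and sample values are assumptions, not from the original): the yearly payment is $B(n)/n$, and we watch it as $n$ grows.

```python
import math

# Yearly payment is B(n)/n; watch what happens for each wealth
# function as n grows.  (Illustrative sketch; labels are assumptions.)
wealth_functions = {
    "B(n) = n": lambda n: n,                    # payment -> 1
    "B(n) = sqrt(n)": lambda n: math.sqrt(n),   # payment -> 0
    "B(n) = n^2": lambda n: n ** 2,             # payment -> infinity
}

for label, B in wealth_functions.items():
    payments = [B(n) / n for n in (10, 10_000, 10_000_000)]
    print(label, payments)
```

Running this shows the three behaviors side by side: the payments settle at $1$, dwindle toward $0$, and blow up, respectively, even though $B(n) \to \infty$ in all three cases.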
The set of real numbers is (usually) defined in a way that has nothing to do with decimal representations -- the reals are defined by their arithmetic and geometric properties. e.g. among other things, if $a$ and $b$ are distinct real numbers, then $(a+b)/2$ is a real number that lies between them.
The set of decimals is defined as the set of sequences of digits: there is one place for every integer. e.g. $0$ corresponds to the ones place, $1$ corresponds to the tens place, $2$ to the hundreds place, $-1$ to the tenths place, and so forth. Each place gets a single digit (0 through 9) assigned to it. When we write a decimal like
123.45
we implicitly mean that all of the remaining positions get filled with zeroes. i.e. in the above numeral, the thousands place contains a zero.
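The "one digit per integer place" picture can be made concrete in code. This is a sketch of my own (the `places` dictionary and the `digit_at` helper are hypothetical names, not standard notation): the numeral $123.45$ assigns digits to places $2, 1, 0, -1, -2$, and every other integer place implicitly holds a zero.

```python
from fractions import Fraction

# The numeral 123.45 assigns digits to places 2, 1, 0, -1, -2; every
# other integer place implicitly holds a zero.
places = {2: 1, 1: 2, 0: 3, -1: 4, -2: 5}

def digit_at(place: int) -> int:
    """Digit sitting in the given place; unlisted places default to 0."""
    return places.get(place, 0)

# Reconstruct the value exactly: the sum of digit * 10^place.
value = sum(d * Fraction(10) ** p for p, d in places.items())
print(value)        # 2469/20, i.e. exactly 123.45
print(digit_at(3))  # 0: the thousands place contains a zero
```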
The key point is that each place corresponds to an integer: there aren't any other places. If we write $n.\overline{0}$, meaning that the $0$ to the right of the decimal place should be repeated infinitely, this means that we have written a $0$ in every place corresponding to a negative integer. There aren't any places remaining to the right of the decimal place to insert a $1$! So the notation $n.\overline{0}1$ makes no sense if we try to interpret it as expressing a decimal number.
We could define other sorts of radix notation that extend decimals to have additional places to the right of the decimal place, but then we have to figure out what to do with such things.
The ordinary decimals are useful because we have a way to interpret any decimal number that only has finitely many nonzero digits to the left of the decimal place as a real number. And we also have rules for doing arithmetic with them. There are some ambiguities -- e.g. does $1.\overline{0} + 0.\overline{9}$ add up to $1.\overline{9}$ because there are no carries? Or does it add up to $2.\overline{0}$ because there is a carry in every place to the right of the decimal point? -- but these ambiguities are okay because we are interpreting both possibilities as being the same real number.
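That both answers name the same real number comes down to a limit. Here is a sketch of my own (not from the answer above, and `truncated_nines` is a made-up helper name) using exact rational arithmetic: the $k$-digit truncation of $0.\overline{9}$ falls short of $1$ by exactly $10^{-k}$, and that gap shrinks to zero as $k$ grows.

```python
from fractions import Fraction

def truncated_nines(k: int) -> Fraction:
    """Exact value of 0.99...9 with k nines."""
    return sum(Fraction(9, 10 ** i) for i in range(1, k + 1))

for k in (1, 5, 20):
    gap = 1 - truncated_nines(k)
    print(k, gap)  # the gap is exactly 1/10^k
```

Since the truncations approach $1$, interpreting the full infinite decimal $0.\overline{9}$ as the real number $1$ is the only choice consistent with the ordinary arithmetic of decimals.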
But if we extend the decimals, we no longer have the ability to relate them to real numbers. And if we want to do arithmetic with such things, we're going to have to do a lot of work to define the arithmetic operations and figure out if they have any of the familiar algebraic properties we're used to and so forth.
We can construct algebras in this way in which every number has a "next" number, but such things are going to have very little to do with real numbers.
Best Answer
You could say that $\frac{1}{\infty} = 0$, so $1-\frac{1}{\infty} = 1$. But then you're stretching the definition of division past its breaking point - division as you know it isn't defined for infinity, so the answer is undefined. Otherwise, you can quickly get yourself into a pickle and end up saying $1 = 2$.
Arithmetic operators - add, subtract, divide, multiply, raise to the power of - are defined on a particular set of numbers: such as real numbers, or complex numbers.
The set you use for the definition will determine what you can and can't say meaningfully. Typically (but not always), infinity is excluded from that set.
If we take the set of real numbers and look at "raise to the power of", then $1^x$ equals $1$ for every $x$, including as $x \to \infty$. So in that case, you could adopt a convention of saying that $1^\infty = 1$. But $\frac{1}{1} = 1$, so $1^{-\infty}$ would also equal 1. However, when you go about defining these new conventions, you have to be extremely careful - sometimes a convention will seem obvious, but if you run with it, you end up seeming to prove $1 = 2$, which means that your convention wasn't that helpful.
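The need for care can be seen numerically; the example below is mine, not the answer's. The exact number $1$ raised to anything really is $1$, but an expression whose base merely *tends* to $1$ while its exponent grows can converge to something else entirely, which is why "$1^\infty$" is treated as an indeterminate form in limit calculations.

```python
import math

# Raising the exact number 1 to any power really does give 1 ...
assert all(1 ** x == 1 for x in (2, 10 ** 6, -(10 ** 6)))

# ... but if the base only *tends* to 1 while the exponent tends to
# infinity, the limit need not be 1: (1 + 1/n)^n approaches e.
for n in (10, 1_000, 100_000):
    print(n, (1 + 1 / n) ** n)

print("e =", math.e)  # 2.718281828...
```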
Let's compare with raising to the power $0.5$, i.e. taking the square root. $(-1)^{0.5}$ is undefined when we are working in the reals - so, just as with dividing by infinity, you can't include it in your arithmetic. Only when you expand to the complex numbers and extend your definition of the arithmetic operators to cope can you say something meaningful about $(-1)^{0.5}$.
Similarly, the reals and the complex numbers each exclude infinity, so arithmetic isn't defined for it.
You can extend those sets to include infinity - but then you have to extend the definition of the arithmetic operators to cope with that extended set. And then you need to start thinking about arithmetic differently. If you want to learn more about that, there are lots of friendly places on the web to get into the work of Cantor on the different types of infinity (of which there are infinitely many).