I remember attending a lecture given by an ultrafinitist who denied that a curve is a set of points; he would say only that any particular point may or may not be on the curve. Similarly for algebraic or analytic objects: the only way I know how to define them is as a set with operations on the elements of that set. Since ultrafinitists cannot use definitions involving infinite sets, what sort of definitions do they use?
[Math] How are mathematical objects defined from an ultrafinitist perspective?
big-picture, foundations, lo.logic, mathematical-philosophy, ultrafinitism
Related Solutions
There is a cheap way of doing this, which may not be optimal when the goal is as subtle as the foundational question you have in mind. But then again, it may suffice.
Working in an appropriately strong theory, and simplifying somewhat: the standard way to check that NBG is conservative over ZFC is to see that any model $M$ of ZFC can be extended to a model $N$ of NBG in such a way that the "sets" of $N$ give us back $M$. To simplify further, assume the model $M$ is transitive. The model $N$ we associate to it is Gödel's $\mathop{\rm Def}(M)$, the collection of subsets of $M$ that are first-order definable in $M$ from parameters. (The proper classes are the elements of $\mathop{\rm Def}(M)\setminus M$.)
This suggests the simple solution of defining the models of "iterated-NBG" as the result of iterating Gödel's operation. So, given a transitive model $M$ of ZFC, the $\alpha$-th iterate would simply be what we usually denote $L_\alpha(M)$.
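Concretely, one common way to spell out the iteration (for transitive $M$, taking $M$ itself as the starting stage) is:
$$L_0(M)=M,\qquad L_{\alpha+1}(M)=\mathop{\rm Def}(L_\alpha(M)),\qquad L_\lambda(M)=\bigcup_{\alpha<\lambda}L_\alpha(M)\ \text{for limit }\lambda.$$
So the first iterate is exactly the NBG model described above, and each further stage adds the classes definable from the previous stage.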
I am restricting to transitive models here, but there is a natural first order theory associated to each stage of the iteration just described (at least, for "many" $\alpha$), and I guess one could try to axiomatize it decently if enough pressure is applied.
There are some subtleties in play here. One is that most likely we want to stop the iteration way before we run into serious technicalities ($\alpha$ would have to be a recursive ordinal, for one thing, but I suspect we wouldn't want to venture much beyond the $\omega$-th iteration). Another is that the objects we obtain with this procedure would have wildly varying properties depending on specific properties of $M$.
For example, if $M$ is the least transitive model of set theory, then we "quickly" add a bijection between $M$ and $\omega$. In general, if $M$ is least with some (first order in the set theoretic universe) property, then we "quickly" add a bijection between $M$ and the size of the parameters required to describe this property (this is an old fine-structural observation. "Quickly" can be made pedantically precise, but let me leave it as is).
So you may want to work not with ZFC proper but with a slightly stronger theory (something like ZFC + "there is a transitive model of ZFC" + "there is a transitive model of "ZFC+there is a transitive model of ZFC"" + ...) if you want some stability on the theory of the transitive models produced this way. (Of course, this is an issue of specific models, not of the "iterated-NBG" theory per se).
I should add that I do not know of any serious work in the setting I've suggested, with two exceptions. One, in his book on Class Forcing, Sy Friedman briefly mentions a version of "Hyperclass forcing" appropriate to solve some questions that appear in a natural fashion once we show, for example, that no class forcing over $L$ can add $0^\sharp$. The second is by Reinhardt in the context of large cardinals and elementary embeddings, and is described by Maddy in her article "Believing the axioms. II". As far as I remember, neither work goes beyond hyperclasses, i.e., classes of classes.
Proper classes are not objects. They do not exist. Talking about them is a convenient abbreviation for certain statements about sets. (For example, $V=L$ abbreviates "all sets are constructible.") If proper classes were objects, they should be included among the sets, and the cumulative hierarchy should, as was pointed out in the question, continue much farther; but in fact it already continues arbitrarily far.
In particular, when I talk about statements being true in $V$, I mean simply that the quantified variables are to be interpreted as ranging over arbitrary sets. It is an unfortunate by-product of the set-theoretic formalization of semantics that many people believe that, in order to talk about variables ranging over arbitrary sets (or arbitrary widgets or whatever), we need an object, a set, that contains all the sets (or all the widgets or whatever). In fact, there is no such need unless we want to formalize this notion of truth within set theory. Anyone who wants to formalize within set theory the notion of "truth in $V$" is out of luck anyway, by Tarski's theorem on undefinability of truth.
Considerations like these are what prompt me to view ZFC together with additional axioms (such as the universe axiom of Grothendieck and Tarski) as a reasonable foundational system, in contrast to Morse-Kelley set theory.
A detailed explanation of how to use proper classes as abbreviations and how to unabbreviate statements involving them is given in an early chapter of Jensen's "Modelle der Mengenlehre". (The idea goes back at least to Quine, who used it not only for proper classes but even for sets, developing a way to understand talk about sets as being about "virtual sets" and avoiding any ontological commitment to sets.)
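To illustrate the abbreviation scheme (my notation, not Jensen's): writing classes as $\{x:\varphi(x)\}$, class statements unabbreviate to statements about sets, e.g.
$$a\in\{x:\varphi(x)\}\ \text{abbreviates}\ \varphi(a),\qquad \{x:\varphi(x)\}\subseteq\{x:\psi(x)\}\ \text{abbreviates}\ \forall x\,\bigl(\varphi(x)\rightarrow\psi(x)\bigr).$$
In this notation $V=\{x:x=x\}$ and $L=\{x:\exists\alpha\ x\in L_\alpha\}$, so "$V=L$" unabbreviates to $\forall x\,\exists\alpha\ x\in L_\alpha$, which is precisely "all sets are constructible."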
Finally, I should emphasize, just in case it's not obvious, that what I have written here is my (current) philosophical opinion, not by any stretch of the imagination mathematical fact.
Best Answer
Point-set definitions are the most common modern way of defining a geometrical object like a line, but they're not the only way.
In Euclid, lines and circles are primitives.
The axiomatization of Euclidean geometry in Tarski and Givant 1999 has only points, betweenness, and congruence as primitives, and sets are not even referred to in the axioms except for the axiom of continuity, which basically says that lines are Dedekind-complete. I don't imagine that ultrafinitists would even want this kind of continuity, so they would probably leave this axiom out. (Even if you interpret the axiom as a schema, with one instance for each pair of first-order formulas defining the relevant sets, that would be an infinite schema, which I think would be unacceptable.) Tarski's axioms, interpreted using classical logic, imply that there are infinitely many points, but classical logic isn't the appropriate logic for ultrafinitism. So I don't really see any reason to believe that there's any problem with doing ultrafinitist geometry this way.
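For reference, the continuity axiom in question is a first-order schema of roughly the following shape, with one instance per pair of formulas $\varphi,\psi$ (in which $a$ and $b$ are not free), and with the primitive betweenness predicate $B(a,x,y)$ read as "$x$ lies between $a$ and $y$":
$$\exists a\,\forall x\,\forall y\,\bigl(\varphi(x)\wedge\psi(y)\rightarrow B(a,x,y)\bigr)\;\rightarrow\;\exists b\,\forall x\,\forall y\,\bigl(\varphi(x)\wedge\psi(y)\rightarrow B(x,b,y)\bigr).$$
Informally: if every $\varphi$-point lies before every $\psi$-point on a ray from $a$, then some point $b$ separates the two definable collections.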
As an example, take the unit circle in the Cartesian plane, defined by a first-order formula (not as a set). I can definitely prove to an ultrafinitist's satisfaction that it contains at least four points (i.e., there are at least four points satisfying that formula). Given a line through the origin O (defined by picking some point P outside the circle that the line goes through), the lack of the axiom of continuity means that it's probably not possible to give an ultrafinitist proof that it intersects the circle (i.e., that there is a point between O and P that satisfies the formula defining the circle). However, one can certainly prove that there exist a point on the line and a point on the circle such that the distance between them is no more than 0.0001. There will be points such that it's neither true nor false that the point is on both the line and the circle.
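The approximate-intersection claim above can be made concrete with exact rational arithmetic, which only ever manipulates finitely many finite objects. The sketch below (my own construction, not from the answer; the choice $P=(2,1)$ and the tolerance $0.0001$ are assumptions) uses the rational parametrization of the unit circle to produce an exact rational point on the circle, then projects it onto the line through $O$ and $P$ to get an exact rational point on the line, and verifies the two are within $0.0001$ of each other:

```python
from fractions import Fraction

def circle_point(s):
    # Rational parametrization of the unit circle:
    # ((1 - s^2)/(1 + s^2), 2s/(1 + s^2)) satisfies x^2 + y^2 = 1 exactly.
    d = 1 + s * s
    return ((1 - s * s) / d, 2 * s / d)

# Line through the origin O and P = (2, 1): its points are (2t, t),
# i.e. the solutions of x - 2y = 0. We bisect on the parameter s to make
# the circle point nearly satisfy the line's equation.
lo, hi = Fraction(0), Fraction(1)
for _ in range(40):  # each step halves the search interval
    mid = (lo + hi) / 2
    x, y = circle_point(mid)
    if x - 2 * y > 0:  # still on the "before the crossing" side
        lo = mid
    else:
        hi = mid

x, y = circle_point(lo)            # exact rational point ON the circle
t = (2 * x + y) / 5                # foot of perpendicular: (2t, t) is ON the line
dist_sq = (x - 2 * y) ** 2 / 5     # exact squared distance between the two points

assert x * x + y * y == 1               # exactly on the circle
assert dist_sq < Fraction(1, 10**8)     # i.e. distance < 0.0001
```

Every quantity here is a concrete fraction and every check is a finite computation, so nothing in the argument appeals to the actual intersection point (whose existence needs the continuity axiom).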
Even if you're working in a system that has infinities, it's not necessary to describe geometry in terms of point sets. The original presentation of the surreal line didn't use sets, and the surreals are too big to be a ZFC set. In smooth infinitesimal analysis, a curve is not representable as a set of points.
Tarski and Givant, "Tarski's System of Geometry", Bulletin of Symbolic Logic 5 (1999), http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.27.9012