Under a not unreasonable assumption about cardinal arithmetic, namely $2^{<c}=c$ (which follows from the continuum hypothesis, or Martin's Axiom, or the cardinal characteristic equation $\mathfrak{t}=c$), the number of non-isomorphic possibilities for *R of cardinality $c$ is exactly $2^c$.

To see this, the first step is to deduce, from $2^{<c}=c$, that there is a family $X$ of $2^c$ functions from R to R such that any two of them agree at strictly fewer than $c$ places. (Proof: Consider the complete binary tree of height (the initial ordinal of cardinality) $c$. By assumption, it has only $c$ nodes, so label the nodes by real numbers in a one-to-one fashion. Then each of the $2^c$ paths through the tree determines a function $f\colon c\to R$, and any two of these functions agree only at those ordinals $\alpha\in c$ below the level where the associated paths branch apart. Compose with your favorite bijection $R\to c$ and you get the claimed maps $g\colon R\to R$.)

Now consider any non-standard model *R of R (where, as in the question, R is viewed as a structure with all possible functions and predicates) of cardinality $c$, and consider any element $z$ in *R. If we apply to $z$ all the functions *g for $g$ in $X$, we get what appear to be $2^c$ elements of *R. But *R was assumed to have cardinality only $c$, so lots of these elements must coincide. That is, we have some (in fact many) $g$ and $g'$ in $X$ such that *g(z) = *g'(z). We arranged $X$ so that, in R, $g$ and $g'$ agree only on a set $A$ of size $<c$, and now we have (by elementarity) that $z$ is in *A.

It follows that the 1-type realized by $z$, i.e., the set of all subsets $B$ of R such that $z$ is in *B, is completely determined by the following information: $A$ and the collection of subsets $B$ of $A$ such that $z$ is in *B. The number of possibilities for $A$ is $c^{<c}=2^{<c}=c$ by our cardinal arithmetic assumption, and for each $A$ there are only $c$ possibilities for $B$, hence only $2^c$ possibilities for the collection of such $B$'s, and therefore only $2^c$ possibilities for the type of $z$. The same goes for the $n$-types realized by $n$-tuples of elements of *R; there are only $2^c$ $n$-types for any finite $n$. (Proof for $n$-types: Either repeat the preceding argument for $n$-tuples, or use that the structures have pairing functions so you can reduce $n$-types to 1-types.)

Finally, since any *R of size $c$ is isomorphic to one with universe $c$, its isomorphism type is determined if we know, for each finite tuple (of which there are $c$), the type that it realizes (of which there are $2^c$), so the number of non-isomorphic models is at most $(2^c)^c=2^c$.
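To make the key almost-disjointness property explicit (a sketch in notation of my choosing, with $f_p$ the function determined by a path $p$ through the labeled tree):

$$ f_p(\alpha) = \text{the real labeling the node } p\restriction\alpha, \qquad \{\alpha<c : f_p(\alpha)=f_q(\alpha)\} = \{\alpha : p\restriction\alpha = q\restriction\alpha\} = \beta_{p,q}, $$

where $\beta_{p,q}$ is the least level at which the nodes of $p$ and $q$ differ: below that level the paths pass through the same nodes, and from that level on through distinct nodes, whose labels are distinct because the labeling is one-to-one. So each agreement set is an ordinal $\beta_{p,q}<c$ and hence has size $<c$.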
To get from "at most" to "exactly", it suffices to observe that (1) every non-principal ultrafilter $U$ on the set $N$ of natural numbers produces a *R of the desired sort as an ultrapower, (2) two such ultrapowers are isomorphic if and only if the ultrafilters producing them are isomorphic (via a permutation of $N$), and (3) there are $2^c$ non-isomorphic ultrafilters on $N$.
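For concreteness (a standard rendering, not spelled out above), the ultrapower in (1) and the isomorphism criterion in (2) read:

$$ {}^*R = R^N/U \text{ with } [f]_U=[g]_U \iff \{n : f(n)=g(n)\}\in U, \qquad R^N/U \cong R^N/U' \iff \pi(U)=U' \text{ for some permutation } \pi \text{ of } N, $$

where $\pi(U)=\{\pi[X] : X\in U\}$.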
If we drop the assumption that $2^{<c}=c$, then I don't have a complete answer, but here's some partial information. Let $\kappa$ be the first cardinal with $2^\kappa>c$; so we're now considering the situation where $\kappa<c$. For each element $z$ of any *R as above, let $m(z)$ be the smallest cardinal of any set $A$ of reals with $z$ in *A. The argument above generalizes to show that $m(z)$ is never $\kappa$ and that if $m(z)$ is always $<\kappa$ then we get the same number $2^c$ of possibilities for *R as above. The difficulty is that $m(z)$ might now be strictly larger than $\kappa$. In this case, the 1-type realized by $z$ would amount to an ultrafilter $U$ on $m(z)>\kappa$ such that its image, under any map $m(z)\to\kappa$, concentrates on a set of size $<\kappa$. Furthermore, $U$ could not be regular (i.e., $(\omega,m(z))$-regular in the sense defined by Keisler long ago). It is (I believe) known that either of these properties of $U$ implies the existence of inner models with large cardinals (but I don't remember how large). If all this is right, then it would not be possible to prove the consistency, relative to only ZFC, of the existence of more than $2^c$ non-isomorphic *R's.
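In symbols (my rendering of the definitions just given, with $U_z$ a name of my choosing):

$$ m(z) = \min\{|A| : A\subseteq R \text{ and } z\in {}^*A\}, \qquad U_z = \{X\subseteq A : z\in {}^*X\} \text{ for a witnessing } A. $$

Here $U_z$ is an ultrafilter on $A$ (for each $X\subseteq A$ we have $z\in {}^*X$ or $z\in {}^*(A\setminus X)$, by elementarity), and minimality of $|A|$ forces every member of $U_z$ to have full cardinality $m(z)$, i.e., $U_z$ is uniform.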
Finally, Joel asked about a structure theory for such *R's. Quite generally, without constraining the cardinality of *R to be only c, one can describe such models as direct limits of ultrapowers of R with respect to ultrafilters on R. The embeddings involved in such a direct system are the elementary embeddings given by Rudin-Keisler order relations between the ultrafilters. (For the large cardinal folks here: This is just like what happens in the "ultrapowers" with respect to extenders, except that here we don't have any well-foundedness.) And this last paragraph has nothing particularly to do with R; the analog holds for elementary extensions of any structure of the form (S, all predicates and functions on S) for any set S.
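To make the direct-limit description concrete (a standard formulation of the Rudin-Keisler embeddings, in notation of my choosing): if $U$ is an ultrafilter on $R$ and $f\colon R\to R$ projects it to $V = f_*(U) = \{X : f^{-1}[X]\in U\}$, then

$$ k\colon R^R/V \to R^R/U, \qquad k([g]_V) = [g\circ f]_U $$

is an elementary embedding (well-defined because $\{x : g(x)=h(x)\}\in V$ implies $\{x : g(f(x))=h(f(x))\}\in U$), and the models in question arise as direct limits along directed systems of such embeddings.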
Best Answer
To my way of thinking, there are at least three distinct perspectives one can naturally adopt when undertaking work in nonstandard analysis. In addition, each of these perspectives can be varied along two further dimensions, independently. Those dimensions are, first, the order of nonstandardness (whether one wants nonstandardness only for objects, or also for functions and predicates, or also for sets of functions and sets of those, and so on), and second, how many levels of standardness and nonstandardness one desires. Let me describe the three views I have in mind, give them names, and then discuss how they relate to one another.
Classical model-construction perspective. In this approach, one thinks of the nonstandard universe as the result of an explicit construction, such as an ultrapower construction. In the most basic instance, one has the standard real field structure $\newcommand\R{\mathbb{R}}\newcommand\Z{\mathbb{Z}}\langle\R,+,\cdot,0,1,\Z\rangle$, and one performs the ultrapower construction with respect to a fixed ultrafilter on the natural numbers (or on some other set, if that is desirable). In time, one is led to want more structure in the pre-ultrapower model, so as to be able to express more ideas, each of which will have a nonstandard counterpart. And so very soon one will have constants for every real, a predicate for the integers $\Z$, or indeed for every subset of $\R$, and a function symbol for every function on the reals, and so on. Before long, one wants nonstandard analogues of the power set $P(\R)$ and its higher iterates. In the end, what one realizes is that one might as well take the ultrapower of the entire set-theoretic universe $V\to V^{\omega}/U$, which amounts to doing nonstandard analysis with second-order logic, third-order logic, and indeed $\alpha$-order logic for every ordinal $\alpha$. One then has the copy of the standard universe $V$ inside the nonstandard realm $V^*$, which one analyzes and understands by means of the ultrapower construction itself.
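In display form (the standard ultrapower definitions for a fixed nonprincipal ultrafilter $U$ on $\mathbb{N}$; this is implicit in the account above):

$$ \R^* = \R^{\mathbb{N}}/U, \qquad [f]_U = [g]_U \iff \{n : f(n)=g(n)\}\in U, $$

and Łoś's theorem provides transfer: $\R^*\models\varphi([f_1]_U,\dots,[f_k]_U)$ if and only if $\{n : \R\models\varphi(f_1(n),\dots,f_k(n))\}\in U$.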
Some applications of nonstandard analysis have required one to take not just a single ultrapower, but an iterated ultrapower construction along a linear order. Such an ultrapower construction gives rise to many levels of nonstandardness, and this is sometimes useful. Ultimately, as one adds additional construction methods, this amounts, as Terry Tao mentioned, to just adopting all of model theory as one's toolkit. One will want to employ advanced saturation properties, or embeddings, or the standard system, and so on. There is a very well-developed theory of models of arithmetic that uses quite advanced methods.
To give a sample consequence of saturation: every infinite graph, no matter how large, arises as an induced subgraph of a nonstandard-finite graph in any sufficiently saturated model of nonstandard analysis. This often allows you to undertake finitary constructions with infinite graphs, modulo the move to a nonstandard context.
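Here is a sketch of that saturation argument (my formulation; it is not spelled out above). Given an infinite graph $G=(V,E)$, consider the type consisting of the assertions, in variables $H$ and $x_v$ for $v\in V$:

$$ \text{"$H$ is a finite graph"}, \qquad x_v\ne x_w \ (v\ne w), \qquad \{x_v,x_w\}\in E_H \leftrightarrow \{v,w\}\in E \ (v,w\in V). $$

Any finite subset of these conditions mentions only finitely many vertices and is realized by taking $H$ to be the (genuinely finite) induced subgraph on those vertices; so, if the model is $|V|^+$-saturated, the whole type is realized, and the realizing $H$ is a nonstandard-finite graph containing a copy of $G$ as an induced subgraph.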
Standard Axiomatic approach. Most applications of nonstandard analysis, however, do not rely on the details of the ultrapower or iterated ultrapower constructions, and so it is often thought worthwhile to isolate the general principles that make the nonstandard arguments succeed. Thus, one writes down the axioms of the situation. In the basic case, one has the standard structure $\R$ and so on, perhaps with constants for every real (and for all subsets and functions in the higher-order cases), with a map to the nonstandard structure $\R^*$, so that every real number $a$ has its nonstandard version $a^*$ and every function $f$ on the reals has its nonstandard version $f^*$. Typically, the main axioms would include the transfer principle, which asserts that any property expressible in the language of the original structure holds of standard objects in the standard universe just in case it holds of their nonstandard analogues in the nonstandard realm. The transfer principle amounts precisely to the elementarity of the map $a\mapsto a^*$ from standard objects to their nonstandard analogues. One often also wants a saturation principle, expressing that any finitely realizable type is actually realized in the nonstandard model, and this just axiomatizes the saturation properties of the ultrapower. Sometimes one wants more saturation than one would get from an ultrapower by an ultrafilter on the natural numbers, but one can still achieve this with larger ultrapowers or other model-theoretic methods.
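Stated as axioms (one common formulation; presentations vary in the details):

$$ \textbf{Transfer:}\quad \R\models\varphi(a_1,\dots,a_n) \iff \R^*\models\varphi(a_1^*,\dots,a_n^*), $$

$$ \textbf{Saturation:}\quad \text{if } \{A_i\}_{i\in I} \text{ are internal sets, } |I|<\kappa, \text{ with the finite intersection property, then } \bigcap_{i\in I}A_i \neq \varnothing. $$

Countable saturation ($\kappa=\aleph_1$) is what a single ultrapower by a nonprincipal ultrafilter on $\mathbb{N}$ delivers; larger $\kappa$ requires the bigger constructions just mentioned.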
Essentially the same axiomatic approach works with the higher-order approach, where one has a nonstandard version of every set-theoretic object, and a map $V\to V^*$, with nonstandard structures of every order.
And similarly, one can axiomatize the features one wants to use in the iterated ultrapower case, with various levels of standardness.
As with most mathematical situations where one has a construction approach and an axiomatic approach, it is usually thought to be better to argue from the axioms, when possible, than to use details of the construction. And most applications of nonstandard analysis that I have seen can be undertaken using only the usual nonstandard axioms.
Nonstandard Axiomatic approach. This is a more radical perspective, which makes a foundational change in how one thinks about one's mathematical ontology. Namely, rather than thinking of the standard structures as having analogues inside a nonstandard world, one essentially thinks of the nonstandard world as the real world, with "standardness" structures picking out parts of it. So one has the real numbers including both infinite and infinitesimal reals, and one can say when two finite real numbers have the same standard part and so on. With this perspective, we think of the real real numbers as what on the other perspective would be the nonstandard reals, and then we have a predicate on that, which amounts to the range of the star map in the other approach. So some real numbers are standard, and some functions are standard and so on.
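For instance, in internal set theories in this spirit (Nelson's IST is the best-known example; this summary is mine, not part of the discussion above), one keeps the usual set-theoretic language, adds a unary predicate $\mathrm{st}(x)$ for standardness, and works with relativized quantifiers and axioms such as:

$$ \forall^{\mathrm{st}}x\,\varphi \;:\equiv\; \forall x\,(\mathrm{st}(x)\to\varphi), \qquad \textbf{Transfer:}\ \ \forall^{\mathrm{st}}\vec{t}\,\bigl(\forall^{\mathrm{st}}x\,\varphi(x,\vec{t}\,)\to\forall x\,\varphi(x,\vec{t}\,)\bigr) \ \text{for internal } \varphi. $$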
One sometimes sees this kind of perspective used in arguments in finite combinatorics, where one casually considers the case of an infinite integer or an infinitesimal rational. (I have a colleague who sometimes talks this way.) That kind of talk may seem alien to someone not used to the perspective, but for those who adopt the perspective it is very useful. In a sense, one goes whole-hog into the nonstandard realm.
More extreme versions of this idea adopt many levels of standardness and nonstandardness, extended to all orders. Karel Hrbáček has a very well-developed theory like this for nonstandard set theory, with an infinitely deep hierarchy of levels of standardness. He spoke on this last year at the CUNY set theory seminar, and I refer you to his articles on this topic. In Karel's system, one doesn't start with a standard universe and go up to the nonstandard universe, but rather, one starts with the full universe (which is fundamentally nonstandard) and goes down to deeper and deeper levels of standardness. Every model of ZFC, he proved, is the standard universe inside another model of the nonstandard set theories he considers.
Ultimately, my view is that the choice between the perspectives I mentioned is a matter of taste, and that in principle any achievement of nonstandard analysis that can be undertaken with one of the perspectives has natural analogues that can be expressed with the other perspectives.