To elaborate a bit on Tobias' answer: the notion of isomorphism depends on which structure (which category, really) you are studying.
Edit: Pete L. Clark pointed out that I was too sloppy with my original answer.
The idea of an isomorphism is that isomorphisms preserve all the structure one is studying. Concretely: if $X$ and $Y$ are objects in some category, they are isomorphic when there exist morphisms $f:X\rightarrow Y$ and $g:Y\rightarrow X$ such that $f\circ g$ is the identity on $Y$ and $g\circ f$ is the identity on $X$.
To be a bit more explicit: if $X$ and $Y$ are sets and there is a bijective function $f:X\rightarrow Y$, then we can construct the inverse $f^{-1}:Y\rightarrow X$, defined by $f^{-1}(y)=x$ iff $f(x)=y$. We then have $f\circ f^{-1}=id_Y$ and $f^{-1}\circ f=id_X$.
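To see this concretely for a finite set, here is a minimal sketch (the helper name `invert` is mine, not standard terminology): a bijection stored as a dict is inverted by swapping keys and values, and the two composites are checked to be identities.

```python
# Invert a finite bijection f: X -> Y given as a dict, and check
# that f . f_inv and f_inv . f are the identity maps.
def invert(f):
    """Return the inverse of a bijection given as a dict."""
    inv = {y: x for x, y in f.items()}
    if len(inv) != len(f):  # two inputs were mapped to the same output
        raise ValueError("f is not injective, so it has no inverse")
    return inv

f = {1: "a", 2: "b", 3: "c"}   # a bijection {1,2,3} -> {a,b,c}
f_inv = invert(f)

assert all(f_inv[f[x]] == x for x in f)      # f_inv . f = id_X
assert all(f[f_inv[y]] == y for y in f_inv)  # f . f_inv = id_Y
```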
But if we are talking about vector spaces, we demand more: we want two vector spaces to be isomorphic iff we can realize the above situation by linear maps. This is not always possible, even when a bijection exists (you cannot construct an invertible linear map $\mathbb{R}\rightarrow \mathbb{R}^2$, although the two sets have the same cardinality). In the linear case, if a function is invertible and linear, its inverse is also linear.
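The last remark can be illustrated on $\mathbb{R}^2$: an invertible linear map is a matrix, and its inverse is again a matrix, hence again linear. A small sketch (the helper names `apply` and `inverse` are mine):

```python
# An invertible linear map on R^2, represented by a 2x2 matrix,
# and its inverse, which is again a matrix -- hence again linear.
def apply(m, v):
    """Apply the matrix m to the vector v."""
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def inverse(m):
    """Invert a 2x2 matrix via the adjugate formula."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    if det == 0:
        raise ValueError("matrix is not invertible")
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

A = [[1.0, 1.0], [0.0, 1.0]]   # a shear: linear and invertible
B = inverse(A)

v = (3.0, 5.0)
assert apply(B, apply(A, v)) == v   # B . A = id
assert apply(A, apply(B, v)) == v   # A . B = id
```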
In general, however, the inverse of a structure-preserving map need not preserve the structure. Pete pointed out that $x\mapsto x^3$ is an invertible, differentiable function, but its inverse $y\mapsto y^{1/3}$ is not differentiable at zero. Thus $x\mapsto x^3$ is not an isomorphism in the category of differentiable manifolds and differentiable maps.
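A quick numerical sketch of why the cube root fails at zero: its difference quotient at $0$ is $h^{1/3}/h = h^{-2/3}$, which blows up as $h\to 0$ instead of converging to a derivative.

```python
# The difference quotient at 0 of the inverse of x -> x**3
# (the cube root) grows without bound as the step h shrinks,
# suggesting it has no derivative there.
def cbrt(y):
    """Real cube root, handling negative inputs."""
    return y ** (1 / 3) if y >= 0 else -((-y) ** (1 / 3))

for h in (1e-3, 1e-6, 1e-9):
    slope = (cbrt(h) - cbrt(0)) / h   # equals h**(-2/3), up to rounding
    print(h, slope)
```

Each time $h$ shrinks by a factor of $1000$, the slope grows by a factor of $100$, so no finite limit exists.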
I would like to conclude with the following: we cannot flatly say that two things are isomorphic; it depends on the context. An isomorphism always lives in a category. In the category of sets, isomorphisms are bijections; in the category of vector spaces, invertible linear maps; in the category of groups, group isomorphisms. This can be confusing. For example, $\mathbb{R}$ can be seen as a lot of things: a set, a one-dimensional vector space over $\mathbb{R}$, a group under addition, a ring, a differentiable manifold, a Riemannian manifold. In each of these guises $\mathbb{R}$ can be isomorphic (bijective, linearly isomorphic, group isomorphic, ring isomorphic, diffeomorphic, isometric) to different things. This all depends on the context.
It's worth recalling what Martin-Löf wrote in [Intuitionistic type theory] (note that Martin-Löf writes "set" for what we would call "type"!):
- What is a set?
- What is it that we must know in order to have the right to judge something to be a set?
- What does a judgement of the form "$A$ is a set" mean?
The first is the ontological (ancient Greek), the second the epistemological (Descartes, Kant, ...) and the third semantical (modern) way of posing essentially the same question. [...] So:
- a set $A$ is defined by prescribing how a canonical element of $A$ is formed as well as how two equal canonical elements of $A$ are formed.
This is the explanation of the meaning of a judgement of the form $A$ is a set.
My own view is that it doesn't make sense to compare sets and types at all – they are entities in very different worlds. The right question to ask is:
What is the difference between a set theory and a type theory?
The answer, which is hinted at by the quote above, is syntax. Set theory is a logical theory, built on top of a preexisting deductive system such as first-order logic, while type theory is a deductive system in its own right. (Well, you could formalise type theory as a certain kind of first-order theory, but I consider that to be a kind of abstraction inversion.)
One of the key differences is that terms in type theories are first-class citizens: they do not denote elements, they are elements. (That is not to say that different terms can't be equal, though – more on that later.) Of course, what one means by "element" also has to be somewhat more general than in set theory, since terms do not have to be closed: for instance, the term $\mathsf{succ} (x)$, in the context $x : \mathbb{N}$, is of type $\mathbb{N}$; so interpreted literally, it says that $\mathbb{N}$ has an "element" $\mathsf{succ} (x)$, the successor of a "variable element" $x$. One way of making concrete sense of this is to employ the notion of "generalised element" from category theory: the "variable element" $x$ is interpreted as the identity function $\mathbb{N} \to \mathbb{N}$, and similarly $\mathsf{succ} (x)$ is interpreted as the successor function $\mathbb{N} \to \mathbb{N}$. This is to be distinguished from (the interpretation of) the closed term $\lambda x : \mathbb{N} . \mathsf{succ} (x)$, which is of type $\mathbb{N} \to \mathbb{N}$ in the empty context. (This, by the way, is one reason why ETCS is considered a set theory and not a type theory – though, granted, it draws heavily from type-theoretic practices.)
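A rough programming analogy (my own, and not the categorical semantics itself) may help: the "variable element" $x$ becomes the identity function, the open term $\mathsf{succ}(x)$ becomes $\mathsf{succ}$ composed with it, and both denote the same underlying function $\mathbb{N}\to\mathbb{N}$ as the closed lambda term; what differs is the type at which we regard them (an element of $\mathbb{N}$ in context $x:\mathbb{N}$ versus an element of $\mathbb{N}\to\mathbb{N}$ in the empty context).

```python
# Informal analogy only: interpreting open terms as generalised elements.
def succ(n):
    return n + 1

x = lambda n: n                    # the "variable element" x, read as id: N -> N
succ_of_x = lambda n: succ(x(n))   # the open term succ(x), read as succ . id

# The closed term  lambda x : N . succ(x)  is just the function succ itself;
# in the semantics it lives at type N -> N in the empty context.
closed = lambda n: succ(n)

# All three interpretations agree as functions:
assert succ_of_x(4) == 5
assert closed(4) == 5
```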
A much more subtle difference between set theory and type theory is the treatment of equality. In type theory, it is possible (but not necessary) to distinguish between so-called judgemental equality (denoted by $\equiv$) and propositional equality (denoted by $=$). Judgemental equality concerns equality of terms qua terms: for instance, $1 \equiv \mathsf{succ}(0)$ because the former is an abbreviation for the latter, and (assuming consistency) we never have $x \equiv y$ when $x$ and $y$ are two different variables. This is sometimes called "definitional equality", because one usually deduces judgemental equalities by repeatedly applying definitions ("$\beta$-reduction"). No matter what you call it, judgemental equality is (supposed to be) an external metalinguistic notion, and needless to say, judgemental equality is absent in set theory.
On the other hand, propositional equality concerns semantics. Of course, judgemental equality implies propositional equality, but the converse need not hold. (If it does, then the type theory is said to be extensional.) Here's a somewhat involved example. Let $x$ and $y$ be variables of type $\mathbb{N}$. Then $x + y$ and $y + x$ are terms of type $\mathbb{N}$, defined by induction in the usual way:
\begin{align*}
x + 0 & \equiv x &
x + \mathsf{succ}(y) & \equiv \mathsf{succ}(x + y) \\
y + 0 & \equiv y &
y + \mathsf{succ}(x) & \equiv \mathsf{succ}(y + x)
\end{align*}
Now, by repeatedly applying definitions, one can show that $x + y \equiv y + x$ after substituting closed numerals for $x$ and $y$, so e.g. $2 + 3 \equiv 5 \equiv 3 + 2$. But that does not mean that $x + y \equiv y + x$; in fact, this judgement cannot be derived in intensional type theory! Rather, one has to use induction on $x$ and $y$, and the only kinds of equalities provable by induction are propositional equalities.
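This contrast is easy to experience in a proof assistant. A sketch in Lean 4, whose definitional equality behaves analogously: reflexivity proves the closed instance outright, but the general statement needs induction (packaged in the library as `Nat.add_comm`).

```lean
-- Closed numerals: 2 + 3 and 5 reduce to the same term, so
-- reflexivity (a judgemental equality) already proves the equation.
example : 2 + 3 = 5 := rfl

-- With free variables, x + y and y + x are NOT judgementally equal:
-- `rfl` fails here, and one must give a proof by induction,
-- available in the library as Nat.add_comm.
example (x y : Nat) : x + y = y + x := Nat.add_comm x y
```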
I think, to really get a good feel for type theories as a mathematician, one should either go play around with proof assistants like Coq or Agda, or otherwise try to learn what it takes to build a model of intensional type theory. My own understanding was seriously hindered by intuitions drawn from extensional type theory, which is in many ways more like set theory than not.
There seem to be two problems here.
First, if $A$ and $B$ have the same elements (every element of $A$ is an element of $B$ and vice versa), then they are the same set, the strongest possible form of equivalence.
And the identity function (which sends every $x\in A$ to itself) is certainly a bijection $A\to A$ and preserves whichever structure we care to equip $A$ with. So $A$ is always isomorphic to itself.
Second, the word "isomorphic" denotes many different concepts, depending on which kind of structure we require the isomorphism to preserve. If we say that $A$ and $B$ are isomorphic as sets, we only require a bijection between them, and "isomorphic" then means the same as "has the same cardinality".
But we can also speak about being isomorphic as groups, or as rings, or as partially ordered sets, or as graphs, or as a lot of other things. In each of those cases we're strictly speaking using sloppy language to speak not only of $A$ and $B$ in themselves as sets (that is, which elements they have), but also additionally (and implicitly) about some structure we have chosen to consider for each of $A$ and $B$. The structure might be a binary operation on the set (when we're speaking of isomorphic-as-groups), or two binary operations (for isomorphic-as-rings), or an order relation (for isomorphic-as-posets), and so forth.
For example, if we say that $A$ and $B$ are isomorphic as groups, then what we really mean is that we have chosen operations $*:A\times A\to A$ and $\circledast:B\times B\to B$ such that $\langle A,{*}\rangle$ and $\langle B,{\circledast}\rangle$ are groups (and having made those choices is necessary before we can even ask whether $A$ and $B$ are isomorphic groups) and that there is an $f:A\to B$ that is a bijection and satisfies $f(a_1*a_2)=f(a_1)\circledast f(a_2)$ for all $a_1,a_2\in A$.
What we really should be saying is "the groups $\langle A,{*}\rangle$ and $\langle B,{\circledast}\rangle$ are isomorphic". But we're employing an informal shorthand where we can say $A$ when we mean $\langle A,{*}\rangle$, provided that it is clear which ${*}$ we mean.
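Here is a small concrete instance of that definition, checked exhaustively in code (the map $f(k)=i^k$ from $\mathbb{Z}_4$ under addition mod $4$ to the fourth roots of unity under multiplication):

```python
# Check that f(k) = i**k is a group isomorphism from (Z_4, + mod 4)
# to the fourth roots of unity {1, i, -1, -i} under multiplication.
Z4 = [0, 1, 2, 3]
f = {k: 1j ** k for k in Z4}   # 1j is Python's imaginary unit i

# f is a bijection: its four values are distinct.
assert len(set(f.values())) == 4

# The homomorphism law f(a + b) = f(a) * f(b), checked on every pair.
for a in Z4:
    for b in Z4:
        assert f[(a + b) % 4] == f[a] * f[b]
print("f is an isomorphism between these two groups")
```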
Note that it is possible for $A$ and $B$ to be the same set and yet, because we have chosen different $*$ and $\circledast$, fail to be isomorphic with respect to those structures. For example, $\langle \mathbb R,{+}\rangle$ and $\langle \mathbb R,{\times}\rangle$ are not isomorphic as monoids (in $\langle \mathbb R,{\times}\rangle$ the element $0$ is absorbing, while $\langle \mathbb R,{+}\rangle$ has no absorbing element, and isomorphisms preserve such elements), even though the underlying set $\mathbb R$ is the same.
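By contrast, shrinking the underlying set to the positive reals does give an isomorphism of monoids (indeed of groups): the standard fact that $\exp:(\mathbb R,{+})\to(\mathbb R_{>0},{\times})$ turns sums into products, with inverse $\log$. A numerical sketch, checked only on sample points and up to floating-point tolerance:

```python
import math

# exp: (R, +) -> (R_{>0}, *) sends sums to products; log inverts it.
samples = [-2.0, -0.5, 0.0, 1.0, 3.0]
for a in samples:
    for b in samples:
        assert math.isclose(math.exp(a + b), math.exp(a) * math.exp(b))

assert math.exp(0) == 1.0                           # identity goes to identity
assert math.isclose(math.log(math.exp(2.5)), 2.5)   # log . exp = id on samples
print("exp preserves the monoid structure on these samples")
```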