I see two questions here. The first is why self-inductance is not considered when solving Faraday's law problems, and the second is why an EMF can ever produce a current in a circuit with non-zero self-inductance. I will answer both of these in turn.
1. Why self-inductance is not considered when solving Faraday's law problems
Self-inductance should be considered, but it is usually left out for simplicity. So for example, if you have a planar circuit with inductance $L$, resistance $R$, and area $A$, and there is a magnetic field of strength $B$ normal to the plane of the circuit, then the EMF is given by $\mathcal{E}=-L \dot{I} - A \dot{B}$.
This means, for example, that if $\dot{B}$ is constant, then, setting $IR=\mathcal{E}$, we find $\dot{I} = -\frac{R}{L} I - \frac{A}{L} \dot{B}$. If the current is $0$ at $t=0$, then for $t>0$ the current is given by $I(t)=-\frac{A}{R} \dot{B} \left(1-\exp(\frac{-t}{L/R}) \right)$. At very late times $t \gg \frac{L}{R}$, the current is $-\frac{A \dot{B}}{R}$, as you would find by ignoring the inductance. However, at early times the inductance prevents a sudden jump of the current to this value: the factor of $1-\exp(\frac{-t}{L/R})$ makes the current approach its final value smoothly.
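As a sanity check on the formula above, here is a minimal numerical sketch that integrates $\dot{I} = -\frac{R}{L} I - \frac{A}{L} \dot{B}$ by forward Euler and compares it against the analytic solution. All parameter values are arbitrary illustrative assumptions, not from the problem itself:

```python
import math

L, R, A, Bdot = 0.5, 2.0, 0.1, 3.0   # henry, ohm, m^2, tesla/s (assumed values)
dt, t_end = 1e-5, 2.0

I = 0.0
t = 0.0
while t < t_end:
    # dI/dt = -(R/L) I - (A/L) dB/dt, from setting IR = -L dI/dt - A dB/dt
    I += dt * (-(R / L) * I - (A / L) * Bdot)
    t += dt

I_analytic = -(A / R) * Bdot * (1 - math.exp(-t_end / (L / R)))
print(I, I_analytic)  # both approach -A*Bdot/R = -0.15 at late times
```

At $t = 2.0 \gg L/R = 0.25$, both the integrated and analytic currents sit essentially at the steady value $-A\dot{B}/R$, while the early-time behavior shows the smooth exponential rise.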
2. Why an EMF can ever produce a current in a circuit with non-zero self-inductance
You are worried that the EMF caused by the circuit's inductance will prevent any current from flowing. Consider the planar circuit as in part one, and suppose there is an external EMF $V$ applied to the circuit (and no longer any external magnetic field). The easiest way to see that current will flow is by making an analogy with classical mechanics: the current $I$ is analogous to a velocity $v$; the resistance is analogous to a drag term, since it represents dissipation; the inductance is like mass, since the inductance opposes a change in the current the same way a mass opposes a change in velocity; and the EMF $V$ is analogous to a force. Now, you have no problem believing that if you push on an object in a viscous fluid it will start moving, so you should have no problem believing that a current will start to flow.
To analyze the math, all we have to do is replace $-A \dot{B}$ by $V$ in our previous equations. We find the current is $I(t) = \frac{V}{R} \left(1-\exp(\frac{-t}{L/R}) \right)$, so, as before, the current increases smoothly from $0$ to its final value $\frac{V}{R}$ as $t \to \infty$.
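The mechanical analogy can be made concrete by integrating the circuit equation $L \dot{I} = V - RI$ and the mechanical equation $m \dot{v} = F - bv$ side by side with matched parameters. This is just a sketch; the numbers are arbitrary assumptions chosen so the two systems coincide:

```python
import math

V, R, L = 10.0, 5.0, 1.0    # volts, ohms, henries (assumed)
F, b, m = 10.0, 5.0, 1.0    # newtons, kg/s, kg (same numbers on purpose)

dt, t_end = 1e-4, 1.0
I = v = 0.0
t = 0.0
while t < t_end:
    I += dt * (V - R * I) / L   # circuit: inductance resists change in I
    v += dt * (F - b * v) / m   # mechanics: mass resists change in v
    t += dt

# Both follow (V/R)(1 - exp(-t/(L/R))); the current really does start to flow.
print(I, v, (V / R) * (1 - math.exp(-t_end / (L / R))))
```

The two trajectories are identical step by step, which is the analogy in computational form: inductance delays the current exactly the way inertia delays the velocity, but neither prevents the motion.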
Here is one way to think about it:
When a charged particle travels in a magnetic field, it experiences a force. If the particle is stationary but the field is moving, then in the frame of reference of the field the particle should see the same force.
Now let's take a conductor wound into a coil. In order to increase the magnetic field inside, I could take a dipole magnet and move it close to the coil. As I do so, magnetic field lines cross the conductor, and generate a force on the charge carriers.
It is a convenient trick for figuring out "what goes where" to know that the induced current will flow so as to oppose the magnetic field change that generated it. In the perfect case of a superconductor, this "opposing" is perfect - this is the basis of magnetic levitation. For resistive conductors, the induced current is not quite sufficient to oppose the magnetic field, so some magnetic field is left.
The point is that the current starts to flow immediately, as the magnetic field tries to establish itself in the coil. So it's not "Apply field in coil. Coil notices, and generates an opposing field." Instead, it is "Start to apply field in coil. Coil notices and prevents the field from reaching its expected strength."
Not sure if this makes things any clearer...
Best Answer
Magnetic flux and Faraday's law are defined purely mathematically. Because mathematical definitions typically deal with idealized concepts, subtle issues start to show up when we try to apply such definitions to more complicated physical situations. In particular, we usually need some physical principles to go with the mathematics; the physics tells us how to apply the mathematics.
Magnetic flux and Faraday's Law provide good examples of such subtleties and complications. I'll walk through the definitions carefully, pointing out the stumbling blocks, and then come back to Faraday's wheel. Most importantly, when talking about Faraday's wheel, there's one subtlety that requires additional input from physics, which is how to define a circuit.
We start with the mathematical definition of flux: \begin{equation} \Phi_B = \iint_S \vec{B} \cdot d\vec{A}. \end{equation} You probably know that $\vec{B}$ is the magnetic field vector, $d\vec{A}$ is the differential area element (the direction of which is normal to the surface with some continuous choice of either "outward" or "inward" direction), and $S$ is the surface over which the integral is performed. There are numerous idealizations built into this integral. Among them, we are assuming that we know exactly how to delimit this surface with exactly zero thickness and perfectly distinct boundaries.
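For a concrete (if trivial) instance of the flux integral, here is a sketch that evaluates $\Phi_B = \iint_S \vec{B} \cdot d\vec{A}$ numerically for a uniform field through a flat disc and compares with the exact answer $B \pi a^2$. The field strength and radius are arbitrary assumptions:

```python
import math

B_z = 2.0     # tesla, uniform field along +z (assumed)
a = 0.3       # metres, disc radius (assumed)
n_r, n_phi = 400, 400

flux = 0.0
for i in range(n_r):
    r = (i + 0.5) * a / n_r
    for j in range(n_phi):
        # dA = r dr dphi, with the area vector along +z, so B . dA = B_z r dr dphi
        flux += B_z * r * (a / n_r) * (2 * math.pi / n_phi)

print(flux, B_z * math.pi * a**2)  # numerical vs exact B * pi * a^2
```

Here the surface and its orientation are unambiguous; the subtleties below arise precisely when they are not.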
This can raise subtle issues when we try to talk about real physical situations. For example, we might want to talk about the flux through a wire loop. But a physical wire has nonzero thickness. Should the surface of integration end at the center of the wire? At its outer edge? How do you specify the outer edge if the wire is not in a plane? You'll frequently hear people saying that Faraday's law only applies to "infinitely thin" wires. I would suggest that it's more correct to say that Faraday's law only applies naively to thin wires; it can be used in more general situations with careful application.
Next, we come to Faraday's Law, which relates the electromotive force $\mathcal{E}$ around the boundary of your surface to the time derivative of the magnetic flux as \begin{equation} \mathcal{E} = \int_{\partial S} \vec{E} \cdot d\vec{l} = -\frac{\mathrm{d}\Phi_B} {\mathrm{d}t}, \end{equation} where $\partial S$ is the boundary of the surface we used to define the flux. There are some more subtleties here. First, and quite simply, you have to be very specific about the surface and the boundary that you are dealing with. Second, the derivative is the total derivative, rather than the partial derivative. This can be important when the surface of integration is changing in time, and when the magnetic field is changing in space and time.
That last part is a really subtle issue, even though it's about something so major as how we define a circuit. I'll quote Jackson to explain it: "The electric field $\vec{E}$ is the electric field at $d\vec{l}$ in the coordinate system or medium in which $d\vec{l}$ is at rest, since it is that field that causes current to flow if a circuit is actually present." [Classical Electrodynamics, third edition, section 5.15.] So for a circuit in which the path of electrons is changing relative to the conductor through which they're moving, you have to use a changing circuit path.
Now, applied to Faraday's wheel (the homopolar generator), the first point is that you have to specifically define the surface that you're talking about, and hence the boundary of that surface. Students frequently rush past this point, and take the disc itself as the surface. But this is wrong because the boundary of that surface is the edge of the wheel, so technically you would be calculating the electromotive force around the wheel. But you're measuring the potential between the center and the edge, so that surface can't work. Instead, you need to take a surface whose boundary passes between the center contact and the contact on the edge, then goes out from each of them, and over to some galvanometer or something. A standard choice for the surface is a square that is perpendicular to the wheel, with one edge along the wheel's axis and the next edge along a radius of the wheel. Now the paradox comes up when you realize that $\vec{B}$ is actually parallel to this surface, so $\vec{B} \cdot d\vec{A}$ is zero everywhere and always. So it would be reasonable to think that \begin{align} \frac{d \Phi_B}{dt} &= \frac{d}{dt} \iint_S \vec{B} \cdot d\vec{A} \\ &= \iint_S \frac{d}{dt} \left( \vec{B} \cdot d\vec{A} \right) \\ &= \iint_S \frac{d}{dt} \left( 0 \right) \\ &= \iint_S 0 \\ &= 0, \end{align} which would imply $\mathcal{E} = 0$. But that's not what's measured. It turns out that the derivation I just gave is wrong.
The (wrong) derivation above used the partial derivative, and ignored movement of the surface. You can resolve this paradox if you remember that the derivative in question is the total derivative, and that the surface might be moving. Now, Jackson told us that we need to use a path that's stationary with respect to the medium. So the surface we're integrating over is moving with respect to the stationary magnetic field we've been told about, and we need to differentiate the stationary field with respect to the moving surface. The standard way of describing the total derivative in a material that's moving with velocity $\vec{v}$ at a given point is to use the "material derivative" (also commonly known as the "convective derivative"): \begin{equation} \frac{d}{dt} = \frac{\partial}{\partial t} + \vec{v} \cdot \vec{\nabla}. \end{equation} To be a little more specific, you use this material derivative to get the derivative with respect to moving coordinates when you're given a field in static coordinates. Now we get \begin{align} \frac{d \Phi_B}{dt} &= \frac{d}{dt} \iint_S \vec{B} \cdot d\vec{A} \\ &= \iint_S \left[ \left( \vec{v} \cdot \vec{\nabla} \right) \vec{B} \right] \cdot d\vec{A} \\ &= \iint_S \left[ \vec{\nabla} \times \left( \vec{B} \times \vec{v} \right) + \vec{v} (\vec{\nabla} \cdot \vec{B}) \right] \cdot d\vec{A} \\ &= \iint_S \left[ \vec{\nabla} \times \left( \vec{B} \times \vec{v} \right) \right] \cdot d\vec{A} \\ &= \int_{\partial S} \left( \vec{B} \times \vec{v} \right) \cdot d\vec{l}, \end{align} which is nonzero. [To go from the second line to the third, I used a standard vector calculus identity with assumptions about $\vec{v}$ for our case. To go from the third to the fourth, I used the fact that $\vec{\nabla} \cdot \vec{B} = 0$. To get to the last line, I used Stokes' theorem. Remember that $\partial S$ is the boundary of $S$.]
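The final line integral can be checked numerically for the wheel. The only part of $\partial S$ with nonzero $\vec{v}$ is the radial segment inside the moving disc (where $\vec{v} = \vec{\omega} \times \vec{r}$), and integrating $(\vec{B} \times \vec{v}) \cdot d\vec{l}$ along it recovers the standard homopolar result $\mathcal{E} = \frac{1}{2} B \omega a^2$. This is a sketch with illustrative parameter values:

```python
# Only the radial segment inside the spinning disc contributes, since
# v = 0 on the rest of the circuit. Parameters are assumed, not given.
B = 1.5        # tesla, field along +z
omega = 100.0  # rad/s, disc angular velocity
a = 0.2        # metres, disc radius
n = 100_000

def cross(u, w):
    return (u[1]*w[2] - u[2]*w[1],
            u[2]*w[0] - u[0]*w[2],
            u[0]*w[1] - u[1]*w[0])

dPhi_dt = 0.0
dr = a / n
for i in range(n):
    r = (i + 0.5) * dr
    v = cross((0.0, 0.0, omega), (r, 0.0, 0.0))   # v = omega x r, along +y
    Bxv = cross((0.0, 0.0, B), v)                 # B x v, along -x
    dPhi_dt += Bxv[0] * dr                        # dl = dr along +x

emf = -dPhi_dt
print(emf, B * omega * a**2 / 2)  # the standard result B*omega*a^2/2
```

The integrand $(\vec{B} \times \vec{v}) \cdot d\vec{l} = -B\omega r\, dr$ sums to $-\frac{1}{2} B \omega a^2$, so $\mathcal{E} = -\frac{d\Phi_B}{dt}$ is nonzero, exactly as measured.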
Alternatively, we could differentiate with respect to stationary coordinates, but note that the surface is changing in those coordinates. If we look at a little segment of the boundary $\Delta \vec{l}$, and suppose it's moving with velocity $\vec{v}$, then after time $\Delta t$ has passed, this has introduced a new amount of area into the surface given by \begin{equation} \Delta \vec{A} = (\vec{v} \Delta t) \times \Delta \vec{l}. \end{equation} This changes the flux integral by \begin{equation} \Delta \Phi_B = \vec{B} \cdot \Delta \vec{A} = \vec{B} \cdot (\vec{v} \times \Delta \vec{l}) \Delta t, \end{equation} which implies \begin{equation} \frac{\Delta \Phi_B}{\Delta t} = \vec{B} \cdot (\vec{v} \times \Delta \vec{l}). \end{equation} Now we can take the limits of small $\Delta t$ and small $\Delta \vec{l}$, then integrate up the contributions from all those little $\Delta \vec{l}$ segments around $\partial S$ to get the total derivative of the flux: \begin{align} \frac{d \Phi_B}{dt} &= \int_{\partial S} \vec{B} \cdot (\vec{v} \times d\vec{l}) \\ &= \int_{\partial S} \left( \vec{B} \times \vec{v} \right) \cdot d\vec{l}. \end{align} To get from the first to second line, I just used the usual scalar triple product. And this is the same result as the one we got using the material derivative — again, it's nonzero.
I'm not saying that any of this is obvious at all; it's certainly not obvious from the simple mathematical definitions I've given above. But that's all just mathematics. The key point to remember when applying it to physics is Jackson's point: You need to define your circuit with respect to the medium through which the electrons are actually moving.
There's really nothing that tells us the answers a priori — that's why humans had to do the experiments to figure out how physics works. There's a pretty good discussion of subtleties regarding Faraday's Law on Wikipedia's Faraday paradox page, but the better discussion of the wheel comes on the Faraday induction page. And of course, we should all read Jackson more carefully.