I would begin with the excellent book by Gang Tao: http://www.amazon.com/Adaptive-Analysis-Learning-Processing-Communications/dp/0471274526/ref=sr_1_1?ie=UTF8&qid=1365779105&sr=8-1&keywords=gang+tao+adaptive+control
You can very easily implement his algorithms in MATLAB/Simulink. For instance, the adaptive dead-zone and backlash algorithms are fairly straightforward.
Shiyu's answer, regrettably, contains a number of false assertions.
In any physical system, you are going to encounter parametric uncertainty. Consider a control actuator in a vehicle. You can spend millions of dollars attempting to identify what the free-play in the actuator will be. Then, you'll drive the vehicle, and the actuator will wear, and that free-play gap will change. Then, you'll bring it in for maintenance, and that gap changes again.
Then, the next vehicle rolls off the assembly line, and it will have a different operational/maintenance history, and have entirely different characteristics. Therefore, any parameter you identify experimentally will be obsolete, and using that value as a ground-truth for your parameterized system could actually be harmful.
MRAC is designed to identify and adapt to these parameters. Therefore, the controller will adapt to whatever the parameter happens to be. Furthermore, in many algorithms, convergence is guaranteed!
Model-building is a necessary step in any control scheme. MRAC allows you to use those same models for your adaptive controller. In fact, adaptive control schemes can sit on top of extant control laws without the need to modify the stable controller at all.
MRAC is absent from production flight controls not because it is too slow or too risky, but because Validation and Verification (V&V) is so expensive and difficult. For a remote-controlled quadrotor, however, an adaptive controller is a completely suitable approach. The algorithms run in real time, so there is no need for multiple iterations per time step. If you have a simple linearized model, all you need is to clear a few matrix-vector multiplications in each time step. That's not terribly difficult. Obviously, it depends on the dimensionality of your model, but your quadrotor model shouldn't be that complex.
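To illustrate the per-step cost, here is a minimal sketch of a direct MRAC law for a scalar plant. Everything in it (plant parameters, reference model, adaptation gain) is a made-up toy example, not a specific published algorithm; the point is only that each time step costs a handful of multiply-adds:

```python
import numpy as np

# Toy direct MRAC sketch: scalar plant x_dot = a*x + b*u with a, b
# unknown to the controller (only sign(b) assumed known), tracking the
# stable reference model xm_dot = am*xm + bm*r.  All numbers are
# illustrative.
a, b = -1.0, 3.0          # "true" plant parameters (unknown to controller)
am, bm = -4.0, 4.0        # reference model (Hurwitz)
gamma = 2.0               # adaptation gain
dt, T = 0.001, 10.0

x = xm = 0.0
kx, kr = 0.0, 0.0         # adaptive feedback / feedforward gains
for _ in range(int(T / dt)):
    r = 1.0                # step reference
    u = kx * x + kr * r    # control law: a few multiplications per step
    e = x - xm             # tracking error
    # Lyapunov-based adaptive laws (sign of b assumed known)
    kx -= gamma * e * x * np.sign(b) * dt
    kr -= gamma * e * r * np.sign(b) * dt
    x += (a * x + b * u) * dt      # plant (forward Euler)
    xm += (am * xm + bm * r) * dt  # reference model

print(abs(x - xm))  # tracking error shrinks as the gains adapt
```

Note there is no per-step optimization or iteration anywhere: one error computation, two gain updates, one control evaluation.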
Given a state space model of the following form,
$$
\dot{x} = A\,x + B\,u, \tag{1}
$$
$$
y = C\,x + D\,u. \tag{2}
$$
The open-loop transfer function of this system can be found by taking the Laplace transform and assuming all initial conditions to be zero (such that $\mathcal{L}\{\dot{x}(t)\}$ can just be written as $s\,X(s)$). Doing this for equation $(1)$ yields,
$$
s\,X(s) = A\,X(s) + B\,U(s), \tag{3}
$$
which can be rewritten as,
$$
X(s) = (s\,I - A)^{-1} B\,U(s). \tag{4}
$$
Substituting this into equation $(2)$ and defining the open-loop transfer function $G(s)$ as the ratio between output ($Y(s)$) and input ($U(s)$) yields,
$$
G(s) = C\,(s\,I - A)^{-1} B + D. \tag{5}
$$
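Equation $(5)$ is straightforward to evaluate numerically. A minimal Python sketch, using an example double integrator (the matrices are illustrative, not from the question):

```python
import numpy as np

# Evaluate G(s) = C (sI - A)^{-1} B + D pointwise in s, equation (5).
# Example plant: a double integrator, so G(s) = 1/s^2.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def G(s):
    """Open-loop transfer function of equation (5)."""
    n = A.shape[0]
    return C @ np.linalg.inv(s * np.eye(n) - A) @ B + D

print(G(2.0)[0, 0])  # 1/s^2 at s = 2 gives 0.25
```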
In a normal block diagram representation the controller has as an input $r-y$, with $r$ the reference value you would like to have for $y$, and an output $u$, which would be the input to $G(s)$. For now $r$ can be set to zero, so the controller can be defined as the transfer function from $-y$ to $u$.
For an observer-based controller ($L$ and $K$ chosen such that $A-B\,K$ and $A-L\,C$ are Hurwitz) we can write the following dynamics,
$$
u = -K\,\hat{x}, \tag{6}
$$
$$
\dot{x} = A\,x - B\,K\,\hat{x}, \tag{7}
$$
$$
\dot{\hat{x}} = A\,\hat{x} + B\,u + L(y - C\,\hat{x} - D\,u) = (A - B\,K - L\,C + L\,D\,K) \hat{x} + L\,y. \tag{8}
$$
Similar to equations $(1)$, $(2)$ and $(5)$, the transfer function of the controller $C(s)$, defined as the ratio of $U(s)$ and $-Y(s)$, can be found to be,
$$
C(s) = K\,(s\,I - A + B\,K + L\,C - L\,D\,K)^{-1} L. \tag{9}
$$
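Equation $(9)$ can likewise be checked numerically. In this sketch the double-integrator plant is an example, and the gains $K$ and $L$ were computed offline by pole placement (illustrative values, not from the question):

```python
import numpy as np

# Evaluate the controller transfer function of equation (9) for an
# example double integrator.  The gains below place the eigenvalues of
# A - B K at {-2, -3} and of A - L C at {-8, -9}.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Cmat = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
K = np.array([[6.0, 5.0]])
L = np.array([[17.0], [72.0]])

def C_ctrl(s):
    """Controller transfer function U(s)/(-Y(s)), equation (9)."""
    n = A.shape[0]
    M = s * np.eye(n) - A + B @ K + L @ Cmat - L @ D @ K
    return K @ np.linalg.inv(M) @ L

print(C_ctrl(1.0)[0, 0])
```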
If you want to find the total open-loop transfer function from "$-y$" to "$y$", keep in mind that in general $G(s)$ and $C(s)$ are matrices of transfer functions, so the order of multiplication matters. Since the error ($r-y$) first passes through the controller and then through the plant, the open-loop transfer function is $G(s)\,C(s)$. The closed-loop transfer function can then be found with,
$$
\frac{Y(s)}{R(s)} = (I + G(s)\,C(s))^{-1} G(s)\,C(s). \tag{10}
$$
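Putting the pieces together, equation $(10)$ can be evaluated pointwise in $s$. Again the double-integrator plant and the pole-placement gains are example numbers chosen for illustration:

```python
import numpy as np

# Evaluate the closed-loop transfer function (I + G C)^{-1} G C of
# equation (10) at a given s, for an example double integrator.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Cmat = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
K = np.array([[6.0, 5.0]])      # eig(A - B K) = {-2, -3}
L = np.array([[17.0], [72.0]])  # eig(A - L C) = {-8, -9}

def G(s):
    return Cmat @ np.linalg.inv(s * np.eye(2) - A) @ B + D

def C_ctrl(s):
    M = s * np.eye(2) - A + B @ K + L @ Cmat - L @ D @ K
    return K @ np.linalg.inv(M) @ L

def T(s):
    """Closed-loop transfer function of equation (10)."""
    GC = G(s) @ C_ctrl(s)
    return np.linalg.inv(np.eye(1) + GC) @ GC

print(T(1.0)[0, 0])
```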
It can also be found directly using equations $(2)$ and $(6)$, and the closed-loop state space dynamics,
$$
\begin{bmatrix}
\dot{x} \\ \dot{\hat{x}}
\end{bmatrix} = \begin{bmatrix}
A & -B\,K \\
L\,C & A - B\,K - L\,C
\end{bmatrix} \begin{bmatrix}
x \\ \hat{x}
\end{bmatrix} + \begin{bmatrix}
0 \\ -L
\end{bmatrix} r, \tag{11}
$$
$$
\frac{Y(s)}{R(s)} = \begin{bmatrix}
C & -D\,K
\end{bmatrix} \begin{bmatrix}
s\,I - A & B\,K \\
-L\,C & s\,I - A + B\,K + L\,C
\end{bmatrix}^{-1} \begin{bmatrix}
0 \\ -L
\end{bmatrix}. \tag{12}
$$
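A quick numerical sanity check of the closed-loop model $(11)$ is the separation principle: its eigenvalues should be exactly the union of those of $A - B\,K$ and $A - L\,C$. A small Python sketch with the same example gains as above (chosen by pole placement, purely for illustration, with $D = 0$):

```python
import numpy as np

# Eigenvalues of the closed-loop matrix in equation (11) (with D = 0)
# should be the union of eig(A - B K) and eig(A - L C).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[6.0, 5.0]])      # eig(A - B K) = {-2, -3}
L = np.array([[17.0], [72.0]])  # eig(A - L C) = {-8, -9}

Acl = np.block([[A,     -B @ K],
                [L @ C, A - B @ K - L @ C]])
eigs = np.sort(np.linalg.eigvals(Acl).real)
print(eigs)  # approximately [-9, -8, -3, -2]
```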
Best Answer
Your goal is to stabilize the roll angle $\varphi$.
Its dynamics are described via $$ I_{xx}\,\ddot{\varphi} = u, $$ where $u$ is the control input (motor torque in your case).
The 1$^{\text{st}}$ flaw I found is that you're using four integrators instead of two. Assuming that the measured output used for output feedback control is $$y = \int\!\!\int\ddot{\varphi}\:\text{d}t\,\text{d}t = \varphi,$$
only two states are required (denoted `phi` and `dphi` in the figure). Stated as a Simulink model this uses just two integrator blocks.

The 2$^{\text{nd}}$ problem I noted is the choice of your controller. Applying a PD controller with the transfer function
$$ K_{PD}(s) = \frac{Vs}{1+Ts} $$
renders one of your marginally stable poles ($s=0$) uncontrollable due to a pole-zero cancellation in the open loop
$$ L(s) = \frac{Vs}{1+Ts} \frac{1}{s^2} = \frac{V}{s(1 + Ts)}. $$
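You can see the hidden mode numerically: closing the loop with the PD controller around $1/s^2$ leaves a closed-loop eigenvalue pinned at $s=0$. The sketch below uses a minimal state-space realization of $K_{PD}(s)$ that I wrote for illustration (the state $x_c$ is the controller's filter state, not something from the original model):

```python
import numpy as np

# Close K_PD(s) = V s / (1 + T s) around the double integrator 1/s^2
# and inspect the closed-loop eigenvalues.  One eigenvalue remains at
# s = 0: the marginally stable mode is hidden by the cancellation,
# not stabilized.
V, T = 100.0, 1.0 / 50.0

# Realization of K_PD: xc_dot = -(1/T) xc + (1/T) e,  u = (V/T)(e - xc),
# with e = -phi (reference r = 0).  Closed-loop state: [phi, dphi, xc].
Acl = np.array([
    [0.0,    1.0, 0.0],
    [-V / T, 0.0, -V / T],
    [-1 / T, 0.0, -1 / T],
])
eigs = np.linalg.eigvals(Acl)
print(np.min(np.abs(eigs)))  # smallest eigenvalue magnitude: (numerically) zero
```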
An output-feedback controller $K$ that does the job of stabilizing $\varphi$ can be achieved via open-loop shaping. The main design goal is to retain a reasonable phase margin of about 60°. This can be achieved with a lead/lag controller
$$ K_{LL}(s) = V_{LL}\frac{1+T_{lead}s}{1+T_{lag}s}$$
where I chose for your nominal system ($I_{xx} = 1$) the following parameters:
$$ V_{LL} = 100, \qquad T_{lead} = \frac{1}{5}, \qquad T_{lag} = \frac{1}{50}. $$
This choice yields the following Bode plot, which shows a phase margin of around 55° at a crossover frequency of $\omega_{c} \approx 18 \frac{\text{rad}}{\text{s}}$.
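As a quick numerical cross-check of those Bode-plot numbers (a Python sketch I added, not part of the original design), one can sweep the loop gain $L(j\omega) = K_{LL}(j\omega)/(j\omega)^2$ and read off the crossover frequency and phase margin:

```python
import numpy as np

# Crossover frequency and phase margin of the lead/lag loop
# L(s) = V_LL (1 + T_lead s) / ((1 + T_lag s) s^2).
V_LL, T_lead, T_lag = 100.0, 1.0 / 5.0, 1.0 / 50.0

def loop(w):
    s = 1j * w
    return V_LL * (1 + T_lead * s) / (1 + T_lag * s) / s**2

w = np.logspace(0, 2, 200000)          # 1 .. 100 rad/s
wc = w[np.argmin(np.abs(np.abs(loop(w)) - 1.0))]  # gain crossover
pm = 180.0 + np.degrees(np.angle(loop(wc)))       # phase margin [deg]
print(wc, pm)  # close to the ~18 rad/s and ~55 deg quoted above
```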
The corresponding step response gives a good idea of the performance of the controller. Note that the integrators of your model ensure a vanishing control error $\lim\limits_{t\rightarrow \infty}e = 0$ for the chosen output feedback loop, but I don't know if you're comfortable with an overshoot of more than $20$%.
In case your IMU can measure the roll velocity as well, you could switch to a state feedback controller design, which has the advantage of pole placement. This means that you can freely choose entirely new dynamics for your roll angle behaviour (limited by input constraints and measurement noise, of course) at the cost of losing the structural property of a vanishing control error.
The Ackermann algorithm for doing so is $$ k^T = [0, 1]\,S^{-1}(A,b)\,(p_0 I + p_1 A + p_2 A^2), $$ where $k^T$ is the feedback vector, $S^{-1}(A,b)$ is the inverse of the controllability matrix, $A$ is your system matrix and $p_i$ are the coefficients of your desired characteristic polynomial.
Your system has the following parameters: $$ A = \left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \\ \end{array} \right), \qquad b = \left( \begin{array}{c} 0 \\ \frac{1}{I_{xx}} \\ \end{array} \right), \qquad S(A,b) = \left( b, Ab \right). $$
That is, your nominal system yields a feedback vector $k^T = [k_1, k_2] = [400, 40]$, given desired eigenvalues $\lambda_1 = \lambda_2 = -20$.
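The Ackermann computation is easy to reproduce. A short Python check for the nominal system ($I_{xx}=1$) and both eigenvalues at $-20$, i.e. $p(s) = (s+20)^2 = s^2 + 40s + 400$:

```python
import numpy as np

# Ackermann's formula k^T = [0, 1] S^{-1}(A, b) p(A) for the nominal
# roll dynamics (I_xx = 1), desired eigenvalues both at -20.
I_xx = 1.0
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0 / I_xx]])

S = np.hstack([b, A @ b])                    # controllability matrix (b, Ab)
p0, p1, p2 = 400.0, 40.0, 1.0                # coefficients of (s + 20)^2
pA = p0 * np.eye(2) + p1 * A + p2 * A @ A    # p(A)
kT = np.array([[0.0, 1.0]]) @ np.linalg.inv(S) @ pA

print(kT)  # [[400.  40.]]
```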
It can be implemented as a state feedback loop in the following fashion:
The step response needs output scaling, though, since state feedback alone does not guarantee unit steady-state gain:
Don't forget to make sure that you don't run into any input constraints when designing the controller.