Decision trees are a pure classification technique: they aim at labelling records of unknown class using their features. They map the set of record features $\mathcal{F} = \{F_1, \dots, F_m\}$ (attributes, variables) into the class attribute $C$ (target variable), which is the object of the classification. The relationship between $\mathcal{F}$ and $C$ is learned from a set of labelled records, called the training set. The ultimate purpose of a classification model is to minimise the misclassification error on unlabelled records, i.e. the fraction of records for which the class predicted by the model differs from the real one. The features $F_i$ can be categorical or continuous.
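As a minimal illustration of this setup (not from the original discussion; the feature names and data are made up), a one-level decision tree (a "decision stump") over categorical features can be trained by mapping each value of a chosen feature to the majority class observed for that value in the training set, and then measured by its misclassification error:

```python
from collections import Counter

def train_stump(records, labels, feature_idx):
    """Map each value of one categorical feature to its majority class."""
    by_value = {}
    for rec, c in zip(records, labels):
        by_value.setdefault(rec[feature_idx], []).append(c)
    return {v: Counter(cs).most_common(1)[0][0] for v, cs in by_value.items()}

def misclassification_error(model, records, labels, feature_idx):
    """Fraction of records whose predicted class differs from the real one."""
    wrong = sum(model.get(rec[feature_idx]) != c for rec, c in zip(records, labels))
    return wrong / len(records)

# Hypothetical training set: features (outlook, windy) -> class
X = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"), ("rain", "yes")]
y = ["play", "play", "play", "stay"]

stump = train_stump(X, y, feature_idx=1)          # split on "windy"
print(misclassification_error(stump, X, y, 1))    # 0.25 on this toy data
```

A full decision tree repeats this kind of split recursively, but the stump already shows the core idea: learn a mapping from feature values to $C$ that minimises the error on the training set.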
The first applications of association analysis were in market basket analysis, where you are interested in finding associations between items with no particular focus on a target one. The datasets commonly used are transactional: a collection of transactions, each containing a set of items. For example:
$$ t_1 = \{i_1,i_2 \} \\
t_2 = \{i_1, i_3, i_4, i_5 \} \\
t_3 = \{i_2, i_3, i_4, i_5 \} \\
\vdots \\
t_n = \{ i_2, i_3, i_4, i_5 \}
$$
You are interested in finding out rules such as
$$ \{ i_3, i_5 \} \rightarrow \{ i_4 \} $$
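To make the rule concrete, here is a small sketch (not from the original text; I assume $n = 4$ and fill in the elided transactions with the last one shown) of how the standard support and confidence of such a rule are computed:

```python
# Transactions from the example above (assuming n = 4 for concreteness)
transactions = [
    {"i1", "i2"},
    {"i1", "i3", "i4", "i5"},
    {"i2", "i3", "i4", "i5"},
    {"i2", "i3", "i4", "i5"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Estimate of P(consequent | antecedent) over the transactions."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

print(support({"i3", "i5", "i4"}, transactions))       # 0.75
print(confidence({"i3", "i5"}, {"i4"}, transactions))  # 1.0
```

Association rule miners such as Apriori enumerate all rules whose support and confidence exceed user-chosen thresholds.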
It turns out that you can use association analysis for some specific classification tasks, for example when all your features are categorical. You just have to treat items as features, but this is not what association analysis was designed for.
Best Answer
There are several algorithms for creating classifiers out of association rules. The most basic approach only adds a default rule as the last rule; otherwise the resulting classifier may not be able to make predictions for some instances. More sophisticated algorithms, such as CBA (Classification Based on Associations) by Bing Liu, also perform pruning, which selects only some of the discovered association rules.
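The "default rule" idea can be sketched in a few lines (this is my own illustration, not code from CBA; rule antecedents and classes are hypothetical): rules are tried in order, and the last rule has an empty antecedent, so it matches every instance and guarantees a prediction.

```python
# Each rule: (antecedent itemset, predicted class), tried in order.
# The last rule has an empty antecedent, so it always fires (the default rule,
# typically predicting the majority class of the training set).
rules = [
    (frozenset({"outlook=sunny", "windy=yes"}), "stay"),
    (frozenset({"outlook=rain"}), "play"),
    (frozenset(), "play"),  # default rule
]

def classify(instance_items, rules):
    for antecedent, cls in rules:
        if antecedent <= instance_items:  # antecedent satisfied by the instance
            return cls
    raise ValueError("rule list has no default rule")

print(classify({"outlook=sunny", "windy=yes"}, rules))  # 'stay'
print(classify({"outlook=overcast"}, rules))            # falls through -> 'play'
```

Without the default rule, the second call would fail, since no mined rule covers that instance.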
A benchmark of CBA against C4.5 can be found in the original paper:
Liu, Bing, Wynne Hsu, and Yiming Ma. "Integrating classification and association rule mining." Proceedings of the Fourth International Conference on Knowledge Discovery and Data Mining. 1998.
According to this evaluation (by the CBA authors), CBA has slightly better accuracy (about a 1% difference) than C4.5.
A limited evaluation of the simplest approach, which essentially just adds a default rule to the end of the rule set, is covered in:
Kliegr, Tomáš, et al. "Learning business rules with association rule classifiers." Rules on the Web: From Theory to Applications. Springer International Publishing, 2014. 236-250.
This evaluation shows that using the list of association rules as-is drops accuracy compared to the pruned baseline, typically by more than 1%. This indicates that C4.5 would perform better.
Disclaimer: I am co-author of the latter paper.