Basically, Decision Trees are pure classification techniques. These techniques aim at labelling records of unknown class by making use of their
features. They map the set of record features $\mathcal{F} = \{F_1 , \dots, F_m \}$ (attributes, variables) to the class attribute $C$ (target variable), the object of the classification. The relationship between $\mathcal{F}$ and $C$ is learned from a set of labelled records, called the training set. The ultimate purpose of a classification model is to minimise the misclassification error on unlabelled records, i.e. how often the class predicted by the model differs from the real one. The features $F_i$ can be categorical or continuous.
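As a minimal sketch of these ideas, a fitted decision tree is just a set of nested feature tests; the features `F1`, `F2`, the thresholds, and the toy training set below are all hypothetical, chosen only to illustrate how the misclassification error is measured:

```r
# A toy hand-written decision tree (hypothetical features and thresholds)
predict_class <- function(record) {
  if (record$F1 == "high") "yes" else if (record$F2 > 10) "yes" else "no"
}

# A small labelled training set: features F1, F2 and class attribute C
training <- list(
  list(F1 = "high", F2 = 5,  C = "yes"),
  list(F1 = "low",  F2 = 20, C = "yes"),
  list(F1 = "low",  F2 = 3,  C = "no"),
  list(F1 = "high", F2 = 1,  C = "no")
)

# Misclassification error: fraction of records whose predicted class
# differs from the real one
preds <- sapply(training, predict_class)
truth <- sapply(training, function(r) r$C)
mean(preds != truth)  # 0.25: one of the four records is misclassified
```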
The first applications of association analysis were in market basket analysis. In these applications you are interested in finding associations between items, with no particular focus on a target one. The datasets commonly used are transactional: a collection of transactions where each of them contains a set of items. For example:
$$ t_1 = \{i_1,i_2 \} \\
t_2 = \{i_1, i_3, i_4, i_5 \} \\
t_3 = \{i_2, i_3, i_4, i_5 \} \\
\vdots \\
t_n = \{ i_2, i_3, i_4, i_5 \}
$$
You are interested in finding out rules such as
$$ \{ i_3, i_5 \} \rightarrow \{ i_4 \} $$
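A rule like this is judged by its support and confidence. On a toy version of the transactions above (using just the three explicit ones), both can be computed directly; the helper functions below are illustrative, not part of any package:

```r
# Toy transactional dataset (the three explicit transactions above)
txs <- list(
  t1 = c("i1", "i2"),
  t2 = c("i1", "i3", "i4", "i5"),
  t3 = c("i2", "i3", "i4", "i5")
)

# Support of an itemset: fraction of transactions containing all its items
support <- function(itemset, transactions) {
  mean(sapply(transactions, function(t) all(itemset %in% t)))
}

# Confidence of X -> Y: support(X union Y) / support(X)
confidence <- function(lhs, rhs, transactions) {
  support(c(lhs, rhs), transactions) / support(lhs, transactions)
}

support(c("i3", "i5", "i4"), txs)     # 2/3: t2 and t3 contain the itemset
confidence(c("i3", "i5"), "i4", txs)  # 1: every transaction with {i3,i5} also has i4
```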
It turns out that you can use association analysis for some specific classification tasks, for example when all your features are categorical. You just have to encode the features as items, but this is not what association analysis was born for.
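A minimal sketch of that encoding, assuming a small hypothetical table of categorical features plus the class attribute: each record becomes a transaction of `Feature=value` items, which an association rule miner can then consume.

```r
# Hypothetical records with categorical features and a class label
records <- data.frame(
  Outlook = c("Sunny", "Rain", "Sunny"),
  Windy   = c("Yes",  "No",   "No"),
  Play    = c("No",   "Yes",  "Yes")   # class attribute C
)

# Turn each record into a transaction of "Feature=value" items
as_items <- function(df) {
  lapply(seq_len(nrow(df)), function(i) {
    paste(names(df), unlist(df[i, ]), sep = "=")
  })
}

as_items(records)[[1]]  # "Outlook=Sunny" "Windy=Yes" "Play=No"
```

Rules whose right-hand side is a `Play=...` item then act as classification rules.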
OK, pretty straightforward now that I know what the problem was.
For anybody who encounters a similar problem: confidence should (of course) be set at the ruleInduction() step, not when finding the frequent itemsets; only support is relevant there. Because I didn't give a value for confidence at the ruleInduction() step, the default confidence of 0.8 was used and thus fewer rules were found.
So doing:

```r
txs <- as(inputDataTable, "transactions")
itemsets <- apriori(txs, parameter = list(support = 0.05, target = "frequent itemsets"))
rules <- ruleInduction(itemsets, confidence = 0.7)
```

and

```r
txs <- as(inputDataTable, "transactions")
rules <- apriori(txs, parameter = list(support = 0.05, confidence = 0.7, target = "rules"))
```

leads to the same result. :)
Best Answer
Here is a good description: http://www.slideshare.net/wanaezwani/apriori-and-eclat-algorithm-in-association-rule-mining In particular, Apriori was probably the first association rule mining algorithm, and it is computationally expensive. This led to the introduction of further, faster algorithms such as ECLAT.