Solved – Deep learning vs. Decision trees and boosting methods

adaboost, cart, deep-learning, machine-learning, restricted-boltzmann-machine

I am looking for papers or texts that compare and discuss (either empirically or theoretically):

- decision tree methods (e.g. CART) and boosting methods (e.g. AdaBoost)

with

- deep learning methods (e.g. restricted Boltzmann machines and stacked autoencoders).

More specifically, does anybody know of a text that discusses or compares these two blocks of ML methods in terms of speed, accuracy or convergence? Also, I am looking for texts that explain or summarize the differences (e.g. pros and cons) between the models or methods in the second block.

Any pointers or answers addressing such comparisons directly would be greatly appreciated.

Best Answer

Can you be more specific about the types of data you are looking at? This will in part determine what type of algorithm will converge the fastest.

I'm also not sure how to compare methods like boosting and DL, since boosting is really a family of meta-algorithms wrapped around a base learner rather than a single method. Which algorithms are you using as the base learners for the boosting?
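
To make that point concrete, here is a minimal scikit-learn sketch (assuming scikit-learn is installed; the synthetic dataset, tree depths, and number of estimators are arbitrary illustrative choices, not a recommendation) showing AdaBoost wrapped around two different base learners:

```python
# AdaBoost is a meta-algorithm: its behaviour depends heavily on the base
# learner it wraps. Here we compare decision stumps vs. depth-3 CART trees
# on a toy dataset (all settings are illustrative, not tuned).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Depth-1 trees (stumps) as the weak learner.
stump_boost = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                                 n_estimators=200, random_state=0)
print("AdaBoost + stumps :", cross_val_score(stump_boost, X, y, cv=5).mean())

# Deeper trees as the base learner; accuracy and training time both change.
tree_boost = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                                n_estimators=200, random_state=0)
print("AdaBoost + depth-3:", cross_val_score(tree_boost, X, y, cv=5).mean())
```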

In general, DL techniques can be described as layers of encoders/decoders. Unsupervised pre-training works layer by layer: each layer encodes the signal, decodes it again, and is trained to reduce the reconstruction error. Fine-tuning can then be used to get better performance (e.g. with stacked denoising autoencoders you can fine-tune the whole network with back-propagation).
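
To make the encode/decode/reconstruction-error loop concrete, here is a minimal NumPy sketch of greedy layer-wise pre-training with denoising autoencoder layers. The layer sizes, masking-noise level, learning rate, and tied-weight choice are illustrative assumptions, not taken from any particular paper:

```python
# A minimal sketch of greedy layer-wise pre-training with a denoising
# autoencoder layer: corrupt the input, encode it, decode it, and update
# the weights to reduce reconstruction error against the clean input.
# (Illustrative only; sizes, noise level, and learning rate are arbitrary.)
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DenoisingAutoencoderLayer:
    def __init__(self, n_visible, n_hidden, noise=0.3, lr=0.1):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_hid = np.zeros(n_hidden)   # encoder bias
        self.b_vis = np.zeros(n_visible)  # decoder bias (weights tied: W.T)
        self.noise, self.lr = noise, lr

    def encode(self, x):
        return sigmoid(x @ self.W + self.b_hid)

    def decode(self, h):
        return sigmoid(h @ self.W.T + self.b_vis)

    def pretrain_step(self, x):
        # 1. Corrupt the input (masking noise), 2. encode, 3. decode.
        mask = rng.random(x.shape) > self.noise
        x_tilde = x * mask
        h = self.encode(x_tilde)
        x_hat = self.decode(h)

        # 4. Measure reconstruction error against the *clean* input.
        err = x_hat - x
        recon_error = np.mean(err ** 2)

        # Squared-error gradients with sigmoid units and tied weights
        # (constant factors folded into the learning rate).
        d_vis = err * x_hat * (1 - x_hat)          # at the decoder output
        d_hid = (d_vis @ self.W) * h * (1 - h)     # back into the encoder
        grad_W = x_tilde.T @ d_hid + d_vis.T @ h
        self.W -= self.lr * grad_W / x.shape[0]
        self.b_hid -= self.lr * d_hid.mean(axis=0)
        self.b_vis -= self.lr * d_vis.mean(axis=0)
        return recon_error

# Greedy stacking: pre-train layer 1 on the raw data, then feed its hidden
# codes to layer 2, and so on. Fine-tuning (e.g. supervised back-propagation
# through the whole stack) would follow this stage.
X = rng.random((256, 64))                  # toy data in [0, 1]
layers = [DenoisingAutoencoderLayer(64, 32), DenoisingAutoencoderLayer(32, 16)]
inp = X
for i, layer in enumerate(layers):
    for epoch in range(50):
        err = layer.pretrain_step(inp)
    print(f"layer {i}: reconstruction MSE = {err:.4f}")
    inp = layer.encode(inp)                # codes feed the next layer
```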

One good starting point for DL theory is:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.73.795&rep=rep1&type=pdf

as well as these:

http://portal.acm.org/citation.cfm?id=1756025

(sorry, I had to delete the last link because of the spam filter)

I didn't include any information on RBMs, but they are closely related (though I personally found them a little more difficult to understand at first).
