Solved – Using randomized search algorithms to find weights for a neural network

neural-networks, optimization

I am currently taking a class in machine learning. I had mentioned to a coworker that we were learning about randomized optimization, specifically randomized hill climbing (RHC). He said that it was possible to use RHC instead of backpropagation to find good weights for a neural network. Unfortunately, we didn't get to finish the conversation, and it's been bugging me all weekend. Does anyone know what he meant by that?

I've been playing around with Weka, using the weka.classifiers.bayes.net.search.local.K2 search algorithm (a score-based structure search for Bayesian networks)… but I'm just not seeing how that would give me weights that could be used by a neural network, such as the multilayer perceptron.

Best Answer

Recent papers on the saddle point problem suggest that in high-dimensional loss surfaces there is nearly always some small step you can take that heads toward a better solution. It seems to me that something like randomized hill climbing (RHC) is well suited to exploiting that. In practice I am starting to see this in my own code too. For deep neural networks I wouldn't be surprised if RHC and backpropagation were comparable in the amount of computation required.
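
To make the idea concrete, here is a minimal sketch of what your coworker likely meant: flatten all of the network's weights and biases into one vector, treat the training loss as the score, and let RHC repeatedly perturb that vector, keeping only improvements. This is in Python/NumPy rather than Weka, and the toy XOR task, layer sizes, perturbation scale, and iteration count are all illustrative choices of mine, not anything from the course or the Weka API.

```python
# Randomized hill climbing (RHC) over the weight vector of a tiny MLP.
# No gradients are computed anywhere; the network is a black box that
# maps a flat weight vector to a loss value.
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR data (purely illustrative).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_IN, N_HID, N_OUT = 2, 4, 1
N_WEIGHTS = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # weights + biases

def unpack(w):
    """Slice the flat vector into the MLP's weight matrices and biases."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    return W1, b1, W2, b2

def loss(w):
    """Mean squared error of the MLP defined by flat weight vector w."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)                 # hidden layer
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    return float(np.mean((out.ravel() - y) ** 2))

# RHC loop: perturb the current weights with small Gaussian noise and
# keep the neighbor only if it strictly improves the loss (greedy accept).
w = rng.normal(scale=0.5, size=N_WEIGHTS)
best = loss(w)
for step in range(20000):
    candidate = w + rng.normal(scale=0.1, size=N_WEIGHTS)
    c_loss = loss(candidate)
    if c_loss < best:
        w, best = candidate, c_loss

print(f"final MSE: {best:.4f}")  # should shrink toward 0 on this toy problem
```

The key design point is that RHC only needs loss evaluations, never gradients, so it also works for non-differentiable activations or loss functions; the usual price is many more forward passes than backpropagation would need, which is exactly the trade-off at issue above. A practical variant adds random restarts so a run that stalls on a plateau can begin again from fresh weights.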