Solved – the difference between manual search and grid search for hyperparameters

hyperparameter, machine learning, optimization

I am reading about hyperparameter optimization for machine learning models, specifically the paper "Practical Recommendations for Gradient-Based Training of Deep Architectures". The author discusses manual search, grid search, and random search. What is the difference between manual search and grid search? As I understand it, in both cases we define a region of interest and select values from it to evaluate our model on a validation set. The best hyperparameter values are chosen by minimizing some criterion, for example classification error on the validation set. Maybe in grid search we simply try more values? Any explanation?

Best Answer

The difference isn't especially deep. Grid search pre-specifies a set of hyperparameter tuples up front and tries all of them. In manual search, a human adjusts the parameters iteratively, possibly incorporating knowledge about how those adjustments will influence the behavior of the model and the estimation procedure.
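To make the grid-search half concrete, here is a minimal sketch. The `validation_error` function is a hypothetical stand-in: in practice it would train the model with the given hyperparameters and score it on the validation set. The hyperparameter names and grid values are illustrative, not from the paper.

```python
import itertools

# Hypothetical stand-in for "train the model, then measure error on the
# validation set". Here we just pretend the optimum is lr=0.1, units=64.
def validation_error(learning_rate, num_units):
    return abs(learning_rate - 0.1) + abs(num_units - 64) / 100

# Grid search: pre-specify every candidate value up front...
grid = {
    "learning_rate": [0.01, 0.1, 1.0],
    "num_units": [32, 64, 128],
}

# ...then exhaustively try all combinations and keep the best one.
best = min(
    itertools.product(grid["learning_rate"], grid["num_units"]),
    key=lambda combo: validation_error(*combo),
)
print(best)  # the (learning_rate, num_units) pair with lowest validation error
```

Manual search, by contrast, has no such pre-specified grid: after each run, the human inspects the result and decides where to look next.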