The following are the three primary hyperparameter optimization algorithms:
Grid search defines a family of models over a grid of hyperparameter values. From the values supplied, it trains and evaluates a model for every possible combination.
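A minimal sketch of grid search, assuming a toy `validation_error` function standing in for training a model and scoring it on a validation set:

```python
import itertools

# Hypothetical stand-in for "train the model, return validation error".
# Its minimum is at learning_rate=0.1, num_trees=100 by construction.
def validation_error(learning_rate, num_trees):
    return (learning_rate - 0.1) ** 2 + (num_trees - 100) ** 2 / 10000

grid = {
    "learning_rate": [0.01, 0.1, 0.5],
    "num_trees": [50, 100, 200],
}

best_params, best_score = None, float("inf")
# Evaluate every combination in the Cartesian product of the grid (9 here).
for combo in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), combo))
    score = validation_error(**params)
    if score < best_score:
        best_params, best_score = params, score

print(best_params)  # {'learning_rate': 0.1, 'num_trees': 100}
```

Note the cost: the number of evaluations is the product of the grid sizes, which grows exponentially with the number of hyperparameters.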
Random search samples hyperparameter sets at random from specified probability distributions and evaluates each one. The model is trained only a fixed number of times, regardless of how large the search space is.
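A sketch of random search under the same assumptions (the `validation_error` function is a hypothetical stand-in for model training); the key difference from grid search is the fixed evaluation budget and the per-parameter sampling distributions:

```python
import random

# Hypothetical stand-in for "train the model, return validation error".
def validation_error(learning_rate, num_trees):
    return (learning_rate - 0.1) ** 2 + (num_trees - 100) ** 2 / 10000

rng = random.Random(0)
n_iterations = 20  # fixed budget: the model is evaluated exactly this many times

best_params, best_score = None, float("inf")
for _ in range(n_iterations):
    # Sample each hyperparameter from its own distribution.
    params = {
        "learning_rate": 10 ** rng.uniform(-3, 0),  # log-uniform on [1e-3, 1]
        "num_trees": rng.randint(10, 300),          # uniform integers
    }
    score = validation_error(**params)
    if score < best_score:
        best_params, best_score = params, score

print(best_params, best_score)
```

Sampling on a log scale for parameters like learning rates is a common choice, since useful values often span several orders of magnitude.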
Bayesian optimization uses Bayes' theorem to guide the search for the minimum or maximum of an objective function. It is best suited to objective functions that are difficult to evaluate because they are complicated, noisy, or costly.
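A compact sketch of the Bayesian optimization loop, assuming a toy one-dimensional objective: a Gaussian-process surrogate (RBF kernel) is fit to the points evaluated so far, and a lower-confidence-bound acquisition function picks the next point to try. Real implementations tune kernel hyperparameters and use more robust linear algebra; this only illustrates the surrogate-then-acquire structure.

```python
import numpy as np

def objective(x):
    # Toy "expensive" objective to minimize (minimum near x ~ 0.42).
    return (x - 0.3) ** 2 + 0.05 * np.sin(10 * x)

def rbf_kernel(a, b, length=0.2):
    # Squared-exponential kernel between two sets of 1-D points.
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * length ** 2))

def gp_posterior(x_query, x_obs, y_obs, noise=1e-6):
    # Gaussian-process posterior mean and std dev at the query points.
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    K_s = rbf_kernel(x_query, x_obs)
    K_inv = np.linalg.inv(K)
    mean = K_s @ K_inv @ y_obs
    var = 1.0 - np.sum((K_s @ K_inv) * K_s, axis=1)
    return mean, np.sqrt(np.maximum(var, 1e-12))

rng = np.random.default_rng(0)
x_obs = rng.uniform(0, 1, size=3)      # a few initial random evaluations
y_obs = objective(x_obs)
candidates = np.linspace(0, 1, 200)

for _ in range(10):                    # small, fixed evaluation budget
    mean, std = gp_posterior(candidates, x_obs, y_obs)
    # Lower-confidence bound: favor low predicted mean and high uncertainty.
    lcb = mean - 2.0 * std
    x_next = candidates[np.argmin(lcb)]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

best = x_obs[np.argmin(y_obs)]
print(best)  # best hyperparameter value found within the budget
```

Each new evaluation updates the surrogate, so the search concentrates on regions the model believes are promising while still exploring uncertain ones, which is why the method pays off when each objective evaluation is expensive.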