Hyperparameter tuning with scikit-optimize

In machine learning, a hyperparameter is a parameter whose value is set before the training process begins. For example, the learning rate of a gradient boosting model and the size of the hidden layer of a multilayer perceptron are both hyperparameters. By contrast, the values of a model's other parameters are derived via training. Hyperparameter selection is important because it can have a huge effect on the model's performance.
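To make the distinction concrete, here is a minimal sketch; the synthetic dataset and the particular values chosen are illustrative assumptions, not recommendations:

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, random_state=0)

# learning_rate and n_estimators are hyperparameters: fixed before training.
model = GradientBoostingClassifier(learning_rate=0.1, n_estimators=100)
model.fit(X, y)

# By contrast, the fitted trees stored in model.estimators_ are parameters
# derived from the data during fit().
print(model.estimators_.shape)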

The most basic approach to hyperparameter tuning is called a grid search. In this method, you specify a set of candidate values for each hyperparameter, then train and evaluate a model for every possible combination, keeping the one that performs best. This brute-force approach is comprehensive but computationally intensive. More sophisticated methods exist. In this recipe, you will learn how to use Bayesian optimization over hyperparameters using scikit-optimize. In contrast to a grid search, Bayesian optimization does not try out every combination; instead, it evaluates a fixed budget of parameter settings drawn from specified search spaces, using the results of earlier evaluations to steer later ones toward promising regions. More details can be found at https://scikit-optimize.github.io/notebooks/bayesian-optimization.html.
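As a preview of the workflow, the following is a minimal sketch using scikit-optimize's BayesSearchCV, which wraps this search strategy in a scikit-learn-style interface; the dataset, search-space bounds, and budget (n_iter) are illustrative assumptions:

from skopt import BayesSearchCV
from skopt.space import Integer, Real
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, random_state=0)

search = BayesSearchCV(
    estimator=GradientBoostingClassifier(),
    search_spaces={
        "learning_rate": Real(0.01, 0.3, prior="log-uniform"),
        "n_estimators": Integer(50, 300),
        "max_depth": Integer(2, 6),
    },
    n_iter=20,  # fixed budget of settings to evaluate, far fewer than a full grid
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)

Each of the n_iter evaluations informs the model of the objective that proposes the next setting to try, which is why a budget much smaller than an exhaustive grid can still locate good hyperparameter values.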
