
Hyperparameter tuning with scikit-optimize

In machine learning, a hyperparameter is a parameter whose value is set before the training process begins. For example, the learning rate of a gradient boosting model and the size of the hidden layer of a multilayer perceptron are both hyperparameters. By contrast, the values of a model's other parameters are derived during training. Hyperparameter selection matters because it can have a huge effect on the model's performance.
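
To make the distinction concrete, here is a minimal sketch using two scikit-learn estimators (the specific models and values are illustrative choices, not part of this recipe): hyperparameters are fixed when the model is constructed, while parameters are learned when it is fit.

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier

# Hyperparameters are chosen up front, at construction time.
gb = GradientBoostingClassifier(learning_rate=0.1)
mlp = MLPClassifier(hidden_layer_sizes=(100,))

# Parameters, such as the ensemble's trees or the network's weights,
# are only derived later, during training via fit(X, y).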

The most basic approach to hyperparameter tuning is called a grid search. In this method, you specify a range of candidate values for each hyperparameter and then try every combination until you find the best one. This brute-force approach is comprehensive but computationally expensive. More sophisticated methods exist. In this recipe, you will learn how to use Bayesian optimization over hyperparameters using scikit-optimize. In contrast to a basic grid search, Bayesian optimization does not try out all parameter combinations; instead, a fixed number of parameter settings is sampled from specified distributions, with a probabilistic model of the objective function guiding the search toward promising regions of the space. More details can be found at https://scikit-optimize.github.io/notebooks/bayesian-optimization.html.
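
As a minimal sketch of the idea, the snippet below uses scikit-optimize's BayesSearchCV, a drop-in analogue of scikit-learn's GridSearchCV; the dataset, estimator, and search ranges here are illustrative assumptions rather than this recipe's exact setup.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from skopt import BayesSearchCV
from skopt.space import Integer, Real

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each hyperparameter gets a search space rather than an explicit grid of values.
search_spaces = {
    "learning_rate": Real(1e-3, 1.0, prior="log-uniform"),
    "n_estimators": Integer(50, 300),
    "max_depth": Integer(1, 6),
}

# n_iter bounds the number of settings evaluated; each new candidate is
# proposed by a surrogate model fitted to the scores observed so far.
opt = BayesSearchCV(
    GradientBoostingClassifier(random_state=0),
    search_spaces,
    n_iter=30,
    cv=3,
    random_state=0,
)
opt.fit(X_train, y_train)

print("Best parameters:", opt.best_params_)
print("Test accuracy:", opt.score(X_test, y_test))

With n_iter=30, only 30 settings are evaluated, as opposed to every combination in a grid; by default, BayesSearchCV uses a Gaussian process surrogate to decide which setting to try next.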
