- Machine Learning for Cybersecurity Cookbook
- Emmanuel Tsukerman
Hyperparameter tuning with scikit-optimize
In machine learning, a hyperparameter is a parameter whose value is set before the training process begins. For example, the learning rate of a gradient boosting model and the size of the hidden layer of a multilayer perceptron are both hyperparameters. By contrast, the values of the model's other parameters are derived via training. Hyperparameter selection is important because it can have a huge effect on the model's performance.
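To make the distinction concrete, here is a minimal sketch using scikit-learn's GradientBoostingClassifier on an illustrative toy dataset (not code from this recipe): the learning rate is fixed before fit() is called, while the ensemble's trees are derived during training.

```python
# Minimal sketch: hyperparameters are chosen before training, while the
# model's parameters are learned from data. The toy dataset is illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=200, random_state=0)

# learning_rate is a hyperparameter: we set it before training starts.
clf = GradientBoostingClassifier(learning_rate=0.1, n_estimators=100)

# The individual trees of the ensemble (the model's learned parameters)
# are derived here, during training.
clf.fit(X, y)
```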
The most basic approach to hyperparameter tuning is called a grid search. In this method, you specify a range of potential values for each hyperparameter and then exhaustively evaluate every combination, keeping the one that performs best. This brute-force approach is comprehensive but computationally intensive. More sophisticated methods exist. In this recipe, you will learn how to use Bayesian optimization over hyperparameters using scikit-optimize. In contrast to a basic grid search, Bayesian optimization does not try out all parameter values; instead, a fixed number of parameter settings is sampled from specified distributions, with each new setting chosen based on the performance of those evaluated so far. More details can be found at https://scikit-optimize.github.io/notebooks/bayesian-optimization.html.
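The following is a minimal sketch of Bayesian hyperparameter search with scikit-optimize's BayesSearchCV; the classifier, the search ranges, and the toy dataset are illustrative assumptions, not the recipe's own setup.

```python
# Minimal sketch of Bayesian optimization over hyperparameters with
# scikit-optimize. Only n_iter parameter settings are evaluated, sampled
# from the specified search space, rather than every grid combination.
from skopt import BayesSearchCV
from skopt.space import Integer, Real
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, random_state=0)

opt = BayesSearchCV(
    GradientBoostingClassifier(random_state=0),
    {
        # Search space: ranges/distributions, not an exhaustive grid.
        "learning_rate": Real(0.01, 0.3, prior="log-uniform"),
        "n_estimators": Integer(50, 300),
        "max_depth": Integer(2, 6),
    },
    n_iter=20,      # number of parameter settings sampled
    cv=3,           # 3-fold cross-validation for each setting
    random_state=0,
)
opt.fit(X, y)

print(opt.best_params_)
print(opt.best_score_)
```

A full grid over the same ranges would require scoring every combination; here only 20 settings are scored, each selected using the results of the earlier evaluations.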