- Machine Learning with Swift
- Alexander Sosnovshchenko
Choosing a good k
It is important to pick a proper value of the hyperparameter k, since a good choice can improve a model's performance and a bad one can degrade it. One popular rule of thumb is to take the square root of the number of training samples; many popular software packages use this heuristic as the default k. Unfortunately, it doesn't always work well, because datasets and distance metrics differ.
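The square-root rule of thumb can be sketched in a few lines. This is a minimal illustration, not taken from the book's code; the helper name `defaultK` and the rounding to an odd value (which avoids ties in binary classification) are my own choices:

```swift
import Foundation

// Rule-of-thumb default: k ≈ √n, nudged to the nearest odd integer
// so that a binary vote among neighbors cannot end in a tie.
func defaultK(trainingSampleCount n: Int) -> Int {
    let k = Int(Double(n).squareRoot().rounded())
    return k % 2 == 0 ? k + 1 : max(k, 1)
}

print(defaultK(trainingSampleCount: 100))  // 11
print(defaultK(trainingSampleCount: 400))  // 21
```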
There is no mathematically grounded way to come up with the optimal number of neighbors from the very beginning. The only option is to scan through a range of k values and choose the best one according to some performance metric. You can use any performance metric we've already described in the previous chapter: accuracy, F1 score, and so on. Cross-validation is especially useful when data is scarce.
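Such a scan is just a grid search over candidate k values. A hedged sketch, assuming some scoring closure (for instance, cross-validated accuracy of a KNN model trained with that k) that the caller supplies; the function name `bestK` and the toy scores are illustrative, not from the book:

```swift
import Foundation

// Pick the candidate k with the highest validation score.
// `score` stands in for "fit KNN with this k, return CV accuracy".
func bestK(candidates: [Int], score: (Int) -> Double) -> Int {
    candidates.max { score($0) < score($1) }!
}

// Toy scores standing in for cross-validated accuracy per k.
let toyScores = [1: 0.81, 3: 0.88, 5: 0.92, 7: 0.90, 9: 0.85]
let k = bestK(candidates: [1, 3, 5, 7, 9]) { toyScores[$0]! }
print(k)  // 5
```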
In fact, there is a variation of KNN that doesn't require k at all. The idea is to have the algorithm search for neighbors within a ball of a given radius. The effective k is then different for each point, depending on the local density of points. This variation of the algorithm is known as radius-based neighbor learning. It suffers from the n-ball volume problem (see the next section): the more features you have, the larger the radius must be to catch at least one neighbor.
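The radius-based idea fits in one line of Swift. A one-dimensional sketch of my own (the function and variable names are illustrative): note how the neighbor count varies with local density, and how an isolated query could end up with no neighbors at all:

```swift
import Foundation

// Radius-based neighbor search (1-D sketch): instead of a fixed k,
// collect every training point within distance `radius` of the query.
func neighbors(of query: Double, in points: [Double], radius: Double) -> [Double] {
    points.filter { abs($0 - query) <= radius }
}

let train = [0.0, 0.1, 0.2, 5.0]
print(neighbors(of: 0.15, in: train, radius: 0.2))  // [0.0, 0.1, 0.2] — dense region, 3 neighbors
print(neighbors(of: 5.0, in: train, radius: 0.2))   // [5.0] — sparse region, 1 neighbor
```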