
Choosing a good k

It is important to pick a proper value for the hyperparameter k, since a bad choice can degrade a model's performance just as a good one can improve it. One popular rule of thumb is to take the square root of the number of training samples, and many popular software packages use this heuristic as the default value of k. Unfortunately, it doesn't always work well, because datasets and distance metrics differ.
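As a minimal sketch of the heuristic in Python (X_train here is a hypothetical stand-in for your training set):

import math

X_train = [[0.0, 1.0]] * 1500  # hypothetical training set of 1,500 samples
k = round(math.sqrt(len(X_train)))  # square-root rule of thumb
print(k)  # 39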

There is no mathematically grounded way to determine the optimal number of neighbors in advance. The only option is to scan through a range of k values and choose the best one according to some performance metric. You can use any of the performance metrics described in the previous chapter: accuracy, F1, and so on. Cross-validation is especially useful when data is scarce.
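Here is one possible sketch of such a scan using scikit-learn; the Iris dataset, the 1-30 range, and 5-fold cross-validated accuracy are arbitrary choices for illustration, not part of the method itself:

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Evaluate each candidate k with 5-fold cross-validated accuracy
# and keep the one with the best mean score.
scores = {}
for k in range(1, 31):
    model = KNeighborsClassifier(n_neighbors=k)
    scores[k] = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()

best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])

When data is scarce, raising the number of folds (cv) squeezes more use out of the available samples at the cost of extra computation.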

In fact, there is a variation of KNN that doesn't require k at all. The idea is to have the algorithm take the radius of a ball and search for neighbors within it; the effective k then differs from point to point, depending on the local density. This variation of the algorithm is known as radius-based neighbor learning. It suffers from the n-ball volume problem (see the next section): the more features you have, the larger the radius must be to capture at least one neighbor.
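If your stack is scikit-learn, this variation is available as RadiusNeighborsClassifier; a minimal sketch follows, with the radius of 1.0 chosen arbitrarily for illustration:

from sklearn.datasets import load_iris
from sklearn.neighbors import RadiusNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Instead of a fixed k, classify using all neighbors within a fixed radius;
# the effective k now varies with the local density of points.
# outlier_label decides what to predict for query points that have
# no neighbors inside the radius at all.
model = RadiusNeighborsClassifier(radius=1.0, outlier_label="most_frequent")
model.fit(X, y)
print(model.predict(X[:5]))

The no-neighbors case handled by outlier_label is exactly the failure mode that the n-ball volume problem makes more likely as the number of features grows.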
