Choosing a good k

It is important to pick a proper value of the hyperparameter k, since a good choice can improve a model's performance, while a bad one can degrade it. One popular rule of thumb is to take the square root of the number of training samples, and many popular software packages use this heuristic as the default k. Unfortunately, it doesn't always work well, because the optimal k depends on the data and on the distance metric being used.
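The rule of thumb can be sketched in a few lines. Note that rounding k up to the nearest odd number is an extra, commonly seen tweak (it avoids voting ties in binary classification) and is not part of the bare square-root heuristic:

```python
import math

def sqrt_k(n_samples: int) -> int:
    """Square-root rule of thumb: k = round(sqrt(n)).
    Forcing k odd (an optional tweak) avoids ties in binary classification."""
    k = max(1, round(math.sqrt(n_samples)))
    return k if k % 2 == 1 else k + 1

# For 1,000 training samples the heuristic suggests k = 33.
print(sqrt_k(1000))
```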

There is no mathematically grounded way to come up with the optimal number of neighbors in advance. The only option is to scan through a range of ks and choose the best one according to some performance metric. You can use any performance metric described in the previous chapter: accuracy, F1, and so on. Cross-validation is especially useful when data is scarce.
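A minimal sketch of such a scan, using a toy pure-Python KNN classifier and leave-one-out cross-validation with accuracy as the metric (the dataset and helper names here are illustrative, not from the original text):

```python
import math
from collections import Counter

def knn_predict(train, labels, point, k):
    """Classify `point` by majority vote among its k nearest
    training points (Euclidean distance)."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], point))[:k]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

def loo_accuracy(data, labels, k):
    """Leave-one-out cross-validation accuracy for a given k."""
    correct = 0
    for i in range(len(data)):
        train = data[:i] + data[i + 1:]
        train_labels = labels[:i] + labels[i + 1:]
        if knn_predict(train, train_labels, data[i], k) == labels[i]:
            correct += 1
    return correct / len(data)

# Toy 2-D dataset: two well-separated clusters.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.3), (0.3, 0.2),
        (2.0, 2.1), (2.2, 2.0), (2.1, 2.3), (2.3, 2.2)]
labels = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']

# Scan k = 1..5 and keep the value with the best cross-validated accuracy.
best_k = max(range(1, 6), key=lambda k: loo_accuracy(data, labels, k))
```

On real data you would use a proper cross-validation split and a held-out test set rather than leave-one-out on eight points; the loop structure stays the same.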

In fact, there is a variation of KNN that doesn't require k at all. The idea is to have the algorithm search for neighbors within a ball of a fixed radius. The effective k then differs from point to point, depending on the local density of the data. This variation is known as radius-based neighbor learning. It suffers from the n-ball volume problem (see the next section): the more features you have, the bigger the radius must be to catch at least one neighbor.
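A minimal sketch of radius-based classification, again in pure Python with an illustrative dataset; note that the ball around a query point can be empty, a case the caller must handle:

```python
import math
from collections import Counter

def radius_predict(train, labels, point, radius):
    """Majority vote among all training points within `radius` of `point`.
    Returns None when the ball contains no neighbors at all."""
    votes = Counter(labels[i] for i, p in enumerate(train)
                    if math.dist(p, point) <= radius)
    return votes.most_common(1)[0][0] if votes else None

train = [(0.0, 0.0), (0.1, 0.1), (1.0, 1.0)]
labels = ['a', 'a', 'b']

print(radius_predict(train, labels, (0.05, 0.05), 0.2))  # two 'a' neighbors fall in the ball
print(radius_predict(train, labels, (5.0, 5.0), 0.2))    # empty ball -> None
```

The empty-ball case is exactly the n-ball volume problem in miniature: as dimensionality grows, a fixed radius captures fewer and fewer points.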