Machine Learning with Swift
Alexander Sosnovshchenko
Implementing KNN in Swift
The KNN classifier works with virtually any type of data, since you define the distance metric for your data points yourself. That's why we define it as a generic structure, parameterized with types for features and labels. Labels should conform to the Hashable protocol, as we're going to use them as dictionary keys:
struct kNN<X, Y> where Y: Hashable { ... }
KNN has two hyperparameters: k, the number of neighbors (var k: Int), and the distance metric. We'll define the metric elsewhere and pass it in during initialization. The metric is a function that returns the distance between any two samples x1 and x2 as a Double:
var distanceMetric: (_ x1: X, _ x2: X) -> Double
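As an illustration, here is a minimal sketch of such a metric, assuming the features are plain [Double] vectors; the function name euclidean and this choice of feature type are assumptions for the example, not part of the book's code:

import Foundation

// Euclidean distance between two feature vectors of equal length.
func euclidean(_ x1: [Double], _ x2: [Double]) -> Double {
    precondition(x1.count == x2.count, "Feature vectors must have the same length.")
    return sqrt(zip(x1, x2).map { ($0 - $1) * ($0 - $1) }.reduce(0, +))
}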
During initialization, we just record the hyperparameters inside our structure. The definition of init looks like this:
init(k: Int, distanceMetric: @escaping (_ x1: X, _ x2: X) -> Double) {
    self.k = k
    self.distanceMetric = distanceMetric
}
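With the euclidean sketch from above, constructing a classifier might look like this; the explicit generic arguments pin the feature type to [Double] and the label type to String:

// var, because train(X:y:) is a mutating method.
var model = kNN<[Double], String>(k: 3, distanceMetric: euclidean)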
KNN stores all its training data points. We use an array of (features, label) pairs for this purpose:
private var data: [(X, Y)] = []
As usual with supervised learning models, we'll stick to an interface with two methods, train and predict, which reflect the two phases of a supervised algorithm's life. In the case of KNN, the train method just saves the data points to use later in the predict method:
mutating func train(X: [X], y: [Y]) {
    data.append(contentsOf: zip(X, y))
}
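Continuing the sketch started above, training just hands over points and labels; the toy data here is purely illustrative:

// Three labeled points: two "cat"s close together and one distant "dog".
model.train(X: [[1.0, 1.0], [1.2, 0.9], [8.0, 9.0]],
            y: ["cat", "cat", "dog"])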
The predict method takes a data point and predicts its label:
func predict(x: X) -> Y? {
    assert(data.count > 0, "Please use the train() method first to provide training data.")
    assert(k > 0, "Error: k must be greater than 0.")
For this, we iterate through all samples in the training dataset and compare them with the input sample x. We use (distance, label) tuples to keep track of the distance to each training sample. After this, we sort all the samples by distance in ascending order, so the nearest neighbors come first, and take the first k elements with prefix:
    let tuples = data
        .map { (distanceMetric(x, $0.0), $0.1) }
        .sorted { $0.0 < $1.0 }
        // prefix(_:) is safe even when k exceeds the number of stored
        // samples, unlike prefix(upTo:), which would trap in that case.
        .prefix(k)
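For intuition: with the toy data above and k = 3, the slice for a query point near the two "cat" samples might look approximately like this (the distances are illustrative):

// A possible value of tuples for such a query:
// [(0.10, "cat"), (0.14, "cat"), (10.56, "dog")]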
Now we arrange majority voting among the top k samples. We count the frequency of each label and sort the labels in descending order of frequency. Note that NSCountedSet comes from Foundation, so the file needs import Foundation:
    let countedSet = NSCountedSet(array: tuples.map { $0.1 })
    let result = countedSet.allObjects
        .sorted { countedSet.count(for: $0) > countedSet.count(for: $1) }
        .first
    return result as? Y
}
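As a design note, the same vote can be taken without Foundation by counting labels in a plain Swift Dictionary. The following sketch is a drop-in replacement for the NSCountedSet lines above (it reuses tuples from the surrounding code); the behavior is equivalent except that ties between equally frequent labels may break differently:

    // Count votes per label, then return the label with the most votes.
    let counts = Dictionary(tuples.map { ($0.1, 1) }, uniquingKeysWith: +)
    return counts.max { $0.value < $1.value }?.key

Besides dropping the Foundation dependency, this version avoids the as? Y downcast, because the dictionary keys are already typed as Y.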
Either way, the result is the most frequent label among the k nearest neighbors, which predict returns as the predicted class label.
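Finally, closing the usage sketch started above (the query point and expected outcome are illustrative):

// The two nearby "cat" samples outvote the distant "dog".
let label = model.predict(x: [1.1, 1.0])
print(label ?? "no prediction") // prints "cat"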