We'll use a straightforward approach here to calculate the confusion matrix; note that it is binary-only and would not work for multiclass classification as written. Here, p stands for the predicted value, and t for the ground truth:
let pairs: [(Int, Int)] = Array(zip(predictions, yVecTest))
var confusionMatrix = [[0, 0], [0, 0]]
for (p, t) in pairs {
    // Rows are ground truth, columns are predictions; label 1 is the positive class.
    switch (p, t) {
    case (0, 0):
        // True negative.
        confusionMatrix[0][0] += 1
    case (0, _):
        // False negative: predicted 0, but the true label is 1.
        confusionMatrix[1][0] += 1
    case (_, 0):
        // False positive: predicted 1, but the true label is 0.
        confusionMatrix[0][1] += 1
    case (_, _):
        // True positive.
        confusionMatrix[1][1] += 1
    }
}
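As an aside, the same counting generalizes to multiclass problems if you index the matrix by the labels themselves instead of pattern matching on them. Here is a minimal sketch, assuming the labels are integers in 0..<classCount (classCount is a hypothetical constant, not part of the code above):
let classCount = 3 // hypothetical number of classes
var multiclassMatrix = [[Int]](repeating: [Int](repeating: 0, count: classCount),
                               count: classCount)
for (p, t) in pairs {
    // Same convention as above: rows are ground truth, columns are predictions.
    multiclassMatrix[t][p] += 1
}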
let totalCount = Double(yVecTest.count)
Normalize the matrix by the total count:
let normalizedConfusionMatrix = confusionMatrix.map { $0.map { Double($0) / totalCount } }
As we already know, accuracy is the number of correct predictions divided by the total number of cases. To calculate it, use the following code:
let truePredictionsCount = pairs.filter{ $0.0 == $0.1 }.count
let accuracy = Double(truePredictionsCount) / totalCount
To calculate the true positive, false positive, and false negative counts, you could read the numbers straight from the confusion matrix, but let's do it the proper way and compute them directly from the prediction pairs:
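Here is a sketch of that computation (the names truePositive, falsePositive, falseNegative, precision, recall, and f1Score are mine, and label 1 is assumed to be the positive class):
let truePositive = pairs.filter { $0.0 == 1 && $0.1 == 1 }.count
let falsePositive = pairs.filter { $0.0 == 1 && $0.1 == 0 }.count
let falseNegative = pairs.filter { $0.0 == 0 && $0.1 == 1 }.count

// The usual derived metrics follow from these counts:
let precision = Double(truePositive) / Double(truePositive + falsePositive)
let recall = Double(truePositive) / Double(truePositive + falseNegative)
let f1Score = 2 * precision * recall / (precision + recall)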
Congratulations! We've trained two machine learning algorithms, deployed them to iOS, and evaluated their accuracy. It is interesting that while the decision tree metrics match perfectly, the random forest performs slightly worse on Core ML. Don't forget to always validate your model after any type of conversion.