In this chapter, we implemented a working machine learning solution for motion data classification and trained it end-to-end on a device. The simplest of the instance-based models is the nearest neighbors classifier. You can use it to classify any type of data; the only tricky part is choosing a suitable distance metric. For feature vectors (points in n-dimensional space), many metrics have been devised, such as the Euclidean and Manhattan distances. For strings, edit distances are popular. For time series, we applied dynamic time warping (DTW).
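As a quick reminder of how simple these vector metrics are, here is a minimal Swift sketch of the Euclidean and Manhattan distances. It is illustrative only and not tied to the chapter's code; the function names are hypothetical.

```swift
import Foundation

// Illustrative distance metrics for feature vectors of equal length.

func euclideanDistance(_ a: [Double], _ b: [Double]) -> Double {
    precondition(a.count == b.count, "Vectors must have the same dimension")
    // Square root of the sum of squared coordinate differences.
    let sumOfSquares = zip(a, b).reduce(0.0) { $0 + pow($1.0 - $1.1, 2) }
    return sqrt(sumOfSquares)
}

func manhattanDistance(_ a: [Double], _ b: [Double]) -> Double {
    precondition(a.count == b.count, "Vectors must have the same dimension")
    // Sum of absolute coordinate differences.
    return zip(a, b).reduce(0.0) { $0 + abs($1.0 - $1.1) }
}

// Example usage:
// euclideanDistance([0, 0], [3, 4])  // 5.0
// manhattanDistance([0, 0], [3, 4])  // 7.0
```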
The nearest neighbors method is a non-parametric model, which means we can apply it without making assumptions about the underlying statistical distribution of the data. Another advantage is that it is well suited for online learning and is easy to parallelize. Among its shortcomings are the curse of dimensionality and the computational cost of prediction: because it is a lazy learner, all the real work is deferred until query time.
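The trade-off between cheap training and expensive prediction is easy to see in a brute-force 1-NN sketch. The following Swift snippet is a simplified illustration under my own naming (NearestNeighborClassifier is hypothetical, and it reuses the euclideanDistance helper from the sketch above), not the chapter's implementation:

```swift
// "Training" only stores the examples (which also makes online learning
// trivial); prediction scans every stored sample, so its cost grows
// linearly with the size of the training set.
struct NearestNeighborClassifier {
    private var samples: [(features: [Double], label: String)] = []

    // Adding a sample is O(1): just remember it.
    mutating func add(features: [Double], label: String) {
        samples.append((features, label))
    }

    // Prediction is O(n * d): compare the query with every stored sample.
    func predict(_ query: [Double]) -> String? {
        var bestLabel: String? = nil
        var bestDistance = Double.infinity
        for sample in samples {
            let d = euclideanDistance(sample.features, query)
            if d < bestDistance {
                bestDistance = d
                bestLabel = sample.label
            }
        }
        return bestLabel
    }
}

// Example usage:
// var knn = NearestNeighborClassifier()
// knn.add(features: [1.0, 1.0], label: "walking")
// knn.add(features: [5.0, 5.0], label: "running")
// knn.predict([1.2, 0.9])  // "walking"
```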
In the next chapter, we will continue with instance-based algorithms, this time focusing on the unsupervised task of clustering.