- Machine Learning with Swift
- Alexander Sosnovshchenko
DTW
Despite its sci-fi-sounding name, DTW (dynamic time warping) has little to do with time travel, except for the fact that this technique was popular for speech recognition back in the 1980s. Imagine two signals as two springs oriented along the time axis. We place them next to each other on the table and want to measure how similar (or how different, which amounts to the same thing) they are. One of them serves as a template. We start stretching and compressing the other one, piece by piece, until it matches the template as closely as possible. Then we account for how much effort went into aligning the two springs: we sum up all the stretches and compressions and get the DTW distance.
The DTW distance between two sound signals tells us how similar they are. For example, given a recording of an unknown voice command, we can compare it to the voice commands in a database and find the most similar one. DTW can be used not only with audio but with many other types of signals. We will use it to calculate the distance between signals from motion sensors.

Let's demonstrate this with a simple example. Say we have two arrays: [5, 2, 1, 3] and [10, 2, 4, 3]. How do we calculate the distance between two arrays of length one, [5] and [10]? We can use the squared difference as a measure: (5 − 10)² = 25. Okay, now let's extend one of them, so that we have [5, 2] and [10], and calculate the cumulative difference: the only option is to match both 5 and 2 against 10, which gives (5 − 10)² + (2 − 10)² = 25 + 64 = 89.
Let's extend the other array to have [5, 2] and [10, 2]. Now, how to calculate the cumulative difference is not as clear as it was before, but let's assume that we are interested in the cheapest way to transform one array into the other (the minimal distance, in other words): matching 5 with 10 and 2 with 2 gives (5 − 10)² + (2 − 2)² = 25 + 0 = 25.
By extending the arrays in such a way further, eventually we will get the following table of cumulative distances (the cells of the cheapest alignment path are marked with *):

           10      2      4      3
    5     25*     34     35     39
    2     89     25*     29     30
    1    170     26*     34     33
    3    219     27     27*    27*
The bottom-right cell of the table contains the quantity we're interested in: the DTW distance between the two arrays, a measure of how hard it is to transform one array into the other. We've just checked all the possible ways to transform the arrays and found the cheapest of them (marked in the table). Movement along the diagonal of the table indicates a perfect match between elements, while horizontal movement stands for deletion of elements from the first array, and vertical movement indicates insertion into them (compare with Figure 3.3). The final array alignment looks like this:
[5, 2, 1, 3, -]
[10, 2, -, 4, 3]
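The table-filling procedure described above is a small dynamic program. Here is a minimal sketch in Python (the function name is our own) that uses the squared difference as the local cost, exactly as in the worked example:

```python
def dtw_distance(a, b):
    """DTW distance between sequences a and b, local cost = squared difference."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = minimal cumulative cost of aligning a[:i+1] with b[:j+1]
    D = [[INF] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            cost = (a[i] - b[j]) ** 2
            if i == 0 and j == 0:
                best = 0
            else:
                best = min(
                    D[i - 1][j] if i > 0 else INF,               # vertical move (insertion)
                    D[i][j - 1] if j > 0 else INF,               # horizontal move (deletion)
                    D[i - 1][j - 1] if i > 0 and j > 0 else INF, # diagonal move (match)
                )
            D[i][j] = cost + best
    return D[n - 1][m - 1]

print(dtw_distance([5, 2, 1, 3], [10, 2, 4, 3]))  # prints 27
```

For the voice-command scenario, one would compute this distance between the new recording and every template in the database and pick the closest one, for example `min(templates, key=lambda t: dtw_distance(query, t))`. Note that this naive version is O(n·m) in time and memory; practical implementations usually add a warping-window constraint.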