Atom extraction and dictionary learning
Dictionary learning is a technique that allows rebuilding a sample starting from a sparse dictionary of atoms (similar to principal components). In Mairal J., Bach F., Ponce J., Sapiro G., Online Dictionary Learning for Sparse Coding, Proceedings of the 26th International Conference on Machine Learning, 2009, there's a description of the same online strategy adopted by scikit-learn, which can be summarized as a double optimization problem where:

$$X = \{\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_n\} \quad \text{with } \bar{x}_i \in \mathbb{R}^m$$

is an input dataset and the target is to find both a dictionary D and a set of weights for each sample:

$$D \in \mathbb{R}^{m \times k}, \quad A = \{\bar{\alpha}_1, \bar{\alpha}_2, \ldots, \bar{\alpha}_n\} \quad \text{with } \bar{\alpha}_i \in \mathbb{R}^k$$
After the training process, an input vector can be computed as:

$$\bar{x}_i = D\bar{\alpha}_i$$
The optimization problem (which involves both D and the alpha vectors) can be expressed as the minimization of the following loss function:

$$L(D, A) = \sum_{i=1}^{n}\left(\frac{1}{2}\left\lVert \bar{x}_i - D\bar{\alpha}_i \right\rVert_2^2 + c\left\lVert \bar{\alpha}_i \right\rVert_1\right)$$
Here, the parameter c controls the level of sparsity (which is proportional to the strength of the L1 penalty). The problem can be solved by alternating between the two variables: keeping D fixed, the weights are found by solving an L1-penalized least squares (Lasso) problem; keeping the weights fixed, D is updated by ordinary least squares. The two steps are repeated until a stable point is reached.
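As a purely illustrative sketch of this alternating scheme (not the online, mini-batch algorithm described in the paper and implemented by scikit-learn), the two steps can be written with NumPy and the Lasso regressor; the subset size, the penalty alpha, the number of iterations, and the sample-based initialization below are arbitrary choices:

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import Lasso

# Toy setup: 200 digit samples (64 features each) and 36 atoms
X = load_digits().data[:200] / 16.0
n_atoms = 36

# Initialize the dictionary with 36 normalized samples, one atom per row
D = X[:n_atoms].copy()
D /= np.linalg.norm(D, axis=1, keepdims=True)

# L1-penalized least squares solver for the sparse coding step
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=5000)

for _ in range(10):
    # Step 1 - sparse coding: with D fixed, find the weights alpha of
    # every sample (each column of X.T is an independent Lasso target)
    lasso.fit(D.T, X.T)
    A = lasso.coef_                          # shape: (n_samples, n_atoms)

    # Step 2 - dictionary update: with the weights fixed, solve the
    # ordinary least squares problem for D and re-normalize the atoms
    D = np.linalg.lstsq(A, X, rcond=None)[0]
    norms = np.linalg.norm(D, axis=1, keepdims=True)
    D /= np.where(norms > 0.0, norms, 1.0)

# Rough reconstruction error after the last update
print(np.linalg.norm(X - np.dot(A, D)))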
In scikit-learn, we can implement such an algorithm with the DictionaryLearning class (using the usual handwritten digits dataset), where n_components, as usual, determines the number of atoms:
>>> from sklearn.datasets import load_digits
>>> from sklearn.decomposition import DictionaryLearning
>>> digits = load_digits()
>>> dl = DictionaryLearning(n_components=36, fit_algorithm='lars', transform_algorithm='lasso_lars')
>>> X_dict = dl.fit_transform(digits.data)
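Once the transform has been computed, X_dict contains the sparse weights and dl.components_ contains the atoms (one per row), so a sample can be approximately rebuilt exactly as in the reconstruction formula shown above; for example, as a minimal check using the objects just defined:

>>> import numpy as np
>>> x_rebuilt = np.dot(X_dict[0], dl.components_)
>>> np.linalg.norm(digits.data[0] - x_rebuilt)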
A plot of each atom (component) is shown in the following figure:
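A possible way to produce such a plot, as a sketch assuming matplotlib is available and reusing the fitted dl object above, is to reshape each row of dl.components_ into an 8x8 image:

import matplotlib.pyplot as plt

# Show each of the 36 atoms as an 8x8 grayscale image
fig, axes = plt.subplots(6, 6, figsize=(8, 8))
for atom, ax in zip(dl.components_, axes.ravel()):
    ax.imshow(atom.reshape(8, 8), cmap='gray')
    ax.axis('off')
plt.show()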
