
Atom extraction and dictionary learning

Dictionary learning is a technique that allows a sample to be rebuilt from a sparse dictionary of atoms (similar to principal components). In Mairal J., Bach F., Ponce J., Sapiro G., Online Dictionary Learning for Sparse Coding, Proceedings of the 26th International Conference on Machine Learning, 2009, there's a description of the same online strategy adopted by scikit-learn, which can be summarized as a double optimization problem where:

X = {x̄₁, x̄₂, ..., x̄ₙ}

is an input dataset and the target is to find both a dictionary D (whose columns are the k atoms) and a weight vector ᾱᵢ for each sample:

ᾱ = {ᾱ₁, ᾱ₂, ..., ᾱₙ}, with ᾱᵢ ∈ ℝᵏ

After the training process, an input vector can be computed as a sparse linear combination of the atoms:

x̄ᵢ = Dᾱᵢ

The optimization problem (which involves both D and the alpha vectors) can be expressed as the minimization of the following loss function:

L(D, ᾱ) = (1/2) Σᵢ ||x̄ᵢ − Dᾱᵢ||₂² + c||ᾱᵢ||₁

Here the parameter c controls the level of sparsity (which is proportional to the strength of the L1 penalty). This problem can be solved by alternating optimizations: one variable (D or ᾱ) is kept fixed while the loss is minimized with respect to the other, and the two steps are repeated until a stable point is reached.
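The loss above can be evaluated directly with NumPy. The following is a minimal sketch with hypothetical small random arrays (the names X, D, alpha, and c are illustrative, not part of any library API); atoms are stored as rows, following the scikit-learn components_ convention, so each sample is approximated as ᾱᵢ · D:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small example: n=5 samples, m=8 features, k=4 atoms
X = rng.normal(size=(5, 8))
D = rng.normal(size=(4, 8))       # dictionary: k atoms stored as rows
alpha = rng.normal(size=(5, 4))   # one weight vector per sample

c = 0.1  # sparsity penalty

def dictionary_loss(X, D, alpha, c):
    # 0.5 * sum_i ||x_i - alpha_i D||_2^2 + c * sum_i ||alpha_i||_1
    residual = X - alpha @ D
    return 0.5 * np.sum(residual ** 2) + c * np.sum(np.abs(alpha))

print(dictionary_loss(X, D, alpha, c))
```

With ᾱ = 0 the reconstruction term reduces to half the squared norm of X and the penalty vanishes, which is a quick sanity check on the implementation.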

In scikit-learn, we can implement such an algorithm with the DictionaryLearning class (using the usual handwritten digits dataset), where n_components, as usual, determines the number of atoms:

from sklearn.datasets import load_digits
from sklearn.decomposition import DictionaryLearning

>>> digits = load_digits()
>>> dl = DictionaryLearning(n_components=36, fit_algorithm='lars', transform_algorithm='lasso_lars')
>>> X_dict = dl.fit_transform(digits.data)

A plot of each atom (component) is shown in the following figure:

This process can take a long time on low-end machines. In such cases, I suggest limiting the number of samples to 20 or 30.
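Putting the pieces together, the fitted atoms are available in the components_ attribute, and each sample can be rebuilt as the weighted sum x̄ᵢ = ᾱᵢ · D. The sketch below restricts the dataset to 50 samples and caps max_iter to keep the run short (both choices are mine, not from the original example):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import DictionaryLearning

digits = load_digits()
# Limit the number of samples to keep the fit fast on low-end machines
X = digits.data[:50]

dl = DictionaryLearning(n_components=36, fit_algorithm='lars',
                        transform_algorithm='lasso_lars',
                        max_iter=10)  # capped for a quick run
X_dict = dl.fit_transform(X)

# Atoms are the rows of components_; rebuild every sample from its sparse code
X_rec = X_dict @ dl.components_
print(X_dict.shape, dl.components_.shape, X_rec.shape)
```

Because transform_algorithm='lasso_lars' enforces sparsity, most entries of X_dict are zero: each digit is expressed using only a few of the 36 atoms.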