
Atom extraction and dictionary learning

Dictionary learning is a technique that allows rebuilding a sample as a sparse combination of atoms from a learned dictionary (similar to principal components). In Mairal J., Bach F., Ponce J., Sapiro G., Online Dictionary Learning for Sparse Coding, Proceedings of the 26th International Conference on Machine Learning, 2009 there's a description of the same online strategy adopted by scikit-learn, which can be summarized as a double optimization problem where:

X = {x1, x2, ..., xn}

is an input dataset, and the target is to find both a dictionary D (whose columns are the atoms) and a set of weight vectors (the sparse codes), one for each sample:

A = {α1, α2, ..., αn}

After the training process, an input vector can be computed as a sparse linear combination of the atoms:

xi = D · αi

The optimization problem (which involves both D and the alpha vectors) can be expressed as the minimization of the following loss function:

L(D, A) = Σi ( 1/2 ||xi − D · αi||₂² + c ||αi||₁ )

Here, the parameter c controls the level of sparsity (which is proportional to the strength of the L1 regularization). The problem can be solved by alternating minimization of the two variables: with D fixed, finding each αi is a standard lasso problem; with the codes fixed, updating D is a least squares problem. The two steps are repeated until a stable point is reached.
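To make the alternating scheme concrete, here is a minimal NumPy sketch on toy data. All names, sizes, and the data are illustrative, and ISTA (a simple proximal gradient method) stands in for a full lasso solver; this is not the exact algorithm used by scikit-learn.

```python
import numpy as np

rng = np.random.RandomState(0)

# Toy data: 100 samples with 16 features each
X = rng.randn(100, 16)

n_atoms = 8
c = 0.1  # sparsity strength (the parameter c in the loss function)

# Random initial dictionary with unit-norm atoms (one atom per row)
D = rng.randn(n_atoms, 16)
D /= np.linalg.norm(D, axis=1, keepdims=True)

A = np.zeros((100, n_atoms))  # sparse codes, one row per sample


def soft_threshold(v, t):
    # Proximal operator of the L1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


for _ in range(30):
    # Step 1: with D fixed, update the codes by ISTA (each row is a lasso problem)
    step = 1.0 / np.linalg.norm(D @ D.T, 2)
    for _ in range(10):
        grad = (A @ D - X) @ D.T
        A = soft_threshold(A - step * grad, step * c)

    # Step 2: with the codes fixed, update the dictionary by least squares
    D_new, *_ = np.linalg.lstsq(A, X, rcond=None)
    if np.linalg.norm(D_new) > 0.0:
        D = D_new
        # Renormalize the atoms and rescale the codes so A @ D is unchanged
        norms = np.linalg.norm(D, axis=1, keepdims=True)
        norms[norms == 0.0] = 1.0
        D /= norms
        A *= norms.T

loss = 0.5 * np.sum((X - A @ D) ** 2) + c * np.abs(A).sum()
```

Starting from all-zero codes, the first ISTA pass already lowers the loss below its initial value, and the two steps then keep refining D and A together.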

In scikit-learn, we can implement such an algorithm with the DictionaryLearning class (here using the usual scikit-learn handwritten digits dataset), where n_components, as usual, determines the number of atoms:

from sklearn.datasets import load_digits
from sklearn.decomposition import DictionaryLearning

>>> digits = load_digits()
>>> dl = DictionaryLearning(n_components=36, fit_algorithm='lars', transform_algorithm='lasso_lars')
>>> X_dict = dl.fit_transform(digits.data)

A plot of each atom (component) is shown in the following figure:
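A plot along these lines can be produced with a short matplotlib sketch. The sample subset and the max_iter value are my additions to keep the runtime reasonable; each atom (a row of components_) is reshaped to the 8x8 geometry of the digits images.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import DictionaryLearning

digits = load_digits()

# Fewer iterations and a data subset keep this example fast (illustrative choices)
dl = DictionaryLearning(n_components=36, fit_algorithm='lars',
                        transform_algorithm='lasso_lars', max_iter=50)
dl.fit(digits.data[:100])

# Each row of components_ is an atom; reshape it to an 8x8 image
fig, axes = plt.subplots(6, 6, figsize=(8, 8))
for atom, ax in zip(dl.components_, axes.ravel()):
    ax.imshow(atom.reshape(8, 8), cmap='gray')
    ax.axis('off')
plt.show()
```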

This process can take a very long time on low-end machines. In such a case, I suggest limiting the number of samples to 20 or 30.
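Alternatively, since the paper describes an online strategy, scikit-learn also provides MiniBatchDictionaryLearning, which processes the data in small batches and is usually much faster on larger datasets. A sketch with the same number of atoms (the batch_size value here is an illustrative choice):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import MiniBatchDictionaryLearning

digits = load_digits()

# Online variant: the dictionary is updated batch by batch
mbdl = MiniBatchDictionaryLearning(n_components=36,
                                   transform_algorithm='lasso_lars',
                                   batch_size=100)
X_dict = mbdl.fit_transform(digits.data)
```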