
Data reduction

Data reduction deals with an abundance of attributes and instances. The number of attributes corresponds to the number of dimensions in our dataset. Dimensions with low predictive power contribute very little to the overall model and can even harm it; for instance, an attribute filled with random values can introduce spurious patterns that a machine learning algorithm will pick up. Data may also contain a large number of missing values, in which case we have to find the reason behind them and, on that basis, either fill them with an alternate value, impute them, or remove the attribute altogether. If 40% or more of an attribute's values are missing, it may be advisable to remove the attribute, as missing data on that scale will hurt model performance.
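As a rough sketch, assuming a pandas DataFrame and the 40% cut-off mentioned above, attribute-level missing-value handling might look like this (the function name and imputation choices are illustrative, not a prescribed recipe):

```python
import pandas as pd

def drop_or_impute(df: pd.DataFrame, threshold: float = 0.4) -> pd.DataFrame:
    """Drop attributes with >= threshold missing values; impute the rest."""
    missing_ratio = df.isna().mean()  # fraction of missing values per column
    df = df.drop(columns=missing_ratio[missing_ratio >= threshold].index)
    # Fill the remaining gaps: median for numeric attributes, mode otherwise
    for col in df.columns[df.isna().any()]:
        if pd.api.types.is_numeric_dtype(df[col]):
            df[col] = df[col].fillna(df[col].median())
        else:
            df[col] = df[col].fillna(df[col].mode()[0])
    return df
```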

The other factor is variance: a near-constant attribute has very low variance, which means its values are all very close to each other, and such an attribute carries little information that could distinguish one instance from another.
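A minimal sketch of this low-variance filter, using scikit-learn's VarianceThreshold (the 0.01 threshold and the toy data are illustrative):

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

X = np.array([[0.0, 1.2, 5.0],
              [0.0, 0.9, 3.1],
              [0.0, 1.1, 4.4]])  # the first attribute is constant

selector = VarianceThreshold(threshold=0.01)  # drop near-constant attributes
X_reduced = selector.fit_transform(X)         # constant column removed
print(selector.get_support())                 # [False  True  True]
```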

To deal with this problem, the first set of techniques removes such attributes and selects the most promising ones. This process is known as feature selection, or attribute selection, and includes methods such as ReliefF, information gain, and the Gini index. These methods are mainly focused on discrete attributes.
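For illustration, two of these scores can be computed with scikit-learn, which estimates information gain via mutual information and exposes Gini-based importances through a decision tree; ReliefF is not part of scikit-learn, so it is omitted here, and the choice of dataset is arbitrary:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import mutual_info_classif
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Information gain, estimated as mutual information with the class label
info_gain = mutual_info_classif(X, y, random_state=0)

# Gini-based importance from a decision tree (criterion="gini")
tree = DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, y)

print(info_gain)                 # higher score = more informative attribute
print(tree.feature_importances_)
```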

Another set of tools, focused on continuous attributes, transforms the dataset from the original dimensions into a lower-dimensional space. For example, if we have a set of points in three-dimensional space, we can project them onto a two-dimensional plane. Some information is lost, but if the third dimension is irrelevant, we don't lose much, as the data structure and relationships are almost perfectly preserved. This can be performed by the following methods (a PCA sketch follows the list):

  • Singular value decomposition (SVD)
  • Principal component analysis (PCA)
  • Backward/forward feature elimination
  • Factor analysis
  • Linear discriminant analysis (LDA)
  • Neural network autoencoders
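As a minimal sketch of the three-to-two-dimensional projection described above, here is PCA applied with scikit-learn to synthetic points that lie close to a plane (the data is fabricated purely for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 3-D points whose third dimension is nearly redundant
X = rng.normal(size=(100, 2)) @ np.array([[1.0, 0.0, 0.02],
                                          [0.0, 1.0, 0.01]])
X += rng.normal(scale=0.01, size=X.shape)  # a little noise

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)                 # project onto two dimensions
print(pca.explained_variance_ratio_.sum())  # close to 1.0: little is lost
```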

The second problem in data reduction is related to having too many instances; for example, they can be duplicates or come from a very frequent data stream. The main idea is to select a subset of instances in such a way that the distribution of the selected data still resembles the original data distribution and, more importantly, the observed process. Techniques to reduce the number of instances include random sampling, stratified sampling, and others, as sketched below.
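As a minimal sketch, assuming a pandas DataFrame with a hypothetical class column named label, both plain random and stratified sampling can be written as follows (the 10% fraction is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"value": range(1000),
                   "label": ["a"] * 900 + ["b"] * 100})

# Simple random sampling: keep 10% of the instances
random_subset = df.sample(frac=0.1, random_state=0)

# Stratified sampling: sample 10% within each class, so the label
# distribution of the subset matches the original data
stratified_subset = (df.groupby("label", group_keys=False)
                       .sample(frac=0.1, random_state=0))
print(stratified_subset["label"].value_counts())  # ~90 'a', ~10 'b'
```

Stratified sampling is preferable when classes are imbalanced, as a plain random sample may under-represent the rare class.

Once the data is prepared, we can start with the data analysis and modeling.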
