
Data transformation

Data transformation techniques, also known as data munging or data wrangling, reshape the dataset into the format a machine learning algorithm expects as input, and may even help the algorithm learn faster and achieve better performance. Standardization, for instance, assumes that the data follows a Gaussian distribution and transforms the values in such a way that the mean is 0 and the standard deviation is 1, as follows:
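The formula itself is missing here; reconstructed from the description above, the standard score of a value $x$ is:

$$z = \frac{x - \mu}{\sigma}$$

where $\mu$ is the mean and $\sigma$ is the standard deviation of the attribute.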

Normalization, on the other hand, scales the values of attributes to a small, specified range, usually between 0 and 1:
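The formula is likewise missing; for the usual [0, 1] range described above, min-max normalization rescales a value $x$ as:

$$x' = \frac{x - \min(x)}{\max(x) - \min(x)}$$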

Many machine learning toolboxes automatically normalize and standardize the data for you.
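Both transformations are a few lines with NumPy; the sample values below are hypothetical, and any one-dimensional numeric array works the same way:

```python
import numpy as np

# Hypothetical attribute values
x = np.array([2.0, 4.0, 6.0, 8.0])

# Standardization: subtract the mean, divide by the standard deviation
z = (x - x.mean()) / x.std()

# Normalization: rescale values into the [0, 1] range
x_norm = (x - x.min()) / (x.max() - x.min())

print(z)       # mean is ~0, standard deviation is ~1
print(x_norm)  # smallest value maps to 0, largest to 1
```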

The last transformation technique is discretization, which divides the range of a continuous attribute into intervals. Why should we care? Some algorithms, such as decision trees and Naive Bayes, prefer discrete attributes. The most common ways to select the intervals are as follows:

  • Equal width: The interval of continuous variables is divided into k equal width intervals
  • Equal frequency: Supposing there are N instances, each of the k intervals contains approximately N/k instances
  • Min entropy: This approach recursively splits the intervals as long as the resulting decrease in entropy, which measures disorder, is greater than the entropy increase introduced by the interval split (Fayyad and Irani, 1993)

The first two methods require us to specify the number of intervals, while the last method sets the number of intervals automatically; however, it requires the class variable, which means it won't work for unsupervised machine learning tasks.
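The first two methods can be sketched with NumPy: equal width places interval boundaries at evenly spaced points across the value range, while equal frequency places them at quantiles. The example data and the choice of k = 4 are assumptions for illustration:

```python
import numpy as np

# Hypothetical continuous attribute with N = 8 instances
values = np.array([1.0, 2.0, 3.0, 7.0, 8.0, 9.0, 20.0, 21.0])
k = 4  # number of intervals, chosen by us

# Equal width: k intervals of identical width across [min, max]
width_edges = np.linspace(values.min(), values.max(), k + 1)
width_bins = np.digitize(values, width_edges[1:-1])

# Equal frequency: edges at quantiles, so each interval
# holds approximately N/k instances
freq_edges = np.quantile(values, np.linspace(0, 1, k + 1))
freq_bins = np.digitize(values, freq_edges[1:-1])

print(width_bins)  # interval index per instance
print(freq_bins)   # each interval holds N/k = 2 instances
```

Note that equal width leaves some intervals empty when the data is skewed, while equal frequency adapts its boundaries to the data.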
