
Comparing the entropy differences (information gain)

To know which variable to choose for the first split, we calculate the information gain, G, when going from the original data to the corresponding subset, as the difference between the entropy values:

G(f1, f2) = S(f1) - S(f1, f2)

Here, S(f1) is the entropy of the target variable and S(f1, f2) is the entropy of each feature with respect to the target variable. The entropy values were calculated in the previous subsections, so we use them here:

  • If we choose Outlook as the first variable to split the tree, the information gain is as follows:

G(Train outside, Outlook) = S(Train outside) - S(Train outside, Outlook)
                          = 0.94 - 0.693 = 0.247

  • If we choose Temperature, the information gain is as follows:

G(Train outside, Temperature) = S(Train outside) - S(Train outside, Temperature)
                              = 0.94 - 0.911 = 0.029

  • If we choose Humidity, the information gain is as follows:

G(Train outside, Humidity) = S(Train outside) - S(Train outside, Humidity)
                           = 0.94 - 0.788 = 0.152

  • Finally, choosing Windy gives the following information gain:

G(Train outside, Windy) = S(Train outside) - S(Train outside, Windy)
                        = 0.94 - 0.892 = 0.048

All these calculations are easily performed in a worksheet using Excel formulas.
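These gains can also be cross-checked outside the worksheet. The following is a minimal Python sketch that reproduces the four values; the per-value class counts (yes, no) are an assumption based on the standard 14-observation weather dataset, chosen because they match the entropy values quoted above:

from math import log2

def entropy(counts):
    # Shannon entropy (in bits) of a list of class counts
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

def split_entropy(partitions):
    # Weighted entropy S(target, feature); each partition holds the
    # class counts for one value of the feature
    total = sum(sum(p) for p in partitions)
    return sum(sum(p) / total * entropy(p) for p in partitions)

s_target = entropy([9, 5])                    # S(Train outside) = 0.940
features = {
    "Outlook":     [[2, 3], [4, 0], [3, 2]],  # assumed counts per value
    "Temperature": [[2, 2], [4, 2], [3, 1]],
    "Humidity":    [[3, 4], [6, 1]],
    "Windy":       [[6, 2], [3, 3]],
}
for name, partitions in features.items():
    print(name, round(s_target - split_entropy(partitions), 3))
# prints: Outlook 0.247, Temperature 0.029, Humidity 0.152, Windy 0.048

Note that the [4, 0] partition of Outlook already has zero entropy, which is exactly the subset we will not need to split any further.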

The variable to choose for the first split of the tree is the one showing the largest information gain, that is, Outlook. After performing this split, we will notice that one of the resulting subsets has zero entropy, so we don't need to split it any further.

To continue building the tree, we follow a similar procedure (a compact code sketch of the recursion appears after these steps):

  1. Calculate S(Sunny), S(Sunny, Temperature), S(Sunny, Humidity), and S(Sunny, Windy).
  2. Calculate G(Sunny, Temperature), G(Sunny, Humidity), and G(Sunny, Windy).
  3. The largest gain tells us which feature to use to split Sunny.
  4. Calculate the corresponding gains for Rainy, using S(Rainy), S(Rainy, Temperature), S(Rainy, Humidity), and S(Rainy, Windy).
  5. The largest gain tells us which feature to use to split Rainy.
  6. Continue iterating until there are no features left to use or every remaining subset has zero entropy.
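
For reference, the whole iteration condenses into a short recursive function. This is only an illustrative Python sketch of the procedure above, not the book's Excel-based approach; the representation of observations as dictionaries is an assumption:

from collections import Counter
from math import log2

def s(labels):
    # entropy of a list of class labels
    total = len(labels)
    return -sum(n / total * log2(n / total) for n in Counter(labels).values())

def build_tree(rows, target, features):
    # rows: list of dicts; target: name of the class column
    labels = [row[target] for row in rows]
    if len(set(labels)) == 1:    # zero entropy: the subset is pure
        return labels[0]
    if not features:             # no features left: return the majority class
        return Counter(labels).most_common(1)[0][0]

    def gain(feature):
        # information gain = S(subset) - weighted entropy after the split
        groups = {}
        for row in rows:
            groups.setdefault(row[feature], []).append(row[target])
        after = sum(len(g) / len(rows) * s(g) for g in groups.values())
        return s(labels) - after

    best = max(features, key=gain)    # the largest information gain wins
    remaining = [f for f in features if f != best]
    return {best: {value: build_tree([r for r in rows if r[best] == value],
                                     target, remaining)
                   for value in {row[best] for row in rows}}}

Each recursive call repeats steps 1 to 5 on a smaller subset, stopping under the two conditions listed in step 6.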

As we will see later in this book, trees are never built by hand in practice. Nevertheless, it is important to understand how they work and which calculations are involved, and using Excel makes it easy to follow the full process, step by step. Following the same principle, we will work through an unsupervised learning example in the next section.
