
Using decision trees

We can import the DecisionTreeClassifier class and create a decision tree using scikit-learn:

from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(random_state=14)

We used 14 for our random_state again, and will do so for most of the book: using the same random seed allows experiments to be replicated. For your own experiments, however, you should vary the random state to ensure that the algorithm's performance is not tied to a specific value.
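To see what this means in practice, here is a minimal sketch (not from the original text) that runs the same classifier with several seeds on a small synthetic problem; X_demo and y_demo are hypothetical stand-ins for the real dataset we build below:

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical synthetic data: two binary features (like our last-win
# features) and a binary target, just to illustrate seed sensitivity
rng = np.random.RandomState(0)
X_demo = rng.randint(0, 2, size=(100, 2))
y_demo = rng.randint(0, 2, size=100)

for seed in [1, 7, 14, 42]:
    clf = DecisionTreeClassifier(random_state=seed)
    scores = cross_val_score(clf, X_demo, y_demo, scoring='accuracy')
    print("random_state={0}: {1:.1f}%".format(seed, np.mean(scores) * 100))

If the scores swing widely across seeds, the result depends more on chance than on the model.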

We now need to extract the dataset from our pandas data frame in order to use it with our scikit-learn classifier. We do this by selecting the columns we wish to use and taking the values attribute of that view of the data frame. The following code creates a dataset from the last-win values for both the home team and the visitor team:

X_previouswins = dataset[["HomeLastWin", "VisitorLastWin"]].values
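The cross-validation code below also needs the class labels, y_true, which were created during the earlier feature-extraction steps. As a minimal sketch, assuming the earlier preprocessing stored whether the home team won each game in a HomeWin column, they would look like this:

# Assumed from the earlier preprocessing: class labels are whether
# the home team won each game, stored in the "HomeWin" column
y_true = dataset["HomeWin"].values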

Decision trees are estimators, as introduced in Chapter 2, Classifying using scikit-learn Estimators, and therefore have fit and predict methods (a sketch using them directly follows the next code block). We can also use the cross_val_score function to get the average score (as we did previously):

from sklearn.model_selection import cross_val_score
import numpy as np

# Evaluate the classifier with cross-validation and report mean accuracy
scores = cross_val_score(clf, X_previouswins, y_true, scoring='accuracy')
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))
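Because the classifier follows the estimator interface, we could equally call fit and predict ourselves. The following is a minimal sketch, not from the original text, that holds out a test split with scikit-learn's train_test_split and computes accuracy by hand:

from sklearn.model_selection import train_test_split

# Hold out a test set, train on the rest, and score the predictions
X_train, X_test, y_train, y_test = train_test_split(
    X_previouswins, y_true, random_state=14)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("Held-out accuracy: {0:.1f}%".format(np.mean(y_pred == y_test) * 100))

Cross-validation averages this procedure over several splits, which is why we prefer it for the experiments in this book.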

This scores 59.4 percent: we are better than choosing randomly! However, we aren't beating our other baseline of just choosing the home team. In fact, we are pretty much exactly the same. We should be able to do better. Feature engineering is one of the most difficult tasks in data mining, and choosing good features is key to getting good outcomes—more so than choosing the right algorithm!
