Using decision trees

We can import the DecisionTreeClassifier class and create a Decision Tree using scikit-learn:

from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(random_state=14)

We used 14 for our random_state again and will do so for most of the book. Using the same random seed allows for replication of experiments. However, in your own experiments you should vary the random state to check that the algorithm's performance is not tied to one specific value.

We now need to extract the dataset from our pandas DataFrame in order to use it with our scikit-learn classifier. We do this by selecting the columns we wish to use and taking the values attribute of that view of the DataFrame. The following code creates a dataset from the last-win values for both the home team and the visitor team:

X_previouswins = dataset[["HomeLastWin", "VisitorLastWin"]].values
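
To sanity-check the extraction, we can inspect the resulting NumPy array; a minimal sketch (it assumes dataset is the DataFrame built earlier in this chapter):

print(X_previouswins.shape)  # one row per game, two binary feature columns
print(X_previouswins[:5])    # first five (HomeLastWin, VisitorLastWin) pairs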

Decision trees are estimators, as introduced in Chapter 2, Classifying using scikit-learn Estimators, and therefore have fit and predict methods.
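
For example, a minimal fit-and-predict sketch (the train/test split here is illustrative, not part of the original workflow, and y_true holds the class values created earlier in the chapter):

from sklearn.model_selection import train_test_split

# Hold out a test set, train the tree, and predict on unseen games.
X_train, X_test, y_train, y_test = train_test_split(
    X_previouswins, y_true, random_state=14)
clf.fit(X_train, y_train)
y_predicted = clf.predict(X_test)

We can also use the cross_val_score function to get the average score (as we did previously):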

from sklearn.model_selection import cross_val_score
import numpy as np

# y_true holds the class values (home wins) created earlier in the chapter.
scores = cross_val_score(clf, X_previouswins, y_true,
                         scoring='accuracy')
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))

This scores 59.4 percent: we are better than choosing randomly! However, we aren't beating our other baseline of just choosing the home team; in fact, our accuracy is almost exactly the same. We should be able to do better. Feature engineering is one of the most difficult tasks in data mining, and choosing good features is key to getting good outcomes, more so than choosing the right algorithm!
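
To make that comparison concrete, the home-team baseline can be computed directly from the labels; a minimal sketch, assuming y_true is 1 when the home team won:

# Accuracy of always predicting a home win: the fraction of
# games the home team actually won.
home_win_rate = np.mean(y_true)
print("Home-team baseline: {0:.1f}%".format(home_win_rate * 100))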
