
Using decision trees

We can import the DecisionTreeClassifier class and create a decision tree using scikit-learn:

from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(random_state=14)

We used 14 for our random_state again and will do so for most of the book. Using the same random seed allows for replication of experiments. However, with your experiments, you should mix up the random state to ensure that the algorithm's performance is not tied to the specific value.
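To see what replication means in practice, here is a minimal sketch (using toy data rather than the NBA dataset, which we build later in this section) confirming that two trees trained with the same seed on the same data make identical predictions:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy data only, for illustration
rng = np.random.RandomState(0)
X_toy = rng.rand(100, 2)
y_toy = rng.randint(2, size=100)

tree_a = DecisionTreeClassifier(random_state=14).fit(X_toy, y_toy)
tree_b = DecisionTreeClassifier(random_state=14).fit(X_toy, y_toy)
# The same seed on the same data yields identical predictions
assert np.array_equal(tree_a.predict(X_toy), tree_b.predict(X_toy))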

We now need to extract the dataset from our pandas DataFrame in order to use it with our scikit-learn classifier. We do this by specifying the columns we wish to use and accessing the values attribute of that view of the DataFrame. The following code creates a dataset using our last win values for both the home team and the visitor team:

X_previouswins = dataset[["HomeLastWin", "VisitorLastWin"]].values
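The cross-validation call below also needs the class labels, y_true. A minimal sketch, assuming these come from a HomeWin column created earlier in the chapter that records whether the home team won each game:

y_true = dataset["HomeWin"].values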

Decision trees are estimators, as introduced in Chapter 2, Classifying using scikit-learn Estimators, and therefore have fit and predict methods. We can also use the cross_val_score function to get the average score (as we did previously):

from sklearn.model_selection import cross_val_score
import numpy as np

scores = cross_val_score(clf, X_previouswins, y_true, scoring='accuracy')
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))
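The fit and predict methods mentioned above can also be used directly. A minimal sketch with a held-out split (train_test_split is our addition here, not part of the original workflow):

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X_previouswins, y_true, random_state=14)
clf.fit(X_train, y_train)
y_predicted = clf.predict(X_test)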

This scores 59.4 percent: we are better than choosing randomly! However, we aren't beating our other baseline of just choosing the home team. In fact, we are pretty much exactly the same. We should be able to do better. Feature engineering is one of the most difficult tasks in data mining, and choosing good features is key to getting good outcomes—more so than choosing the right algorithm!
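For comparison, the home-team baseline referred to above can be computed directly from the labels. A sketch, assuming y_true stores a home win as 1 (or True):

import numpy as np

home_win_rate = np.mean(y_true == 1) * 100
print("Home team baseline: {0:.1f}%".format(home_win_rate))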
