
Measuring the performance of our model

Now that our MLP has been trained, we can start to understand how good it is. To do so, I'll make predictions on our Train, Val, and Test datasets and measure the mean absolute error for each. The code is as follows:

print("Model Train MAE: " + str(mean_absolute_error(data["train_y"], model.predict(data["train_X"]))))
print("Model Val MAE: " + str(mean_absolute_error(data["val_y"], model.predict(data["val_X"]))))
print("Model Test MAE: " + str(mean_absolute_error(data["test_y"], model.predict(data["test_X"]))))

For our MLP, this is how well we did:

Model Train MAE: 0.190074701809
Model Val MAE: 0.213255747475
Model Test MAE: 0.199885450841

Keep in mind that our data has been scaled to 0 mean and unit variance, so these errors are measured in standard deviations of the target. The Train MAE is 0.19 and the Val MAE is 0.21. These two errors are close to each other, so overfitting isn't something I'd be too concerned about. In fact, because I expected some amount of overfitting and don't see it (overfitting is usually the bigger problem), I hypothesize that this model has too much bias. Said another way, we might not be able to fit the data closely enough. When this occurs, we need to add more layers, more neurons, or both to our model. We need to go deeper. Let's do that next.
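If you want to translate a scaled MAE back into the target's original units, you can multiply it by the standard deviation learned by whatever scaler standardized the target. The following is a minimal sketch only: it assumes the target was standardized with a scikit-learn StandardScaler stored in a variable called y_scaler, and both that name and the scaler type are assumptions rather than something shown earlier in this chapter.

from sklearn.metrics import mean_absolute_error

# Hypothetical: y_scaler is the StandardScaler that was fit on the training targets.
scaled_val_mae = mean_absolute_error(data["val_y"], model.predict(data["val_X"]))
original_units_mae = scaled_val_mae * y_scaler.scale_[0]  # scale_ holds the learned std dev
print("Model Val MAE (original units): " + str(original_units_mae))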

We can attempt to reduce network bias by adding parameters to the network, in the form of more neurons. While you might be tempted to start tuning your optimizer, it's usually better to find a network architecture you're comfortable with first. 
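To make that concrete, here is a minimal sketch of what a deeper, wider MLP might look like. It assumes the Keras functional API; the function name build_deeper_network, the 32-unit layer widths, and the adam optimizer are illustrative placeholders rather than the exact architecture or hyperparameters used for the original model.

from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

def build_deeper_network(input_features):
    # Three 32-unit hidden layers instead of a single small one, to add capacity
    inputs = Input(shape=(input_features,), name="input")
    x = Dense(32, activation="relu", name="hidden1")(inputs)
    x = Dense(32, activation="relu", name="hidden2")(x)
    x = Dense(32, activation="relu", name="hidden3")(x)
    prediction = Dense(1, activation="linear", name="output")(x)
    model = Model(inputs=inputs, outputs=prediction)
    model.compile(optimizer="adam", loss="mean_absolute_error")
    return model

model = build_deeper_network(data["train_X"].shape[1])

After retraining a network like this, you would recompute the Train and Val MAE exactly as before and check whether the gap between them starts to widen, which would signal that you've traded bias for variance.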