
Measuring the performance of our model

Now that our MLP has been trained, we can start to understand how good it is. To do so, I'll make predictions on our Train, Val, and Test datasets. The code is as follows:

from sklearn.metrics import mean_absolute_error

print("Model Train MAE: " + str(mean_absolute_error(data["train_y"], model.predict(data["train_X"]))))
print("Model Val MAE: " + str(mean_absolute_error(data["val_y"], model.predict(data["val_X"]))))
print("Model Test MAE: " + str(mean_absolute_error(data["test_y"], model.predict(data["test_X"]))))

For our MLP, this is how well we did:

Model Train MAE: 0.190074701809
Model Val MAE: 0.213255747475
Model Test MAE: 0.199885450841

Keep in mind that our data has been scaled to zero mean and unit variance. The Train MAE is 0.19, and our Val MAE is 0.21. These two errors are pretty close to each other, so overfitting isn't something I'd be too concerned about. Because I expected some amount of overfitting and don't see it (overfitting is usually the bigger problem), I hypothesize that this model might have too much bias. Said another way, we might not be able to fit the data closely enough. When this occurs, we need to add more layers, more neurons, or both to our model. We need to go deeper. Let's do that next.
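To make the diagnostic concrete, here is a minimal sketch of what MAE measures: the average absolute difference between targets and predictions. The toy values below are illustrative, not from our dataset; the function mirrors what sklearn's mean_absolute_error computes.

```python
import numpy as np

# MAE: mean of the absolute residuals |y_true - y_pred|.
def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Toy standardized targets and predictions (illustrative values only):
y_true = np.array([0.5, -1.0, 0.0, 1.5])
y_pred = np.array([0.3, -0.8, 0.1, 1.0])
print(mae(y_true, y_pred))  # (0.2 + 0.2 + 0.1 + 0.5) / 4 = 0.25
```

Because the targets are standardized, an MAE of 0.19 means our predictions are, on average, about a fifth of a standard deviation away from the truth, which is why the train/val gap of 0.02 matters more than the raw numbers for diagnosing overfitting.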

We can attempt to reduce network bias by adding parameters to the network, in the form of more neurons. While you might be tempted to start tuning your optimizer, it's usually better to find a network architecture you're comfortable with first. 
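A deeper network along these lines might look like the sketch below. The layer widths, the input dimension, and the function name are illustrative assumptions, not the book's exact architecture; the point is simply that extra hidden layers and neurons add parameters, giving the model more capacity to fit the data.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def build_deeper_mlp(input_dim):
    # Hypothetical architecture: widths of 32 are assumptions for illustration.
    model = Sequential()
    model.add(Dense(32, input_dim=input_dim, activation="relu"))
    model.add(Dense(32, activation="relu"))  # extra hidden layer to reduce bias
    model.add(Dense(32, activation="relu"))  # and another
    model.add(Dense(1, activation="linear"))  # single linear output for regression
    model.compile(optimizer="adam", loss="mean_absolute_error")
    return model
```

The idea is to grow capacity until the Train MAE drops meaningfully, then watch the gap between Train and Val MAE for signs of overfitting before touching the optimizer.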