
Building a Deep Feedforward Neural Network

In this chapter, we will cover the following recipes:

  • Training a vanilla neural network
  • Scaling the input dataset
  • Impact on training when the majority of inputs are greater than zero
  • Impact of batch size on model accuracy
  • Building a deep neural network to improve network accuracy
  • Varying the learning rate to improve network accuracy
  • Varying the loss optimizer to improve network accuracy
  • Understanding the scenario of overfitting
  • Speeding up the training process using batch normalization

In the previous chapter, we covered the basics of how a neural network functions. We also learned that various hyperparameters affect the accuracy of a neural network. In this chapter, we will look in detail at how these hyperparameters influence a network's behavior.
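
Before diving into the recipes, the following minimal sketch (not the book's notebook code) shows where the hyperparameters discussed in this chapter appear in a typical Keras workflow. The dataset (MNIST), layer sizes, and values below are illustrative placeholders only:

```python
# A minimal sketch, assuming MNIST as an example dataset; layer sizes,
# learning rate, batch size, and epoch count are placeholder values.
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from keras.utils import to_categorical

# Load and flatten the data; scaling the inputs is covered in its own recipe
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(-1, 784).astype('float32') / 255
X_test = X_test.reshape(-1, 784).astype('float32') / 255
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# A simple feedforward network; adding more hidden layers makes it "deep"
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=784))
model.add(Dense(10, activation='softmax'))

# The learning rate and the loss optimizer are set when compiling the model
model.compile(optimizer=Adam(lr=0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# The batch size and number of epochs are set when fitting the model
model.fit(X_train, y_train,
          batch_size=32, epochs=10,
          validation_data=(X_test, y_test))
```

Each of these choices (scaling, batch size, network depth, learning rate, and optimizer) is varied in the recipes that follow to show its impact on accuracy.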

All the code for this chapter is available at https://github.com/kishore-ayyadevara/Neural-Networks-with-Keras-Cookbook/blob/master/Neural_network_hyper_parameters.ipynb.