
Building a Deep Feedforward Neural Network

In this chapter, we will cover the following recipes:

  • Training a vanilla neural network
  • Scaling the input dataset
  • Impact on training when the majority of inputs are greater than zero
  • Impact of batch size on model accuracy
  • Building a deep neural network to improve network accuracy
  • Varying the learning rate to improve network accuracy
  • Varying the loss optimizer to improve network accuracy
  • Understanding the scenario of overfitting
  • Speeding up the training process using batch normalization

In the previous chapter, we looked at the basics of how a neural network functions and learned that various hyperparameters impact its accuracy. In this chapter, we will get into the details of how the various hyperparameters within a neural network affect training and accuracy.
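To see where these hyperparameters fit in, here is a minimal sketch of a vanilla Keras feedforward network; it is not taken from the book's notebook, and the random (X, y) data, layer sizes, and hyperparameter values are placeholders for illustration. The batch size, learning rate, optimizer, and network depth shown here are the knobs that the recipes in this chapter tune.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

# Placeholder data: 1,000 samples with 10 features and binary labels
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=(1000, 1))

# Network depth and width are hyperparameters (varied in later recipes)
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=10))
model.add(Dense(1, activation='sigmoid'))

# The loss optimizer and learning rate are hyperparameters we vary later
model.compile(optimizer=Adam(lr=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Batch size and the number of epochs also affect model accuracy
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)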

All the code for this chapter is available at https://github.com/kishore-ayyadevara/Neural-Networks-with-Keras-Cookbook/blob/master/Neural_network_hyper_parameters.ipynb.
