Building a Deep Feedforward Neural Network

In this chapter, we will cover the following recipes:

  • Training a vanilla neural network
  • Scaling the input dataset
  • Impact on training when the majority of inputs are greater than zero
  • Impact of batch size on model accuracy
  • Building a deep neural network to improve network accuracy
  • Varying the learning rate to improve network accuracy
  • Varying the loss optimizer to improve network accuracy
  • Understanding the scenario of overfitting
  • Speeding up the training process using batch normalization

In the previous chapter, we looked at the basics of how a neural network functions. We also learned that there are various hyperparameters that impact the accuracy of a neural network. In this chapter, we will look in detail at how the various hyperparameters of a neural network affect its accuracy.
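To set the stage for the recipes that follow, here is a minimal sketch of a feedforward network in Keras. This is not the book's exact recipe: the dataset (MNIST), layer sizes, and hyperparameter values are illustrative assumptions, chosen only to show where the knobs covered in this chapter (input scaling, network depth, learning rate, optimizer, and batch size) appear in the code.

```python
# Minimal feedforward network sketch (assumes a TensorFlow backend and the
# built-in MNIST dataset; values are illustrative, not the book's recipe).
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0        # scaling the input dataset
y_train, y_test = to_categorical(y_train), to_categorical(y_test)

model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(512, activation='relu'),                        # depth/width of the network
    Dense(10, activation='softmax'),
])
model.compile(optimizer=Adam(learning_rate=1e-3),         # learning rate and optimizer
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train,
          epochs=5, batch_size=32,                        # batch size
          validation_data=(x_test, y_test))
```

Each recipe in this chapter varies one of these knobs in isolation and examines its effect on training speed and model accuracy.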

All the code for this chapter is available at https://github.com/kishore-ayyadevara/Neural-Networks-with-Keras-Cookbook/blob/master/Neural_network_hyper_parameters.ipynb.