Speeding up the training process using batch normalization
In the previous section on scaling the dataset, we learned that optimization is slow when the input data is not scaled (that is, when it is not between zero and one).
The hidden layer value could be high in the following scenarios:
- Input data values are high
- Weight values are high
- The product of the weights and inputs is high
Any of these scenarios can result in a large output value on the hidden layer.
Note that the hidden layer serves as the input to the output layer. Hence, the phenomenon of high input values resulting in slow optimization holds true when the hidden layer values are large as well.
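To make this concrete, the following is a minimal NumPy sketch (the specific numbers, and the choice of a sigmoid activation, are illustrative assumptions) showing how an unscaled input produces a large pre-activation, saturating the sigmoid and shrinking its gradient toward zero, which is what slows optimization:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative values: an unscaled input and a moderate weight.
x, w = 255.0, 0.5            # e.g., a raw pixel value and a weight
z = w * x                    # pre-activation = 127.5 (very large)
a = sigmoid(z)               # activation saturates at ~1.0
grad = a * (1.0 - a)         # sigmoid gradient, ~0 when saturated
print(z, a, grad)            # large z -> vanishing gradient -> slow training

# With the input scaled to [0, 1], the gradient stays usable.
z_scaled = w * (x / 255.0)
a_scaled = sigmoid(z_scaled)
print(z_scaled, a_scaled, a_scaled * (1.0 - a_scaled))
```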
Batch normalization comes to the rescue in this scenario. We have already learned that, when input values are high, we perform scaling to reduce them. We have also learned that scaling can be performed using a different method: subtracting the mean of the input and dividing it by the standard deviation of the input. Batch normalization applies this method of scaling to the hidden layer values, one batch at a time.
Typically, all values are scaled using the following formulas:

$$\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i$$

$$\sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2$$

$$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}$$

$$y_i = \gamma \hat{x}_i + \beta$$

Here, $\mu_B$ and $\sigma_B^2$ are the mean and variance of the current batch, $\epsilon$ is a small constant added for numerical stability, and $y_i$ is the normalized value after scaling by $\gamma$ and shifting by $\beta$.
Notice that γ and β are learned during training, along with the original parameters of the network.
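As a minimal sketch of how this looks in practice, the following adds Keras's BatchNormalization layer after a hidden layer; the surrounding architecture (10 input features, a binary target, the layer sizes) is an illustrative assumption, not taken from the recipe:

```python
# A small feed-forward network with batch normalization applied
# to the hidden-layer outputs.
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization

model = Sequential()
model.add(Dense(32, input_dim=10, activation='relu'))
# BatchNormalization standardizes the hidden-layer outputs per batch
# (subtract the batch mean, divide by the batch standard deviation),
# then applies the learned scale (gamma) and shift (beta).
model.add(BatchNormalization())
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
```

Because γ and β are trainable, the network can undo the normalization wherever that helps, so adding the layer does not restrict what the model can represent.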