- Neural Networks with Keras Cookbook
- V Kishore Ayyadevara
How to do it...
In code, batch normalization is applied as follows:
Note that we will be using the same data-preprocessing steps as those in steps 1 and 2 of the Scaling the input dataset recipe.
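As a reminder, that preprocessing amounts to flattening each image into a 784-dimensional vector, scaling pixel values to [0, 1], and one-hot encoding the labels. A minimal sketch with synthetic stand-in arrays (the shapes mirror MNIST-style data; the actual recipe loads the real dataset):

```python
import numpy as np

# Hypothetical stand-in for the image data used in the recipe:
# 10,000 images of 28x28 pixels with integer labels 0-9.
X_train_raw = np.random.randint(0, 256, size=(10000, 28, 28))
y_train_raw = np.random.randint(0, 10, size=(10000,))

# Step 1: flatten each image to a 784-dimensional vector and scale to [0, 1].
X_train = X_train_raw.reshape(-1, 784).astype('float32') / 255.0

# Step 2: one-hot encode the labels into 10 columns.
y_train = np.eye(10)[y_train_raw]

print(X_train.shape, y_train.shape)  # (10000, 784) (10000, 10)
```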
- Import the BatchNormalization layer as follows:
from keras.layers.normalization import BatchNormalization
# In newer Keras versions, use: from keras.layers import BatchNormalization
- Instantiate a model and build the same architecture as we built when using the regularization technique. The only addition is that we perform batch normalization in a hidden layer:
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l2

model = Sequential()
model.add(Dense(1000, input_dim=784, activation='relu', kernel_regularizer=l2(0.01)))
model.add(BatchNormalization())
model.add(Dense(10, activation='softmax', kernel_regularizer=l2(0.01)))
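Before compiling, it helps to see what the BatchNormalization layer actually does to a mini-batch of hidden activations. A hand-rolled numpy sketch of the training-time computation (gamma and beta are learnable in the real layer; the epsilon matches Keras' default):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-3):
    """Sketch of BatchNormalization at training time for one mini-batch
    (ignoring the running averages the layer keeps for inference)."""
    mu = x.mean(axis=0)                     # per-feature batch mean
    var = x.var(axis=0)                     # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # ~zero mean, unit variance
    return gamma * x_hat + beta             # learnable scale and shift

# Hidden activations with a drifting scale, as a hidden layer might produce.
batch = np.random.randn(1024, 1000) * 5 + 3
out = batch_norm(batch)
print(out.mean(), out.std())  # close to 0 and 1
```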
- Build, compile, and fit the model as follows:
from keras.optimizers import Adam
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=100, batch_size=1024, verbose=1)
The preceding code trains much faster than a model without batch normalization, as the following graphs show.

[Graphs: training and test loss and accuracy with regularization only (no batch normalization)]

The previous graphs show the training and test loss and accuracy with regularization only. The following graphs show the same metrics with both regularization and batch normalization:

[Graphs: training and test loss and accuracy with both regularization and batch normalization]
Note that, in the preceding two scenarios, training converges much faster and reaches higher accuracy when we perform batch normalization (test dataset accuracy of ~97%) than when we don't (test dataset accuracy of ~91%).
Thus, batch normalization results in considerably quicker training.
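One intuition for this speed-up can be shown in a toy numpy sketch (not the book's code): without normalization, the scale of activations drifts as they pass through successive layers, which forces small learning rates; normalizing after each layer keeps activations in a stable range.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, depth, normalize):
    """Pass activations through `depth` random ReLU layers, optionally
    normalizing after each layer (as BatchNormalization would)."""
    for _ in range(depth):
        W = rng.normal(0, 1, size=(x.shape[1], x.shape[1]))
        x = np.maximum(0, x @ W)  # linear layer + ReLU
        if normalize:
            x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-3)
    return x

x0 = rng.normal(size=(256, 100))
plain = forward(x0, depth=5, normalize=False)
normed = forward(x0, depth=5, normalize=True)
print(plain.std(), normed.std())  # unnormalized scale explodes; normalized stays near 1
```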