- Neural Networks with Keras Cookbook
- V Kishore Ayyadevara
How to do it
L1/L2 regularization is implemented in Keras, as follows:
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l2
model = Sequential()
model.add(Dense(1000, input_dim=784, activation='relu', kernel_regularizer=l2(0.1)))
model.add(Dense(10, activation='softmax', kernel_regularizer=l2(0.1)))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=500, batch_size=1024, verbose=1)
Note that the preceding code passes an additional argument, kernel_regularizer, which specifies whether L1 or L2 regularization is applied to the layer's weights, along with the lambda value that controls how strongly the regularization term is weighted in the loss.
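The choice between the two penalties is made through the regularizer object passed to kernel_regularizer. A minimal sketch of the options is shown below; the lambda value of 0.1 is carried over from the recipe, and l1, l2, and l1_l2 are the standard helpers in keras.regularizers:
from keras.regularizers import l1, l2, l1_l2

reg_l1 = l1(0.1)                    # L1 penalty: sum of absolute weight values
reg_l2 = l2(0.1)                    # L2 penalty: sum of squared weight values
reg_both = l1_l2(l1=0.01, l2=0.01)  # combined L1 and L2 penalties

# For example, the first layer above could instead be declared with an L1 penalty:
# Dense(1000, input_dim=784, activation='relu', kernel_regularizer=reg_l1)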
We notice that, with regularization, the training accuracy no longer reaches ~100%, while the test accuracy is around 98%. The histogram of weights after L2 regularization is visualized in the next graph.
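The per-epoch accuracies behind this observation can be read off the history object returned by fit. This is a minimal sketch, assuming the fit call above has completed; the keys 'acc' and 'val_acc' correspond to the Keras version used in this book, while newer versions name them 'accuracy' and 'val_accuracy':
train_acc = history.history['acc']      # training accuracy per epoch
val_acc = history.history['val_acc']    # test (validation) accuracy per epoch
print('final training accuracy:', train_acc[-1])
print('final test accuracy:', val_acc[-1])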
The weights of the first layer (the kernel connecting the input to the hidden layer, returned by get_weights()[0]) are extracted as follows:
model.get_weights()[0].flatten()
Once the weights are extracted, they are plotted as follows:
import matplotlib.pyplot as plt
plt.hist(model.get_weights()[0].flatten())

We notice that the majority of weights are now much closer to zero than in the previous scenario, which helps avoid the overfitting issue. We would see a similar trend in the case of L1 regularization.
Notice that the weight values with regularization are much lower than the weight values without regularization.
Thus, L1 and L2 regularization help us avoid overfitting to the training dataset.
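To make the comparison concrete, the two weight histograms can be overlaid. This is a minimal sketch; model_no_reg is a hypothetical second model assumed to have been built and trained identically but without kernel_regularizer:
import matplotlib.pyplot as plt

w_reg = model.get_weights()[0].flatten()             # first-layer weights with L2 regularization
w_no_reg = model_no_reg.get_weights()[0].flatten()   # hypothetical baseline without regularization

plt.hist(w_no_reg, bins=50, alpha=0.5, label='without regularization')
plt.hist(w_reg, bins=50, alpha=0.5, label='with L2 regularization')
plt.xlabel('weight value')
plt.ylabel('count')
plt.legend()
plt.show()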