Neural Networks with Keras Cookbook
V Kishore Ayyadevara
Getting ready
To understand the impact of varying the optimizer on network accuracy, let's contrast the scenario laid out in the previous sections (which used the Adam optimizer) with a stochastic gradient descent optimizer in this section, while reusing the same scaled MNIST training and test datasets (the same data-preprocessing steps as step 1 and step 2 of the Scaling the dataset recipe).
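As a quick reminder, those preprocessing steps might look like the following sketch (an illustrative reconstruction rather than the book's exact code; it assumes keras.datasets.mnist and keras.utils.np_utils):

from keras.datasets import mnist
from keras.utils import np_utils

# Load MNIST and flatten each 28 x 28 image into a 784-dimensional vector
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 784).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 784).astype('float32')

# Scale the pixel values to the range [0, 1]
X_train /= 255.
X_test /= 255.

# One-hot encode the labels for use with categorical cross-entropy
y_train = np_utils.to_categorical(y_train, 10)
y_test = np_utils.to_categorical(y_test, 10)

With the scaled data in place, we define the same network as before, but compile it with the 'sgd' optimizer instead of 'adam':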
from keras.models import Sequential
from keras.layers import Dense

# Same architecture as before: one hidden layer with 1,000 units
model = Sequential()
model.add(Dense(1000, input_dim=784, activation='relu'))
model.add(Dense(10, activation='softmax'))
# The only change is the optimizer: 'sgd' instead of 'adam'
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=100, batch_size=32, verbose=1)
Note that when we use the stochastic gradient descent optimizer in the preceding code, the final accuracy after 100 epochs is ~98% (the code used to generate the plots in the following diagram is the same as the code we used in step 8 of the Training a vanilla neural network recipe):

[Plot: training and validation accuracy and loss over 100 epochs with the SGD optimizer]
However, we should also note that this model reached high accuracy levels much more slowly than the model that used the Adam optimizer.
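For reference, the plotting code mentioned above can be sketched as follows (an illustrative version, assuming matplotlib and the history object returned by model.fit; older standalone Keras stores the metrics under 'acc'/'val_acc', whereas newer versions use 'accuracy'/'val_accuracy'):

import matplotlib.pyplot as plt

# Per-epoch metrics recorded by model.fit
epochs = range(1, len(history.history['acc']) + 1)

plt.figure(figsize=(10, 4))

# Training and validation loss over epochs
plt.subplot(1, 2, 1)
plt.plot(epochs, history.history['loss'], label='Training loss')
plt.plot(epochs, history.history['val_loss'], label='Validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

# Training and validation accuracy over epochs
plt.subplot(1, 2, 2)
plt.plot(epochs, history.history['acc'], label='Training accuracy')
plt.plot(epochs, history.history['val_acc'], label='Validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.show()

Comparing these curves with the ones produced when the optimizer was 'adam' makes the slower convergence of SGD apparent.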