- Neural Networks with Keras Cookbook
- V Kishore Ayyadevara
- 167字
- 2021-07-02 12:46:28
Getting ready
To understand the impact of the choice of optimizer on network accuracy, let's contrast the scenario laid out in the previous sections (which used the Adam optimizer) with a stochastic gradient descent optimizer in this section, reusing the same scaled MNIST training and test datasets (the same data-preprocessing steps as step 1 and step 2 of the Scaling the dataset recipe):
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(1000, input_dim=784, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=100, batch_size=32, verbose=1)
Note that when we use the stochastic gradient descent optimizer in the preceding code, the final accuracy after 100 epochs is ~98% (the code to generate the plots in the following diagram remains the same as the code we used in step 8 of the Training a vanilla neural network recipe):

However, we should also note that the model reached high accuracy levels much more slowly than the model that used the Adam optimizer.
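To see why the two optimizers converge at different speeds, it helps to compare their update rules directly. The following is a minimal sketch (not the book's code) of plain SGD and Adam applied to a toy one-parameter loss with a deliberately small gradient scale; the function names, learning rate, and loss are illustrative assumptions. Adam rescales each step by its running moment estimates, so it keeps making progress even when raw gradients are tiny, which is one reason it typically converges faster than vanilla SGD at the same learning rate:

```python
import math

# Toy loss L(w) = 0.5 * 1e-4 * w**2, so grad(w) = 1e-4 * w.
# The small gradient scale mimics a poorly scaled loss surface.

def sgd_steps(w0, lr=0.01, n_steps=200):
    """Plain SGD: w <- w - lr * grad. Step size shrinks with the gradient."""
    w = w0
    for _ in range(n_steps):
        grad = 1e-4 * w
        w = w - lr * grad
    return w

def adam_steps(w0, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, n_steps=200):
    """Adam: step scaled by bias-corrected first/second moment estimates."""
    w, m, v = w0, 0.0, 0.0
    for t in range(1, n_steps + 1):
        grad = 1e-4 * w
        m = beta1 * m + (1 - beta1) * grad        # running mean of gradients
        v = beta2 * v + (1 - beta2) * grad ** 2   # running mean of squared gradients
        m_hat = m / (1 - beta1 ** t)              # bias correction
        v_hat = v / (1 - beta2 ** t)
        w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w

w_sgd = sgd_steps(5.0)    # barely moves: step is lr * 1e-4 * w per iteration
w_adam = adam_steps(5.0)  # moves roughly lr per iteration, far closer to the minimum
```

Because Adam normalizes the step by the gradient's typical magnitude, its effective step here is close to the full learning rate, while SGD's step is throttled by the tiny raw gradient. On MNIST the situation is the reverse of a single toy parameter, but the mechanism is the same: SGD's progress is tied directly to raw gradient magnitudes, which is consistent with the slower rise in accuracy observed above.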