Neural Networks with Keras Cookbook
V Kishore Ayyadevara
Getting ready
To understand the impact of varying the optimizer on network accuracy, let's contrast the scenario laid out in the previous sections (which used the Adam optimizer) with a stochastic gradient descent optimizer in this section, while reusing the same scaled MNIST training and test datasets (the same data-preprocessing steps as step 1 and step 2 of the Scaling the dataset recipe).
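Those two preprocessing steps are not repeated here; as a minimal sketch (assuming the standard keras.datasets.mnist loader and keras.utils.np_utils, which may differ from the exact code in the earlier recipe), they amount to the following:

from keras.datasets import mnist
from keras.utils import np_utils

# Step 1: load MNIST, flatten each 28 x 28 image into a 784-dimensional
# vector, and one-hot encode the 10 digit classes
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 784).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 784).astype('float32')
y_train = np_utils.to_categorical(y_train, 10)
y_test = np_utils.to_categorical(y_test, 10)

# Step 2: scale pixel values from [0, 255] to [0, 1]
X_train = X_train / 255.
X_test = X_test / 255.

With the data in place, the network is defined, compiled with the sgd optimizer, and trained: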
from keras.models import Sequential
from keras.layers import Dense

# Build the same architecture as before, but compile with SGD in place of Adam
model = Sequential()
model.add(Dense(1000, input_dim=784, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
# Train for 100 epochs, validating on the test set after each epoch
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=100, batch_size=32, verbose=1)
Note that when we use the stochastic gradient descent optimizer in the preceding code, the final accuracy after 100 epochs is ~98% (the code to generate the plots in the following diagram remains the same as the code we used in step 8 of the Training a vanilla neural network recipe):
[Diagram: training and test accuracy and loss over 100 epochs with the SGD optimizer]
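For reference, a minimal sketch of that plotting code, assuming matplotlib and the history object returned by fit above (note that the history keys are 'acc'/'val_acc' in older Keras releases and 'accuracy'/'val_accuracy' in newer ones):

import matplotlib.pyplot as plt

# Plot training versus validation accuracy across epochs
epochs = range(1, len(history.history['acc']) + 1)
plt.plot(epochs, history.history['acc'], label='Training accuracy')
plt.plot(epochs, history.history['val_acc'], label='Validation accuracy')
plt.title('Accuracy with the SGD optimizer')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()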
However, we should also note that the model reached those high accuracy levels much more slowly than the model that used the Adam optimizer.
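A likely reason for the slower convergence is that plain SGD applies a single fixed learning rate to every parameter, whereas Adam adapts per-parameter step sizes from running estimates of the gradient moments. As a sketch (not part of the recipe), the optimizer's hyperparameters can be tuned by passing an optimizer object instead of the 'sgd' string when compiling:

from keras.optimizers import SGD

# The string 'sgd' is equivalent to SGD with its default learning rate (0.01);
# passing the object explicitly lets you raise it or add momentum
model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.01, momentum=0.9), metrics=['accuracy'])

Adding momentum in this way often narrows the convergence gap with Adam, though the best values are problem-dependent.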