- Neural Networks with Keras Cookbook
- V Kishore Ayyadevara
How to do it...
In the previous recipe, we built a model with a batch size of 32. In this recipe, we will train the same model with a much larger batch size, so that we can contrast how a low batch size and a high batch size behave over the same number of epochs:
- Preprocess the dataset and fit the model as follows:
# import the dataset, model/layer classes, and the np_utils helper
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils

# load MNIST and flatten each 28 x 28 image into a 784-dimensional vector
(X_train, y_train), (X_test, y_test) = mnist.load_data()
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
# scale pixel values to the range [0, 1]
X_train = X_train/255
X_test = X_test/255
# one-hot encode the labels
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
# define a network with one hidden layer of 1,000 units
model = Sequential()
model.add(Dense(1000, input_dim=784, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit with a batch size of 30,000 instead of 32
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=30000, verbose=1)
Note that the only change in the code is the batch_size parameter passed to model.fit.
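For comparison, the following is a minimal sketch of the corresponding fit from the previous recipe, assuming the same preprocessing and a freshly compiled model; only batch_size differs, and history_small_batch is a hypothetical name used here for illustration:

# Hypothetical baseline fit with the small batch size from the previous recipe;
# only batch_size differs from the call above (32 instead of 30,000)
history_small_batch = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=32, verbose=1)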
- Plot the training and test accuracy and loss values over different epochs (the code to generate the following plots remains the same as the code we used in step 8 of the Training a vanilla neural network recipe):
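The plotting code itself is not repeated in this recipe; the following is a minimal sketch of what it might look like, assuming matplotlib and the history object returned by model.fit (older Keras versions store accuracy under 'acc'/'val_acc', newer ones under 'accuracy'/'val_accuracy', so the sketch checks for both):

import matplotlib.pyplot as plt

# A sketch of the plotting step, assuming matplotlib and the `history` object
# returned by model.fit above
acc_key = 'acc' if 'acc' in history.history else 'accuracy'
epochs = range(1, len(history.history['loss']) + 1)

plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs, history.history[acc_key], label='Training accuracy')
plt.plot(epochs, history.history['val_' + acc_key], label='Test accuracy')
plt.title('Accuracy over epochs')
plt.xlabel('Epochs')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(epochs, history.history['loss'], label='Training loss')
plt.plot(epochs, history.history['val_loss'], label='Test loss')
plt.title('Loss over epochs')
plt.xlabel('Epochs')
plt.legend()

plt.show()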

In the preceding scenario, you should notice that the model reaches ~98% accuracy at a much later epoch than it did when the batch size was smaller (32 in the previous recipe).
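To confirm this observation numerically, one could inspect the history object directly; the following is a minimal sketch under the same accuracy-key assumption as above:

# A sketch for locating the first epoch at which test (validation) accuracy
# crosses 98%; assumes the `history` object from the fit above
acc_key = 'val_acc' if 'val_acc' in history.history else 'val_accuracy'
val_acc = history.history[acc_key]
first_epoch = next((i + 1 for i, acc in enumerate(val_acc) if acc >= 0.98), None)
print('First epoch with >= 98% test accuracy:', first_epoch)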