
How to do it...

In the previous recipe, we built a model with a batch size of 32. In this recipe, we will train the same model with a much higher batch size for the same number of epochs, so that we can contrast the low and high batch size scenarios:

  1. Preprocess the dataset and fit the model as follows:
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils

# Load MNIST and flatten each 28x28 image into a 784-element vector
(X_train, y_train), (X_test, y_test) = mnist.load_data()
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')

# Scale pixel values to [0, 1] and one-hot encode the labels
X_train = X_train/255
X_test = X_test/255
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]

# Define and compile the model
model = Sequential()
model.add(Dense(1000, input_dim=784, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit with a large batch size of 30,000
history = model.fit(X_train, y_train, validation_data=(X_test, y_test),
                    epochs=10, batch_size=30000, verbose=1)

Note that the only change in the code is the batch_size parameter passed to model.fit; for comparison, the equivalent low-batch-size call is sketched below.
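A minimal sketch of the low-batch-size call from the previous recipe is shown below; the variable name history_small_batch is ours, and in practice you would re-initialize the model before running this comparison:

# Hypothetical comparison run using the small batch size from the previous
# recipe; re-initialize the model first so both runs start from scratch.
history_small_batch = model.fit(X_train, y_train,
                                validation_data=(X_test, y_test),
                                epochs=10, batch_size=32, verbose=1)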

  2. Plot the training and test accuracy and loss values over different epochs (the code to generate these plots remains the same as the code we used in step 8 of the Training a vanilla neural network recipe); a sketch of such plotting code follows this list:

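The original plots are not reproduced here, but the following is a minimal sketch of plotting code of the kind referred to above, assuming the older Keras history keys 'acc'/'val_acc' (newer versions use 'accuracy'/'val_accuracy'):

import matplotlib.pyplot as plt

# Epoch indices for the x-axis, taken from the recorded training history
epochs = range(1, len(history.history['loss']) + 1)

plt.figure(figsize=(10, 4))

# Training vs. test loss over epochs
plt.subplot(1, 2, 1)
plt.plot(epochs, history.history['loss'], label='Training loss')
plt.plot(epochs, history.history['val_loss'], label='Test loss')
plt.title('Loss over epochs')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

# Training vs. test accuracy over epochs
plt.subplot(1, 2, 2)
plt.plot(epochs, history.history['acc'], label='Training accuracy')
plt.plot(epochs, history.history['val_acc'], label='Test accuracy')
plt.title('Accuracy over epochs')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.show()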
In the preceding scenario, you should notice that the model reaches ~98% accuracy at a much later epoch than it did when the batch size was smaller.
