
How to do it...

In the previous recipe, we built a model with a batch size of 32. In this recipe, we will implement the same model with a much larger batch size so that we can contrast the behavior of a low batch size against a high batch size over the same number of epochs:

  1. Preprocess the dataset and fit the model as follows:
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils

# Load MNIST and flatten each 28 x 28 image into a 784-element vector
(X_train, y_train), (X_test, y_test) = mnist.load_data()
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')

# Scale pixel values to the [0, 1] range
X_train = X_train/255
X_test = X_test/255

# One-hot encode the labels
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]

# Build and compile a simple two-layer network
model = Sequential()
model.add(Dense(1000, input_dim=784, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit with a batch size of 30,000 instead of 32
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=30000, verbose=1)

Note that the only change from the previous recipe is the batch_size parameter passed to model.fit.

  2. Plot the training and test accuracy and loss values over different epochs (the code to generate the plots remains the same as the code we used in step 8 of the Training a vanilla neural network recipe; a sketch of it follows):
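That plotting code is not reproduced in this recipe, but as a reminder, a minimal sketch of it, assuming the history object returned by model.fit above, might look like this (note that older Keras versions record the metric keys as 'acc'/'val_acc', while newer ones use 'accuracy'/'val_accuracy'):
import matplotlib.pyplot as plt

# history.history holds the per-epoch metrics recorded by model.fit
epochs = range(1, len(history.history['acc']) + 1)

plt.figure(figsize=(10, 4))

plt.subplot(1, 2, 1)
plt.plot(epochs, history.history['acc'], label='Train accuracy')
plt.plot(epochs, history.history['val_acc'], label='Test accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(epochs, history.history['loss'], label='Train loss')
plt.plot(epochs, history.history['val_loss'], label='Test loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()

plt.tight_layout()
plt.show()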

In the preceding scenario, you should notice that the model takes many more epochs to reach ~98% accuracy than it did with the smaller batch size. This is because a batch size of 30,000 gives only two weight updates per epoch on the 60,000-image training set, compared with roughly 1,875 updates per epoch at a batch size of 32.
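To make the contrast concrete, the following sketch trains the same architecture twice, once per batch size, and prints the validation accuracy epoch by epoch. The helper fit_with_batch_size is our own illustrative name, not part of the recipe, and the code assumes the preprocessed arrays from step 1 are still in scope:
from keras.models import Sequential
from keras.layers import Dense

def fit_with_batch_size(batch_size):
    # Rebuild the same architecture so each run starts from fresh weights
    model = Sequential()
    model.add(Dense(1000, input_dim=784, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model.fit(X_train, y_train,
                     validation_data=(X_test, y_test),
                     epochs=10, batch_size=batch_size, verbose=0)

small = fit_with_batch_size(32)      # ~1,875 weight updates per epoch
large = fit_with_batch_size(30000)   # 2 weight updates per epoch

# Compare validation accuracy epoch by epoch
# (the key may be 'val_accuracy' on newer Keras versions)
for epoch, (s, l) in enumerate(zip(small.history['val_acc'],
                                   large.history['val_acc']), 1):
    print('epoch %2d  batch=32: %.4f  batch=30000: %.4f' % (epoch, s, l))
The larger batch size converges more slowly per epoch simply because the optimizer sees far fewer gradient updates in the same number of passes over the data.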
