How to do it...

In the previous recipe, we built a model with a batch size of 32. In this recipe, we will train the same model with a much larger batch size, so that we can contrast the behavior of a low batch size and a high batch size over the same number of epochs:

  1. Preprocess the dataset and fit the model as follows:
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils

# Load the MNIST dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Flatten each 28 x 28 image into a 784-dimensional vector
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
# Scale pixel values to the range [0, 1]
X_train = X_train/255
X_test = X_test/255
# One-hot encode the labels
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
# Define and compile the model
model = Sequential()
model.add(Dense(1000, input_dim=784, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model with a large batch size of 30,000
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=30000, verbose=1)

Note that the only change from the previous recipe is the batch_size parameter in the model.fit call.
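For reference, the small-batch counterpart is a one-argument change. The following is a minimal sketch (the variable name history_small_batch is ours, used here for illustration); note that we rebuild and recompile the model first, since calling fit again on the same model would otherwise continue from the already-trained weights:

# Rebuild and recompile so the comparison starts from fresh weights
model = Sequential()
model.add(Dense(1000, input_dim=784, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# Identical fit call, except for the smaller batch size
history_small_batch = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=32, verbose=1)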

  2. Plot the training and test accuracy and loss values over different epochs (the code to generate the following plots remains the same as the code we used in step 8 of the Training a vanilla neural network recipe):
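If you need to reproduce those plots, the following is a minimal sketch of what that code looks like, assuming the history object returned by fit in the previous step. Depending on your Keras version, the accuracy keys in history.history are 'acc'/'val_acc' or 'accuracy'/'val_accuracy':

import matplotlib.pyplot as plt

epochs = range(1, len(history.history['loss']) + 1)
# Loss values over epochs
plt.subplot(1, 2, 1)
plt.plot(epochs, history.history['loss'], label='Training loss')
plt.plot(epochs, history.history['val_loss'], label='Test loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
# Accuracy values over epochs ('acc'/'val_acc' in older Keras versions)
plt.subplot(1, 2, 2)
plt.plot(epochs, history.history['acc'], label='Training accuracy')
plt.plot(epochs, history.history['val_acc'], label='Test accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()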

In the preceding scenario, you should notice that the model takes many more epochs to reach ~98% accuracy than it did when the batch size was smaller. This is because, with a batch size of 30,000, each epoch performs only two weight updates (60,000 training samples / 30,000 per batch), compared with 1,875 updates per epoch at a batch size of 32, so the weights are updated far less often over the same number of epochs.
