- Neural Networks with Keras Cookbook
- V Kishore Ayyadevara
How to do it...
- Load and scale the input dataset:
from keras.datasets import mnist
from keras.utils import np_utils

# Load the MNIST train and test splits
(X_train, y_train), (X_test, y_test) = mnist.load_data()
num_pixels = X_train.shape[1] * X_train.shape[2]
# Flatten each 28 x 28 image into a 784-element vector
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
# Scale pixel values from [0, 255] to [0, 1]
X_train = X_train/255
X_test = X_test/255
# One-hot encode the labels
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
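As a quick sanity check (not part of the original recipe), we can confirm the shapes that the preceding preprocessing should produce:
print(X_train.shape, X_test.shape, num_classes)
# Expected: (60000, 784) (10000, 784) 10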
- Let's look at the distribution of the input values:
X_train.flatten()
The preceding code flattens all the inputs into a single array, which therefore has a shape of (47,040,000,), that is, X_train.shape[0] x 28 x 28. Let's plot the distribution of all the input values:
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(X_train.flatten())
plt.grid(False)
plt.title('Histogram of input values')
plt.xlabel('Input values')
plt.ylabel('Frequency of input values')

We notice that the majority of the inputs are zero (note that all the input images have a black background; hence, a majority of the values are zero, which is the pixel value of the color black).
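We can verify this claim directly by computing the fraction of input values that are exactly zero; this is a quick illustrative check, not part of the original recipe:
import numpy as np

# Fraction of all input values that are exactly zero;
# for MNIST this is well above 0.5 (roughly 0.8)
print(np.mean(X_train == 0))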
- In this section, let's explore a scenario where we invert the colors, so that the background is white and the digits are drawn in black, using the following code:
X_train = 1 - X_train
X_test = 1 - X_test
Let's plot the first four images:
# Display the first four inverted images in a 2 x 2 grid
for i in range(4):
    plt.subplot(2, 2, i + 1)
    plt.imshow(X_train[i].reshape(28, 28), cmap=plt.get_cmap('gray'))
    plt.grid(False)
plt.show()
They will look as follows:

The histogram of the resulting images now looks as follows:

You should notice that the majority of the input values now have a value of one.
- Let's go ahead and build our model using the same model architecture that we built in the Scaling input dataset section:
from keras.models import Sequential
from keras.layers import Dense

# A 1,000-unit hidden layer followed by a 10-class softmax output
model = Sequential()
model.add(Dense(1000, input_dim=784, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=32, verbose=1)
- Plot the training and test accuracy and loss values over different epochs (the code to generate the following plots remains the same as the one we used in step 8 of the Training a vanilla neural network recipe; a sketch of that code is given below):
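For reference, the following is a minimal sketch of that plotting code, assuming history is the object returned by model.fit above; note that the history dictionary keys depend on the Keras version ('acc'/'val_acc' in older releases, 'accuracy'/'val_accuracy' in newer ones):
# Pick the accuracy key name that matches the installed Keras version
acc_key = 'acc' if 'acc' in history.history else 'accuracy'

plt.subplot(211)
plt.plot(history.history[acc_key], label='train accuracy')
plt.plot(history.history['val_' + acc_key], label='test accuracy')
plt.ylabel('Accuracy')
plt.legend()
plt.subplot(212)
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='test loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()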

We should note that model accuracy has now fallen to ~97%, compared to ~98% when the same model was run for the same number of epochs and batch size on the original dataset, where the majority of values are zero (and not one). Additionally, the model reached 97% accuracy considerably more slowly than in the scenario where the majority of the input pixels are zero.
The intuition for the decrease in accuracy when the majority of the data points are non-zero is as follows: when the majority of pixels are zero, the model's task is easier (fewer weights have to be fine-tuned), as it has to make predictions based on only a few pixel values (the minority with a pixel value greater than zero). When the majority of the data points are non-zero, however, a larger number of weights need to be fine-tuned to make predictions, as illustrated by the sketch below.
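One rough way to see this effect is to count how many of the 784 inputs are active (non-zero) per image in each version of the dataset: every active input contributes to all 1,000 hidden units, so more active inputs mean more first-layer weights participating in every update. The names X_original and X_inverted below are illustrative helpers, not variables from the recipe:
import numpy as np

# X_train was inverted earlier in this recipe; undo the
# inversion to recover the original images for comparison
X_inverted = X_train
X_original = 1 - X_train

# Average number of non-zero inputs per image in each version
print('active inputs (original):', (X_original > 0).sum(axis=1).mean())
print('active inputs (inverted):', (X_inverted > 0).sum(axis=1).mean())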