- Neural Networks with Keras Cookbook
- V Kishore Ayyadevara
How to do it...
- Import the relevant dataset (refer to the Predicting house price.ipynb file on GitHub while implementing the code and for the recommended dataset):
from keras.datasets import boston_housing
(train_data, train_targets), (test_data, test_targets) = boston_housing.load_data()
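As a quick sanity check, you can inspect the shapes of the arrays; keras.datasets.boston_housing typically returns 404 training and 102 test samples, each with 13 input features:
print(train_data.shape, train_targets.shape) # expected to be (404, 13) and (404,)
print(test_data.shape, test_targets.shape) # expected to be (102, 13) and (102,)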
- Normalize the input and output datasets so that all the variables lie between zero and one (the Boston housing features and targets are non-negative, so dividing by the training-set maximum of each column is sufficient):
import numpy as np
train_data2 = train_data/np.max(train_data,axis=0)
test_data2 = test_data/np.max(train_data,axis=0)
max_target = np.max(train_targets)
train_targets = train_targets/max_target
test_targets = test_targets/max_target
Note that we normalize the test dataset with the maximum values computed on the training dataset, as we should not use any information from the test dataset in the model-building process. Additionally, note that we normalize both the input and the output values, and that the training-target maximum (max_target) is stored before scaling so that the targets, and later the error, can be converted back to the original units.
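The principle of fitting the scaling parameters on the training set only can also be expressed with scikit-learn's MinMaxScaler (a sketch only, assuming scikit-learn is installed; it is not used elsewhere in this recipe, and it additionally subtracts the column minimum, so its output differs slightly from the simple divide-by-maximum above):
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
train_data_scaled = scaler.fit_transform(train_data) # scaling parameters computed from the training data only
test_data_scaled = scaler.transform(test_data) # the same parameters reused for the test data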
- Now that the input and output datasets are prepared, let's proceed and define the model:
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l1
model = Sequential()
model.add(Dense(64, input_dim=13, activation='relu', kernel_regularizer = l1(0.1)))
model.add(Dense(1, activation='relu', kernel_regularizer = l1(0.1)))
model.summary()
A summary of the model is as follows:
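Assuming the architecture defined above, the summary should report 896 parameters for the first layer (13 × 64 weights plus 64 biases), 65 for the output layer (64 weights plus 1 bias), and 961 trainable parameters in total; the layer names and exact formatting vary with the Keras version:
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 64)                896
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 65
=================================================================
Total params: 961
Trainable params: 961
Non-trainable params: 0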

Note that we performed L1 regularization in the model-building process so that the model does not overfit on the training data (as the number of data points in the training data is small).
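For intuition, the l1(0.1) regularizer adds 0.1 times the sum of the absolute kernel weights of each layer to the loss that is minimized during training. A minimal sketch of that penalty term, computed by hand from the model's weights purely for illustration, is:
import numpy as np
l1_penalty = sum(0.1 * np.abs(layer.get_weights()[0]).sum() for layer in model.layers)
print(l1_penalty) # this value is added to the data loss during training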
- Compile the model to minimize the mean absolute error value:
model.compile(loss='mean_absolute_error', optimizer='adam')
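For reference, the mean absolute error that Keras minimizes here is simply the average of the absolute differences between the predictions and the targets; a NumPy equivalent, for illustration only, is:
def mean_absolute_error(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))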
- Fit the model:
history = model.fit(train_data2, train_targets, validation_data=(test_data2, test_targets), epochs=100, batch_size=32, verbose=1)
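The history object returned by fit records the training and validation losses for every epoch; if you want to inspect convergence visually, a minimal sketch (assuming matplotlib is installed) is:
import matplotlib.pyplot as plt
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('mean absolute error (scaled)')
plt.legend()
plt.show()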
- Calculate the mean absolute error on the test dataset:
np.mean(np.abs(model.predict(test_data2)[:, 0] - test_targets)) * max_target
Note that model.predict returns a two-dimensional array, so we take its first column before subtracting the targets, and that multiplying by max_target (50, the maximum house price in the training data, expressed in thousands of dollars) converts the error back into the original units. The resulting mean absolute error is roughly 6.7, that is, about $6,700.
In the next section, we will vary the loss function and add custom weights to see whether we can improve upon the mean absolute error values.