- Neural Networks with Keras Cookbook
- V Kishore Ayyadevara
Defining the custom loss function
In the previous section, we used the predefined mean absolute error loss function to perform the optimization. In this section, we will learn how to define a custom loss function and use it for optimization.
The custom loss function that we will build is a modified mean squared error, where the error is the difference between the square root of the actual value and the square root of the predicted value.
The custom loss function is defined as follows:
import keras.backend as K

def loss_function(y_true, y_pred):
    # Squared difference of the square roots, averaged over the last axis;
    # this assumes the targets and predictions are non-negative
    return K.mean(K.square(K.sqrt(y_pred) - K.sqrt(y_true)), axis=-1)
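As a quick sanity check, the same formula can be mirrored in plain NumPy on hand-picked values (the numbers below are illustrative, not taken from the dataset):

```python
import numpy as np

def loss_function_np(y_true, y_pred):
    # NumPy mirror of the Keras custom loss: mean squared
    # difference of square roots (inputs must be non-negative)
    return np.mean(np.square(np.sqrt(y_pred) - np.sqrt(y_true)))

# For y_true = 4 and y_pred = 9: (sqrt(9) - sqrt(4))**2 = (3 - 2)**2 = 1
print(loss_function_np(np.array([4.0]), np.array([9.0])))  # → 1.0
```

Note that identical predictions and targets give a loss of zero, as expected.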
Now that we have defined the loss function, we will reuse the same input and output datasets that we prepared in the previous section, along with the same model that we defined earlier.
Now, let's compile the model:
model.compile(loss=loss_function, optimizer='adam')
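Since the model from the earlier recipe is not repeated here, the sketch below shows the end-to-end wiring with a hypothetical small network (the architecture, layer sizes, and the use of tensorflow.keras instead of standalone keras are assumptions for illustration; the relu output keeps predictions non-negative, which the square root in the loss requires):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import backend as K

def loss_function(y_true, y_pred):
    # Mean squared difference of square roots (assumes non-negative values)
    return K.mean(K.square(K.sqrt(y_pred) - K.sqrt(y_true)), axis=-1)

# Hypothetical stand-in for the model defined in the earlier recipe:
# a small fully connected network for 13 input features
model = keras.Sequential([
    keras.layers.Dense(32, activation='relu', input_shape=(13,)),
    keras.layers.Dense(1, activation='relu'),  # relu keeps outputs >= 0 for sqrt
])
model.compile(loss=loss_function, optimizer='adam')
```

Passing the function object itself (not a string) to `loss=` is all that is needed for Keras to use it during training.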
In the preceding code, note that we set the loss to the custom function we defined earlier, loss_function.
Next, let's fit the model:
history = model.fit(train_data2, train_targets,
                    validation_data=(test_data2, test_targets),
                    epochs=100, batch_size=32, verbose=1)
Once we fit the model, we can see that the mean absolute error is approximately 6.5 units, which is slightly lower than in the previous iteration, where we used the mean_absolute_error loss function.
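To see where such a figure comes from, the mean absolute error can be recomputed by hand from a model's predictions (the values below are made up for illustration, not actual model output):

```python
import numpy as np

# Hypothetical targets and predictions, to show how the reported
# mean absolute error is computed after training
y_true = np.array([22.0, 15.0, 30.0])
y_pred = np.array([20.0, 18.0, 25.0])

# Absolute errors are [2, 3, 5]; their mean is 10/3 ≈ 3.33
mae = np.mean(np.abs(y_pred - y_true))
```

Even though we optimized the square-root-based custom loss, the mean absolute error remains a useful metric for comparing runs across different loss functions.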