Defining the custom loss function
In the previous section, we used the predefined mean absolute error loss function to perform optimization. In this section, we will learn how to define a custom loss function and use it to perform optimization.
The custom loss function that we will build is a modified mean squared error, where the error is the difference between the square root of the actual value and the square root of the predicted value; that is, loss = (sqrt(y_true) - sqrt(y_pred))^2.
The custom loss function is defined as follows:
import keras.backend as K

def loss_function(y_true, y_pred):
    # Squared difference between the square roots of the actual and predicted values;
    # Keras reduces these element-wise values to a single scalar loss during training.
    return K.square(K.sqrt(y_pred) - K.sqrt(y_true))
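As a quick sanity check, we can evaluate the loss on a couple of hand-picked values using the Keras backend. The following is a minimal sketch; the sample values are purely illustrative and are not part of the dataset used in this recipe:
import numpy as np
# Illustrative values: sqrt(4)-sqrt(1) = 1 and sqrt(16)-sqrt(9) = 1,
# so both squared errors should equal 1.
sample_true = K.constant(np.array([[4.0], [9.0]]))
sample_pred = K.constant(np.array([[1.0], [16.0]]))
print(K.eval(loss_function(sample_true, sample_pred)))  # expected: [[1.], [1.]]
Note that because the loss takes square roots, it assumes non-negative targets and predictions; negative values would produce NaN values and break the gradient computation.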
Now that we have defined the loss function, we will reuse the same input and output datasets that we prepared in the previous section, along with the same model that we defined earlier.
Now, let's compile the model:
model.compile(loss=loss_function, optimizer='adam')
In the preceding code, note that the loss is set to the custom loss function we defined earlier, loss_function.
Next, let's fit the model:
history = model.fit(train_data2, train_targets, validation_data=(test_data2, test_targets), epochs=100, batch_size=32, verbose=1)
Once the model is fitted, we will note that the mean absolute error is ~6.5 units, which is slightly lower than in the previous iteration, where we used the mean_absolute_error loss function.
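Because the model is now compiled with the custom loss rather than mean_absolute_error, the training history reports the custom loss value; one way to check the mean absolute error ourselves is to compute it from the model's predictions. The following is a minimal sketch, assuming test_data2 and test_targets are the arrays prepared in the previous section:
import numpy as np
# Predict on the test inputs and compare against the actual targets.
test_pred = model.predict(test_data2)
# squeeze() aligns the shapes in case the predictions are returned as a column vector.
mae = np.mean(np.abs(np.squeeze(test_targets) - np.squeeze(test_pred)))
print(mae)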