- Neural Networks with Keras Cookbook
- V Kishore Ayyadevara
How to do it...
- To specify weights at the row level, we will modify our train and test datasets so that, after ordering the dataset chronologically, the first 2,100 data points fall in the train dataset and the rest fall in the test dataset:
X_train = x[:2100,:,:] # the first 2,100 (oldest) rows form the train dataset
y_train = y[:2100]
X_test = x[2100:,:,:] # the remaining (most recent) rows form the test dataset
y_test = y[2100:]
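As a quick sanity check (assuming x has shape (2256, 5, 1) and y is a one-dimensional array, as the later code implies), the resulting shapes are:

print(X_train.shape, y_train.shape) # (2100, 5, 1) (2100,)
print(X_test.shape, y_test.shape)   # (156, 5, 1) (156,)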
- A row of the input is given a higher weight if it occurred more recently, and a lower weight otherwise:
weights = np.arange(X_train.shape[0]).reshape((X_train.shape[0]),1)/2100
The preceding code assigns a lower weight to the earliest data points and a higher weight to the data points that occurred more recently.
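As a quick check (the printed values follow directly from the formula above), the weights rise linearly from 0 for the oldest row to just under 1 for the most recent row:

print(weights[0])    # [0.]         - oldest row, lowest weight
print(weights[1050]) # [0.5]        - a row halfway through
print(weights[-1])   # [0.99952381] - most recent row, highest weight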
Now that we have defined the weights for each row, we will include them in the custom loss function. Note that, in this case, our custom loss function takes the predicted and actual output values as well as the weight to be assigned to each row.
- The partial function enables us to pass more variables than just the actual and predicted values to the custom loss function:
import keras.backend as K
from functools import partial
- To pass the weights to the custom loss function, we will use partial to supply both custom_loss_4 and weights_tensor as parameters in step 7. In the code that follows, we define the custom_loss_4 function:
def custom_loss_4(y_true, y_pred, weights):
    # weighted squared error: rows with larger weights contribute more to the loss
    return K.square(K.abs(y_true - y_pred) * weights)
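As a minimal sketch (not part of the recipe), we can evaluate custom_loss_4 on small constant tensors to confirm that the same absolute error of 2 is penalized far more heavily for a row with weight 1.0 than for a row with weight 0.1:

y_true = K.constant([[3.0], [3.0]])
y_pred = K.constant([[1.0], [1.0]])
w = K.constant([[0.1], [1.0]])
print(K.eval(custom_loss_4(y_true, y_pred, w))) # [[0.04] [4.]]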
- Given that the model we are building takes two inputs, the input variables and the weight corresponding to each row, we will first define the input shapes of the two as follows:
from keras.layers import Input, Dense, Flatten
from keras.models import Model

input_layer = Input(shape=(5,1))
weights_tensor = Input(shape=(1,))
- Now that we have defined the inputs, let's initialize a model that accepts the two inputs; note that the output of the first Dense layer is flattened so that the model produces a single prediction per row:
inp1 = Dense(1000, activation='relu')(input_layer)
inp2 = Flatten()(inp1) # collapse the per-timestep outputs into a single vector per row
out = Dense(1, activation='linear')(inp2)
model = Model([input_layer, weights_tensor], out)
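A quick way to verify that both inputs are wired into the graph is to print the model summary:

model.summary() # should list the two Input layers feeding into the Dense layers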
- Now that we have initialized the model, we will define the loss function that it will minimize, as follows:
cl4 = partial(custom_loss_4, weights=weights_tensor)
In the preceding code, we specify that the custom_loss_4 function is to be minimized and that an additional variable (weights_tensor) is passed to it.
- Finally, before fitting the model, we also provide a weight for each row of the test dataset. Given that we are predicting these values, there is no point in weighting some test rows more than others, as the test dataset is never used to train the model. We specify these weights only so that we can make predictions with the model we defined (which accepts two inputs):
test_weights = np.ones((156,1)) # a uniform weight of 1 for each of the 156 test rows
- Once we have specified the weights of the test data, we will go ahead and fit the model as follows:
model = Model([input_layer, weights_tensor], out)
model.compile(optimizer='adam', loss=cl4)
model.fit(x=[X_train, weights], y=y_train, epochs=300, batch_size=32,
          validation_data=([X_test, test_weights], y_test))
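Once trained, the model still expects both inputs at inference time, so predictions on the test set look as follows (the uniform test_weights merely satisfy the second input):

pred = model.predict([X_test, test_weights])
print(pred.shape) # (156, 1)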
The preceding code results in a test-dataset loss that is very different from what we saw in the previous section. We will look at the reason for this in more detail in Chapter 11, Building a Recurrent Neural Network.