Neural Networks with Keras Cookbook
V Kishore Ayyadevara
How to do it...
- To assign weights at a row level, we will modify our train and test datasets so that, after ordering the dataset, the first 2,100 data points fall in the train dataset and the rest in the test dataset:
X_train = x[:2100,:,:]
y_train = y[:2100]
X_test = x[2100:,:,:]
y_test = y[2100:]
- A row in the input receives a higher weight if it occurred more recently and a lower weight otherwise:
import numpy as np
weights = np.arange(X_train.shape[0]).reshape((X_train.shape[0], 1)) / 2100
The preceding code block assigns a lower weight to the initial data points and a higher weight to the data points that occurred more recently.
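To make the ramp concrete, here is a quick check (simply reusing the weights array defined above) that prints the values at either end:
print(weights[:2].ravel())   # earliest rows: weights near 0
print(weights[-2:].ravel())  # most recent rows: weights near 1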
Now that we have defined the weight for each row, we will include the weights in the custom loss function. Note that in this case, our custom loss function receives not only the predicted and actual output values but also the weight that needs to be assigned to each row.
- The partial function enables us to pass more variables than just the actual and predicted values to the custom loss function:
import keras.backend as K
from functools import partial
- To pass the weights to the custom loss function, we will use partial in step 7 to bind custom_loss_4 and weights_tensor together. In the code that follows, we define the custom_loss_4 function:
def custom_loss_4(y_true, y_pred, weights):
    # weighted squared error: each row's error is scaled by its weight
    return K.square(K.abs(y_true - y_pred) * weights)
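As a quick sanity check (a minimal sketch with made-up values; y_true_s, y_pred_s, and w_s are hypothetical tensors, not part of the recipe), we can evaluate the loss on small constant tensors and confirm that rows with a higher weight contribute more to the loss:
y_true_s = K.constant([[1.0], [1.0]])  # made-up actuals
y_pred_s = K.constant([[0.0], [0.0]])  # made-up predictions
w_s = K.constant([[0.1], [1.0]])       # low weight vs. high weight
print(K.eval(custom_loss_4(y_true_s, y_pred_s, w_s)))  # approx. [[0.01], [1.0]]
Note that because the weight sits inside K.square, it is effectively applied quadratically to each row's error.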
- Given that the model we are building has two inputs, the input variables and the weights corresponding to each row, we will first define the shapes of the two inputs as follows:
from keras.layers import Input, Dense, Flatten
from keras.models import Model

input_layer = Input(shape=(5,1))
weights_tensor = Input(shape=(1,))
- Now that we have defined the inputs, let's initialize a model that accepts the two inputs as follows:
inp1 = Dense(1000, activation='relu')(input_layer)
inp2 = Flatten()(inp1)  # flatten so that the final layer yields one prediction per row
out = Dense(1, activation='linear')(inp2)
model = Model([input_layer, weights_tensor], out)
- Now that we have initialized the model, we will define the loss function as follows:
cl4 = partial(custom_loss_4, weights=weights_tensor)
In the preceding code, we specify that the function to be minimized is custom_loss_4 and that an additional variable (weights_tensor) is passed to the custom loss function.
- Finally, before fitting the model, we will also provide a weight for each row of the test dataset. Given that we are predicting these values, it makes no sense to weight some rows lower than others, as the test dataset is not used to train the model; we specify the weights only so that we can make predictions with the model we defined (which accepts two inputs):
test_weights = np.ones((156,1))
- Once we have specified the weights for the test data, we go ahead and fit the model as follows:
model.compile(optimizer='adam', loss=cl4)
model.fit(x=[X_train, weights], y=y_train, epochs=300, batch_size=32,
          validation_data=([X_test, test_weights], y_test))
The preceding code results in a test dataset loss that is very different from what we saw in the previous section. We will look at the reason for this in more detail in Chapter 11, Building a Recurrent Neural Network.
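Because the model accepts two inputs, predictions must also be made by passing both the features and the all-ones test weights; a minimal sketch, reusing the arrays defined above:
test_preds = model.predict([X_test, test_weights])
print(test_preds.shape)  # one prediction per test row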