
How to do it...

  1. To specify weightages at a row level, we will split the ordered dataset so that the first 2,100 data points form the train dataset and the rest form the test dataset:
X_train = x[:2100,:,:]
y_train = y[:2100]
X_test = x[2100:,:,:]
y_test = y[2100:]
  2. A row in the input should have a higher weight if it occurred more recently and a lower weight otherwise:
weights = np.arange(X_train.shape[0]).reshape((X_train.shape[0], 1))/2100

The preceding code block assigns a lower weightage to the initial data points and a higher weightage to the data points that occurred more recently.
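As a quick sanity check (a standalone sketch, not part of the recipe itself), we can verify that the weights ramp linearly from 0 for the oldest row to just under 1 for the most recent row:

```python
import numpy as np

# Linearly increasing weights: oldest row gets 0, newest row gets ~1
n_rows = 2100
weights = np.arange(n_rows).reshape((n_rows, 1)) / n_rows

print(weights[0])   # oldest row: weight 0
print(weights[-1])  # newest row: weight just below 1 (2099/2100)
```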

Now that we have defined the weights for each row, we will include them in the custom loss function. Note that, in this case, our custom loss function takes both the predicted and actual output values as well as the weight that needs to be assigned to each row.

  3. The partial method enables us to pass more variables than just the actual and predicted values to the custom loss function:
import keras.backend as K
from functools import partial
  4. To pass weights to the custom loss function, we will use the partial function to pass both custom_loss_4 and weights as parameters in step 7. In the code that follows, we define the custom_loss_4 function:
def custom_loss_4(y_true, y_pred, weights):
    return K.square(K.abs(y_true - y_pred) * weights)
  5. Given that the model we are building has two inputs, the input variables and the weights corresponding to each row, we will first define the input shapes of the two as follows:
from keras.layers import Input, Dense
from keras.models import Model

input_layer = Input(shape=(5,1))
weights_tensor = Input(shape=(1,))
  6. Now that we have defined the inputs, let's initialize a model that accepts the two inputs as follows:
inp1 = Dense(1000, activation='relu')(input_layer)
out = Dense(1, activation='linear')(inp1)
model = Model([input_layer, weights_tensor], out)
  7. Now that we have initialized the model, we will define the loss function as follows:
cl4 = partial(custom_loss_4, weights=weights_tensor)

In the preceding step, we specify that we need to minimize the custom_loss_4 function and that we are providing an additional variable (weights_tensor) to the custom loss function.
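The mechanics of partial can be illustrated outside Keras: binding the weights argument up front yields a two-argument function of the kind Keras expects for a loss. The following is a minimal NumPy-based sketch of the same weighted squared-error logic (the function names here are illustrative, not part of the recipe):

```python
import numpy as np
from functools import partial

# Same weighted squared-error logic as custom_loss_4, but in NumPy
def weighted_loss(y_true, y_pred, weights):
    return np.square(np.abs(y_true - y_pred) * weights)

# Bind weights up front; the result takes only (y_true, y_pred)
loss_fn = partial(weighted_loss, weights=np.array([0.5, 1.0]))

# Errors are 1 and 2; after weighting: 0.5 and 2; squared: 0.25 and 4
print(loss_fn(np.array([1.0, 2.0]), np.array([2.0, 4.0])))
```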

  8. Finally, before fitting the model, we will also provide weights for each row of the test dataset. Given that we are predicting these values, there is no point in weighting certain rows lower than others, as the test dataset is not shown to the model during training. However, we must still specify the weights in order to make a prediction with the model we defined (which accepts two inputs):
test_weights = np.ones((156,1))  # one equal weight per test row
  9. Once we have specified the weights of the test data, we will go ahead and compile and fit the model as follows:
model.compile(optimizer='adam', loss=cl4)
model.fit(x=[X_train, weights], y=y_train, epochs=300, batch_size=32, validation_data=([X_test, test_weights], y_test))

The preceding code results in a test dataset loss that is very different from what we saw in the previous section. We will look at the reason for this in more detail in Chapter 11, Building a Recurrent Neural Network.

You need to be extremely careful while implementing the preceding model, as it has a few pitfalls. In general, it is advisable to implement models that predict stock price movements only after sufficient due diligence.