
How to do it...

  1. To assign weights at a row level, we will modify our train and test datasets in such a way that the first 2,100 data points (after ordering the dataset) form the train dataset and the rest form the test dataset:
X_train = x[:2100,:,:]
y_train = y[:2100]
X_test = x[2100:,:,:]
y_test = y[2100:]
  2. A row in the input should have a higher weight if it occurred more recently, and a lower weight otherwise:
weights = np.arange(X_train.shape[0]).reshape((X_train.shape[0], 1))/2100

The preceding code block assigns a lower weight to the initial data points and a higher weight to the data points that occurred more recently.
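To see the ramp concretely, the following sketch applies the same formula with the train size (2,100) written as a constant: the weights rise linearly from 0 for the oldest row towards 1 for the newest.

```python
import numpy as np

# Same formula as above; 2100 is the number of training rows.
n_train = 2100
weights = np.arange(n_train).reshape((n_train, 1)) / n_train

print(weights[0, 0])   # oldest row -> 0.0
print(weights[-1, 0])  # newest row -> 2099/2100, just under 1
```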

Now that we have defined the weights for each row, we will include them in the custom loss function. Note that, in this case, our custom loss function takes both the predicted and actual output values, as well as the weight that needs to be assigned to each row.

  3. The partial method from functools enables us to pass more variables than just the actual and predicted values to the custom loss function:
import keras.backend as K
from functools import partial
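As a quick illustration (independent of Keras, with hypothetical numbers), partial fixes the extra argument so the resulting callable matches the two-argument (y_true, y_pred) signature that a Keras loss expects:

```python
from functools import partial

# A three-argument loss; the values below are purely for illustration.
def weighted_sq_error(y_true, y_pred, weights):
    return ((y_true - y_pred) * weights) ** 2

# Bind weights=2, so only (y_true, y_pred) remain to be supplied.
loss_fn = partial(weighted_sq_error, weights=2)

print(loss_fn(5, 3))  # ((5 - 3) * 2) ** 2 = 16
```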
  4. To pass weights to the custom loss function, we shall use the partial function to bind both custom_loss_4 and weights together in step 7. In the code that follows, we define the custom_loss_4 function:
def custom_loss_4(y_true, y_pred, weights):
    return K.square(K.abs(y_true - y_pred) * weights)
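Because K.square(K.abs(d) * w) equals d² · w², this loss scales each row's squared error by the square of its weight. A NumPy sketch (mirroring the backend ops, not the author's code) makes the effect visible: two rows with identical errors contribute very differently once weighted.

```python
import numpy as np

y_true = np.array([[1.0], [1.0]])
y_pred = np.array([[0.0], [0.0]])   # identical error of 1.0 on both rows
weights = np.array([[0.1], [0.9]])  # an old row versus a recent row

loss = np.square(np.abs(y_true - y_pred) * weights)
print(loss.ravel())  # the recent row contributes 81x more: [0.01 0.81]
```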
  5. Given that the model we are building has two inputs, the input variables and the weights corresponding to each row, we will first define the input shapes of the two as follows:
from keras.layers import Input, Dense
from keras.models import Model
input_layer = Input(shape=(5,1))
weights_tensor = Input(shape=(1,))
  6. Now that we have defined the inputs, let's initialize a model that accepts the two inputs as follows:
inp1 = Dense(1000, activation='relu')(input_layer)
out = Dense(1, activation='linear')(inp1)
model = Model([input_layer, weights_tensor], out)
  7. Now that we have initialized the model, we will define the custom loss to optimize as follows:
cl4 = partial(custom_loss_4, weights=weights_tensor)

In the preceding code, we specify that we need to minimize the custom_loss_4 function and that we provide an additional variable (weights_tensor) to the custom loss function.

  8. Finally, before fitting the model, we will also provide weights for each row of the test dataset. Given that we are predicting these values, there is no reason to weight some test rows lower than others, as the test weights never influence training. However, we must still supply them, because the model we defined accepts two inputs:
test_weights = np.ones((156,1))
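With uniform weights of 1, the custom loss collapses to the ordinary squared error, so every test row is treated equally. A small NumPy check (assuming the loss formula defined above, with hypothetical values):

```python
import numpy as np

y_true = np.array([[2.0], [5.0]])
y_pred = np.array([[1.0], [3.0]])
test_w = np.ones((2, 1))  # uniform weights, as used for the test set

weighted = np.square(np.abs(y_true - y_pred) * test_w)
plain = np.square(y_true - y_pred)
print(np.array_equal(weighted, plain))  # True
```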
  9. Once we have specified the weights of the test data, we will go ahead and fit the model as follows:
model = Model([input_layer, weights_tensor], out)
model.compile(optimizer='adam', loss=cl4)
model.fit(x=[X_train, weights], y=y_train, epochs=300, batch_size=32, validation_data=([X_test, test_weights], y_test))

The preceding code results in a test-dataset loss that is very different from what we saw in the previous section. We will look at the reason for this in more detail in Chapter 11, Building a Recurrent Neural Network.

You need to be extremely careful while implementing the preceding model, as it has a few pitfalls. In general, it is advisable to implement models that predict stock price movements only after sufficient due diligence.