Now that we've defined the input and output, we can take a look at the code for the network.
from keras.layers import Input, Dense
from keras.models import Model

def build_network(input_features=None):
    # input layer: one neuron per feature
    inputs = Input(shape=(input_features,), name="input")
    # single hidden layer of 32 ReLU units
    x = Dense(32, activation='relu', name="hidden")(inputs)
    # linear output, suitable for regression
    prediction = Dense(1, activation='linear', name="final")(x)
    model = Model(inputs=inputs, outputs=prediction)
    model.compile(optimizer='adam', loss='mean_absolute_error')
    return model
That's all there is to it! We can then build a neural network instance suited to our problem simply by calling this function, as follows:
model = build_network(input_features=10)
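If you want a quick sanity check of the wiring, you can print the model's structure; the parameter counts in the comment below are what we'd expect for 10 input features, not output copied from a real run:

model.summary()  # expect hidden: 10 * 32 + 32 = 352 params, final: 32 * 1 + 1 = 33 params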
We'll do that shortly; before we do, however, let's review a few interesting parts of the preceding code:
Every layer is chained to the layer above it. Every layer is callable and returns a tensor. For example, our hidden layer is tied to the input layer when we call the hidden layer on the inputs tensor:
x = Dense(32, activation='relu', name="hidden")(inputs)
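This chaining pattern extends naturally. As a sketch (the second hidden layer below is hypothetical, not part of our network), a deeper version would simply keep calling each new layer on the previous layer's output tensor:

x = Dense(32, activation='relu', name="hidden")(inputs)
x = Dense(16, activation='relu', name="hidden2")(x)   # hypothetical extra hidden layer
prediction = Dense(1, activation='linear', name="final")(x)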
Our final layer's activation function is linear. This is the same as not using any activation, which is what we want for regression.
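In fact, Dense layers in Keras fall back to a linear (identity) activation when no activation is given, so the final layer could equivalently be written without the argument:

prediction = Dense(1, name="final")(x)  # default activation is linear, same as above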
Keras models need to be compiled with .compile().
During the compile call, you need to specify the cost function and optimizer you will use. I've used MAE as the cost function in this example, as we discussed. I used Adam with default parameters as my optimizer, which we covered a bit in Chapter 1. It's likely that we will eventually want to tune Adam's learning rate. Doing so is quite simple: you just need to define a custom Adam instance and use that instead:
from keras.optimizers import Adam

# Adam with an explicit learning rate; the remaining arguments are the Keras defaults
adam_optimizer = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
model.compile(optimizer=adam_optimizer, loss='mean_absolute_error')
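If you later want to inspect or adjust that learning rate on an already compiled model, one way to do it is through the Keras backend utilities (a minimal sketch; the value 0.0001 is just an example):

from keras import backend as K

K.get_value(model.optimizer.lr)          # read the current learning rate
K.set_value(model.optimizer.lr, 0.0001)  # lower it, for example before fine-tuning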