- Python Deep Learning Cookbook
- Indra den Bakker
How to do it...
- We start by importing the libraries as follows:
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.optimizers import Adam
from sklearn.preprocessing import StandardScaler
SEED = 2017
- Load the dataset:
data = pd.read_csv('Data/winequality-red.csv', sep=';')
y = data['quality']
X = data.drop(['quality'], axis=1)
- Split the data into training and test sets:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=SEED)
- Print the average quality and the first rows of the training set:
print('Average quality training set: {:.4f}'.format(y_train.mean()))
X_train.head()
In the following screenshot, we can see the first rows of the training data:

Figure 2-8: Training data
- An important next step is to normalize the input data:
scaler = StandardScaler().fit(X_train)
X_train = pd.DataFrame(scaler.transform(X_train))
X_test = pd.DataFrame(scaler.transform(X_test))
- Determine baseline predictions:
# Predict the mean quality of the training data for each validation input
print('MSE:', np.mean((y_test - ([y_train.mean()] * y_test.shape[0])) ** 2).round(4))
## MSE: 0.594
- Now, let's build our neural network by defining the network architecture:
model = Sequential()
# First hidden layer with 200 hidden units
model.add(Dense(200, input_dim=X_train.shape[1], activation='relu'))
# Second hidden layer with 25 hidden units
model.add(Dense(25, activation='relu'))
# Output layer
model.add(Dense(1, activation='linear'))
# Set optimizer
opt = Adam()
# Compile model
model.compile(loss='mse', optimizer=opt, metrics=['accuracy'])
- Let's define the callbacks for early stopping and for saving the best model:
callbacks = [
EarlyStopping(monitor='val_acc', patience=20, verbose=2),
ModelCheckpoint('checkpoints/multi_layer_best_model.h5', monitor='val_acc', save_best_only=True, verbose=0)
]
- Run the model with a batch size of 64, 5,000 epochs, and a validation split of 20%:
batch_size = 64
n_epochs = 5000
model.fit(X_train.values, y_train, batch_size=batch_size, epochs=n_epochs, validation_split=0.2,
verbose=2,
callbacks=callbacks)
- We can now print the performance on the test set after loading the optimal weights:
best_model = model
best_model.load_weights('checkpoints/multi_layer_best_model.h5')
best_model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
# Evaluate on test set
score = best_model.evaluate(X_test.values, y_test, verbose=0)
print('Test accuracy: %.2f%%' % (score[1]*100))
## Test accuracy: 66.25%
## Benchmark accuracy on dataset 62.4%
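Since the model is compiled with an MSE loss, the first element of score holds the test MSE, which can optionally be compared against the 0.594 baseline computed earlier (using the variables defined above):
# score[0] holds the MSE loss, score[1] the accuracy metric
print('Test MSE: {:.4f}'.format(score[0]))
print('Baseline MSE: {:.4f}'.format(np.mean((y_test - y_train.mean()) ** 2)))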
With a small dataset, it's advisable to retrain on the complete training set (without a validation split) and to increase the number of epochs in proportion to the additional data. Another option is to use cross-validation and average the results when making predictions, as sketched below.
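As a minimal sketch of the cross-validation approach, the following snippet uses scikit-learn's KFold to train one model per fold and averages the per-fold predictions on the test set; the build_model helper and the 100 epochs per fold are illustrative choices rather than tuned values:
from sklearn.model_selection import KFold

def build_model(n_features):
    # Same architecture as defined above, wrapped for reuse
    model = Sequential()
    model.add(Dense(200, input_dim=n_features, activation='relu'))
    model.add(Dense(25, activation='relu'))
    model.add(Dense(1, activation='linear'))
    model.compile(loss='mse', optimizer=Adam())
    return model

kfold = KFold(n_splits=5, shuffle=True, random_state=SEED)
fold_predictions = []
for train_idx, _ in kfold.split(X_train.values):
    fold_model = build_model(X_train.shape[1])
    fold_model.fit(X_train.values[train_idx], y_train.values[train_idx],
                   batch_size=batch_size, epochs=100, verbose=0)
    fold_predictions.append(fold_model.predict(X_test.values).flatten())
# Average the per-fold predictions on the test set
y_pred = np.mean(fold_predictions, axis=0)
print('Cross-validated test MSE: {:.4f}'.format(np.mean((y_test.values - y_pred) ** 2)))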