Python Deep Learning Cookbook
Indra den Bakker
How to do it...
- Import the libraries and dataset as follows:
import numpy as np
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
# We will be using the Iris Plants Database
from sklearn.datasets import load_iris
SEED = 2017
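Since the perceptron's weights are drawn randomly later in the recipe, it also helps to seed NumPy's global random number generator with SEED so the results are reproducible; this line is a small addition, not part of the original snippet:
# Seed NumPy's RNG so the random weight initialization below is reproducible (our addition)
np.random.seed(SEED)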
- First, we subset the imported data as shown here:
# The first two classes (Iris-Setosa and Iris-Versicolour) are linearly separable
iris = load_iris()
idxs = np.where(iris.target<2)
X = iris.data[idxs]
y = iris.target[idxs]
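To confirm that the subset contains only the two linearly separable classes, a quick sanity check (optional, not part of the original recipe) can print the shape and labels:
# Sanity check: 100 samples (50 per class), 4 features, labels 0 and 1 only
print(X.shape)        # (100, 4)
print(np.unique(y))   # [0 1]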
- Let's plot the data for two of the four variables with the following code snippet:
plt.scatter(X[y==0][:,0], X[y==0][:,2], color='green', label='Iris-Setosa')
plt.scatter(X[y==1][:,0], X[y==1][:,2], color='red', label='Iris-Versicolour')
plt.title('Iris Plants Database')
plt.xlabel('sepal length in cm')
plt.ylabel('petal length in cm')
plt.legend()
plt.show()
In the following graph, we've plotted the distribution of the two classes:

Figure 2.2: Iris plants database (two classes)
- To validate our results, we split the data into training and validation sets as follows:
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=SEED)
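With only 100 samples, a random 80/20 split can leave the two classes slightly imbalanced across the sets. If that matters, train_test_split also accepts a stratify argument; this is a variation on the recipe, not what the book uses:
# Optional variation: preserve the 50/50 class balance exactly in both splits
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=SEED, stratify=y)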
- Next, we initialize the weights and the bias for the perceptron:
# One weight per input feature, drawn from a standard normal distribution
weights = np.random.normal(size=X_train.shape[1])
bias = 1
- Before training, we need to define the hyperparameters:
learning_rate = 0.1
n_epochs = 15
- Now, we can start training our perceptron with a for loop:
hist_loss = []
hist_accuracy = []

for i in range(n_epochs):
    # We apply a simple step function: if the output is > 0.5 we predict 1, else 0
    output = np.where((X_train.dot(weights) + bias) > 0.5, 1, 0)
    # Update the weights and bias
    weights -= learning_rate * np.dot((output - y_train), X_train)
    bias -= learning_rate * np.sum(output - y_train)
    # Calculate the MSE on the training set
    loss = np.mean((output - y_train) ** 2)
    hist_loss.append(loss)
    # Determine the validation accuracy (including the bias term)
    output_val = np.where((X_val.dot(weights) + bias) > 0.5, 1, 0)
    accuracy = np.mean(np.where(y_val == output_val, 1, 0))
    hist_accuracy.append(accuracy)
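Once training finishes, the learned weights and bias define the decision rule, so classifying new samples is a one-liner. Here is a minimal sketch; the predict helper is our own name, not from the book:
# Minimal prediction helper using the trained parameters (hypothetical name)
def predict(X_new, weights, bias):
    # Same step function as in training: > 0.5 maps to class 1, else class 0
    return np.where(X_new.dot(weights) + bias > 0.5, 1, 0)

print(predict(X_val[:5], weights, bias))  # predicted labels for 5 validation samples
print(y_val[:5])                          # true labels for comparison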
- We've saved the training loss and validation accuracy so that we can plot them:
fig = plt.figure(figsize=(8, 4))
a = fig.add_subplot(1, 2, 1)
plt.plot(hist_loss)
plt.xlabel('epochs')
a.set_title('Training loss')
a = fig.add_subplot(1, 2, 2)
plt.plot(hist_accuracy)
plt.xlabel('epochs')
a.set_title('Validation accuracy')
plt.show()
In the following screenshot, the resulting training loss and validation accuracy are shown:

Figure 2.3: Training loss and validation accuracy