- Hands-On Deep Learning for Games
- Micheal Lanham
Training the model
Next, we need to train our model with a sample set of data. We will again be using the MNIST set of handwritten digits; this is easy, free, and convenient. Get back into the code listing and continue the exercise as follows:
- Pick up where we left off and locate the following section of code:
from tensorflow.keras.datasets import mnist
import numpy as np
(x_train, _), (x_test, _) = mnist.load_data()
- We start by importing the mnist dataset module and numpy, and then load the data into the x_train and x_test sets. As a general rule in data science and machine learning, you want a training set for learning and a separate evaluation set for testing. These datasets are often generated by randomly splitting the data into 80 percent for training and 20 percent for testing.
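- The 80/20 split mentioned above can be sketched with plain NumPy. This is a minimal illustration, not the code from the book: the small 100 x 4 array and the fixed seed are stand-ins chosen for the example (Keras delivers MNIST already split, so no splitting is needed here).

```python
import numpy as np

# Hypothetical dataset of 100 samples with 4 features each
data = np.arange(400, dtype=np.float32).reshape(100, 4)

# Shuffle indices, then take 80% for training and 20% for testing
rng = np.random.default_rng(seed=42)
indices = rng.permutation(len(data))
split = int(0.8 * len(data))
train, test = data[indices[:split]], data[indices[split:]]

print(train.shape)  # (80, 4)
print(test.shape)   # (20, 4)
```

Shuffling before splitting matters: if the data is ordered (say, by digit class), a straight slice would give the model a biased training set.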
- Then we further define our training and testing inputs with the following code:
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print(x_train.shape)
print(x_test.shape)
- The first two lines normalize our input grayscale pixel values, which are integers from 0 to 255, by dividing by 255. This gives us a number from 0 to 1. We generally want to normalize our inputs. Next, we reshape the training and testing sets into flat input tensors, turning each 28 x 28 image into a vector of 784 values.
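- You can verify what the normalization and reshape produce without downloading MNIST. The sketch below uses small stand-in arrays (600 and 100 samples instead of the full 60,000 and 10,000) with the same per-image shape; the sample counts are assumptions made to keep the example light:

```python
import numpy as np

# Stand-in arrays shaped like (a subset of) mnist.load_data() output
x_train = np.random.randint(0, 256, size=(600, 28, 28)).astype('float32')
x_test = np.random.randint(0, 256, size=(100, 28, 28)).astype('float32')

# Normalize pixel values from [0, 255] to [0, 1]
x_train /= 255.
x_test /= 255.

# Flatten each 28 x 28 image into a 784-element vector
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))

print(x_train.shape)  # (600, 784)
print(x_test.shape)   # (100, 784)
```

With the real MNIST data the printed shapes are (60000, 784) and (10000, 784).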
- With the models all built and compiled, it is time to start training. The next few lines are where the network will learn how to encode and decode the images:
autoencoder.fit(x_train, x_train, epochs=50, batch_size=256,
shuffle=True, validation_data=(x_test, x_test))
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
- You can see in our code that we are setting up to fit the data using x_train as input and output. We are using 50 epochs with a batch size of 256 images. Feel free to play with these parameters on your own later to see what effect they have on training. After that, the encoder and then the decoder models are used to predict test images.
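- To get a feel for what those parameters mean, a quick back-of-the-envelope calculation shows how much work 50 epochs at a batch size of 256 actually is on MNIST's 60,000 training images:

```python
import math

# One epoch walks the whole training set once, in batches of 256,
# so it performs ceil(60000 / 256) weight-update steps.
steps_per_epoch = math.ceil(60000 / 256)
total_steps = steps_per_epoch * 50  # 50 epochs

print(steps_per_epoch, total_steps)  # 235 11750
```

Raising the batch size cuts the number of updates per epoch but makes each update a coarser average; lowering it does the reverse, which is why these parameters are worth experimenting with.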
That completes the model and training setup we need, for all three models if you will. Remember, we are taking a 28 x 28 image, compressing it down to essentially 32 numbers, and then rebuilding the image using a neural network. With our model complete and trained this time, we want to review the output, and we will do that in the next section.
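- The bottleneck idea above can be illustrated in miniature with NumPy. This is only a toy sketch of the shapes involved: the random, untrained weight matrices are stand-ins for the real encoder and decoder, so the "reconstruction" here is meaningless noise, but the compression from 784 pixels to 32 numbers and back is the same:

```python
import numpy as np

# Random weight matrices stand in for the trained encoder/decoder
rng = np.random.default_rng(seed=0)
W_enc = rng.normal(size=(784, 32))   # encoder weights (untrained)
W_dec = rng.normal(size=(32, 784))   # decoder weights (untrained)

image = rng.random(784)              # stand-in flattened 28 x 28 image
code = image @ W_enc                 # 32-number compressed representation
reconstruction = code @ W_dec        # expanded back to 784 pixels

print(code.shape)            # (32,)
print(reconstruction.shape)  # (784,)
```

Training is what turns this shape-preserving round trip into a faithful one: the autoencoder learns weights that make the 784-dimensional output resemble the input despite the 32-number squeeze.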