Generative Adversarial Networks Projects
Kailash Ahirwar
The architecture of the generator
The generator network in our dummy GAN is a simple feed-forward neural network with five layers: an input layer, three hidden layers, and an output layer. Let's take a closer look at the configuration of the dummy generator network:

| Layer | Type | Output shape |
| --- | --- | --- |
| 1 | Input layer (100-dimensional noise vector) | (batch_size, 100) |
| 2 | Dense layer, 500 units | (batch_size, 500) |
| 3 | Dense layer, 500 units | (batch_size, 500) |
| 4 | Dense layer, 784 units | (batch_size, 784) |
| 5 | Reshape layer | (batch_size, 28, 28) |

The preceding table shows the configuration of the input layer, the hidden layers, and the output layer of the network.
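As a concrete illustration, here is a minimal Keras sketch of this configuration. The `build_generator` name and the use of `tf.keras.Sequential` are assumptions for illustration, and activation functions are omitted because this excerpt does not specify them.

```python
from tensorflow import keras
from tensorflow.keras.layers import Dense, Reshape

def build_generator():
    """Dummy generator: maps a 100-dimensional noise vector to a (28, 28) image."""
    model = keras.Sequential([
        Dense(500, input_shape=(100,)),  # hidden layer 1: (batch_size, 100) -> (batch_size, 500)
        Dense(500),                      # hidden layer 2: (batch_size, 500) -> (batch_size, 500)
        Dense(784),                      # hidden layer 3: (batch_size, 500) -> (batch_size, 784)
        Reshape((28, 28)),               # output layer:   (batch_size, 784) -> (batch_size, 28, 28)
    ])
    return model

generator = build_generator()
generator.summary()  # prints the per-layer output shapes listed in the table
```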
The following diagram shows the flow of tensors and the input and output shapes of the tensors for each layer in the generator network:

The architecture of the generator network.
Let's discuss how this feed-forward neural network processes information during forward propagation of the data (a shape-check sketch follows the list):
- The input layer takes a 100-dimensional vector sampled from a Gaussian (normal) distribution and passes it to the first hidden layer without modification.
- The three hidden layers are dense layers with 500, 500, and 784 units, respectively. The first hidden layer converts a tensor of shape (batch_size, 100) into a tensor of shape (batch_size, 500).
- The second dense layer produces a tensor of shape (batch_size, 500).
- The third hidden layer produces a tensor of shape (batch_size, 784).
- In the output layer, this tensor is reshaped from (batch_size, 784) to (batch_size, 28, 28). This means that our network generates a batch of images, where each image has a shape of (28, 28).
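The following NumPy sketch mimics this forward pass with random placeholder weights, purely to verify the shape bookkeeping described above. The `batch_size` value and weight names are illustrative assumptions, and real dense layers would also apply activations, which this excerpt does not specify.

```python
import numpy as np

batch_size = 16  # illustrative batch size

# Step 1: sample a batch of 100-dimensional vectors from a standard normal distribution.
z = np.random.normal(size=(batch_size, 100))          # (batch_size, 100)

# Steps 2-4: each dense layer is an affine map x @ W + b; only the shapes
# matter here, so the weights are random placeholders.
W1, b1 = np.random.randn(100, 500), np.zeros(500)
W2, b2 = np.random.randn(500, 500), np.zeros(500)
W3, b3 = np.random.randn(500, 784), np.zeros(784)

h1 = z @ W1 + b1        # (batch_size, 500)
h2 = h1 @ W2 + b2       # (batch_size, 500)
h3 = h2 @ W3 + b3       # (batch_size, 784)

# Step 5: reshape each 784-dimensional row into a 28 x 28 image.
images = h3.reshape(batch_size, 28, 28)                # (batch_size, 28, 28)

for name, tensor in [("z", z), ("h1", h1), ("h2", h2), ("h3", h3), ("images", images)]:
    print(name, tensor.shape)
```

Running this prints the same sequence of shapes as the bullet list, ending with a batch of (28, 28) arrays.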