- Hands-On Generative Adversarial Networks with Keras
- Rafael Valle
Discriminator and generator loss
Given the setup that we have described, the discriminator D and the generator G play an iterative two-player minimax game with the value function V(D, G):

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

Literally speaking, the discriminator minimizes its loss when D(x) is equal to 1 and D(G(z)) is equal to 0, that is, when its probability of real is 1 for real data and 0 for fake data. Hence, the discriminator is maximizing D(x) while minimizing D(G(z)).
The generator, on the other hand, minimizes its loss when D(G(z)) is equal to 1, that is, when the discriminator assigns a probability of 1 that the fake data is real. Hence, the generator is maximizing D(G(z)).
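These two objectives can be sketched in a few lines of NumPy. This is a minimal illustration of the minimax losses described above, not the book's Keras implementation; the function names and the `eps` guard are ours:

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-12):
    # The discriminator maximizes log D(x) + log(1 - D(G(z))),
    # which is equivalent to minimizing the negated sum below.
    return float(-np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps)))

def generator_loss(d_fake, eps=1e-12):
    # In the minimax game, the generator minimizes log(1 - D(G(z))),
    # which is smallest when D(G(z)) is close to 1.
    return float(np.mean(np.log(1.0 - d_fake + eps)))
```

In practice, many implementations replace the generator's minimax loss with the non-saturating form, minimizing -log D(G(z)), which gives stronger gradients early in training; the equilibrium it targets is the same.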
The objective function described in the preceding equation is equivalent to minimizing the Jensen-Shannon (JS) divergence between the data and generator distributions, as described in the paper Generative Adversarial Nets by Ian Goodfellow et al. The JS divergence is a symmetric and smoothed version of the Kullback-Leibler (KL) divergence, described in the paper On information and sufficiency by Kullback and Leibler. Note that in the following equation, D stands for divergence, not discriminator:

$$D_{JS}(P \| Q) = \frac{1}{2} D_{KL}(P \| M) + \frac{1}{2} D_{KL}(Q \| M), \quad M = \frac{1}{2}(P + Q)$$
The KL divergence between distributions P and Q on a continuous support is defined as follows:

$$D_{KL}(P \| Q) = \int_x p(x) \log\frac{p(x)}{q(x)} \, dx$$
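For intuition, both divergences can be computed for discrete distributions in a few lines of NumPy. This is an illustrative sketch; the helper names and the `eps` guard against log(0) are ours:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # D_KL(P || Q) = sum_x p(x) * log(p(x) / q(x)) for discrete P, Q
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def js_divergence(p, q):
    # D_JS(P || Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M), with M = (P + Q) / 2
    m = 0.5 * (np.asarray(p, dtype=float) + np.asarray(q, dtype=float))
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)
```

Unlike the KL divergence, the JS divergence is symmetric in its arguments and bounded above by log 2.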
A closer look at the KL divergence described earlier reveals a problem: the loss explodes when the support of P is not contained in the support of Q, because the ratio p(x)/q(x) is unbounded wherever q(x) = 0 and p(x) > 0. In the next section, we will address this topic within the context of the strengths and weaknesses of GANs.
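This support problem can be seen numerically. In the toy example below (the distributions are made up for illustration), P has mass on an outcome where Q has none: the KL term grows without bound, capped here only by the eps guard, while the JS divergence stays below log 2:

```python
import numpy as np

eps = 1e-12
p = np.array([0.5, 0.5, 0.0])  # P has mass on the first outcome...
q = np.array([0.0, 0.5, 0.5])  # ...where Q has none

# KL(P || Q): the p(x) * log(p(x)/q(x)) term with q(x) = 0 explodes
kl_pq = float(np.sum(p * np.log((p + eps) / (q + eps))))

# JS(P || Q): the mixture M = (P + Q) / 2 covers both supports, so it stays finite
m = 0.5 * (p + q)
js_pq = 0.5 * float(np.sum(p * np.log((p + eps) / (m + eps)))) + \
        0.5 * float(np.sum(q * np.log((q + eps) / (m + eps))))
```

Here `kl_pq` is on the order of -log(eps), and shrinking eps makes it arbitrarily large, while `js_pq` remains bounded regardless.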