
Discriminator and generator loss

Given the setup that we have described, D and G play an iterative two-player minimax game with the value function V(D, G):

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]$$

Literally speaking, the discriminator minimizes its loss when D(x) is equal to 1 and D(G(z)) is equal to 0, that is, when its probability of real is 1 for real data and 0 for fake data. Hence, the discriminator maximizes D(x) while pushing D(G(z)) towards 0, which maximizes the value function V(D, G).

The generator, on the other hand, minimizes its loss when D(G(z)) is equal to 1, that is, when the discriminator classifies the fake data as real with probability 1. Hence, the generator maximizes D(G(z)), which minimizes the value function V(D, G).
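
To make the two objectives concrete, here is a minimal NumPy sketch, not taken from the original paper, that estimates the value function V(D, G) on a batch; the arrays d_real and d_fake are hypothetical discriminator outputs standing in for D(x) and D(G(z)):

```python
import numpy as np

def value_v(d_real, d_fake):
    """Batch estimate of V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A confident discriminator (D(x) near 1, D(G(z)) near 0) pushes V
# towards 0, its maximum; the discriminator ascends V.
print(value_v(np.array([0.9, 0.95]), np.array([0.05, 0.1])))  # about -0.16

# A generator that fools the discriminator (D(G(z)) near 1) makes the
# second term strongly negative; the generator descends V.
print(value_v(np.array([0.9, 0.95]), np.array([0.9, 0.95])))  # about -2.7
```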

The objective function described in the preceding equation is equivalent to minimizing the Jensen-Shannon (JS) divergence between the data distribution and the generator's distribution, as described in the paper Generative Adversarial Nets by Ian Goodfellow et al. The JS divergence is a symmetric and smoothed version of the Kullback-Leibler (KL) divergence, described in the paper On Information and Sufficiency by Kullback and Leibler. Note that in the following equation, D stands for divergence, not discriminator:

$$D_{JS}(P \| Q) = \frac{1}{2} D_{KL}\left(P \,\Big\|\, \frac{P + Q}{2}\right) + \frac{1}{2} D_{KL}\left(Q \,\Big\|\, \frac{P + Q}{2}\right)$$

The KL divergence $D_{KL}(P \| Q)$ on a continuous support is defined as follows:

$$D_{KL}(P \| Q) = \int_{-\infty}^{\infty} p(x) \log \frac{p(x)}{q(x)} \, dx$$
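
As a quick sanity check on these two definitions, the following sketch, with made-up discrete distributions p and q, computes both divergences and shows that the KL divergence is asymmetric while the JS divergence is symmetric:

```python
import numpy as np

def kl(p, q):
    """D_KL(P || Q) = sum over x of p(x) * log(p(x) / q(x))."""
    return np.sum(p * np.log(p / q))

def js(p, q):
    """D_JS(P || Q): average the KL of P and of Q against the mixture M."""
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.4, 0.6])
q = np.array([0.7, 0.3])
print(kl(p, q), kl(q, p))  # about 0.192 and 0.184: asymmetric
print(js(p, q), js(q, p))  # identical values: symmetric
```
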
A closer look at the KL divergence described earlier reveals a problem with exploding losses when the support of P is not contained in the support of Q: wherever q(x) is zero but p(x) is positive, the integrand diverges. In the next section, we will address this topic within the context of the strengths and weaknesses of GANs.
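
The support problem can be illustrated numerically. In the sketch below, again with made-up discrete distributions, Q assigns zero probability to an outcome that P can produce, so the KL divergence diverges to infinity, while the JS divergence stays bounded because the mixture covers both supports:

```python
import numpy as np

p = np.array([0.5, 0.5, 0.0])  # P puts mass on the first two outcomes
q = np.array([0.0, 0.5, 0.5])  # Q puts no mass on the first outcome

def kl(a, b):
    """Discrete KL with the 0 * log 0 = 0 convention."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.sum(np.where(a > 0, a * np.log(a / b), 0.0))

# KL explodes: the first term is 0.5 * log(0.5 / 0) = inf.
print(kl(p, q))  # inf

# JS stays finite: the mixture m covers the union of both supports.
m = 0.5 * (p + q)
print(0.5 * kl(p, m) + 0.5 * kl(q, m))  # about 0.347, bounded above by log 2
```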
