
Training through adversarial play in GANs

In a GAN, the networks are trained through adversarial play: both networks compete against each other. As an example, let's assume that we want the GAN to create forgeries of artworks:

  1. The first network, the generator, has never seen the real artwork but tries to create artwork that looks like the real thing.
  2. The second network, the discriminator, tries to identify whether an artwork is real or fake.
  3. The generator, in turn, tries to fool the discriminator into thinking that its fakes are the real deal by creating more realistic artwork over multiple iterations.
  4. The discriminator tries to outwit the generator by continuing to refine its own criteria for determining a fake.
  5. Each network guides the other: in every iteration, the improvements one network makes force the other to adapt. This back-and-forth feedback loop is the training of the GAN.
  6. Ultimately, the discriminator's feedback trains the generator to the point at which the discriminator can no longer determine which artwork is real and which is fake.

In this game, both networks are trained simultaneously. When we reach a stage at which the discriminator is unable to distinguish between real and fake artworks, the GAN is said to have reached a state known as Nash equilibrium. This will be discussed later on in this chapter.
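The adversarial loop described above can be sketched on a toy problem. In this minimal NumPy example, the "real artworks" are samples from a Gaussian, the generator is a single linear map of noise, and the discriminator is a logistic classifier; the losses, learning rate, and iteration count are illustrative assumptions, not details from the text. The point is the alternating structure: a discriminator step that sharpens its criteria, then a generator step that uses the discriminator's feedback to make its fakes more convincing.

```python
import numpy as np

rng = np.random.default_rng(0)

REAL_MEAN, REAL_STD = 4.0, 0.5   # distribution of "real artworks"

# Generator: g(z) = a*z + b, maps noise z ~ N(0, 1) to a fake sample.
a, b = 1.0, 0.0
# Discriminator: d(x) = sigmoid(w*x + c), probability that x is real.
w, c = 0.1, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, batch = 0.01, 64
initial_gap = abs(b - REAL_MEAN)  # generated mean starts at b = 0

for step in range(5000):
    # --- Discriminator step: push d(real) -> 1 and d(fake) -> 0 ---
    x_real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradients of the binary cross-entropy loss w.r.t. w and c.
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator step: push d(fake) -> 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    # dL/dx_fake for the loss -log d(x_fake), then chain rule to a, b.
    grad_x = -(1 - d_fake) * w
    a -= lr * np.mean(grad_x * z)
    b -= lr * np.mean(grad_x)

final_gap = abs(b - REAL_MEAN)
print(f"generated mean moved from 0.0 to {b:.2f} (real mean {REAL_MEAN})")
```

As training progresses, the generator's output drifts toward the real distribution and the discriminator's scores for real and fake samples converge toward one another, which is the toy analogue of the equilibrium described above.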
