Training through adversarial play in GANs

In a GAN, the networks are trained through adversarial play: both networks compete against each other. As an example, let's assume that we want the GAN to create forgeries of artworks:

  1. The first network, the generator, has never seen the real artwork but is trying to create an artwork that looks like the real thing.
  2. The second network, the discriminator, tries to identify whether an artwork is real or fake.
  3. The generator, in turn, tries to fool the discriminator into thinking that its fakes are the real deal by creating more realistic artwork over multiple iterations.
  4. The discriminator tries to outwit the generator by continuing to refine its own criteria for determining a fake.
  5. Each network's successful adjustments provide feedback that drives the other's next update in every iteration. This alternating loop is the training of the GAN.
  6. Ultimately, the generator improves to the point at which the discriminator can no longer determine which artwork is real and which is fake.

In this game, both networks are trained simultaneously. When we reach a stage at which the discriminator can no longer distinguish real artworks from fakes, the GAN reaches a state known as Nash equilibrium. This will be discussed later on in this chapter.
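The alternating loop described above can be sketched with a toy one-dimensional GAN. Everything in this snippet is an illustrative assumption rather than code from this book: the "artwork" is just a number drawn from a Gaussian, the generator is an affine map g(z) = a·z + b, the discriminator is a logistic classifier, and the gradients are derived by hand. The structure, however, is exactly the adversarial game: each iteration first updates the discriminator to better separate real from fake, then updates the generator to better fool the discriminator.

```python
import math
import random

random.seed(0)

# "Real artworks" are samples from a Gaussian centred at REAL_MEAN.
REAL_MEAN, REAL_STD = 4.0, 0.5

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Generator parameters (a, b): g(z) = a*z + b, with noise z ~ N(0, 1).
# Discriminator parameters (w, c): d(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.0, 0.0
lr = 0.05

for step in range(3000):
    # --- Discriminator step: push d(real) toward 1 and d(fake) toward 0 ---
    x_real = random.gauss(REAL_MEAN, REAL_STD)
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b

    s_real = sigmoid(w * x_real + c)
    s_fake = sigmoid(w * x_fake + c)
    # Gradients of the loss -log d(x_real) - log(1 - d(x_fake)) w.r.t. (w, c).
    grad_w = -(1 - s_real) * x_real + s_fake * x_fake
    grad_c = -(1 - s_real) + s_fake
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator step: push d(fake) toward 1 (non-saturating loss) ---
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b
    s_fake = sigmoid(w * x_fake + c)
    # Gradient of -log d(x_fake) w.r.t. x_fake, then chain rule into (a, b).
    dx = -(1 - s_fake) * w
    a -= lr * dx * z
    b -= lr * dx

# After training, the generator's output distribution should have drifted
# from its initial mean of 0 toward the real mean of 4.
fake_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(fake_mean)
```

Note how neither network ever sees the other's parameters: the discriminator only sees samples, and the generator only sees the discriminator's verdict on its fakes — the feedback loop from the numbered steps above.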