- Generative Adversarial Networks Projects
- Kailash Ahirwar
Training through adversarial play in GANs
In a GAN, the networks are trained through adversarial play: both networks compete against each other. As an example, let's assume that we want the GAN to create forgeries of artworks:
- The first network, the generator, has never seen the real artwork but is trying to create an artwork that looks like the real thing.
- The second network, the discriminator, tries to identify whether an artwork is real or fake.
- The generator, in turn, tries to fool the discriminator into thinking that its fakes are the real deal by creating more realistic artwork over multiple iterations.
- The discriminator tries to outwit the generator by continuing to refine its own criteria for determining a fake.
- In each iteration, the improvements one network makes become the feedback that drives the other's next adjustment. This feedback loop is the training of the GAN.
- Ultimately, the discriminator trains the generator until the generator's forgeries are so convincing that the discriminator can no longer tell which artwork is real and which is fake.
In this game, both networks are trained simultaneously. When we reach a stage at which the discriminator is unable to distinguish between real and fake artworks, the GAN is said to have reached a state known as Nash equilibrium. This will be discussed later on in this chapter.
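The adversarial loop described above can be sketched in a few lines of NumPy. This is a deliberately tiny illustration, not the book's implementation: the "real artwork" is just samples from a Gaussian with mean 4, the generator is a single learnable shift `mu` applied to noise, and the discriminator is a logistic classifier `D(x) = sigmoid(w*x + c)`. All parameter names here are illustrative assumptions, and the gradients are written out by hand so the alternating updates are visible.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

mu = 0.0          # generator parameter: shifts noise toward the data
w, c = 0.1, 0.0   # discriminator parameters (logistic classifier)
lr = 0.02
batch = 64

for step in range(3000):
    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    real = rng.normal(4.0, 1.0, batch)          # "real artwork"
    fake = mu + rng.normal(0.0, 1.0, batch)     # generator's forgeries
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    # Hand-derived gradients of the binary cross-entropy loss
    grad_w = (-(1 - d_real) * real + d_fake * fake).mean()
    grad_c = (-(1 - d_real) + d_fake).mean()
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: try to fool D, i.e. push D(fake) toward 1 ---
    fake = mu + rng.normal(0.0, 1.0, batch)
    d_fake = sigmoid(w * fake + c)
    # Non-saturating generator loss -log D(fake); d(fake)/d(mu) = 1
    grad_mu = (-(1 - d_fake) * w).mean()
    mu -= lr * grad_mu

print(f"learned mu = {mu:.2f} (real data mean is 4.0)")
```

After training, `mu` drifts toward the real data mean, and the discriminator's outputs settle near 0.5 on both real and fake samples, which is precisely the Nash-equilibrium state the text describes: neither player can improve by changing its strategy alone.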