- Hands-On Generative Adversarial Networks with Keras
- Rafael Valle
Comparing discriminative and generative models
Learning the conditional distribution is easier, because you do not have to make assumptions about the marginal distribution of x or y.
We will use the following diagram to illustrate the differences between discriminative and generative models. It shows two plots of the same two-dimensional dataset of 13 points; let's call the blue class label 0 and the yellow class label 1:

When training a discriminative model, P(y | x), we want to estimate the hidden parameters of the model that describe the conditional probability distribution and provide a decision boundary that optimally splits the classes at hand. When training a generative model, P(x, y), we want to estimate the parameters that describe the joint probability distribution of x and y.
In addition to predicting the conditional probability, learning the joint probability distribution allows us to sample the learned model to generate new data from P(x, y), where x can be sampled conditioned on y, and y conditioned on x. In the preceding diagram, for example, you could model the joint probability by learning the hidden parameters of a mixture distribution, such as a Gaussian mixture with one component per class.
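As a rough illustration of this contrast, the sketch below fits a logistic regression (a discriminative model of P(y | x)) and a Gaussian mixture with one component per class (a generative approximation of the data distribution) to a made-up two-dimensional dataset. The data, class means, and the use of scikit-learn are assumptions for illustration, not code from this book.

```python
# Sketch: discriminative vs. generative modeling on a toy 2D dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy data: two 2D Gaussian blobs, one per class (blue = 0, yellow = 1).
x0 = rng.normal(loc=[-2.0, 0.0], scale=1.0, size=(100, 2))
x1 = rng.normal(loc=[2.0, 0.0], scale=1.0, size=(100, 2))
x = np.vstack([x0, x1])
y = np.array([0] * 100 + [1] * 100)

# Discriminative model: estimates P(y | x) directly via a decision boundary.
clf = LogisticRegression().fit(x, y)
print(clf.predict_proba([[0.5, 0.0]]))  # class probabilities for a new point

# Generative model: a two-component Gaussian mixture fit to x; with
# well-separated classes, each component roughly captures one
# class-conditional P(x | y), so we can sample new datapoints from it.
gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
samples, components = gmm.sample(5)  # new points and the component they came from
print(samples)
```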
Another way to visualize the difference between generative and discriminative models is to look at a graphical depiction of the distribution that is being modeled. In the following diagram, we can see that the depiction of the discriminative model shows a decision boundary that can be used to define the class label, given some fixed data. In this case, predicting P(y | x) can be seen as finding a decision boundary such that the distance of a datapoint from the boundary is proportional to the probability of that datapoint belonging to a class.
In a binary classification task on a single variable, let's call it x, the simplest form of such a model is to find the boundary value at which the most samples are labeled correctly. In the following figure, the value of x that maximizes the number of correct labels is around 50.
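A minimal sketch of that idea, assuming a synthetic one-variable dataset whose two classes sit near 40 and 60, is to scan candidate thresholds and keep the one that labels the most points correctly:

```python
# Sketch: the simplest discriminative model on one variable is a threshold.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(40, 5, 200), rng.normal(60, 5, 200)])
y = np.array([0] * 200 + [1] * 200)

# For each candidate threshold, predict class 1 when x exceeds it
# and count how many labels come out correct.
thresholds = np.linspace(x.min(), x.max(), 1000)
accuracy = [((x > t).astype(int) == y).mean() for t in thresholds]
best = thresholds[int(np.argmax(accuracy))]
print(f"best threshold ~ {best:.1f}")  # close to 50 for this toy data
```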
The following depiction of the generative model shows the exact distribution of x in the presence and absence of y. Naturally, given that we know the exact distribution of x and y, we can sample it to generate new data:

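A corresponding sketch of the generative view, again on made-up one-dimensional data and assuming Gaussian class-conditional distributions, estimates P(y) and P(x | y) from the data and then samples new (x, y) pairs from the joint:

```python
# Sketch: a generative model of one variable, sampled to produce new data.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(40, 5, 200), rng.normal(60, 5, 200)])
y = np.array([0] * 200 + [1] * 200)

# Estimate the prior P(y) and the parameters of Gaussian P(x | y) per class.
prior = np.array([np.mean(y == k) for k in (0, 1)])
means = np.array([x[y == k].mean() for k in (0, 1)])
stds = np.array([x[y == k].std() for k in (0, 1)])

# Generate new (x, y) pairs: sample y from P(y), then x from P(x | y).
new_y = rng.choice([0, 1], size=10, p=prior)
new_x = rng.normal(means[new_y], stds[new_y])
print(list(zip(new_x.round(1), new_y)))
```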
Since generative models take on the harder task of modeling all the dependencies and patterns in the input and output data, their applications are countless. The deep learning field has produced state-of-the-art generative models for applications such as image generation, speech synthesis, and model-based control.
A fascinating aspect of generative models is that they are potentially capable of learning large and complex data distributions with a relatively small number of parameters. Unlike discriminative models, generative models can learn meaningful features from large unlabeled datasets, with little to no labeling or human supervision.
Most recent work in generative models has been focused on GANs and likelihood-based methods, including autoregressive models, Variational Autoencoders (VAEs), and flow-based models. In the following paragraphs, we will describe likelihood-based models and variations thereof. Later, we will describe the GAN framework in detail.