Towards Deep Generative Modeling with W&B

The Latent "Beautiful" Variable

Most of us are familiar with the concept of a discriminative model: given an input, say an image, the discriminative model predicts, for instance, whether it's a cat or a dog. In a discriminative model, each training example typically has a label, so discriminative modeling is synonymous with supervised learning.

Formally speaking, discriminative modeling estimates p(y|x), the probability of a label y (cat or dog) given an observation x (image).
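
To make this concrete, here is a minimal sketch of a discriminative model estimating p(y|x). The toy 2-D data, the choice of scikit-learn's logistic regression, and the cat/dog framing are illustrative assumptions on my part, not taken from the post:

```python
# Minimal sketch of a discriminative model: estimate p(y | x) on toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy "images" as 2-D feature vectors: class 0 ("cat") vs class 1 ("dog").
x = np.vstack([rng.normal(-1.0, 1.0, size=(100, 2)),
               rng.normal(+1.0, 1.0, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression().fit(x, y)

# predict_proba returns the estimated p(y | x) for each class.
print(clf.predict_proba(x[:3]))
```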

On the other hand, a generative model describes how a dataset is generated, in terms of a probabilistic model. Using such a probabilistic model, we can generate new data. A generative model is usually applied to unlabeled training examples (unsupervised learning).

Formally speaking, generative modeling estimates p(x), the probability of observing an observation x. In the case of a labeled dataset, we can also build a generative model p(x|y), the probability of the observation x given its label y.
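
As a rough illustration of p(x|y), and an assumption on my part rather than the model this post builds later, we can fit one Gaussian per class with scipy, evaluate how likely an observation is under each class, and, crucially, sample new observations:

```python
# Minimal sketch of a generative model: fit p(x | y) as one Gaussian per
# class, then generate new data by sampling from it.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Same toy setup: 2-D "observations" for two classes.
x = np.vstack([rng.normal(-1.0, 1.0, size=(100, 2)),
               rng.normal(+1.0, 1.0, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Estimate p(x | y) by fitting a Gaussian to each class's observations.
models = {c: multivariate_normal(mean=x[y == c].mean(axis=0),
                                 cov=np.cov(x[y == c].T))
          for c in (0, 1)}

# Density of one observation under each class-conditional model.
print(models[0].pdf(x[0]), models[1].pdf(x[0]))

# Unlike a discriminative model, we can *generate* new data: sample
# fresh observations from the fitted p(x | y=1).
print(models[1].rvs(size=5, random_state=0))
```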

This blog post is divided into two parts: the first discusses autoencoders, and the second covers variational autoencoders, one of the most fundamental architectures for deep generative modeling.

🔥 Check out this report here.

💪 Check out the GitHub repo here.
