# Towards Deep Generative Modeling with W\&B

Most of us are familiar with the concept of a **discriminative model** – given an input, say an image, a discriminative model predicts, for instance, whether it's a cat or a dog. In a discriminative setting, each training example has a label, so discriminative modeling is synonymous with supervised learning.

Formally speaking, discriminative modeling estimates p(y|x) — the probability of a label y (cat or dog) given an observation x (image).

A **generative model**, on the other hand, describes how a dataset is generated, in terms of a probabilistic model. Using such a probabilistic model we can generate new data. Generative models are usually trained on unlabeled examples (unsupervised learning).

Formally speaking, generative modeling estimates p(x) — the probability of observing an observation x. In the case of a labelled dataset, we can also build a conditional generative model p(x|y) — the probability of the observation x given its label y.
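To make the distinction concrete, here is a minimal sketch on a toy 1-D dataset (all names and numbers are illustrative, not from the post). We fit a Gaussian p(x|y) per class — a generative model we can sample new data from — and then obtain the discriminative quantity p(y|x) from it via Bayes' rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D dataset: feature x with label y in {0, 1}
# (think "cat" vs "dog"), drawn from two Gaussians.
x0 = rng.normal(loc=-2.0, scale=1.0, size=500)  # class y = 0
x1 = rng.normal(loc=+2.0, scale=1.0, size=500)  # class y = 1

# Generative modeling: estimate p(x|y) for each class
# by fitting a Gaussian to that class's examples.
mu0, sigma0 = x0.mean(), x0.std()
mu1, sigma1 = x1.mean(), x1.std()

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Because the model is generative, we can draw brand-new
# samples, e.g. from the fitted p(x|y=1).
new_samples = rng.normal(mu1, sigma1, size=1000)

# The discriminative quantity p(y=1|x) follows from Bayes' rule,
# assuming equal class priors p(y=0) = p(y=1).
def p_y1_given_x(x):
    p1 = gaussian_pdf(x, mu1, sigma1)
    p0 = gaussian_pdf(x, mu0, sigma0)
    return p1 / (p0 + p1)
```

A purely discriminative model (e.g. logistic regression) would estimate p(y|x) directly and could not generate new x; the generative route above gives us both.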

This blog post is divided into two parts. The first part discusses **autoencoders**, and the second covers **variational autoencoders**, one of the most fundamental architectures for deep generative modeling.

## 🔥 Check out this report [here](https://app.wandb.ai/ayush-thakur/keras-gan/reports/Towards-Deep-Generative-Modeling-with-W%26B--Vmlldzo4MDI4Mw). <a href="#check-out-this-report-here" id="check-out-this-report-here"></a>

## 💪 Check out the GitHub repo [here](https://github.com/ayulockin/deepgenerativemodeling). <a href="#check-out-the-github-repo-here" id="check-out-the-github-repo-here"></a>
