Towards Deep Generative Modeling with W&B

The Latent "Beautiful" Variable

Most of us are familiar with the concept of a discriminative model: given an input, say an image, the model predicts, for instance, whether it shows a cat or a dog. In a discriminative model, each training example has a label, so discriminative modeling is synonymous with supervised learning.

Formally speaking, discriminative modeling estimates p(y|x) — the probability of a label y (cat or dog) given an observation x (image).
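
To make this concrete, here is a minimal sketch of a discriminative model in Keras: a tiny CNN whose softmax head outputs p(y|x) over the two classes. The 64x64x3 input shape and the layer choices are arbitrary assumptions for illustration, not taken from the report.

```python
import tensorflow as tf
from tensorflow.keras import layers

# A tiny CNN classifier: given an image x, it outputs p(y|x)
# over two classes (cat, dog) via a softmax head.
discriminative_model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),       # assumed input size
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),   # [p(y=cat|x), p(y=dog|x)]
])

# Training needs labels y for every x -- hence "supervised" learning.
discriminative_model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```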

On the other hand, a generative model describes, in terms of a probabilistic model, how a dataset is generated. Using such a probabilistic model, we can generate new data. Usually, a generative model is trained on unlabeled examples (unsupervised learning).

Formally speaking, generative modeling estimates p(x) — the probability of observing an observation x. In the case of a labeled dataset, we can also build a conditional generative model p(x|y) — the probability of an observation x given its label y.
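
As a simple illustration of what estimating p(x) means, the sketch below fits a Gaussian mixture to unlabeled toy 2-D data and then samples new points from it. The Gaussian mixture is a deliberately simple stand-in for a deep generative model, and the toy data and component count are assumptions for this example, not part of the post.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy dataset: 1,000 unlabeled 2-D observations (no labels y needed).
rng = np.random.default_rng(0)
x_train = np.concatenate([
    rng.normal(loc=-2.0, scale=0.5, size=(500, 2)),
    rng.normal(loc=+2.0, scale=0.5, size=(500, 2)),
])

# Fit a density model for p(x)...
gm = GaussianMixture(n_components=2, random_state=0).fit(x_train)

# ...which we can both evaluate and sample new data from.
log_px = gm.score_samples(x_train[:5])  # log p(x) for five observations
new_x, _ = gm.sample(n_samples=5)       # five freshly generated points
```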

This blog post is divided into two parts: the first discusses autoencoders, and the second discusses variational autoencoders, one of the most fundamental architectures for deep generative modeling.
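
As a preview of the first part, here is a minimal autoencoder sketch in Keras: an encoder compresses the input into a latent vector z, and a decoder reconstructs the input from z. The 784-dimensional (MNIST-like) input and the 2-D latent space are assumptions for illustration, not taken from the report.

```python
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 2  # the latent variable z lives here

# Encoder: compress a flattened 28x28 image into a latent vector z.
encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(LATENT_DIM),
])

# Decoder: reconstruct the image from z.
decoder = tf.keras.Sequential([
    tf.keras.Input(shape=(LATENT_DIM,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(784, activation="sigmoid"),
])

autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# Trained with x as both input and target: autoencoder.fit(x, x, ...)
```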

🔥 Check out this report here.

💪 Check out the GitHub repo here.