Ayush Thakur

Adversarial Latent Autoencoders


Last updated 4 years ago


In the words of Yann LeCun, Generative Adversarial Networks (GANs) are "the most interesting idea in machine learning in the last 10 years." This is not surprising: GANs can generate almost anything, from high-resolution images of people "resembling" celebrities to building layouts, blueprints, and even memes. Their strength lies in their remarkable ability to model complex data distributions. Autoencoders, by contrast, have not (at least until now) matched the generative power of GANs, and have historically learned entangled representations. The authors of the Adversarial Latent Autoencoders paper draw inspiration from recent progress in GANs and propose a novel autoencoder that addresses both of these limitations. In the next few sections, we'll dive deeper and find out how.
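To make the core idea concrete before diving in: an ALAE splits the generator into a mapping network F (prior sample z → intermediate latent w) and a generator G (w → data), and splits the discriminator into an encoder E (data → w) and a latent discriminator D. The adversarial game and the reconstruction objective both live in the latent space W, not the data space. Below is a minimal sketch with tiny MLPs standing in for the real networks; all names and dimensions here are our own illustrative choices, not the paper's code.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (not from the paper)
latent_dim, w_dim, data_dim = 16, 16, 64

# F: prior latent z -> intermediate latent w
F = nn.Sequential(nn.Linear(latent_dim, w_dim), nn.ReLU(), nn.Linear(w_dim, w_dim))
# G: intermediate latent w -> data x
G = nn.Sequential(nn.Linear(w_dim, data_dim), nn.ReLU(), nn.Linear(data_dim, data_dim))
# E: data x -> intermediate latent w (the "encoder half" of the discriminator)
E = nn.Sequential(nn.Linear(data_dim, w_dim), nn.ReLU(), nn.Linear(w_dim, w_dim))
# D: latent w -> real/fake score (discriminates in latent space, not pixel space)
D = nn.Linear(w_dim, 1)

z = torch.randn(8, latent_dim)
w = F(z)            # map prior samples into the learned latent space W
x_fake = G(w)       # generate data from w
w_rec = E(x_fake)   # encode the generated data back into W

# Adversarial signal: D never sees raw data, only E's latent output
fake_score = D(w_rec)

# Latent reciprocity: autoencoding is enforced in W rather than in data space,
# which is what lets ALAE learn a less entangled latent representation
recon_loss = ((w - w_rec) ** 2).mean()
```

In a full training loop, D would also score `E(x_real)` for real samples, and the reciprocity loss above would be minimized jointly by G and E; this sketch only shows one forward pass through the four components.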

🔥 Check out the report here.

😇 Check out our minimal implementation here.

This was co-written and implemented with Sairam.