Interpretability in Deep Learning - CAM and GradCAM

You need to explain your model.

Training a classification model is interesting, but have you ever wondered how your model actually makes its predictions? Is your model really looking at the dog in the image before classifying it as a dog with 98% confidence? Interesting, isn't it? In today's report, we will explore why deep learning models need to be interpretable, along with some interesting methods for peeking under the hood of a deep learning model. Deep learning interpretability is a very exciting area of research, and much progress is already being made in this direction.

So why should you care about interpretability? After all, the success of your business or your project is judged primarily by how accurate your model is. But in order to deploy models in the real world, we need to consider other factors too. For instance, is the model racially biased? Or, what if it classifies humans with 97% accuracy overall, but achieves 99% accuracy on men and only 95% accuracy on women?

Understanding how a model makes its predictions can also help us debug our network. Check out this blog post on 'Debugging Neural Networks with PyTorch and W&B Using Gradients and Visualizations' for some other techniques that can help.

At this point, we are all familiar with the idea that deep learning models make predictions based on learned representations expressed in terms of other, simpler representations. That is, deep learning allows us to build complex concepts out of simpler concepts. Here's an amazing Distill Pub post to help you understand this concept better. We also know that these representations are learned while we train the model on our input data and labels, in the case of a supervised learning task like image classification. One of the criticisms of this approach is that the learned features in a neural network are not interpretable.

Today we'll look at three techniques that address this criticism and shed light on the “black-box” way neural networks learn (a minimal Grad-CAM sketch follows the list below):

  • Visualizing learned features

  • Class Activation Maps (CAM)

  • Gradient-weighted Class Activation Maps (Grad-CAM)
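
The report walks through each technique in detail; as a taste, here is a minimal Grad-CAM sketch in TensorFlow/Keras. It assumes a functional Keras classifier and that you know the name of its last convolutional layer; the ResNet50 model and the "conv5_block3_out" layer name in the usage comment are only illustrative assumptions, so adapt them to your own model.

```python
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Return a [0, 1] heatmap showing which regions drive the predicted class."""
    # Auxiliary model that outputs both the last conv feature map and the predictions.
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )

    with tf.GradientTape() as tape:
        conv_output, predictions = grad_model(image[None, ...])  # add batch dim
        if class_index is None:
            class_index = int(tf.argmax(predictions[0]))  # top predicted class
        class_score = predictions[:, class_index]

    # Gradients of the class score w.r.t. the conv feature map,
    # global-average-pooled to get one importance weight per channel.
    grads = tape.gradient(class_score, conv_output)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))

    # Weighted combination of the feature maps, followed by ReLU and normalization.
    cam = tf.nn.relu(tf.reduce_sum(conv_output[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()

# Example usage (hypothetical preprocessed image `x` of shape (224, 224, 3)):
# model = tf.keras.applications.ResNet50(weights="imagenet")
# heatmap = grad_cam(model, x, "conv5_block3_out")
```

The resulting heatmap has the spatial size of the conv feature map (e.g. 7×7 for ResNet50), so you would resize it to the input resolution and overlay it on the image to see which pixels the model relied on.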

🔥 Check out the report here.

💪 Check out the GitHub repo here.
