Aug 18, 2023 · We present a novel method for model correction on the concept level that explicitly reduces model sensitivity towards biases via gradient penalization.
Oct 22, 2024 · Deep Neural Networks are prone to learning spurious correlations embedded in the training data, leading to potentially biased predictions.
From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space. M. Dreyer, F. Pahde, C. J. Anders, W. Samek, and S. Lapuschkin. Proceedings of the AAAI Conference on Artificial Intelligence, 2024.
This repository contains the code for the paper "From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space".
This work presents a novel method that ensures the right reasons on the concept level by reducing the model's sensitivity towards biases through the gradient.
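For intuition, here is a minimal sketch of a latent-space gradient penalty of this kind. It is not the authors' released implementation (see their repository for that); it assumes a PyTorch model split into a feature extractor and a classification head, a vector-valued latent space, and a precomputed bias concept direction (e.g., a concept activation vector). The names SplitModel, bias_penalized_loss, concept_vec, and lam are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitModel(nn.Module):
    """Hypothetical model split into a feature extractor and a head,
    so the latent activations are available for the penalty term."""
    def __init__(self, backbone: nn.Module, head: nn.Module):
        super().__init__()
        self.backbone = backbone
        self.head = head

    def forward(self, x):
        z = self.backbone(x)      # latent activations, shape (batch, d)
        return z, self.head(z)    # return both: logits and latents

def bias_penalized_loss(model, x, y, concept_vec, lam=1.0):
    """Cross-entropy plus a penalty on the model's latent-space
    sensitivity along a given bias concept direction."""
    z, logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Gradient of the true-class logits w.r.t. the latent activations.
    target_logits = logits.gather(1, y.unsqueeze(1)).sum()
    grad_z, = torch.autograd.grad(target_logits, z, create_graph=True)

    # Penalize the squared projection of that gradient onto the bias
    # direction: this pushes the model towards insensitivity to the concept.
    proj = (grad_z * concept_vec).sum(dim=1)
    penalty = proj.pow(2).mean()
    return ce + lam * penalty

# Usage (shapes illustrative; v would be a unit-norm concept direction):
# model = SplitModel(backbone, head)
# v = concept_vector / concept_vector.norm()
# loss = bias_penalized_loss(model, x_batch, y_batch, v, lam=10.0)
# loss.backward(); optimizer.step()
```

In the paper's setting, the bias direction would typically be obtained from a concept activation vector fit to separate samples with and without the artifact; whether to penalize the gradient of the logit, the loss, or another target is a design choice this sketch leaves open.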
@inproceedings{dreyer2024hope,
  title     = {From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space},
  author    = {Dreyer, Maximilian and Pahde, Frederik and Anders, Christopher J. and Samek, Wojciech and Lapuschkin, Sebastian},
  booktitle = {Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence},
  year      = {2024}
}