Lq regularization for fair artificial intelligence robust to covariate shift
Statistical Analysis and Data Mining: The ASA Data Science Journal, 2023, Wiley Online Library
Abstract
It is well recognized that historical biases against certain sensitive groups (e.g., non-White individuals, women) exist in training data, that such biases are socially unacceptable, and that they are inherited by trained artificial intelligence (AI) models. Various learning algorithms have been proposed to remove or alleviate unfair biases in trained AI models. In this paper, we consider another type of bias in training data, the so-called covariate shift, from the viewpoint of fair AI. Here, covariate shift means that the training data do not represent the population of interest well. Covariate shift occurs when special sampling designs (e.g., stratified sampling) are used to collect the training data, or when the population from which the training data are collected differs from the population of interest. When covariate shift exists, a model that is fair on the training data may not be fair on the test data. To ensure fairness on test data, we develop computationally efficient learning algorithms that are robust to covariate shift. In particular, we propose a robust fairness constraint based on the Lq norm, a generic construction that can be applied to various fair AI problems with little modification. By analyzing multiple benchmark datasets, we show that the proposed robust fair AI algorithm substantially improves on existing fair AI algorithms in terms of the fairness-accuracy tradeoff under covariate shift, and that it has significant computational advantages over other robust fair AI algorithms.
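The abstract only names the ingredients (importance weighting for covariate shift, an Lq-norm fairness constraint), so the following Python sketch is an illustrative guess at how such pieces could fit together, not the paper's actual algorithm. The logistic loss, the demographic-parity gap, the density-ratio weights w, and the hyperparameters q and lam are all assumptions introduced here for illustration.

```python
# Hedged sketch: NOT the paper's formulation. It shows one plausible way to
# combine importance weighting (a standard covariate-shift correction) with
# an Lq-norm penalty on per-group fairness gaps. All names are assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lq_fair_loss(theta, X, y, s, w, q=2.0, lam=1.0):
    """Importance-weighted logistic loss plus an Lq fairness penalty.

    X : (n, d) features, y : (n,) labels in {0, 1}, s : (n,) group labels,
    w : (n,) importance weights approximating test/train density ratios.
    """
    p = sigmoid(X @ theta)
    eps = 1e-12
    # Weighted cross-entropy: reweighting corrects for covariate shift.
    ce = -np.mean(w * (y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))
    # Weighted demographic-parity gap of each group versus the overall mean.
    overall = np.average(p, weights=w)
    gaps = np.array([np.average(p[s == g], weights=w[s == g]) - overall
                     for g in np.unique(s)])
    # Lq norm of the gap vector serves as the (soft) fairness constraint.
    return ce + lam * np.sum(np.abs(gaps) ** q) ** (1.0 / q)
```

A loss like this could be fitted with any off-the-shelf optimizer, e.g. scipy.optimize.minimize(lq_fair_loss, np.zeros(X.shape[1]), args=(X, y, s, w)). Note that the Lq norm interpolates between penalizing the total group gap (q = 1) and the worst-case group gap (q → ∞); the precise constraint and optimization scheme in the paper may differ.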