Incentivizing Recourse through Auditing in Strategic Classification
Andrew Estornell, Yatong Chen, Sanmay Das, Yang Liu, Yevgeniy Vorobeychik
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 400-408.
https://doi.org/10.24963/ijcai.2023/45
The increasing automation of high-stakes decisions with direct impact on the lives and well-being of individuals raises a number of important considerations. Prominent among these is strategic behavior by individuals hoping to achieve a more desirable outcome. Two forms of such behavior are commonly studied: 1) misreporting of individual attributes, and 2) recourse, or actions that truly change such attributes. The former involves deception, and is inherently undesirable, whereas the latter may well be a desirable goal insofar as it changes true individual qualification. We study misreporting and recourse as strategic choices by individuals within a unified framework. In particular, we propose auditing as a means to incentivize recourse actions over attribute manipulation, and characterize optimal audit policies for two types of principals, utility-maximizing and recourse-maximizing. Additionally, we consider subsidies as an incentive for recourse over manipulation, and show that even a utility-maximizing principal would be willing to devote a considerable amount of audit budget to providing such subsidies. Finally, we consider the problem of optimizing fines for failed audits, and bound the total cost incurred by the population as a result of audits.
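The abstract's core trade-off — an individual weighing manipulation (which risks an audit fine) against recourse (which costs real effort but truly changes qualification) — can be illustrated with a minimal toy model. This sketch is an assumption-laden simplification, not the paper's formal framework: all names (`best_response`, `gain`, `cost_manip`, `cost_recourse`, `audit_prob`, `fine`) and the specific utility forms are hypothetical choices for illustration.

```python
# Hedged toy model (not the paper's exact formulation): an individual
# chooses among honest reporting, attribute manipulation, and recourse,
# given the principal's audit probability and fine for failed audits.

def best_response(gain, cost_manip, cost_recourse, audit_prob, fine):
    """Return the action maximizing the individual's expected utility.

    gain          benefit of receiving the positive classification
    cost_manip    cost of misreporting attributes
    cost_recourse cost of genuinely improving attributes
    audit_prob    probability a manipulated report is audited
    fine          penalty paid when an audit detects manipulation
    """
    utilities = {
        # take no action: keep the (assumed negative) outcome, utility 0
        "honest": 0.0,
        # manipulation succeeds unless audited; a caught manipulator
        # loses the positive outcome and pays the fine
        "manipulate": gain - cost_manip - audit_prob * (gain + fine),
        # recourse truly changes attributes, so the outcome always holds
        "recourse": gain - cost_recourse,
    }
    return max(utilities, key=utilities.get)

# With enough auditing, recourse dominates manipulation...
print(best_response(gain=10, cost_manip=1, cost_recourse=4,
                    audit_prob=0.6, fine=5))   # -> recourse
# ...but with no auditing, cheap manipulation wins.
print(best_response(gain=10, cost_manip=1, cost_recourse=4,
                    audit_prob=0.0, fine=5))   # -> manipulate
```

Under this toy utility, raising `audit_prob` or `fine` shifts the best response from manipulation to recourse, which is the incentive mechanism the abstract describes; the paper itself characterizes optimal audit policies and fines rather than assuming them.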
Keywords:
AI Ethics, Trust, Fairness: ETF: Societal impact of AI
AI Ethics, Trust, Fairness: ETF: Safety and robustness
Game Theory and Economic Paradigms: GTEP: Other