Active feature acquisition with generative surrogate models

Y. Li, J. Oliva - International Conference on Machine Learning, 2021 - proceedings.mlr.press
Abstract
Many real-world situations allow for the acquisition of additional relevant information when making an assessment with limited or uncertain data. However, traditional ML approaches either require all features to be acquired beforehand or regard part of them as missing data that cannot be acquired. In this work, we consider models that perform active feature acquisition (AFA) and query the environment for unobserved features to improve the prediction assessments at evaluation time. Our work reformulates the Markov decision process (MDP) that underlies the AFA problem as a generative modeling task and optimizes a policy via a novel model-based approach. We propose learning a generative surrogate model (GSM) that captures the dependencies among input features to assess potential information gain from acquisitions. The GSM is leveraged to provide intermediate rewards and auxiliary information that help the agent navigate a complicated high-dimensional action space with sparse rewards. Furthermore, we extend AFA to a task we coin active instance recognition (AIR) for the unsupervised case, where the target variables are the unobserved features themselves and the goal is to collect information for a particular instance in a cost-efficient way. Empirical results demonstrate that our approach achieves considerably better performance than previous state-of-the-art methods on both supervised and unsupervised tasks.
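
The abstract does not include code; the following is a purely illustrative sketch of the kind of acquisition loop it describes, not the paper's implementation. It assumes a toy Gaussian stand-in for the generative surrogate model and a placeholder classifier, and the names `expected_info_gain` and `afa_episode` are hypothetical. The expected reduction in predictive entropy from acquiring a feature plays the role of an intermediate reward, and acquisition stops once no candidate pays for its cost.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 4  # toy problem size (assumption for illustration)


def entropy(p):
    """Shannon entropy of a categorical predictive distribution."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))


def surrogate_predict(values, n_samples=64):
    """Stand-in for a generative surrogate model: imputes unobserved features
    by sampling and averages the class probabilities of a toy classifier."""
    probs = np.zeros(2)
    for _ in range(n_samples):
        x = np.array([values.get(i, rng.normal()) for i in range(N_FEATURES)])
        p1 = 1.0 / (1.0 + np.exp(-x.sum()))  # placeholder logistic classifier
        probs += np.array([1.0 - p1, p1])
    return probs / n_samples


def expected_info_gain(values, candidate, n_samples=32):
    """Expected drop in predictive entropy from acquiring `candidate`,
    estimated by sampling its value from the surrogate."""
    h_before = entropy(surrogate_predict(values))
    h_after = 0.0
    for _ in range(n_samples):
        v = rng.normal()  # sample the candidate's value from the surrogate
        h_after += entropy(surrogate_predict({**values, candidate: v}))
    return h_before - h_after / n_samples


def afa_episode(true_x, cost=0.05, budget=3):
    """Greedy acquisition loop: at each step acquire the feature with the
    highest (information gain - cost); that score acts as an intermediate
    reward shaping signal."""
    values = {}
    for _ in range(budget):
        candidates = [i for i in range(N_FEATURES) if i not in values]
        scores = {i: expected_info_gain(values, i) - cost for i in candidates}
        best = max(scores, key=scores.get)
        if scores[best] <= 0:  # stop when no acquisition is worth its cost
            break
        values[best] = true_x[best]  # query the environment for the feature
        print(f"acquired feature {best}, intermediate reward ~ {scores[best]:.3f}")
    return surrogate_predict(values)


if __name__ == "__main__":
    x = rng.normal(size=N_FEATURES)
    print("final prediction:", afa_episode(x))
```

In the paper, the surrogate is a learned generative model over feature dependencies and the policy is optimized with reinforcement learning rather than this greedy one-step heuristic; the sketch only illustrates how surrogate-based information-gain estimates can supply intermediate rewards in a sparse-reward acquisition MDP.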