Sequential algorithmic modification with test data reuse
Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, PMLR 180:674-684, 2022.
Abstract
After the initial release of a machine learning algorithm, the model can be modified by retraining on subsequently gathered data, adding newly discovered features, and so on. Each modification introduces a risk of deteriorating performance and must be validated on a test dataset. It may not always be practical to assemble a new dataset for testing each modification, especially when most modifications are minor or are implemented in rapid succession. Recent work has shown how one can repeatedly test modifications on the same dataset and protect against overfitting by (i) discretizing test results along a grid and (ii) applying a Bonferroni correction to adjust for the total number of modifications considered by an adaptive developer. However, the standard Bonferroni correction is overly conservative when most modifications are beneficial and/or highly correlated. This work investigates more powerful approaches using alpha-recycling and sequentially rejective graphical procedures (SRGPs). We introduce two novel extensions that account for correlation between adaptively chosen algorithmic modifications: the first leverages the correlation between consecutive modifications using flexible fixed-sequence tests, and the second leverages the correlation between the proposed modifications and those generated by a hypothetical prespecified model-updating procedure. In empirical analyses, both SRGPs control the error rate of approving deleterious modifications and approve significantly more beneficial modifications than previous approaches.
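To make the contrast concrete, the sketch below is a simplified illustration on synthetic data, not the paper's SRGP construction: it compares a plain Bonferroni split of the testing budget across K candidate modifications with a basic fixed-sequence rule that spends the full alpha on each successive modification and stops at the first failure to reject. The variable names, the synthetic gains, and the z-test approximation are assumptions made for illustration only.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha, n_test, n_mods = 0.05, 2000, 10

# Synthetic accuracy gains of successive modifications over the currently approved model.
true_gains = rng.choice([0.0, 0.01], size=n_mods, p=[0.3, 0.7])

def one_sided_p(gain, n):
    """One-sided z-test p-value for an observed gain estimated on n reused test points."""
    se = np.sqrt(0.25 / n)               # worst-case standard error of a sample proportion
    observed = gain + rng.normal(0, se)  # noisy test-set estimate of the true gain
    return 1 - norm.cdf(observed / se)

pvals = np.array([one_sided_p(g, n_test) for g in true_gains])

# (a) Bonferroni: each of the K candidate modifications is tested at level alpha / K.
bonferroni_approved = pvals < alpha / n_mods

# (b) Fixed-sequence test with alpha recycling: test modifications in order at the full
#     alpha, passing the budget forward after each approval; stop at the first failure.
fixed_seq_approved = np.zeros(n_mods, dtype=bool)
for i, p in enumerate(pvals):
    if p >= alpha:
        break
    fixed_seq_approved[i] = True

print("Bonferroni approvals:    ", int(bonferroni_approved.sum()))
print("Fixed-sequence approvals:", int(fixed_seq_approved.sum()))
```

When most proposed modifications are genuinely beneficial, the fixed-sequence rule typically approves more of them than the Bonferroni split, which matches the motivation stated in the abstract; the paper's SRGPs generalize this idea with graph-based alpha propagation and explicit handling of correlation between adaptively chosen modifications.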