Bias-variance tradeoffs in program analysis

R Sharma, AV Nori, A Aiken - ACM SIGPLAN Notices, 2014 - dl.acm.org
It is often the case that increasing the precision of a program analysis leads to worse results. It is our thesis that this phenomenon is the result of fundamental limits on the ability to use precise abstract domains as the basis for inferring strong invariants of programs. We show that bias-variance tradeoffs, an idea from learning theory, can be used both to explain why more precise abstractions do not necessarily lead to better results and to provide practical techniques for coping with such limitations. Learning theory captures precision using a combinatorial quantity called the VC dimension. We compute the VC dimension for different abstractions and report on its usefulness as a precision metric for program analyses. We evaluate cross validation, a technique for addressing bias-variance tradeoffs, on an industrial-strength program verification tool called YOGI. The tool produced using cross validation has significantly better running time, finds new defects, and has fewer time-outs than the current production version. Finally, we make some recommendations for tackling bias-variance tradeoffs in program analysis.
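
The VC dimension mentioned in the abstract measures the expressiveness of a hypothesis class by the size of the largest point set it can shatter (realize all possible 0/1 labelings of). As a self-contained illustration (ours, not taken from the paper), the sketch below verifies by brute force the standard fact that closed intervals on the real line shatter any two points but no three, i.e., the interval hypothesis class has VC dimension 2.

```python
from itertools import product

def interval_labels(points, lo, hi):
    """Label each point 1 if it lies in the closed interval [lo, hi], else 0."""
    return tuple(1 if lo <= p <= hi else 0 for p in points)

def shattered_by_intervals(points):
    """Brute-force check: do closed intervals realize all 2^n labelings?

    Endpoints drawn from the points themselves suffice: any nonempty
    realizable labeling is realized by [min selected, max selected], and a
    reversed pair (lo > hi) denotes the empty interval (all-zeros labeling).
    """
    realized = {interval_labels(points, lo, hi)
                for lo, hi in product(points, repeat=2)}
    return len(realized) == 2 ** len(points)

print(shattered_by_intervals([0.0, 1.0]))       # True: intervals shatter any 2 points
print(shattered_by_intervals([0.0, 1.0, 2.0]))  # False: (1, 0, 1) is unrealizable
```

The failing labeling (1, 0, 1) captures why: an interval cannot contain 0.0 and 2.0 while excluding 1.0, so no set of three points is shattered.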
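Cross validation, as used in the abstract, guards against over-fitting an analysis configuration to the benchmark suite it was tuned on. The sketch below shows a generic k-fold selection loop over candidate configurations; `run_analysis`, the 0/1 scoring convention, and the `configs` set are hypothetical placeholders for this sketch and do not reflect YOGI's actual interface or the paper's experimental setup.

```python
import random

def cross_validate(benchmarks, configs, run_analysis, k=5, seed=0):
    """k-fold cross validation for choosing an analysis configuration.

    `run_analysis(config, bench)` is a hypothetical hook returning a score
    for one benchmark (e.g. 1.0 for a verdict within budget, 0.0 for a
    time-out). Assumes len(benchmarks) >= k and non-empty `configs`.
    """
    shuffled = benchmarks[:]
    random.Random(seed).shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]

    def score(config, benches):
        return sum(run_analysis(config, b) for b in benches) / len(benches)

    # For each fold, pick the best config on the remaining folds and record
    # its score on the held-out fold; the mean estimates how well the
    # selection generalizes to programs outside the tuning suite.
    held_out = []
    for i in range(k):
        train = [b for j, fold in enumerate(folds) if j != i for b in fold]
        best = max(configs, key=lambda c: score(c, train))
        held_out.append(score(best, folds[i]))

    # Final choice is fit on the full suite; held_out stays an honest
    # generalization estimate because it never saw its own test fold.
    final = max(configs, key=lambda c: score(c, shuffled))
    return final, sum(held_out) / k
```

Returning both the selected configuration and the mean held-out score keeps the generalization estimate separate from the final choice, mirroring the abstract's point that the configuration doing best on the tuning set is not necessarily the one that generalizes.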