Paper Information
Title: Fast Numerical Program Analysis with Reinforcement Learning
Authors: Gagandeep Singh, Markus Püschel and Martin Vechev
Proceedings: CAV 2018
Editors: Georg Weissenbacher, Hana Chockler and Igor Konnov
Keywords: Machine Learning, Numerical Program Analysis, Reinforcement Learning, Polyhedra Domain Analysis, Static Analysis, Online Decomposition
Abstract: We leverage reinforcement learning (RL) to speed up numerical program analysis. The key insight is to establish a correspondence between concepts in RL and program analysis. For instance, a state in RL maps to an abstract program state in the analysis, an action maps to an abstract transformer, and at every state we have a set of sound transformers (actions) that represent different trade-offs between precision and performance. At each iteration, the agent (the analysis in our case) uses a policy that is learned offline by RL to decide on the transformer that minimizes the loss of precision at fixpoint while increasing analysis performance. Our approach leverages the idea of online decomposition (applicable to popular numerical abstract domains) to define a space of approximate transformers with varying degrees of precision and performance. Using a suitably designed set of features that capture key properties of both abstract program states and available actions, we then apply Q-learning with linear function approximation to compute an optimized context-sensitive policy that chooses transformers during analysis. We implemented our approach for the notoriously expensive Polyhedra domain and evaluated it on a set of Linux device drivers that are expensive to analyze. The results show that our approach can yield massive speedups (up to 515x) while maintaining precision at fixpoint.
Pages: 18
Talk: Jul 14, 15:00 (Session 98A: Polyhedra)
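The abstract mentions Q-learning with linear function approximation over features of abstract program states and candidate transformers. Below is a minimal, self-contained sketch of that general technique on a toy problem; the feature encoding, reward model, and environment here are hypothetical stand-ins chosen for illustration, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical sketch: Q(s, a) is approximated as w . phi(s, a), and the
# agent picks among "transformers" of decreasing precision. The features,
# reward, and environment below are illustrative, not from the paper.

N_ACTIONS = 3             # e.g. transformers of decreasing precision
ALPHA, GAMMA, EPSILON = 0.05, 0.9, 0.1

rng = np.random.default_rng(0)
w = np.zeros(4)           # weights of the linear Q-function

def phi(state, action):
    """Stand-in feature vector; in the paper, features would describe the
    abstract state and the candidate transformer's precision/cost."""
    s, a = state / 10.0, action / N_ACTIONS
    return np.array([1.0, s, a, s * a])

def q(state, action):
    return w @ phi(state, action)

def step(state, action):
    """Toy environment: cheaper (less precise) transformers pay a
    precision penalty; the reward trades off cost against precision."""
    cost = 1.0 / (action + 1)          # precise transformers cost more
    precision_loss = 0.2 * action      # approximate ones lose precision
    reward = -(cost + precision_loss)
    next_state = min(state + 1, 10)    # advance toward fixpoint
    return next_state, reward, next_state == 10

for episode in range(500):
    state, done = 0, False
    while not done:
        if rng.random() < EPSILON:     # epsilon-greedy exploration
            action = int(rng.integers(N_ACTIONS))
        else:
            action = int(np.argmax([q(state, a) for a in range(N_ACTIONS)]))
        next_state, reward, done = step(state, action)
        target = reward if done else reward + GAMMA * max(
            q(next_state, a) for a in range(N_ACTIONS))
        # Gradient step on the TD error; the gradient of w . phi is phi.
        w += ALPHA * (target - q(state, action)) * phi(state, action)
        state = next_state

print("learned weights:", w)
```

One appeal of the linear parameterization, consistent with the offline-learning setup the abstract describes, is that consulting the policy at analysis time reduces to a few dot products per candidate transformer, so the selection itself adds negligible overhead.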