Learned action policies are gaining traction in AI, but come without safety guarantees. Recent work devised a method for safety verification of neural policies via predicate abstraction. Here we extend this approach to policies represented by tree ensembles, by replacing the underlying SMT queries with queries that can be dispatched by Veritas, a reasoning tool dedicated to tree ensembles. The query language supported by Veritas is limited, and we show how to encode the richer constraints we need into additional trees and decision variables. We run experiments on benchmarks previously used to evaluate neural policy verification, and we design new benchmarks based on a logistics application at Airbus as well as on a real-world robotics domain. We find that (1) verification with Veritas vastly outperforms verification with Z3 and Gurobi; (2) tree-ensemble policies are much faster to verify than neural policies, while being competitive in policy quality; (3) our techniques are highly complementary to, and often outperform, an encoding of tree-ensemble policy verification into NUXMV.