Yongshao Ruan, Eric Horvitz, Henry Kautz
The time required for a backtracking search procedure to solve a problem can be reduced by employing randomized restart procedures. To date, researchers designing restart policies have relied on the simplifying assumption that runs are probabilistically independent of one another. We relax the assumption of independence among runs and address the challenge of identifying an optimal restart policy for the case of dependent runs. We show how offline dynamic programming can be used to generate an ideal restart policy, and how the policy can be used in conjunction with real-time observations to control the timing of restarts. We present results of experiments on applying the methods to create ideal restart policies for several challenging problems using two different solvers.
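To make the role of a restart policy concrete, the following sketch computes the classical optimal *fixed* cutoff for the independent-runs case that the paper generalizes. It is not the paper's dynamic-programming policy for dependent runs; it only illustrates the baseline quantity being optimized. Given an empirical sample of solver runtimes, the expected total time under a restart cutoff t for i.i.d. runs is E[min(R, t)] / P(R <= t), and the best fixed cutoff minimizes that ratio. Function names and the sample data are illustrative.

```python
def expected_runtime(runtimes, cutoff):
    """Expected total solve time under a fixed restart cutoff,
    assuming runs are i.i.d. draws from the empirical distribution:
    E[T_cutoff] = E[min(R, cutoff)] / P(R <= cutoff)."""
    n = len(runtimes)
    p_success = sum(1 for r in runtimes if r <= cutoff) / n
    if p_success == 0:
        return float("inf")  # no sampled run finishes within this cutoff
    e_min = sum(min(r, cutoff) for r in runtimes) / n
    return e_min / p_success

def best_fixed_cutoff(runtimes):
    """Search candidate cutoffs (the observed runtimes themselves)
    for the one minimizing expected total time."""
    return min(sorted(set(runtimes)),
               key=lambda t: expected_runtime(runtimes, t))

# Heavy-tailed toy sample: half the runs finish fast, half run very long.
sample = [1] * 50 + [1000] * 50
cutoff = best_fixed_cutoff(sample)   # restarting early pays off here
```

For this bimodal sample the best fixed cutoff is 1, with expected total time 2.0 (each attempt succeeds with probability 0.5 at unit cost), versus 500.5 with no restarts. The paper's contribution replaces this static analysis with an MDP solved offline, so the cutoff can adapt to observations made during dependent runs.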
Keywords: Randomized search algorithms, dynamic restart policies, Markov decision processes (MDP), satisfiability (SAT) problems, quasigroup completion problem (QCP), graph coloring (GCP), logistics planning, constraint satisfaction (CSP), heavy-tailed distributions, NP-complete problems.
In: Proceedings of the Eighth International Conference on Principles and Practice of Constraint Programming, September 2002, Ithaca, New York.
Author Email: ruan@cs.washington.edu, horvitz@microsoft.com, kautz@cs.washington.edu