FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
SAT PROCEEDINGS: PAPERS WITH ABSTRACTS

Editors: Christoph M. Wintersteiger and Olaf Beyersdorff

Authors, Title and Abstract | Paper | Talk
Jul 11 11:00

ABSTRACT. The effective use of satisfiability solvers requires problem encodings that make good use of the reasoning techniques employed in such solvers, such as unit propagation and clause learning. Propagation completeness has been proposed as a useful property for constraint encodings, as it maximizes the utility of unit propagation. Experimental results on using encodings with this property in the context of SMT solving have, however, remained inconclusive, as such encodings are typically very large, which increases the bookkeeping effort of solvers.

In this paper, we introduce approximate propagation completeness and approximate conflict propagation as novel notions that balance the size of an encoding and its properties for efficient SAT-based reasoning. While approximate propagation completeness is a generalization of classical propagation completeness, (approximate) conflict propagation is a new concept for reasoning about how early conflicts can be detected by a SAT solver. Both notions together span a hierarchy of encoding quality choices, with classical propagation completeness as a special case. We show how to compute approximately propagation complete and conflict propagating constraint encodings with a minimal number of clauses using a reduction to MaxSAT. To evaluate the effect of such encodings, we give results on applying them in a case study.
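
As a rough illustration of the property itself (this is not the paper's MaxSAT-based construction), the following Python sketch using the pysat package checks a small encoding for propagation completeness: under every satisfiable partial assignment, every entailed literal must already be derivable by unit propagation. The example clauses encode x3 <-> (x1 and x2) and are purely illustrative.

    from itertools import product
    from pysat.solvers import Glucose3

    clauses = [[-1, -2, 3], [1, -3], [2, -3]]   # x3 <-> (x1 and x2), illustrative
    variables = [1, 2, 3]

    def entailed(solver, assumption):
        """Literals implied by the encoding under `assumption`, found via SAT calls."""
        out = []
        for v in variables:
            for lit in (v, -v):
                if lit in assumption or -lit in assumption:
                    continue
                if not solver.solve(assumptions=assumption + [-lit]):
                    out.append(lit)   # encoding + assumption + (-lit) is UNSAT
        return out

    with Glucose3(bootstrap_with=clauses) as s:
        complete = True
        for signs in product([None, 1, -1], repeat=len(variables)):
            assumption = [v * b for v, b in zip(variables, signs) if b]
            if not s.solve(assumptions=assumption):
                continue   # conflicting assumptions are the domain of conflict propagation
            _, propagated = s.propagate(assumptions=assumption)
            missing = set(entailed(s, assumption)) - set(propagated)
            if missing:
                complete = False
                print('unit propagation misses', missing, 'under', assumption)
        if complete:
            print('encoding is propagation complete')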

Jul 09 14:00

ABSTRACT. In this paper we present a novel cardinality constraint encoding for solving the weighted MaxSAT problem with iterative SAT-based methods, based on the Polynomial Watchdog (PW) CNF encoding for Pseudo-Boolean (PB) constraints. The watchdog of the PW encoding indicates whether the bound of the PB constraint holds. In our approach, we lift this static watchdog concept to a dynamic one, allowing an incremental convergence to the optimal result. Consequently, we formulate and implement a SAT-based algorithm for our new Dynamic Polynomial Watchdog (DPW) encoding which can be applied for solving the MaxSAT problem. Furthermore, we introduce three fundamental optimizations of the PW encoding, also suited for the original version, which lead to significantly smaller encoding sizes.

Our experimental results show that our encoding and algorithm are competitive with the state-of-the-art encodings utilized in QMaxSAT (third place in the MaxSAT Evaluation 2017). Our encoding dominates two of the QMaxSAT encodings, while at the same time solving instances that they cannot. We integrated our new encoding into QMaxSAT and adapted the heuristic to choose between the only remaining encoding of QMaxSAT and our approach. This combined version solves 19 (4%) more instances in 30% less overall run time on the benchmark set of the MaxSAT Evaluation 2017. Moreover, for the instances solved by both solvers, our encoding is 2x faster than all encodings of QMaxSAT employed in the evaluation.
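
The DPW encoding itself is intricate, but the surrounding iterative SAT-based MaxSAT loop it plugs into can be sketched in a few lines with pysat. Here a standard cardinality encoding stands in for the dynamic watchdog, weights are ignored for brevity, and the input filename is hypothetical:

    from pysat.formula import WCNF
    from pysat.card import CardEnc
    from pysat.solvers import Glucose3

    wcnf = WCNF(from_file='example.wcnf')      # hypothetical input
    top = wcnf.nv
    relax, hard = [], [c[:] for c in wcnf.hard]
    for soft in wcnf.soft:                     # relax each soft clause with a fresh var
        top += 1
        relax.append(top)
        hard.append(soft + [top])

    best, k = None, len(relax)
    while k >= 0:
        card = CardEnc.atmost(lits=relax, bound=k, top_id=top)
        with Glucose3(bootstrap_with=hard + card.clauses) as s:
            if not s.solve():
                break                          # previous bound was optimal
            model = s.get_model()
            best = sum(1 for r in relax if model[r - 1] > 0)
            k = best - 1                       # demand strictly fewer falsified softs
    print('minimum number of falsified soft clauses:', best)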

Jul 09 14:30

ABSTRACT. We explore the relationship between two closely related optimization problems: MaxSAT and Optimization Modulo Bit-Vectors (OBV). Given a bit-vector formula or a propositional formula F and a target bit-vector T, Unweighted Partial MaxSAT maximizes the number of satisfied bits in T, while OBV maximizes the value of T. The contribution of this paper is twofold.

First, we propose a new OBV-based Unweighted Partial MaxSAT algorithm. Our resulting solver, Mrs. Beaver, is incremental: it can be re-used across multiple invocations with different target bit-vectors. Mrs. Beaver outscores the state-of-the-art solvers when run with the settings of the Incomplete-60-Second-Timeout Track of the MaxSAT Evaluation 2017.

Second, we introduce and study a new problem, Ordered MaxSAT (OMaxSAT). In OMaxSAT, instead of a target bit-vector T, there are multiple target bit-vectors T_{m-1}, T_{m-2}, ..., T_0. The goal is to maximize the number of satisfied bits in each of the targets, where satisfying one bit of T_i is preferred to satisfying all the bits in T_{i-1}, T_{i-2}, ..., T_0. OMaxSAT has a critical application at Intel: cleaning up weak design rule violations during the Physical Design stage of Computer-Aided Design. We propose a Mrs. Beaver-based OMaxSAT algorithm which takes advantage of Mrs. Beaver's incremental features and is significantly faster than a non-incremental solution.
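
A minimal sketch of the core OBV loop, maximizing a target bit-vector bit by bit from the most significant bit down, under the assumption that the target bits are plain CNF variables; a single pysat solver is reused incrementally via assumptions, in the spirit of Mrs. Beaver. The filename and variable numbers are hypothetical:

    from pysat.formula import CNF
    from pysat.solvers import Glucose3

    cnf = CNF(from_file='circuit.cnf')   # hypothetical formula
    T = [10, 9, 8, 7]                    # hypothetical target bits, MSB first

    with Glucose3(bootstrap_with=cnf.clauses) as s:
        assert s.solve(), 'formula is unsatisfiable'
        fixed = []
        for bit in T:
            # Try to set this bit to 1 with all more significant bits fixed.
            fixed.append(bit if s.solve(assumptions=fixed + [bit]) else -bit)
        print('maximal value of T (MSB first):',
              [1 if lit > 0 else 0 for lit in fixed])

An OMaxSAT procedure can then, roughly speaking, chain such passes over T_{m-1} down to T_0 without discarding the solver's learned state.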

Jul 09 15:00

ABSTRACT. We study cdcl-cuttingplanes, Open-WBO, and Sat4j, three successful solvers from the Pseudo-Boolean Competition 2016, and evaluate them on crafted benchmarks designed to be trivial for the cutting planes (CP) proof system underlying pseudo-Boolean (PB) proof search, yet potentially tricky for PB solvers. Our results demonstrate severe shortcomings in state-of-the-art PB solving techniques. Despite the fact that our benchmarks have linear-size tree-like CP proofs, the solvers often perform quite badly even for very small instances. In our analysis, this shows that solvers need to explore stronger methods of pseudo-Boolean reasoning within cutting planes.

We make an empirical observation from the competition data that many of the easy crafted instances are also infeasible over the rational numbers, or have small strong backdoors to PB instances without rational solutions. This raises the intriguing question of whether the existence of such backdoors can be correlated with easiness/hardness. However, for some of our constructed benchmark families even rationally infeasible instances are completely beyond reach. This indicates that PB solvers need to get better not only at Boolean reasoning but also at linear programming.

Finally, we compare CP-based solvers with CDCL and MIP solvers. For those of our benchmarks where the natural CNF encodings admit efficient resolution proofs, we see that the CDCL-based solver Open-WBO is orders of magnitude faster than the CP-based solvers cdcl-cuttingplanes and Sat4j (though it seems very sensitive to the ordering of the input), and the MIP solver Gurobi beats all of these solvers across the board. These experimental results point to several crucial challenges in the quest for more efficient pseudo-Boolean solvers, and we believe that a further study of our benchmarks could shed important light on the potential and limitations of current state-of-the-art PB solving.
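
For readers less familiar with cutting planes, here is a small self-contained example of the kind of CP reasoning involved (not one of the paper's benchmarks). Translating each clause into a linear inequality over 0/1 variables, with a negated literal written as (1 - x), the four clauses (x1 or x2), (x1 or not x2), (not x1 or x2), (not x1 or not x2) admit a short tree-like refutation using only cancelling addition and division:

    x1 + x2 >= 1   and   x1 + (1 - x2) >= 1
        add (x2 cancels):        2 x1 >= 1
        divide by 2, round up:   x1 >= 1

    (1 - x1) + x2 >= 1   and   (1 - x1) + (1 - x2) >= 1
        add and divide:          1 - x1 >= 1,  i.e.  x1 <= 0

    x1 >= 1   and   x1 <= 0
        add:                     0 >= 1, a contradiction.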

Jul 09 16:00

ABSTRACT. Restarts are a critically important heuristic in most modern conflict-driven clause-learning (CDCL) SAT solvers. The precise reason as to why and how restarts enable CDCL solvers to scale efficiently remains obscure. In this paper we address this question, and provide some answers that enabled us to design a new effective machine-learning-based restart policy. Specifically, we provide evidence that restarts improve the quality of learnt clauses as measured by one of the best-known clause quality metrics, namely literal block distance (LBD). More precisely, we show that more frequent restarts decrease the LBD of learnt clauses, which in turn improves solver performance. We also note that too many restarts can be harmful because of the computational overhead of rebuilding the search tree from scratch too frequently.

With this tradeoff in mind, between learning better clauses and the computational overhead of rebuilding the search tree, we introduce a new machine-learning-based restart policy that predicts the quality of the next learnt clause based on the history of previously learnt clauses. The restart policy erases the solver's search tree during its run if it predicts that the quality of the next learnt clause is below some dynamic threshold that is determined by the solver's history on the given input. Our policy is based on two observations gleaned from our study of the LBDs of learnt clauses. First, we discover that high LBD percentiles can be approximated with z-scores of the normal distribution. Second, we find that LBDs, viewed as a sequence, are correlated, and hence the LBDs of past learnt clauses can be used to predict the LBD of future ones. With these observations in place, and techniques to exploit them, our new restart policy is shown to be effective over a large benchmark set from the SAT Competitions 2014 to 2017.
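
A toy rendition of such a policy in Python (the paper's actual predictor, features, and thresholds differ): maintain exponential moving statistics of recent LBDs, predict the next LBD from the correlated sequence, and restart when the prediction crosses a z-score-based threshold.

    import math

    class RestartPolicy:
        def __init__(self, alpha=0.05, z_threshold=1.0):
            self.alpha = alpha        # smoothing factor for moving statistics
            self.z = z_threshold      # quality cut-off in standard deviations
            self.mean = None
            self.var = 0.0
            self.prev = None

        def observe(self, lbd):
            """Update running statistics with the LBD of a newly learnt clause."""
            if self.mean is None:
                self.mean, self.prev = float(lbd), lbd
                return
            d = lbd - self.mean
            self.mean += self.alpha * d
            self.var = (1 - self.alpha) * (self.var + self.alpha * d * d)
            self.prev = lbd

        def should_restart(self):
            """Predict the next LBD from the last one (LBDs are autocorrelated)
            and restart if it lies above mean + z * stddev."""
            if self.mean is None:
                return False
            predicted = 0.5 * (self.prev + self.mean)   # crude AR-style prediction
            return predicted > self.mean + self.z * math.sqrt(self.var)

Inside a CDCL loop one would call observe() after each learnt clause and trigger a restart whenever should_restart() returns True.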

Jul 09 16:30

ABSTRACT. Non-Chronological Backtracking (NCB) has been implemented in every modern CDCL SAT solver since the original CDCL solver GRASP. NCB's importance has never been questioned. This paper argues that NCB is not always helpful. We show how one can implement the alternative to NCB, Chronological Backtracking (CB), in a modern SAT solver. We demonstrate that CB improves the performance of the winner of the latest SAT Competition, Maple-LCM-Dist, and the winner of the latest MaxSAT Evaluation, Open-WBO.
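
One plausible way to realize CB inside a CDCL loop (the paper's actual criterion may differ, and the threshold T is a made-up stand-in) is a single decision after conflict analysis:

    # After conflict analysis, choose the backtrack level. Classical NCB jumps
    # to the assertion level; chronological backtracking (CB) instead undoes
    # only the most recent decision when the jump would be long.
    def backtrack_level(conflict_level, assertion_level, T=100):
        if conflict_level - assertion_level > T:
            return conflict_level - 1   # CB: keep most of the trail intact
        return assertion_level          # NCB: standard backjumping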

Jul 09 17:00

ABSTRACT. There are many reasons to think that SAT solvers should be able to exploit formula structure, but no standard techniques in modern CDCL solvers make explicit use of structure. We describe modifications to modern decision and clause-deletion heuristics that exploit formula structure by using variable centrality. We show that these improve the performance of Maple LCM Dist, the winning solver from the Main Track of the 2017 SAT Competition. In particular, using centrality in clause deletion results in solving 9 more formulas from the 2017 Main Track. We also examine a number of measures of solver performance and learned clause quality, to see how the changes affect solver execution.
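
One plausible preprocessing pass (the paper's exact centrality measure and its integration into the heuristics may differ): compute a centrality score per variable on the variable incidence graph and use it to seed decision-heuristic activities. A sketch with networkx and pysat, on a hypothetical input file:

    import itertools
    import networkx as nx
    from pysat.formula import CNF

    cnf = CNF(from_file='instance.cnf')          # hypothetical input
    G = nx.Graph()
    for clause in cnf.clauses:
        # Variables are adjacent whenever they co-occur in a clause.
        for u, v in itertools.combinations({abs(l) for l in clause}, 2):
            G.add_edge(u, v)

    # Sampled (approximate) betweenness centrality keeps this tractable.
    centrality = nx.betweenness_centrality(G, k=min(256, len(G)))

    # Use the scores, e.g., to initialize VSIDS-style activities.
    initial_activity = {v: 100.0 * c for v, c in centrality.items()}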

Jul 09 17:30

ABSTRACT. Validation of complex digital circuitry primarily relies upon constrained-random simulation. However, it is intractable to simulate all satisfying assignments (models). Rather, sufficient verification coverage is achieved by selecting test stimuli uniformly at random from the set of valid models. The current state-of-the-art witness generators use hashing to perform this sampling near-uniformly. We present a new, perfectly uniform witness generation algorithm based on the exact model counter sharpSAT and reservoir sampling. In experiments across hundreds of SAT benchmarks, our algorithm is faster than the state of the art by factors ranging from ten to over one hundred thousand.
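
Reservoir sampling itself is a one-pass textbook routine; a size-1 reservoir over a stream of models is sketched below. The paper integrates this idea with sharpSAT's exhaustive search rather than with naive model enumeration.

    import random

    def reservoir_sample(stream):
        """Pick one item uniformly at random from a stream of unknown length:
        keep the i-th item with probability 1/i. Applied to an iterator over
        models, the final choice is perfectly uniform."""
        chosen = None
        for i, item in enumerate(stream, start=1):
            if random.randrange(i) == 0:   # probability 1/i
                chosen = item
        return chosen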

Jul 10 09:00

ABSTRACT. We present a probabilistic model counter that can trade off running time with approximation accuracy. As in several previous works, the number of models of a formula is estimated by adding random parity constraints (equations). Specifically, the binary logarithm of the number of models is approximated by the number of random constraints needed to eliminate all models. One key difference from prior works is that the systems of parity equations used by our algorithm are the parity-check matrices of Low Density Parity Check (LDPC) error-correcting codes. As a result, the equations tend to be much shorter, often containing fewer than 10 variables each, making the search for models far more tractable. The price paid for computational tractability is that the corresponding statistical properties are not as good as when (much longer) constraints are used. We show how one can deal with this issue and derive rigorous approximation guarantees by performing more solver invocations.
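
Stripped of the LDPC construction and the statistical analysis, the estimation loop looks roughly as follows. It assumes the pycryptosat bindings of CryptoMiniSat, which accept XOR clauses natively, and draws plain sparse random parities where the paper uses parity-check rows of an LDPC code; a single trial like this carries no rigorous guarantee on its own.

    import random
    from pycryptosat import Solver

    def estimate_log2_models(clauses, n_vars, width=8):
        """Add random sparse XOR constraints until UNSAT; the count of added
        constraints approximates log2 of the number of models (one trial)."""
        s = Solver()
        for c in clauses:
            s.add_clause(c)
        m = 0
        while True:
            sat, _ = s.solve()
            if not sat:
                return m
            vs = random.sample(range(1, n_vars + 1), min(width, n_vars))
            s.add_xor_clause(vs, rhs=random.random() < 0.5)
            m += 1

The paper's contribution lies in replacing the random parities with LDPC parity-check rows and in showing how repeated solver invocations recover rigorous approximation guarantees despite their weaker statistical properties.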

Jul 10 09:30

ABSTRACT. In this paper, we introduce a novel algorithm to solve projected model counting (PMC). PMC asks to count solutions of a Boolean formula with respect to a given set of projected variables, where multiple solutions that are identical when restricted to the projected variables count as only one solution.

Our algorithm exploits small primal or incidence treewidth of the input instance. It runs in time $O(2^{2^{k+4}} n^2)$, where k is the treewidth and n is the input size of the instance. In other words, we obtain that PMC is fixed-parameter tractable when parameterized by treewidth. Further, we take the exponential time hypothesis (ETH) into consideration and establish lower bounds for bounded-treewidth algorithms for PMC, which shows that the runtime bounds of our algorithm are tight.

Jul 10 10:00

ABSTRACT. This paper describes the algorithm implemented in the QBF solver CQESTO, which placed second in the non-CNF track of last year's QBF competition. The algorithm is inspired by the CNF-based solver QESTO. Like QESTO, CQESTO invokes a SAT solver in a black-box fashion. However, it operates directly on the circuit representation of the formula. The paper analyzes the individual operations that the solver performs on the circuit.

Jul 10 14:00

ABSTRACT. While symmetries are well understood for Boolean formulas and successfully exploited in practical SAT solving, little is known about symmetries in quantified Boolean formulas (QBF). There are some works introducing adaptations of propositional symmetry-breaking techniques for QBF solving, with a theory covering only very specific parts of QBF symmetries. We present a general framework based on symmetric groups that gives a concise characterization of symmetries in QBF. Our framework handles universal and existential symmetries in a dual manner and gives rise to a natural generalization of a known symmetry-breaking technique from SAT.

Jul 10 14:30

ABSTRACT. We develop new semantics for resolution-based calculi for Quantified Boolean Formulas, covering both the CDCL-derived calculi and the expansion-derived ones. The semantics is centred around the notion of a partial strategy for the universal player and allows us to show in a local, inference-by-inference manner that these calculi are sound. It also helps us understand some less intuitive concepts, such as the role of tautologies in long-distance resolution or the meaning of the "star" in the annotations of IRM. Furthermore, we show that a clause of any of these calculi can be, in the spirit of the Curry-Howard correspondence, interpreted as a specification of the corresponding partial strategy. The strategy is total, i.e. winning, when specified by the empty clause.

Jul 10 15:00

ABSTRACT. We propose reductions to quantified Boolean formulas (QBF) as a new approach to showing fixed-parameter linear algorithms for problems parameterized by treewidth. We demonstrate the feasibility of this approach by giving new algorithms for several well-known problems from artificial intelligence that are in general complete in the second level of the polynomial hierarchy. By reduction from QBF we show that all resulting algorithms are essentially optimal in their dependence on the treewidth. The problems that we consider were already known to be fixed-parameter linear by using Courcelle's theorem or dynamic programming, but we argue that our approach has clear advantages over these techniques: on the one hand, in contrast to Courcelle's theorem, we get concrete and tight guarantees for the runtime dependence on the treewidth; on the other hand, we avoid tedious dynamic programming and, after showing some normalization results for CNF-formulas, our upper bounds often boil down to a few lines.

Jul 10 16:00

ABSTRACT. Quantified Boolean Formulas (QBFs) offer compact encodings of problems arising in areas such as verification and synthesis. These applications typically require that QBF solvers not only decide whether an input formula is true or false but also output a witnessing certificate. State-of-the-art QBF solvers based on Quantified Conflict-Driven Constraint Learning (QCDCL) can emit Q-resolution proofs, from which in turn certificates can be extracted. The correctness of a certificate generated in this way is validated by substituting it into the matrix of the input QBF and using a SAT solver to check that the resulting propositional formula (the validation formula) is unsatisfiable. This final check is often the most time-consuming part of the entire certification workflow.

We propose a new validation method that does not require a SAT call and provably runs in polynomial time. It uses the Q-resolution proof from which the given certificate was extracted to directly generate a (propositional) proof of the validation formula in the RUP format, which can be verified by a proof checker such as DRAT-trim. Experiments with a prototype implementation show a robust, albeit modest, increase in the number of successfully validated certificates compared to validation with a SAT solver.

Jul 10 16:30

ABSTRACT. We study non-uniform random k-SAT on n variables with an arbitrary probability distribution p on the variable occurrences. The number t = t(n) of randomly drawn clauses at which random formulas go from asymptotically almost surely (a.a.s.) satisfiable to a.a.s. unsatisfiable is called the satisfiability threshold. Such a threshold is called sharp if it approaches a step function as n increases. We show that a threshold t(n) for random k-SAT with an ensemble $(p_n)_{n \in \mathbb{N}}$ of arbitrary probability distributions on the variable occurrences is sharp if $\|p\|_2 = O_n\big(t^{-2/k}\big)$ and $\|p\|_\infty = o_n\big(t^{-k/(2k-1)} \cdot \log^{-(k-1)/(2k-1)} t\big)$.

This result generalizes Friedgut's sharpness result from uniform to non-uniform random k-SAT and implies sharpness for the thresholds of a wide range of random k-SAT models with heterogeneous probability distributions, for example, models where the variable probabilities follow a power-law distribution.

Jul 11 09:00

ABSTRACT. We initiate a proof-complexity-theoretic study of subsystems of cutting planes (CP) modelling proof search in conflict-driven pseudo-Boolean (PB) solvers. These algorithms combine restrictions such as requiring that addition of constraints always cancels a variable and/or that so-called saturation is used instead of division. It is known that, on CNF inputs, cutting planes with cancelling addition and saturation is essentially just resolution. We show that even if general addition is allowed, this proof system is still polynomially simulated by resolution with respect to proof size, as long as coefficients are polynomially bounded.

As a further way of delineating the proof power of subsystems of CP, we propose to study a number of easy (but tricky) instances of problems in NP. Most of the formulas we consider have short and simple tree-like proofs in general CP, but the restricted subsystems seem to reveal a much more varied landscape. Although we are not able to formally establish separations between different subsystems of CP (which would require major technical breakthroughs in proof complexity), these formulas appear to be good candidates for obtaining such separations. We believe that a closer study of these benchmarks is a promising approach for shedding more light on the reasoning power of pseudo-Boolean solvers.

Jul 11 09:30

ABSTRACT. We characterize several complexity measures for the resolution of Tseitin formulas in terms of a two-person cop-robber game. Our game is a slight variation of the one Seymour and Thomas used in order to characterize the tree-width parameter. For any undirected graph, by counting the number of cops needed in our game in order to catch a robber in it, we are able to exactly characterize the width, variable space, and depth measures for the resolution of the Tseitin formula corresponding to that graph. We also give an exact game characterization of resolution variable space for any formula.

We show that our game can be played in a monotone way. This implies that the corresponding resolution measures on Tseitin formulas remain the same under the restriction to regular resolution.

Using our characterizations we improve the existing complexity bounds for Tseitin formulas showing that resolution width, depth and variable space coincide up to a logarithmic factor, and that variable space is bounded by the clause space times a logarithmic factor.

Jul 11 10:00

ABSTRACT. Minimally Unsatisfiable clause-sets (MUs) are building blocks for understanding Minimally Unsatisfiable Sub-clause-sets (MUSs), and they are also interesting in their own right, as hardest unsatisfiable clause-sets. There are two important previously known but isolated characterisations for MUs, both with ingenious but complicated proofs: characterising 2-CNF MUs, and characterising MUs with deficiency 2 (two more clauses than variables). Via a novel connection to Minimal Strong Digraphs (MSDs), we give short and intuitive new proofs of these characterisations, revealing an underlying common structure. We obtain unique isomorphism type descriptions of normal forms, for both cases.

An MSD is a strongly connected digraph, where removing any arc destroys the strong connectedness. We introduce the new class DFM of special MUs, which are in close correspondence to MSDs. The known relation between 2-CNFs and digraphs is used, but in a simpler and more direct way. The non-binary clauses in DFMs can be ignored. For a binary clause (a or b), normally both arcs (-a -> b) and (-b -> a) need to be considered, but here we have a canonical choice of one of them.

The definition of DFM is indeed very simple: an MU where all clauses except two are binary, while the two non-binary clauses are the two "full monotone clauses", the disjunction of all variables and the disjunction of all negated variables. We expect this basic class of MUs to be very useful in the future. Every variable occurs at least twice positively and twice negatively ("non-singularity"), and the isomorphism problem for the class is GI-complete, while we have polytime decision, and likely good properties related to MUS search. Via "saturations" and "marginalisations", adding resp. removing literal occurrences, as well as via "non-singular extensions", adding new variables, this class reaches out to apparently unrelated classes.
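
The classical clause-arc correspondence that this construction builds on is easy to make concrete: a binary clause (a or b) yields the arcs (-a -> b) and (-b -> a), and strong connectivity of the resulting implication digraph can be checked with networkx. The clause list below is a small illustrative minimally unsatisfiable 2-CNF.

    import networkx as nx

    binary_clauses = [(1, 2), (-1, 2), (1, -2), (-1, -2)]   # a minimal unsatisfiable 2-CNF
    G = nx.DiGraph()
    for a, b in binary_clauses:
        G.add_edge(-a, b)   # clause (a or b) as implication -a -> b
        G.add_edge(-b, a)   # and its contrapositive twin -b -> a

    print(list(nx.strongly_connected_components(G)))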

Jul 11 16:00

ABSTRACT. Computing minimal (or even just small) certificates is a central problem in automated reasoning and, in particular, in automated formal verification. For unsatisfiable formulas in CNF such certificates take the form of Minimal Unsatisfiable Subsets (MUSes) and have a wide range of applications. As a formula can have multiple MUSes and different MUSes provide different insights on unsatisfiability, commonly studied problems include computing a smallest MUS (SMUS) or computing all MUSes (AllMUS) of a given unsatisfiable formula. In this paper, we consider certificates to safety properties in the form of Minimal Safe Inductive Invariants (MSISes), and we develop algorithms for exploring such certificates by computing a smallest MSIS (SMSIS) or computing all MSISes (AllMSIS) of a given safe inductive invariant. More precisely, we show how two well-known MUS enumeration algorithms, CAMUS and MARCO, can be adapted to MSIS enumeration as well.
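
For readers unfamiliar with the MUS side, the generic deletion-based shrinking loop that such tools build on can be sketched with pysat selector variables; this extracts a single MUS and is not the CAMUS/MARCO enumeration logic itself, nor its MSIS adaptation. The input filename is hypothetical.

    from pysat.formula import CNF
    from pysat.solvers import Glucose3

    cnf = CNF(from_file='unsat.cnf')             # hypothetical unsatisfiable input
    top = cnf.nv
    selectors = []
    with Glucose3() as s:
        for clause in cnf.clauses:
            top += 1
            selectors.append(top)
            s.add_clause(clause + [-top])        # selector 'top' enables the clause
        assert not s.solve(assumptions=selectors), 'input is satisfiable'

        keep = set(selectors)
        for sel in selectors:
            # Drop one clause; keep the drop if the rest is still unsatisfiable.
            trial = [x if (x in keep and x != sel) else -x for x in selectors]
            if not s.solve(assumptions=trial):
                keep.discard(sel)                # clause is redundant for UNSAT
        print('MUS size:', len(keep))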

Jul 11 16:30

ABSTRACT. Program equivalence checking is a fundamental problem in computer science with applications to translation validation and automatic synthesis of compiler optimizations. Modern equivalence checkers employ SMT solvers to discharge proof obligations generated by their equivalence checking algorithm. Equivalence checkers also involve algorithms to infer invariants that relate the intermediate states of the two programs being compared for equivalence. We present a new algorithm, called invariant-sketching, which allows the inference of the required invariants through the generation of counter-examples using SMT solvers. We also present an algorithm, called query-decomposition, which allows more effective use of SMT solvers in equivalence checking. Both invariant-sketching and query-decomposition help us prove equivalence across program transformations that could not be handled by previous equivalence checking algorithms.

Jul 12 11:00

ABSTRACT. Incremental linearization is a conceptually simple, yet effective, technique that we have recently proposed for solving SMT problems over nonlinear real arithmetic constraints. In this paper, we show how the same approach can also be applied successfully to the harder case of nonlinear integer arithmetic problems. We describe in detail our implementation of the basic ideas inside the MathSAT SMT solver, and evaluate its effectiveness with an extensive experimental analysis over all nonlinear integer benchmarks in SMT-LIB. Our results show that MathSAT is very competitive with (and often outperforms) state-of-the-art SMT solvers based on alternative techniques.
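
The basic refinement loop for a single integer product can be sketched with the z3 Python API standing in for MathSAT; the constraints are hypothetical, and the tangent-plane lemmas shown are the standard ones, used here purely to illustrate the approach.

    from z3 import Int, Solver, And, Implies, sat

    # Abstract the product x*y by a fresh variable p, solve the linear
    # abstraction, and refine spurious models with linear lemmas
    # instantiated at the model point.
    x, y, p = Int('x'), Int('y'), Int('p')
    s = Solver()
    s.add(p > 10, x + y < 8, x >= 0, y >= 0)   # hypothetical; p abstracts x*y

    while s.check() == sat:
        m = s.model()
        a, b = m[x].as_long(), m[y].as_long()
        if m[p].as_long() == a * b:
            print('true nonlinear model:', m)  # abstraction agrees with x*y
            break
        # Valid tangent-plane lemmas at (a, b); together they force p == a*b
        # whenever x == a and y == b, so this spurious model is eliminated.
        s.add(Implies(And(x >= a, y >= b), p >= b * x + a * y - a * b),
              Implies(And(x <= a, y <= b), p >= b * x + a * y - a * b),
              Implies(And(x >= a, y <= b), p <= b * x + a * y - a * b),
              Implies(And(x <= a, y >= b), p <= b * x + a * y - a * b))
    else:
        print('unsat')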

Jul 12 11:30

ABSTRACT. Set membership filters are used as a primary test for whether large sets contain a given element. The most common such filter is the Bloom filter. Most pertinent to this article is the recently introduced Satisfiability (SAT) filter. This article proposes the XOR-Satisfiability (XORSAT) filter, a variant of the SAT filter based on random k-XORSAT. Experimental results show that this new filter can be more than 99% efficient (i.e., achieves the information-theoretic limit) while also having a query speed comparable to the standard Bloom filter, making it practical for use with very large data sets.
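
A toy version of the idea, under several simplifying assumptions: each element contributes one random k-variable parity constraint over an n-bit array; building the filter solves the resulting GF(2) linear system, and querying checks the element's parity against the stored bits. One constraint per element yields a false-positive rate of 1/2, whereas real designs use several constraints per element; the hashing and parameters here are illustrative.

    import random

    def constraint(element, n, k=3, seed=0):
        rnd = random.Random(f'{seed}:{element}')      # illustrative hashing
        return rnd.sample(range(n), k), rnd.randrange(2)

    def build(elements, n):
        pivots = {}                      # pivot bit -> (row mask, parity)
        for e in elements:
            pos, par = constraint(e, n)
            mask = 0
            for j in pos:
                mask |= 1 << j
            for j in sorted(pivots, reverse=True):    # reduce against pivot rows
                if mask >> j & 1:
                    pmask, ppar = pivots[j]
                    mask ^= pmask
                    par ^= ppar
            if mask == 0:
                if par:
                    raise ValueError('unsatisfiable system; rebuild with a new seed')
                continue
            pivots[mask.bit_length() - 1] = (mask, par)
        bits = [0] * n                   # free bits default to 0
        for j in sorted(pivots):         # back-substitute, lowest pivot first
            mask, par = pivots[j]
            bits[j] = par ^ (sum(bits[i] for i in range(j) if mask >> i & 1) % 2)
        return bits

    def query(bits, element):
        pos, par = constraint(element, len(bits))
        return sum(bits[j] for j in pos) % 2 == par

    bits = build(['alice', 'bob', 'carol'], n=64)
    print(query(bits, 'alice'), query(bits, 'mallory'))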

Jul 12 12:00

ABSTRACT. We present ALIAS, a modular tool aimed at finding backdoors for hard SAT instances. Here, by a backdoor for a specific SAT solver and SAT formula we mean a set of variables such that instantiating them in all possible ways yields a family of subformulas whose total solving time is smaller than that of the original formula. For a particular backdoor, the tool uses a Monte-Carlo algorithm to estimate the runtime of a solver when partitioning the original problem via said backdoor. Thus, the problem of finding a backdoor is viewed as a black-box optimization problem. The tool's modular structure makes it possible to employ state-of-the-art SAT solvers and black-box optimization heuristics. In practice, for a number of hard SAT instances, the tool made it possible to solve them much faster than by using state-of-the-art multithreaded SAT solvers.
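
The Monte-Carlo estimation step can be sketched independently of the optimization loop: sample random assignments of the candidate backdoor variables, time the resulting subformula solves, and scale the mean by 2^|B|. The filename and variable choices are hypothetical, and this is a simplified stand-in for ALIAS's actual objective function.

    import random
    import time
    from pysat.formula import CNF
    from pysat.solvers import Glucose3

    def estimate_total_time(cnf, backdoor, samples=100):
        total = 0.0
        for _ in range(samples):
            assumption = [v if random.random() < 0.5 else -v for v in backdoor]
            with Glucose3(bootstrap_with=cnf.clauses) as s:
                t0 = time.perf_counter()
                s.solve(assumptions=assumption)
                total += time.perf_counter() - t0
        # Mean time per subformula, scaled to all 2^|B| instantiations.
        return (total / samples) * 2 ** len(backdoor)

    cnf = CNF(from_file='hard.cnf')      # hypothetical instance
    print(estimate_total_time(cnf, backdoor=[3, 17, 42]))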

Jul 12 14:00

ABSTRACT. Boolean satisfiability (SAT) solvers are at the core of efficient approaches for solving a vast multitude of practical problems. Moreover, albeit targeting an NP-complete problem, SAT solvers are increasingly used for tackling problems beyond NP. Despite the success of SAT in practice, modeling with SAT and, more importantly, implementing SAT-based problem-solving solutions is often a difficult and error-prone task. This paper proposes the PySAT toolkit, which enables fast Python-based prototyping using SAT oracles and SAT-related technology. PySAT provides a simple API for working with a few state-of-the-art SAT oracles and also integrates a number of cardinality constraint encodings, all aiming at simplifying the prototyping process. Experimental results presented in the paper show that PySAT-based implementations can be as efficient as those written in a low-level language.
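
A small, self-contained taste of the API described in the paper: build a CNF, add a cardinality constraint from pysat.card, and query an incremental SAT oracle under assumptions, with the extracted core explaining any failure.

    from pysat.card import CardEnc
    from pysat.formula import CNF
    from pysat.solvers import Glucose3

    cnf = CNF()
    cnf.append([-1, 2])    # x1 -> x2
    cnf.append([-2, 3])    # x2 -> x3
    cnf.extend(CardEnc.atmost(lits=[1, 2, 3], bound=2, top_id=3).clauses)

    with Glucose3(bootstrap_with=cnf.clauses) as solver:
        print(solver.solve(assumptions=[1]))    # False: x1 forces all three true
        print(solver.get_core())                # assumptions responsible: [1]
        print(solver.solve(assumptions=[-1]))   # True
        print(solver.get_model())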

Jul 12 14:30

ABSTRACT. We consider the problem of generating binary images with given properties. This problem arises in a number of practical applications, including the generation of artificial porous media for electrodes of lithium-ion batteries, for composite materials, etc. A generated image represents a porous medium and, as such, is subject to two sets of constraints: topological constraints on the structure and process constraints on the physical process over this structure. To perform image generation we need to define a mapping from a porous medium to its physical process parameters. For a given geometry of a porous medium, this mapping can be computed by solving a partial differential equation (PDE). However, embedding a PDE solver into the search procedure is computationally expensive. We therefore use a binarized neural network to approximate a PDE solver. This allows us to encode the entire problem as a logical formula. Our main contribution is that, for the first time, we show that this problem can be tackled using decision procedures. Our experiments show that our model is able to produce random constrained images that satisfy both topological and process constraints.
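
To make the neural-network side concrete: with weights and inputs in {-1, +1}, a binarized layer computes signs of integer dot products, which is what admits a compact logical encoding. A minimal numpy illustration, with arbitrary shapes and values:

    import numpy as np

    def binarized_layer(x, W, b):
        # x in {-1,+1}^n, W in {-1,+1}^{m x n}, b: integer thresholds.
        # Each output is the sign of a popcount-style dot product.
        return np.where(W @ x + b >= 0, 1, -1)

    rng = np.random.default_rng(0)
    W = rng.choice([-1, 1], size=(4, 8))
    x = rng.choice([-1, 1], size=8)
    print(binarized_layer(x, W, np.zeros(4, dtype=int)))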

Jul 12 15:00