FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
RCRA FULL PAPERS: PAPERS WITH ABSTRACTS

Editor: Marco Maratea


ABSTRACT. Algorithms based on singleton arc consistency (SAC) show considerable promise for improving backtrack search algorithms for constraint satisfaction problems (CSPs). The drawback is that even the most efficient of them is still comparatively expensive. Even when limited to preprocessing, they give an overall improvement only when problems are quite difficult to solve with more typical procedures such as maintained arc consistency (MAC). The present work examines partial forms of SAC and neighbourhood SAC (NSAC) in which a subset of the variables in a CSP is chosen to be made SAC-consistent or neighbourhood-SAC-consistent. These consistencies are well-characterized in that the algorithms have unique fixpoints and there are well-defined dominance relations. Heuristic strategies for choosing an effective subset of variables are described and tested, in particular a strategy of choosing by constraint weight after random probing. Experimental results justify the claim that these methods can be nearly as effective as full (N)SAC in terms of values discarded while significantly reducing the effort required.
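
A rough illustration of the idea may help. The sketch below (Python) applies the singleton test only to a chosen subset of variables: each value of each chosen variable is tentatively assigned, arc consistency is run, and values whose assignment causes a wipeout are discarded, repeating to a fixpoint. The CSP representation, the ac3 routine, and the subset selection (e.g., by constraint weight after random probing) are assumed interfaces, not the paper's actual code.

    def partial_sac(domains, constraints, subset, ac3):
        """Prune values of the variables in `subset` that fail the singleton test.
        domains: {var: set(values)}; ac3(domains, constraints) is assumed to
        return the AC-reduced domains, or None on domain wipeout."""
        changed = True
        while changed:                        # iterate to the (unique) fixpoint
            changed = False
            for x in subset:
                for a in list(domains[x]):
                    test = {v: set(vals) for v, vals in domains.items()}
                    test[x] = {a}             # tentative assignment x := a
                    if ac3(test, constraints) is None:
                        domains[x].discard(a) # a is not SAC: discard it
                        changed = True
                        if not domains[x]:
                            return None       # wipeout: the CSP is insoluble
        return domains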

Jul 13 14:22

ABSTRACT. Pseudo-Boolean (PB) constraints constitute an important class of constraints. Despite extensive studies of SAT encodings for PB constraints, no SAT encoding for them is generally accepted. In this paper we revisit encoding PB constraints into SAT via binary adders and BDDs. For the binary adder encoding, we present an optimizing compiler that incorporates preprocessing, decomposition, constant propagation, and common subexpression elimination techniques tailored to PB constraints. For encoding via BDDs, we compare three methods for converting BDDs into SAT, namely, path encoding, 6-clause node encoding, and 2-clause node encoding. We experimentally compare these encodings on three sets of benchmarks. Our experiments reveal surprisingly good and consistent performance of the optimized adder encoding in comparison with the other encoders.
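
To make the BDD route concrete, here is a hedged sketch (not the paper's implementation) of compiling a monotone PB constraint w1*x1 + ... + wn*xn <= k, with positive weights, into CNF using a 2-clause-per-node encoding. Memoising on (index, slack) is a simplification of a proper ROBDD construction, which would merge slack intervals into fewer nodes.

    def pb_le_to_cnf(weights, xs, k):
        """weights: positive ints; xs: DIMACS variables of x1..xn; k: bound.
        Returns (clauses, root_var, next_free_var)."""
        clauses, cache = [], {}
        next_var = max(xs) + 1                   # first free auxiliary variable
        TRUE, FALSE = "T", "F"                   # terminal markers

        def node(i, slack):                      # sub-constraint on xs[i:]
            if slack < 0:
                return FALSE                     # bound already exceeded
            if sum(weights[i:]) <= slack:
                return TRUE                      # trivially satisfied
            if (i, slack) in cache:
                return cache[(i, slack)]
            nonlocal next_var
            n = next_var; next_var += 1
            t = node(i + 1, slack - weights[i])  # child for xs[i] = 1
            f = node(i + 1, slack)               # child for xs[i] = 0
            # 2-clause node encoding: (n & xi -> t) and (n -> f); clauses
            # that would be tautological (child TRUE) are skipped.
            if t is FALSE:
                clauses.append([-n, -xs[i]])
            elif t is not TRUE:
                clauses.append([-n, -xs[i], t])
            if f is FALSE:
                clauses.append([-n])
            elif f is not TRUE:
                clauses.append([-n, f])
            cache[(i, slack)] = n
            return n

        root = node(0, k)
        if root is TRUE:
            return [], None, next_var            # vacuous constraint
        if root is FALSE:
            return [[]], None, next_var          # empty clause: unsatisfiable
        clauses.append([root])                   # assert that the root holds
        return clauses, root, next_var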

Jul 13 14:44

ABSTRACT. The orthogonal cutting-stock problem asks for the placement of a given set of items into a minimum number of identically sized bins. To solve this problem under the guillotine constraint, the authors propose combining the new BLF2G (Bottom Left Fill 2-direction Guillotine) placement heuristic with an advanced genetic algorithm. Given an item order, the BLF2G routine produces a direct placement of the items in bins, yielding a cutting format. The genetic algorithm explores the search space to find the supposedly optimal item order. Other methods try to guide the evolutionary process by seeding the initial population with greedy heuristics to enhance the results. The authors propose instead enriching the population with qualified individuals, without disturbing the genetic phase: the evolution of the GA is monitored, and when no improvement is observed after a given number of iterations, a qualified individual is injected into the population to avoid premature convergence to a local optimum. To enrich the evolutionary process with qualified chromosomes, a set of order-based individuals is generated. The method is compared with other heuristics and metaheuristics from the literature on existing data sets.
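
The control scheme can be summarised in a few lines. The following sketch uses hypothetical callables for fitness (number of bins, lower is better), crossover, mutation, and the construction of a qualified order-based individual; it shows a steady-state GA whose loop injects a qualified individual after a stagnation period, illustrating the idea only, not the authors' implementation.

    import random

    def ga_with_injection(n_items, fitness, crossover, mutate, make_qualified,
                          pop_size=50, generations=500, patience=20):
        pop = [random.sample(range(n_items), n_items) for _ in range(pop_size)]
        best, stagnant = min(pop, key=fitness), 0
        for _ in range(generations):
            p1, p2 = random.sample(pop, 2)
            child = mutate(crossover(p1, p2))
            pop[pop.index(max(pop, key=fitness))] = child   # replace the worst
            cand = min(pop, key=fitness)
            if fitness(cand) < fitness(best):
                best, stagnant = cand, 0
            else:
                stagnant += 1
            if stagnant >= patience:         # no improvement for a while:
                pop[pop.index(max(pop, key=fitness))] = make_qualified(n_items)
                stagnant = 0                 # inject a qualified individual
        return best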

Jul 13 12:06

ABSTRACT. Crossover operators are very important tools in Evolutionary Computation. Here we are interested in crossovers for the permutation representation, which find application in combinatorial optimization problems such as permutation flowshop scheduling and the traveling salesman problem. We introduce three families of permutation crossovers based on algebraic properties of the permutation space. In particular, we exploit the group and lattice structures of the space. A total of 14 new crossovers are provided. Algebraic and semantic properties of the operators are discussed, and their performance is investigated by experimentally comparing them with known permutation crossovers on standard benchmarks from four popular permutation problems. Three different experimental scenarios are considered, and the results clearly validate our proposals.
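
As a flavour of what a group-based operator can look like (a plausible example, not necessarily one of the paper's 14), one can build a child from the parents' "difference": decompose p1^-1 * p2 into adjacent transpositions and apply a random prefix of that sequence to p1, so the child lies on a shortest path between the parents in the Cayley graph generated by adjacent transpositions.

    import random

    def compose(p, q):                  # (p*q)(i) = p[q[i]]
        return [p[q[i]] for i in range(len(p))]

    def inverse(p):
        inv = [0] * len(p)
        for i, v in enumerate(p):
            inv[v] = i
        return inv

    def adjacent_transpositions(sigma):
        """Positions swapped while bubble-sorting sigma to the identity."""
        s, swaps = list(sigma), []
        for _ in range(len(s)):
            for j in range(len(s) - 1):
                if s[j] > s[j + 1]:
                    s[j], s[j + 1] = s[j + 1], s[j]
                    swaps.append(j)
        return list(reversed(swaps))    # replayed in reverse they rebuild sigma

    def algebraic_crossover(p1, p2):
        sigma = compose(inverse(p1), p2)          # difference of the parents
        swaps = adjacent_transpositions(sigma)
        child = list(p1)
        for j in swaps[:random.randint(0, len(swaps))]:
            child[j], child[j + 1] = child[j + 1], child[j]   # step toward p2
        return child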

Jul 13 11:44

ABSTRACT. In this work we investigate how the performance of SAT solvers can be improved by SAT formula configuration. We introduce a fully automated approach to this configuration task that considers a number of criteria for optimising the order in which clauses and, within clauses, literals are listed in a formula expressed in conjunctive normal form (CNF). Our experimental analysis, involving three state-of-the-art SAT solvers and six different benchmark sets, shows that this configuration can have a significant impact on solver performance.
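
Two of the simplest criteria of this kind can be stated in a few lines: order clauses by ascending length and, within each clause, literals by descending variable frequency. The sketch below applies them to a DIMACS CNF string (assuming one clause per line); the paper's actual criteria portfolio is richer, so this is illustrative only.

    from collections import Counter

    def configure_cnf(dimacs_text):
        header, clauses = None, []
        for line in dimacs_text.splitlines():
            line = line.strip()
            if not line or line.startswith("c"):
                continue                             # skip comments and blanks
            if line.startswith("p cnf"):
                header = line
                continue
            clauses.append([int(t) for t in line.split()][:-1])  # drop final 0
        freq = Counter(abs(l) for c in clauses for l in c)
        for c in clauses:
            c.sort(key=lambda l: -freq[abs(l)])      # frequent variables first
        clauses.sort(key=len)                        # short clauses first
        return "\n".join([header] +
                         [" ".join(map(str, c)) + " 0" for c in clauses])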

Jul 13 15:06

ABSTRACT. Answer Set Programming (ASP) is a well-established formalism for nonmonotonic reasoning. While incoherence, the non-existence of answer sets for some programs, is an important feature of ASP, it has frequently been criticised and indeed has some disadvantages, especially for query answering. Paracoherent semantics have been suggested as a remedy; these extend the classical notion of answer sets to draw meaningful conclusions also from incoherent programs. In this paper we present an alternative characterization of the two major paracoherent semantics in terms of (extended) externally supported models. This definition uses a transformation of ASP programs that is more parsimonious than the classic epistemic transformation used in recent implementations. A performance comparison carried out on benchmarks from ASP competitions shows that the use of the new transformation brings performance improvements that are independent of the underlying algorithms.

Jul 13 11:22

ABSTRACT. One promising trend in digital system integration consists of boosting on-chip communication performance by means of silicon photonics, thus materializing so-called Optical Networks-on-Chip (ONoCs). Among these, wavelength routing can be used to route a signal to its destination by univocally associating a routing path with the wavelength of the optical carrier. Such wavelengths should be chosen so as to minimize interference among optical channels and to avoid routing faults. As a result, the physical parameter selection for such networks requires the solution of complex constrained optimization problems. In previous work, published in the proceedings of the International Conference on Computer-Aided Design, we proposed and solved the problem of computing the maximum parallelism obtainable in the communication between any two end-points while avoiding misrouting of optical signals. The underlying technology, only briefly mentioned in that paper, is Answer Set Programming (ASP). In this work, we detail the ASP approach we used to solve this problem. Another important design issue is to select the wavelengths of the optical carriers so that they are spread across the available spectrum, in order to reduce the likelihood that, due to imperfections in the manufacturing process, unintended routing faults arise. We show how to address this problem in Constraint Logic Programming over Finite Domains (CLP(FD)).
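
The wavelength-spreading subproblem has a compact combinatorial core: pick k carrier wavelengths from the admissible slots so that the minimum pairwise spectral distance is maximised (wider guard bands make manufacturing-induced routing faults less likely). The brute-force Python restatement below is only meant to convey the model, which the paper expresses far more efficiently in CLP(FD).

    from itertools import combinations

    def spread_wavelengths(slots, k):
        """slots: sorted admissible wavelength indices; k: carriers to place."""
        best, best_gap = None, -1
        for choice in combinations(slots, k):
            gap = min(b - a for a, b in zip(choice, choice[1:]))
            if gap > best_gap:
                best, best_gap = choice, gap
        return best, best_gap

    # Example: 4 carriers in 10 slots -> an evenly spread selection.
    print(spread_wavelengths(list(range(10)), 4))    # ((0, 3, 6, 9), 3)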

This paper appears in Theory and Practice of Logic Programming, Volume 17, Issue 5-6 (33rd International Conference on Logic Programming).

Jul 13 10:06

ABSTRACT. Replaceability is a form of generalized substitutability whose features make it potentially of great importance for problem simplification. It differs from simple substitutability in that it only requires that, for every solution containing a given value, some substitutable value exists, without requiring that this value always be the same. It is also the most general form of substitutability that allows inferences from local to global versions of this property. Building on earlier work, this study first establishes that algorithms for localized replaceability (called consistent neighbourhood replaceability, or CNR, algorithms) based on all-solutions neighbourhood search outperform other replaceability algorithms by several orders of magnitude. It also examines the relative effectiveness of different forms of depth-first CNR algorithms. Secondly, it demonstrates an apparent complexity ridge, which does not occur at the same place in the problem space as the complexity regions for consistency or full search algorithms. Thirdly, it continues the study of methods for inferring replaceability in structured problems in order to improve efficiency, correcting an oversight in earlier work and extending the formal analysis. It is also shown that some strategies for inferring replaceable values can be extended to disjunctive constraints in scheduling problems.
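
The all-solutions neighbourhood test can be phrased compactly. In the hedged sketch below, a value a of variable x is neighbourhood-replaceable if every solution of the subproblem induced by x and its neighbours that assigns x = a remains consistent after swapping a for some other value of x, not necessarily the same one each time; `consistent` is an assumed predicate checking all constraints among the assigned variables.

    from itertools import product

    def neighbourhood_solutions(x, a, neighbours, domains, consistent):
        """All neighbourhood assignments compatible with x = a."""
        for values in product(*(sorted(domains[y]) for y in neighbours)):
            asg = dict(zip(neighbours, values))
            asg[x] = a
            if consistent(asg):
                yield asg

    def is_replaceable(x, a, neighbours, domains, consistent):
        for asg in neighbourhood_solutions(x, a, neighbours, domains, consistent):
            if not any(consistent({**asg, x: b}) for b in domains[x] if b != a):
                return False     # this solution admits no substitute for a
        return True              # a can be pruned from the domain of x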

Jul 13 14:00

ABSTRACT. This paper investigates the performance of a greedy randomized algorithm for optimizing the realization of nearest-neighbor-compliant quantum circuits. Current technological limitations (the decoherence effect) require that the overall duration (makespan) of the quantum circuit realization be minimized. One core contribution of this paper is a lexicographic two-key ranking function for quantum gate selection: the first key acts as a global closure metric to minimize the solution makespan; the second is a local metric acting as a "tie-breaker" to avoid cycling. Our algorithm has been tested on a set of quantum circuit benchmark instances of increasing size from the recent literature. We demonstrate that our heuristic approach outperforms the solutions obtained in previous research on the same benchmarks, both in CPU efficiency and in solution quality.
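
The selection rule itself is small enough to sketch. Below, a GRASP-style loop ranks the currently schedulable gates lexicographically by a global makespan-oriented key and a local tie-breaking key, then picks at random among the best few; the two key functions and the scheduling callables are placeholders, not the paper's actual metrics.

    import random

    def select_gate(ready, global_key, local_key, rcl_size=3):
        ranked = sorted(ready, key=lambda g: (global_key(g), local_key(g)))
        return random.choice(ranked[:rcl_size])   # restricted candidate list

    def greedy_randomized_schedule(gates, schedulable, schedule,
                                   global_key, local_key):
        remaining, plan = set(gates), []
        while remaining:                      # assumes some gate is always ready
            ready = [g for g in remaining if schedulable(g, plan)]
            g = select_gate(ready, global_key, local_key)
            plan.append(schedule(g, plan))    # commit the gate to the plan
            remaining.remove(g)
        return plan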

Jul 13 09:44

ABSTRACT. Proving a theorem in intuitionistic propositional logic with implication as its only primitive (also called minimal logic) is known as one of the simplest-to-state PSPACE-complete problems. At the same time, via the Curry-Howard isomorphism, it is instrumental in finding lambda terms that may inhabit a given type.

However, as hundreds of papers witness, all starting with Gentzen's LJ calculus, conceptual simplicity has in this case not come with comparably simple computational counterparts. Implementing such theorem provers faces challenges related not only to soundness and completeness but also to termination and scalability.

Can a simple solution, in the tradition of "lean theorem provers", be uncovered that matches the conceptual simplicity of the problem, while also being able to handle large randomly generated formulas?

In search of an answer, starting from Dyckhoff's LJT calculus, we derive a sequence of minimalist theorem provers via declarative program transformation steps, while highlighting connections, via the Curry-Howard isomorphism, to type inference mechanisms for the simply typed lambda calculus.

We chose Prolog as our meta-language. Being derived from essentially the same formalisms as those we cover, it reduces the semantic gap and results in surprisingly concise and efficient declarative implementations. Our implementation is available at https://github.com/ptarau/TypesAndProofs.
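
For readers who do not want to parse the Prolog, here is a hedged Python restatement of the core of a Dyckhoff-style LJT prover for the implicational fragment (the repository above contains the real, Prolog, versions). Formulas are atoms (strings) or pairs (A, B) read as A -> B; the left-implication rule splits on the shape of the antecedent, which is what makes the calculus terminate without loop checking.

    def prove(goal, ctx=frozenset()):
        if isinstance(goal, tuple):                  # right rule: G |- A -> B
            a, b = goal
            return prove(b, ctx | {a})
        if goal in ctx:                              # axiom: atomic goal in context
            return True
        for f in ctx:
            if not isinstance(f, tuple):
                continue
            a, b = f
            rest = ctx - {f}
            if not isinstance(a, tuple):             # LJT left rule for p -> B
                if a in ctx and prove(goal, rest | {b}):
                    return True
            else:                                    # LJT left rule for (C->D) -> B
                c, d = a
                if prove((c, d), rest | {(d, b)}) and prove(goal, rest | {b}):
                    return True
        return False

    # Peirce's law ((p->q)->p)->p is classical but not intuitionistic:
    print(prove(((("p", "q"), "p"), "p")))           # False
    print(prove(("p", ("q", "p"))))                  # K: p -> (q -> p): True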

Jul 13 11:00

ABSTRACT. The operations of many critical infrastructures (e.g., airports) heavily depend on the proper functioning of the radio communication network supporting operations. Such a communication network is thus a mission-critical network that needs adequate protection from external electromagnetic interference. This is usually done through radiogoniometers. Basically, by using at least three suitably deployed radiogoniometers and a gateway gathering information from them, sources of electromagnetic emissions that are not supposed to be present in the monitored area can be localised. Typically, relay nodes are used to connect the radiogoniometers to the gateway. Some degree of fault tolerance in the network of relay nodes is therefore essential for reliable monitoring. On the other hand, deploying relay nodes is typically quite expensive. We thus have two conflicting requirements: minimise costs while guaranteeing a given fault tolerance.

In this paper we address the problem of computing a deployment of relay nodes that minimises the cost of the relay node network while guaranteeing proper operation of the network even when some of the relay nodes (up to a given maximum number) become faulty (fault tolerance).

We show that the above problem can be formulated both as a Mixed-Integer Linear Programming (MILP) problem and as a Pseudo-Boolean (PB) optimisation problem, and we present experimental results comparing the two approaches on realistic scenarios.
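
As a rough indication of the MILP flavour of the model (a deliberately simplified sketch, not the paper's formulation), the fault-tolerance requirement can be approximated as a set multi-cover constraint: every radiogoniometer must have at least f+1 deployed relays in range, so that any f failures leave it connected. The PuLP model below uses hypothetical cost and in_range inputs.

    from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

    def deploy_relays(candidates, cost, sensors, in_range, f):
        prob = LpProblem("relay_deployment", LpMinimize)
        y = {r: LpVariable(f"y_{r}", cat="Binary") for r in candidates}
        prob += lpSum(cost[r] * y[r] for r in candidates)     # total cost
        for s in sensors:                                     # f+1 relays in range
            prob += lpSum(y[r] for r in candidates if in_range(s, r)) >= f + 1
        prob.solve()
        return [r for r in candidates if value(y[r]) > 0.5]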

Jul 13 09:22

ABSTRACT. In Silico Clinical Trials (ISCT), i.e., clinical experimental campaigns carried out by means of computer simulation, hold the promise of decreasing the time and cost of the safety and efficacy assessment of pharmacological treatments, reducing the need for animal and human testing, and enabling precision medicine.

In this paper we present a case study aiming at quantifying, by means of a multi-arm ISCT supervised by intelligent search, the potential impact of precision medicine approaches on a real pharmacological treatment, namely the downregulation phase of a complex clinical protocol for assisted reproduction.

Jul 13 09:00