Automated Deduction: Decidability, Complexity, Tractability

Affiliated with CADE-21, Bremen, Germany, 15 July 2007

All sessions will take place at the International University Bremen, in the West Hall.

Invited speaker

Michaël Rusinowitch (LORIA-INRIA-Lorraine)



ADDCT proceedings



Organizers

  • Silvio Ghilardi (U. Milano)
  • Ulrike Sattler (U. Manchester)
  • Viorica Sofronie-Stokkermans (MPI, Saarbrücken)
  • Ashish Tiwari (SRI, Menlo Park)

Program Committee

  • Matthias Baaz (T.U.Wien)
  • Maria Paola Bonacina (U. Verona)
  • Christian Fermüller (T.U.Wien)
  • Silvio Ghilardi (U. Milano)
  • Reiner Haehnle (Chalmers U.)
  • Felix Klaedtke (ETH Zurich)
  • Sava Krstic (Intel Corporation)
  • Viktor Kuncak (EPFL Lausanne)
  • Carsten Lutz (TU Dresden)
  • Christopher Lynch (Clarkson U.)
  • Silvio Ranise (LORIA/INRIA-Lorraine)
  • Ulrike Sattler (U. Manchester)
  • Renate Schmidt (U. Manchester)
  • Viorica Sofronie-Stokkermans (MPI, Saarbrücken)
  • Ashish Tiwari (SRI)
  • Luca Viganò (U. Verona)

Call for papers


    Submission procedure


    Important Dates

  • 11 May 2007: Submission deadline
  • 5 June 2007: Notification
  • 15 June 2007: Final version
  • 15 July 2007: Workshop

Contact

    For further information, please send an e-mail to

    List of papers (with abstracts)

    09:30-10:30   Michaël Rusinowitch (LORIA-INRIA-Lorraine)
    (invited paper)
    Some Decision Problems Related to Cryptographic Protocol Verification
    Cryptographic protocols such as IKE, SET, TLS and Kerberos have been developed to secure electronic transactions. However, the design of such protocols often appears to be problematic even assuming that the cryptographic primitives are perfect, i.e. even assuming we cannot decrypt a message without the right key. An intruder can intercept messages, analyse them and modify them without needing much computing power, and then carry out malevolent actions. This may lead to a variety of attacks, such as the well-known man-in-the-middle attacks. In recent years many teams have been working on tools for the automated analysis of security protocols. The vast majority of these tools model messages as terms in a suitable term algebra and make two assumptions: first, that cryptography is unbreakable (often called the perfect cryptography assumption), and second, that the protocol messages are exchanged over a network that is under the full control of an active adversary, often called the Dolev-Yao intruder. Even with these simplifying abstractions, the problem of whether a protocol actually provides the security properties it has been designed for is undecidable. We will present some decidable cases and their complexity, especially for the verification of security properties in more accurate models that take into account algebraic properties of the cryptographic primitives. We will also discuss the problem of combining decision procedures for arbitrary intruder theories with disjoint sets of operators.
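As a toy illustration of the Dolev-Yao setting the abstract describes (our own sketch, not material from the talk), the intruder's power under the perfect-cryptography assumption can be modelled as a closure over a term algebra. The `pair`/`enc` encoding and the `derivable` helper below are illustrative choices:

```python
# Hypothetical sketch: intruder deduction under the perfect-cryptography
# assumption. Messages are nested tuples: ("pair", a, b) and ("enc", m, k);
# atoms and keys are plain strings.

def derivable(knowledge, goal):
    """True if `goal` can be derived from `knowledge` by analysis
    (projection, decryption with a known key) and synthesis
    (pairing, encryption)."""
    known = set(knowledge)
    # Analysis phase: decompose pairs, decrypt when the key is known.
    changed = True
    while changed:
        changed = False
        for m in list(known):
            if isinstance(m, tuple) and m[0] == "pair":
                for part in m[1:]:
                    if part not in known:
                        known.add(part); changed = True
            elif isinstance(m, tuple) and m[0] == "enc" and m[2] in known:
                if m[1] not in known:
                    known.add(m[1]); changed = True
    # Synthesis phase: build the goal top-down from analysed knowledge.
    def synth(m):
        if m in known:
            return True
        if isinstance(m, tuple) and m[0] == "pair":
            return all(synth(p) for p in m[1:])
        if isinstance(m, tuple) and m[0] == "enc":
            return synth(m[1]) and synth(m[2])
        return False
    return synth(goal)

# The intruder sees {secret}_k and, separately, the key k:
msgs = [("enc", "secret", "k"), "k"]
print(derivable(msgs, "secret"))   # True: decrypt with k
print(derivable(["k"], "secret"))  # False: secret never observed
```

Undecidability enters only once unbounded protocol sessions and fresh nonces are added; this fixed-knowledge deduction problem itself is decidable.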

    11:00-11:35   Stephanie Delaune, Hai Lin and Christopher Lynch
    (regular paper)
    Protocol Verification via Rigid/Flexible Resolution
    In this paper we propose a decision procedure, i.e., an inference system for clauses containing rigid and flexible variables. Rigid variables are only allowed to have one instantiation, whereas flexible variables are allowed as many instantiations as desired. We assume a set of clauses containing only rigid variables together with a set of clauses containing only flexible variables. When the flexible clauses fall into a particular class, we propose an inference system based on ordered resolution that is sound and complete and for which the inference procedure halts. This form of problem is of interest for cryptographic protocol verification with a bounded number of protocol instances. Our class yields a generic decidability result for a large class of cryptographic protocols that may use, for instance, CBC encryption and blind signatures.

    11:35-12:10   Aharon Abadi, Alexander Rabinovich and Mooly Sagiv
    (regular paper)
    Decidable Fragments of Many Sorted Logic
    We investigate the possibility of developing a decidable logic which allows expressing a large variety of real-world specifications. The idea is to define a decidable subset of many-sorted (typed) first-order logic. The motivation is that types reduce the complexity of mixed quantifiers when they quantify over different types. We noticed that many real-world verification problems can be formalized by quantifying over different types in such a way that the relations between types remain simple. Our main result is a decidable fragment of many-sorted first-order logic with limited usage of functions that captures many real-world specifications.
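As a made-up illustration of the kind of specification the abstract has in mind (the sorts and predicates below are our own, not from the paper), a formula can alternate quantifiers over distinct sorts while the relation between the sorts stays simple:

```latex
% Hypothetical example: quantifiers range over two sorts, Proc and Lock,
% related only through the single predicate holds(Proc, Lock):
% every process holds some lock that no other process holds.
\forall p\colon \mathit{Proc}.\;
  \exists l\colon \mathit{Lock}.\;
    \mathit{holds}(p, l) \wedge
    \forall q\colon \mathit{Proc}.\; \bigl(q \neq p \rightarrow \neg\,\mathit{holds}(q, l)\bigr)
```

The quantifier alternation ∀∃∀ would be problematic in the unsorted case; here each quantifier ranges over its own sort, which is the structural restriction the decidable fragment exploits.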

    12:10-12:30   Maria Paola Bonacina and Mnacho Echenim
    (presentation-only paper)
    Decision procedures for variable-inactive theories and two polynomial T-satisfiability procedures
    Verification problems require reasoning in theories of data structures and fragments of arithmetic. Thus, decision procedures for such theories are needed, to be embedded in, or interfaced with, proof assistants or software model checkers. Such decision procedures ought to be sound and complete, to avoid false negatives and false positives, efficient, to handle large problems, and easy to combine, because problems involve multiple theories. The rewrite-based approach to decision procedures aims at addressing these sometimes conflicting issues in a uniform way, by harnessing the power of general first-order theorem proving. In this paper, we generalize the rewrite-based approach from deciding the satisfiability of sets of ground literals to deciding that of arbitrary ground formulae in the theory. Next, we present polynomial rewrite-based satisfiability procedures for the theories of records with extensionality and integer offsets. Our results use the same key property -- termed variable-inactivity -- that allows us to combine theories in a simple way in the rewrite-based approach.
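The paper's procedures run a general superposition-based prover; as a much simpler stand-in (a different technique, shown only to make the T-satisfiability question concrete), a union-find sketch can decide satisfiability of ground equalities and disequalities over uninterpreted constants:

```python
# Minimal union-find equality reasoning over constants (not the paper's
# rewrite-based method): a set of ground equalities and disequalities is
# satisfiable iff no disequality connects two constants forced equal.

def satisfiable(equalities, disequalities):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in equalities:
        parent[find(a)] = find(b)  # merge the two classes

    return all(find(a) != find(b) for a, b in disequalities)

# a = b and b = c force a = c, contradicting a != c:
print(satisfiable([("a", "b"), ("b", "c")], [("a", "c")]))  # False
print(satisfiable([("a", "b")], [("a", "c")]))              # True
```

Handling function symbols (congruence) and theory axioms such as record extensionality is exactly where the prover-based approach of the paper takes over.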

    14:00-14:35   Linh Anh Nguyen
    (regular paper)
    Approximating Horn Knowledge Bases in Regular Description Logics to Have PTIME Data Complexity
    In description logics, an RBox is a finite set of assertions of the form $R_{s_1} \circ\ldots\circ R_{s_k} \sqsubseteq R_t$, where $R_{s_1}$, \ldots, $R_{s_k}$, $R_t$ are role names. A regular RBox is an RBox whose set of corresponding grammar rules $t \to s_1 \ldots s_k$ forms a grammar such that, for every symbol $s$ of the grammar, the set of words derivable from $s$ using the grammar is a regular language. We assume that a regular RBox is specified by the corresponding finite automata. By \REG\ we denote the description logic \ALC\ with regular RBoxes. A Horn knowledge base in \REG\ is a tuple $(\mR,\mT,\mA)$ consisting of a regular RBox $\mR$, a TBox $\mT$ being a positive logic program for defining concepts, and an ABox $\mA$ of elementary facts about individuals and roles. The instance checking problem in such a Horn knowledge base is to check whether a given individual $a$ is an instance of a given positive concept $C$ w.r.t. the knowledge base. The data complexity of this problem (measured when $\mA$ varies) is coNP-hard. We present and study ``good'' approximations of this problem that have PTIME data complexity. We use a so called deterministic Horn fragment of \REG\ and give an algorithm for constructing a ``least pseudo-model'' of a Horn knowledge base $(\mR,\mT,\mA)$ in \REG\ for the case $\mT$ is a ``deterministic positive logic program''.
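Since the abstract assumes a regular RBox is given by finite automata, the basic check behind it can be sketched as plain NFA acceptance (a toy illustration of ours, not the paper's algorithm): a chain of roles implies $R_t$ exactly when the corresponding word is accepted by the automaton for $t$.

```python
# Hypothetical sketch: per role t, an automaton accepts the role words
# derivable from t. For the transitivity axiom R o R ⊑ R, the language
# for R is R+ (one or more R's).

def accepts(automaton, word):
    """NFA acceptance. automaton = (start, finals, transitions), where
    transitions maps (state, symbol) -> set of successor states."""
    start, finals, trans = automaton
    states = {start}
    for sym in word:
        states = set().union(*(trans.get((q, sym), set()) for q in states))
    return bool(states & finals)

# Automaton for R+: state 0 --R--> 1, 1 --R--> 1; accepting state 1.
r_plus = (0, {1}, {(0, "R"): {1}, (1, "R"): {1}})

print(accepts(r_plus, ["R", "R", "R"]))  # True: an R-chain implies R
print(accepts(r_plus, ["R", "S"]))       # False: S breaks the chain
```

The regularity restriction on the RBox is what guarantees such automata exist for every role.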

    14:35-15:10   Didier Galmiche and Daniel Mery
    (regular paper)
    Connection-based proof search in intuitionistic logic from transitive closure of constraints
    We present a new constraint-based calculus for intuitionistic propositional logic (IPL) which gives rise to a new connection-based characterization of intuitionistic provability. Compared to other free-variable tableaux or connection-based systems, our calculus replaces so-called prefixes by labels and constraints directly derived from the relational Kripke semantics for IPL. In order to characterize intuitionistic provability, the use of constraints instead of prefixes allows us to drop complex prefix-unification algorithms in favour of a more direct and simple transitive closure operation on constraints. We show the soundness and completeness of the system, give strong arguments for its termination, and explain how it can be used to build countermodels in case of non-provability.
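The closure step the calculus relies on can be sketched in a few lines (our simplification: the paper's constraint language is richer, here constraints are bare reachability facts between Kripke labels, closed Warshall-style):

```python
# Illustrative sketch: transitive closure of ordering constraints between
# Kripke labels, in the spirit of the calculus's constraint check.

def transitive_closure(labels, edges):
    """edges is a set of pairs (a, b) meaning label a precedes label b;
    returns the set closed under transitivity."""
    reach = {(a, b) for a, b in edges}
    for k in labels:          # Floyd-Warshall-style triple loop
        for i in labels:
            for j in labels:
                if (i, k) in reach and (k, j) in reach:
                    reach.add((i, j))
    return reach

closure = transitive_closure(["a", "b", "c"], [("a", "b"), ("b", "c")])
print(("a", "c") in closure)  # True: derived by transitivity
```

The point made in the abstract is that this direct polynomial closure replaces the comparatively intricate prefix-unification algorithms of earlier connection calculi.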

    15:10-15:30   Carsten Lutz and Frank Wolter
    (presentation-only paper)
    Conservative extensions in modal and description logics
    A theory T is a conservative extension of a theory T' if every consequence of T in the language of T' is a consequence of T' already. Conservative extensions have been introduced and investigated in mathematical logic and philosophy of science. More recently, they have been used to formalise notions of refinement and modularity in software specification and ontologies. In this talk we discuss the decidability and computational complexity of the following problem: is a given finitely axiomatized theory a conservative extension of another finitely axiomatized theory? We present results and open problems for modal and description logics. We also introduce deciding conservative extensions as a challenging new problem for automated reasoning.

    16:00-16:20   Sava Krstic, Amit Goel, Jim Grundy and Cesare Tinelli
    (presentation-only paper)
    Combined Satisfiability Modulo Parametric Theories
    We give a fresh theoretical foundation for designing comprehensive solvers for satisfiability modulo theories (SMT). In contrast with the traditional Nelson-Oppen approach dealing with theories in single- or multi-sorted first-order logic, we consider parametric theories---a restricted class of theories defined in the context of a higher-order logic with polymorphic types. Our main result is the combination theorem for decision procedures for disjoint parametric theories. Combinations of parametric theories conveniently express the "logic" of common datatypes, covering virtually all deeply nested structures (lists of arrays of sets of ...) that arise in verification practice. The vexatious stable infiniteness condition of the traditional approach is replaced in our setting with a milder flexibility condition. Our results do not subsume existing results, nor are subsumed by them, but they apply more widely because the datatypes relevant in applications are described by theories that satisfy our parametricity requirements without necessarily satisfying the stable infiniteness requirements of other combination methods.

    16:20-16:40   Leonardo de Moura and Nikolaj Bjorner
    (presentation-only paper)
    Model-based Theory Combination
    Traditional methods for combining theory solvers rely on capabilities of the solvers to produce all implied equalities or a pre-processing step that introduces additional literals into the search space. This paper introduces a combination method that incrementally reconciles models maintained by each theory. We evaluate the practicality and efficiency of this approach.

    16:40-17:00   Viktor Kuncak, Charles Bouillaguet, Thomas Wies, Karen Zee and Martin Rinard
    (presentation-only paper)
    Decision procedures for data structure verification
    We describe our experience in using and developing decision procedures and theorem provers to verify partial correctness of dynamically allocated data structures such as lists, queues, trees, and hash tables. We describe a simple and flexible (even if incomplete) technique for combination of expressive logics within a software verification system. We then outline our integration of first-order theorem provers, satisfiability modulo theory procedures, monadic second-order logic over trees, a decision procedure for Boolean Algebra with Presburger Arithmetic, and interactive theorem provers such as Isabelle. We sketch some results we obtained in this line of research, including soundness and completeness of omitting disjoint sorts from first-order logic with overloaded equality operator, completeness of verifying preservation of certain invariants that contain universally quantified "field constraint" conjuncts, as well as complexity of a logic that enables reasoning about sizes of unbounded finite sets. These results suggest that software analysis and verification can be a great inspiration for research in decision procedures and automated deduction.
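One of the logics mentioned, Boolean Algebra with Presburger Arithmetic, reasons about cardinalities of sets; the standard reduction goes through the Venn regions of the sets involved. A brute-force sketch of ours (not the actual decision procedure, which translates to Presburger arithmetic) for one inclusion-exclusion-style constraint:

```python
# Illustrative sketch: BAPA-style cardinality constraints reduce to integer
# constraints over Venn regions. We brute-force small region sizes for
#   |A| = 2, |B| = 2, |A ∪ B| = 3,
# which forces |A ∩ B| = 1 by inclusion-exclusion.

from itertools import product

def intersection_size(bound=3):
    # Venn regions of A and B: only-A, only-B, both (the outside is irrelevant).
    for only_a, only_b, both in product(range(bound + 1), repeat=3):
        card_a = only_a + both
        card_b = only_b + both
        card_union = only_a + only_b + both
        if card_a == 2 and card_b == 2 and card_union == 3:
            return both  # |A ∩ B| in a satisfying assignment
    return None  # constraints unsatisfiable within the bound

print(intersection_size())  # 1
```

The real procedure avoids enumeration by emitting one Presburger variable per Venn region, which is also where the complexity bounds mentioned in the abstract come from.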