HDT is a one-day workshop that aims to bridge the theory and design of hardware. It places special emphasis on the many connections between the theory of distributed computing and hardware design, apparent in topics such as time and causality, synchronization, and fault tolerance. The goal is to present questions and results at the forefront of current research and to spark a discussion between the fields. The program consists of several invited talks, which target a general CS audience.
HDT will be held on Friday, October 20, 2017 in Vienna, Austria, and will be co-located with DISC 2017. The workshop is chaired by Christoph Lenzen. For information on registration, please go to the DISC 2017 website.
Venue: Room Newton (changed!), Austria Trend Hotel Park Royal Palace
9:00 a.m.    Talk 1   Yoram Moses: Viewing Circuits as Distributed Systems
10:00 a.m.   Coffee break
10:30 a.m.   Talk 2   Ran Ginosar: Fighting Faults
11:30 a.m.   Talk 3   Alex Yakovlev: How to Design Little Digital, yet Highly Concurrent Electronics?
12:30 p.m.   Lunch break (lunch is provided on-site)
2:00 p.m.    Talk 4   Matthias Függer: Fault-tolerant Distributed Clocking: Self-stabilization, Byzantine Failures, and Beyond
3:00 p.m.    Talk 5   Miloš Krstić: Optimizing Design of Fault-Tolerant Computing Systems
4:00 p.m.    Coffee break
4:30 p.m.    Talk 6   Christian Ikenmeyer: Metastability, Three-valued Logic, and Monotone Circuits
5:30 p.m.    Talk 7   Michael Mendler: Logical Analysis of Distributed Systems: The Importance of Being Constructive
Technion
Yoram Moses is the Israel Pollak Academic Chair at the Technion. He received the Gödel Prize in 1997 and the Dijkstra Prize in 2009 for work on the application of knowledge theory in distributed computing. His recent work focuses on the interaction between communication, time, and coordination in distributed systems and circuits.
Viewing Circuits as Distributed Systems
This talk will survey some recent results obtained by treating asynchronous and self-timed circuits as distributed systems. In particular, it will illustrate how potential causality can be used to simplify and generalize classic results in the theory of circuits, and point to possible extensions and future work.
Ran Ginosar is a professor of EE at the Technion. He studied at the Technion and Princeton, and has worked for Bell Labs and Intel Research. He has co-founded several VLSI companies and now heads Ramon Chips, making chips and systems for space.
Fighting Faults
The high-reliability industry is bugged by fault fighting. It costs dearly in money, power, size, mass, development and verification time, and risk aversion. Levels of reliability are defined in order to manage complexity and to grade risk versus cost. Faults will be characterized and identified as hardware, software, or communication faults. Implications of faulty systems will be sketched, fault mitigation methods reviewed, and relations to other calamities discussed. Lessons learned from the space industry will be described, and we ask how much of them is applicable to other domains.
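As one concrete instance of the mitigation methods such surveys cover (a generic sketch of my own, not taken from the talk), triple modular redundancy (TMR) masks a single fault by running three redundant copies and voting. The bitwise majority can be computed with plain AND/OR gates:

```python
def vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant words; masks any single faulty copy."""
    return (a & b) | (a & c) | (b & c)

# Two healthy copies outvote one corrupted copy:
print(bin(vote(0b1010, 0b1010, 0b0110)))  # 0b1010
```

In hardware the same expression is a small combinational voter placed after each triplicated register; its cost in area and power is exactly the kind of overhead the high-reliability industry pays for fault tolerance.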
Alex Yakovlev is a Professor of Computing Systems Design at the School of Electrical and Electronic Engineering, Newcastle University. He received his DSc from Newcastle University in 2006 and his PhD from St. Petersburg Electrical Engineering Institute in 1982, both in the field of asynchronous systems. Since 1991 Yakovlev has been at Newcastle University, where he leads a research group known for its contributions to asynchronous circuit design, concurrent systems, Petri nets, metastability, and synchronizers. One of his key achievements is the model of Signal Transition Graphs and the associated synthesis methods and tools, which are increasingly used in industry and academia. His most recent research interests are in the field of energy-modulated computing, real-power systems, and asynchronous design for analog-mixed-signal electronics. Yakovlev has published eight authored and edited monographs and more than 300 papers in academic journals and conferences, managed over 30 research contracts, and supervised more than 40 PhD students. He is a Fellow of the IEEE.
How to Design Little Digital, yet Highly Concurrent Electronics?
The talk will focus on “little digital” asynchronous systems that exhibit interesting concurrent behaviours at the level of discrete events taking place in their electronic devices, involving active and passive components. Examples of such systems will include logic control circuits in computational structures, such as processors and memory, as well as mixed-signal electronics, such as DC-DC power converters. Formal models of distributed and concurrent systems, exemplified by Petri nets and their interpretations, are useful in capturing the behaviour of asynchronous “little digital” systems for the purposes of their synthesis and verification. The modelling and design techniques presented in this lecture are supported by software tools publicly available in the WORKCRAFT environment (workcraft.org).
CNRS - ENS Cachan
Matthias Függer received his M.Sc. (2006) and his PhD (2010) in computer engineering from TU Vienna (Austria). He has previously held positions at TU Vienna (Austria), École polytechnique (France), and the Max Planck Institute for Informatics (Germany), and recently moved to Paris for a CNRS position at LSV, ENS Paris-Saclay.
His main research interest is the formal study of distributed computation in highly restricted systems and under harsh environmental conditions. This includes VLSI systems and biological systems.
Fault-tolerant Distributed Clocking: Self-stabilization, Byzantine Failures, and Beyond
In this talk we study the problem of generating and distributing synchronized clock signals on a chip. While the problem can be viewed as an instance of the well-known and classically studied fault-tolerant distributed clock synchronization problem, designing such algorithms for direct implementation in silicon bears additional challenges: (i) a generally high cost of computation and communication, (ii) the lack of computationally complex atomic actions, and (iii) faults beyond classical worst-case (i.e., Byzantine) faults.
The talk starts with an introduction to metastability-induced faults and techniques to cope with them. Subsequently, we highlight the difficulties of designing on-chip algorithms using the example of two classical distributed clock synchronization algorithms, and discuss how to protect the system against metastability-induced faults using so-called metastability-containing circuits. Last but not least, we briefly study self-stabilizing techniques and their use in space applications, pointing out challenging open problems.
University of Potsdam and IHP Microelectronics
Miloš Krstić is professor of Design and Test Methodology at the University of Potsdam, Germany, and a team leader at IHP. He received his Dr.-Ing. in electronics from Brandenburg University of Technology, Cottbus, Germany, in 2006. Since 2001 he has been with IHP Microelectronics, Frankfurt (Oder), Germany, where he currently leads the design and test methods team in the Wireless Communication Systems Department. Prof. Krstić has led many European and national R&D projects. He has published more than 100 papers and filed 8 patent applications, mainly in the area of advanced design methods, including fault-tolerant and robust design methodology.
Optimizing Design of Fault-tolerant Computing Systems
In this talk, design methods for the optimization of fault-tolerant computing systems will be presented. The main aim is to reduce the overhead that is always present in fault-tolerant computing, while still ensuring sufficient resilience of the resulting systems. In this context, two types of methods will be presented. One strives for better use of static redundancy, improving the trade-offs between fault tolerance, power consumption, and performance. The other approach is to use dynamic redundancy, i.e., to have the system assign redundant resources adaptively. The proposed methods will be illustrated with a number of applications, such as space and automotive, and results of the respective research projects will be presented. The talk will highlight a number of key challenges in fault-tolerant hardware design that are of great importance in the foreseeable future.
MPI for Informatics
Christian Ikenmeyer studies computational complexity lower bounds and algorithms, in particular in algebraic and geometric complexity theory, but is also interested in algorithmic representation theory and the algebraic geometry of group varieties. He received his PhD in 2012 from Paderborn University, Germany, under the supervision of Peter Bürgisser, and held a 3-year visiting assistant professorship in the geometry group at Texas A&M University. He joined the Max Planck Institute for Informatics in Saarbrücken in 2016 and has been a senior researcher there since 2017.
Metastability, Three-valued Logic, and Monotone Circuits
We discuss logic circuits with input bits whose voltage does not clearly identify them as 0 or 1. Those bits are called metastable. Friedrichs, Függer, and Lenzen modeled this phenomenon by using a classical three-valued logic of Kleene, where they allow each input bit to have one of the three values 0, 1, or M. They formally define what it means for a circuit to have a high resilience against errors caused by metastability and show that not all circuits satisfy this desired property. However, any circuit can be adjusted to achieve this resilience, at the expense of using more computational gates. Recently, we showed that this cost must be very high in some cases. In this talk, we will discuss the above three-valued logic, the high resilience constructions, and connections to classical circuit complexity theory.
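As a rough illustration (my own sketch, not taken from the talk), the strong Kleene logic underlying this model can be written down directly, with a third value M standing for a metastable bit. The key feature is that a stable controlling input masks metastability at the other input:

```python
# Strong Kleene three-valued logic: values 0, 1, and M (metastable).
M = 'M'

def k_not(a):
    """NOT: a metastable input stays metastable."""
    return M if a == M else 1 - a

def k_and(a, b):
    """AND: a stable 0 at either input masks metastability."""
    if a == 0 or b == 0:
        return 0
    if a == M or b == M:
        return M
    return 1

def k_or(a, b):
    """OR: a stable 1 at either input masks metastability."""
    if a == 1 or b == 1:
        return 1
    if a == M or b == M:
        return M
    return 0
```

For example, `k_and(0, M)` returns 0, while `k_and(1, M)` remains M: it is this masking behaviour that metastability-containing constructions exploit, and the cost results mentioned above bound how expensive achieving full resilience can be.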
University of Bamberg
Michael Mendler received his Ph.D. from the University of Edinburgh in 1993. After postdoctoral positions at the Technical University of Denmark (1993-1995) and the University of Passau, Germany (1995-1998), he became a reader in the Department of Computer Science at the University of Sheffield, UK, in 1999. Since 2002, he has been a Professor in the Faculty of Information Systems and Applied Computer Sciences at the University of Bamberg, Germany. His main research interests lie in concurrency theory, type theory, and constructive modal logic. His present research focus is the semantics of synchronous programming languages.
Logical Analysis of Distributed Systems: The Importance of Being Constructive
The synchronous circuit design style has proven extremely expedient. It abstracts the complex analog behavior of electronics into the discrete mathematical domain of two-valued Boolean logic and Mealy automata. This simple discrete modelling is an idealization that is adequate only under stringent assumptions on the structure of the system and the execution environment. When the design assumptions of the synchronous model are violated, the two-valued Boolean model is insufficient: causality problems, metastability, or oscillation may occur. A many-valued logic is needed to regain sufficient expressiveness for dealing with hardware failures. Traditionally, three-valued Kleene algebra is used. It adds a single third value, “undefined”, to accommodate electrical behaviour in failure situations. However, Kleene algebra is not a logic, since it has no theorems.
This talk revisits the ternary model from a logical point of view and connects it with provability in intuitionistic logic, essentially justifying “Berry’s Thesis”, according to which digital circuits are delay-insensitive if and only if they are provably correct in intuitionistic logic. More technically, the talk will show how non-inertial delays give rise to a constructive modal logic, while inertial delays are inherently non-constructive. This gives a logical explanation for why inertial delays can be used to build arbiters, memory cells, and other synchronization elements, while non-inertial delays are not powerful enough. Though these results are only tentative, they do indicate the importance of logical constructivity for metastability-free discrete abstractions of physical behavior.