# Programs, minds and machines

# Program

### First joint workshop by the Mathematics and the Philosophy Research Institutes of UNAM

## August 6-9, 2018

The Monday and Wednesday talks will be at the Institute for Philosophical Research (IIF), and the Tuesday and Thursday talks will be at the Math Institute (IMate).

## MONDAY 6 (IIF)

Favio Miranda (11:00-12:30): "What is a deductive system for practical logic?"

Armando Castañeda (12:30-14:00): "The topological nature of distributed computing"

LUNCH

Moshe Y. Vardi (16:00-17:30): "A logical revolution"

João Marcos (17:30-19:00): "Are you a consistent believer?"

## TUESDAY 7 (IMate)

João Marcos (11:00-12:00): "Breaking the proof code"

Moshe Y. Vardi (12:00-13:00): "The Automated-Reasoning Revolution: from theory to practice and back"

## WEDNESDAY 8 (IIF)

Roman Kuznets (10:30-12:00): "What do Byzantine agents know?"

Scott Aaronson (12:00-14:00): "The P vs. NP Problem"

LUNCH

Tom Froese (16:00-17:30): "The problem of meaning in AI: Still with us after all these years"

Sergio Rajsbaum (17:30-19:00): "A simplicial complex model of dynamic epistemic logic for fault-tolerant distributed computing"

## THURSDAY 9 (IMate)

Favio Miranda (11:00-12:00): "Going backwards suffices: some reflections on proof construction for formal verification"

Scott Aaronson (12:00-13:00): "Quantum supremacy and its applications"

LUNCH

## Abstracts

### MONDAY, AUGUST 6

What is a deductive system for practical logic?

Favio Miranda

Facultad de Ciencias, UNAM

We elucidate the choices made in the design of a deductive system for first-order classical logic with equality, whose main aim is to serve as a reasoning tool for practical logic, understood as the formalism sustaining everyday mathematical reasoning. This system, inspired by the native reasoning mechanisms of the Coq proof assistant, aims to fill a gap between traditional and applied logic as employed in formalized mathematics and formal methods of software engineering.

The topological nature of distributed computing

Armando Castañeda

Instituto de Matemáticas, UNAM

Distributed systems abound nowadays, from the Internet and cryptocurrency systems to multicore architectures. Simply put, a distributed system is made of several computational devices that communicate with each other through a communication medium, and the power of a distributed system to solve a given problem depends on the parameters that define it (e.g., type of communication medium, kinds of failures, number of failures, etc.). It turns out that the fundamental reason a given distributed system can or cannot solve a problem is topological: the solvability of a given problem depends on the properties of topological spaces associated to the problem and the system. In this talk we will see this topological view of distributed computing.
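As a toy illustration of that topological view (not taken from the talk), the impossibility of wait-free binary consensus for two processes can be glimpsed by comparing two one-dimensional complexes: the input complex, where an edge joins every compatible pair of initial values, is connected, while the output complex of consensus is not. A minimal sketch in Python, with all names hypothetical:

```python
from collections import defaultdict

def connected(vertices, edges):
    """Path-connectivity of a one-dimensional complex (i.e., a graph)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    vertices = set(vertices)
    seen = set()
    stack = [next(iter(vertices))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(adj[u])
    return seen == vertices

# Input complex for 2-process binary consensus: a vertex is (process, input bit)
# and an edge joins every compatible pair of initial configurations.
inputs = [(("p", a), ("q", b)) for a in (0, 1) for b in (0, 1)]
in_vertices = {v for e in inputs for v in e}

# Output complex of consensus: both processes must decide the same value,
# so only the two "agreement" edges exist.
outputs = [(("p", 0), ("q", 0)), (("p", 1), ("q", 1))]
out_vertices = {v for e in outputs for v in e}

print(connected(in_vertices, inputs))    # True: the input complex is connected
print(connected(out_vertices, outputs))  # False: the output complex is disconnected
```

Since wait-free read/write protocol complexes remain connected, no simplicial decision map can carry the connected input side onto the disconnected output side, which is the topological core of the consensus impossibility result.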

A Logical Revolution

Moshe Y. Vardi

Rice University

Mathematical logic was developed in an effort to provide formal foundations for mathematics. In this quest, which ultimately failed, logic begat computer science, yielding both computers and theoretical computer science. But then logic turned out to be a disappointment as a foundation for computer science, as almost all decision problems in logic are either unsolvable or intractable. Starting from the mid-1970s, however, there has been a quiet revolution in logic in computer science, and problems that are theoretically undecidable or intractable were shown to be quite feasible in practice. This talk describes the rise, fall, and rise of logic in computer science, covering several modern applications of logic to computing, including databases, hardware design, and software engineering.

Are you a consistent believer?

João Marcos

Universidade Federal do Rio Grande do Norte, Brazil

Suppose that after studying a little bit of Logic you arrive at an island where some inhabitants are knights —that is, they always tell the truth— and others are knaves —that is, they always lie. You meet someone who tells you: “You will never know that I am a knight!” What can you conclude from that?

This talk will provide a very brief introduction to the logic of lying and truth-telling, giving a taste of one of the main results of Logic in the 20th century. There are two possible conclusions to the talk!
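The flavor of such puzzles can be checked mechanically: an inhabitant's statement must be true exactly when the speaker is a knight. The sketch below brute-forces a classic (non-epistemic) knight/knave puzzle; the statement in the abstract additionally involves knowledge, which a plain truth-functional check like this cannot capture. The puzzle chosen is illustrative, not the one from the talk:

```python
from itertools import product

# A says: "B is a knave."   B says: "A and I are of the same type."
# A knight's statement must be true; a knave's statement must be false.

def solutions():
    found = []
    for a_knight, b_knight in product([True, False], repeat=2):
        stmt_a = not b_knight                # "B is a knave"
        stmt_b = (a_knight == b_knight)      # "A and I are of the same type"
        # Consistency: each statement's truth value matches its speaker's type.
        if stmt_a == a_knight and stmt_b == b_knight:
            found.append((a_knight, b_knight))
    return found

print(solutions())  # [(True, False)]: A is a knight, B is a knave
```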

### TUESDAY, AUGUST 7

Breaking the Proof Code

João Marcos

Universidade Federal do Rio Grande do Norte, Brazil

How many of your proofs are correct? Can computers assist you in doing better math? This talk will appraise some recent achievements in mechanized generation of proofs and refutations for various significant mathematical conjectures.

The Automated-Reasoning Revolution: From Theory to Practice and Back

Moshe Y. Vardi

Rice University

For the past 40 years computer scientists generally believed that NP-complete problems are intractable. In particular, Boolean satisfiability (SAT), as a paradigmatic automated-reasoning problem, has been considered to be intractable. Over the past 20 years, however, there has been a quiet, but dramatic, revolution, and very large SAT instances are now being solved routinely as part of software and hardware design. In this talk I will review this amazing development and show how automated reasoning is now an industrial reality.

I will then describe how we can leverage SAT solving to accomplish other automated-reasoning tasks. Counting the number of satisfying truth assignments of a given Boolean formula, or sampling such assignments uniformly at random, are fundamental computational problems in computer science with applications in software testing, software synthesis, machine learning, personalized learning, and more. While the theory of these problems has been thoroughly investigated since the 1980s, approximation algorithms developed by theoreticians do not scale up to industrial-sized instances. Algorithms used by the industry offer better scalability, but give up certain correctness guarantees to achieve it. We describe a novel approach, based on universal hashing and Satisfiability Modulo Theories, that scales to formulas with hundreds of thousands of variables without giving up correctness guarantees.
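The hashing idea can be sketched in miniature: adding a random XOR (parity) constraint roughly halves the number of satisfying assignments, so the number of constraints a formula survives hints at the logarithm of its model count. The toy below uses a brute-force oracle in place of a real SAT/SMT solver and offers none of the guarantees of production counters; all names are hypothetical:

```python
import random
from itertools import product

def count_models(n, f):
    """Exact model count of predicate f over n Boolean variables (brute force)."""
    return sum(1 for bits in product([0, 1], repeat=n) if f(bits))

def xor_constraint(n, rng):
    """A random parity constraint: XOR of a random subset of variables equals a random bit."""
    subset = [i for i in range(n) if rng.random() < 0.5]
    target = rng.randrange(2)
    return lambda bits: sum(bits[i] for i in subset) % 2 == target

def estimate_models(n, f, rng, trials=20):
    """Hash-based estimate: if f stays satisfiable under m random XOR
    constraints but not m+1, it has roughly 2**m models."""
    estimates = []
    for _ in range(trials):
        m, cons = 0, []
        while True:
            cons.append(xor_constraint(n, rng))
            sat = any(f(b) and all(c(b) for c in cons)
                      for b in product([0, 1], repeat=n))
            if not sat:
                break
            m += 1
        estimates.append(2 ** m)
    estimates.sort()
    return estimates[len(estimates) // 2]   # median over the trials

# Toy formula over 8 variables: "at least one of the first three bits is 1".
n = 8
f = lambda bits: bits[0] or bits[1] or bits[2]
rng = random.Random(0)
print(count_models(n, f))          # 224 exact models (7/8 of 256)
print(estimate_models(n, f, rng))  # a rough power-of-two estimate near 224
```

Real counters such as those described in the talk replace the brute-force loop with a solver query and tune the analysis to obtain rigorous accuracy and confidence bounds; this sketch only conveys why halving-by-hashing locates the count.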

### WEDNESDAY, AUGUST 8

What Do Byzantine Agents Know?

Roman Kuznets

TU Wien

The idea to analyze communication in distributed systems using epistemic logic, the logic of knowledge, has been around for more than 25 years and yielded such seminal results as the necessity of common knowledge as a precondition for simultaneous coordinated action. The goal of this ongoing project is to incorporate Byzantine agents, i.e., agents prone to failure, into the epistemic framework. Some limited types of Byzantine behaviours have already been studied in this respect, e.g., crash failures by Dwork and Moses. But handling knowledge in such limited circumstances is more straightforward.

We consider fully Byzantine agents whose behavior is maximally unrestricted but whose knowledge remains local. Failures of such agents to follow their protocol may happen due to malicious intent or to malfunction. Epistemically, the latter case is the more interesting one. In particular, a natural goal is for a protocol to take possible failures into account and try to remedy them, which would require the agent to know that a failure has occurred.

Our contribution is a comprehensive and modular distributed framework for message passing, incorporating the possibility of arbitrary failures by agents. We demonstrate the potential of this framework by providing a modular way of incorporating into it various popular communication contexts, including synchronous agents, synchronous communication, broadcast (both hardware and software), coordinated action, etc. Joint work with Lucas Gréaux, Laurent Prosperi, and Ulrich Schmid, in alphabetical order.

The P vs. NP Problem

Scott Aaronson

University of Texas at Austin

I'll discuss the status of the P=?NP problem in 2018, offering a personal perspective on what it's about, why it's important, why many experts conjecture that P!=NP is both true and provable, why proving P!=NP is so hard, the landscape of related problems, and crucially, what progress has been made in the last half-century. I'll say something about diagonalization and circuit lower bounds; the relativization, algebrization, and natural proofs barriers; and the recent works of Ryan Williams and Ketan Mulmuley, which (in different ways) hint at a duality between impossibility proofs and algorithms.

The problem of meaning in AI: Still with us after all these years

Tom Froese

Instituto de Investigaciones en Matemáticas Aplicadas y Sistemas, UNAM

In recent years there has been a lot of excitement about the possibilities of advanced artificial intelligence that could rival the human mind. I cast doubt on this prospect by reviewing past revolutions in cognitive robotics, specifically the shift toward situated robotics in the 90s and the shift toward a dynamical approach in the 00s. I argue that despite claims to the contrary, these revolutions did not manage to overcome the fundamental problem of meaning that was first identified in the context of various theoretical and practical problems faced by Good Old-Fashioned AI. Even after billions of dollars of investment, today’s computers simply do not understand anything. I argue for a paradigm shift in the field: the aim should not be to replicate the human mind in autonomous systems, but to help it realize its full potential via interfaces.

A simplicial complex model of dynamic epistemic logic for fault-tolerant distributed computing

Sergio Rajsbaum

Instituto de Matemáticas, UNAM

The usual epistemic S5 model for multi-agent systems is a Kripke graph, whose edges are labeled with the agents that do not distinguish between two states. We propose to uncover the higher-dimensional information implicit in the Kripke graph, by using as a model its dual, a chromatic simplicial complex. Then we use dynamic epistemic logic to study how the simplicial complex epistemic model changes after the agents communicate with each other. We show that there are topological invariants preserved from the initial epistemic complex to the epistemic complex after an action model is applied, that depend on how reliable the communication is. In turn, these topological properties determine the knowledge that the agents may gain after the communication happens. Joint work with Eric Goubault.
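The dual construction can be sketched directly: a vertex of the complex is a pair (agent, equivalence class of that agent's indistinguishability relation), and each world becomes a facet with one vertex per agent, so two worlds' facets share exactly the vertices of the agents that cannot distinguish them. A minimal sketch under these assumptions, with hypothetical names:

```python
def kripke_to_complex(worlds, partitions):
    """Dual chromatic simplicial complex of an S5 Kripke model.

    partitions[agent] lists the equivalence classes (sets of worlds) of that
    agent's indistinguishability relation.  Each world maps to a facet with
    one vertex per agent (the agent acting as the vertex's color): the pair
    (agent, class) records which class the world lies in for that agent.
    """
    def cls(agent, w):
        return next(frozenset(c) for c in partitions[agent] if w in c)
    return {w: frozenset((a, cls(a, w)) for a in partitions) for w in worlds}

# Two agents; agent "a" cannot tell w0 from w1, agent "b" cannot tell w1 from w2.
worlds = ["w0", "w1", "w2"]
partitions = {
    "a": [{"w0", "w1"}, {"w2"}],
    "b": [{"w0"}, {"w1", "w2"}],
}
facets = kripke_to_complex(worlds, partitions)

# Adjacent worlds share exactly the vertex of the agent that confuses them:
print(facets["w0"] & facets["w1"])  # one shared vertex, belonging to agent "a"
print(facets["w0"] & facets["w2"])  # empty: every agent tells w0 and w2 apart
```

In this example the three facets form a path, glued along the confusing agents' vertices; reading such gluing patterns globally is exactly the higher-dimensional information the abstract proposes to extract from the Kripke graph.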

### THURSDAY, AUGUST 9

Going backwards suffices: some reflections on proof construction for formal verification

Favio Miranda

Facultad de Ciencias, UNAM

It has been said that the backward method for constructing a proof, which goes back to what was called analysis in ancient geometry, is a helpful heuristic, but that its output should not be considered a correct proof. Instead, after this discovery methodology has succeeded, it is mandatory to construct a forward proof. This is a major drawback when the proofs sought are not purely mathematical but related to the formal verification of software programs. We argue against this point of view by discussing a formal notion of backward proof that allows one to define interactive proof-search transition systems, and to establish an equivalence between them and several traditional deductive systems, thus showing that backward proof construction is a logically valid process and hence that there is no need for forward proofs.
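The goal-directed style at issue can be illustrated with a toy backward-chaining prover for propositional Horn clauses, reminiscent of how a tactic reduces a goal to subgoals in a proof assistant. This is only an illustration of the style, with hypothetical names, not the formal transition systems of the talk:

```python
def backward_prove(goal, rules, facts, depth=8):
    """Goal-directed (backward) search: to prove `goal`, either find it among
    the facts, or pick a rule whose head matches it and recursively prove the
    premises.  The depth bound guards against cyclic rule sets."""
    if goal in facts:
        return True
    if depth == 0:
        return False
    for head, premises in rules:
        if head == goal and all(backward_prove(p, rules, facts, depth - 1)
                                for p in premises):
            return True
    return False

# Propositional Horn rules, written as (head, [premises]).
rules = [
    ("q", ["p"]),         # p -> q
    ("r", ["q", "s"]),    # q & s -> r
]
facts = {"p", "s"}

print(backward_prove("r", rules, facts))  # True: r reduces to q and s, q to p
print(backward_prove("t", rules, facts))  # False: no rule or fact yields t
```

The successful search trace, read from the leaves back to the root, is precisely a forward proof, which is the intuition behind the equivalence the abstract argues for.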

Quantum supremacy and its applications

Scott Aaronson

University of Texas at Austin

In the near future, there will likely be special-purpose quantum computers with 50-70 high-quality qubits and controllable nearest-neighbor couplings. In this talk, I'll discuss general theoretical foundations for how to use such devices to demonstrate “quantum supremacy”: that is, a clear quantum speedup for *some* task, motivated by the goal of overturning the Extended Church-Turing Thesis (which says that all physical systems can be efficiently simulated by classical computers) as confidently as possible. This part of the talk is based on joint work with Lijie Chen, https://arxiv.org/abs/1612.05903. Then, in a second part of the talk, I'll discuss new, not-yet-published work on how these experiments could be used to generate cryptographically certified random bits.
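Why 50-70 qubits is the interesting regime can be felt by simulating a circuit classically: an n-qubit state needs 2**n complex amplitudes, so memory and time blow up exponentially. A minimal pure-Python statevector sketch (illustrative only; the experiments discussed in the talk sample from random circuits, not the uniform superposition built here):

```python
import math

def apply_gate(state, gate, target):
    """Apply a 2x2 single-qubit gate to qubit `target` of a statevector."""
    new = [0j] * len(state)
    for i, amp in enumerate(state):
        if amp == 0:
            continue
        bit = (i >> target) & 1
        for out_bit in (0, 1):
            j = i ^ ((bit ^ out_bit) << target)  # flip target bit if needed
            new[j] += gate[out_bit][bit] * amp
    return new

# Hadamard gate: sends |0> to the uniform superposition of |0> and |1>.
s = 1 / math.sqrt(2)
H = [[s, s],
     [s, -s]]

n = 10                     # already 2**10 = 1024 amplitudes; 50 qubits would need 2**50
state = [0j] * (1 << n)
state[0] = 1 + 0j          # start in |00...0>
for q in range(n):         # put every qubit into superposition
    state = apply_gate(state, H, q)

# Every one of the 2**n outcomes is now equally likely; the exponential size of
# `state` is the classical-simulation bottleneck that supremacy arguments exploit.
print(round(abs(state[0]) ** 2 * (1 << n), 6))  # 1.0: uniform distribution
```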