Talks with abstracts

Type Design Patterns for Computer Proofs 

Georges Gonthier, Cambridge

Abstract: The practical feasibility of large formalizations crucially depends on the ability to factor concepts by overloading notation, definitions, and even theorems.

Proof assistants such as the Coq system support this through both syntax-directed overloading resolution and "exotic" type system features such as coercions, dependent types, computable type functions, and first-class type classes. Although most of these features were introduced independently, it turns out they can be combined in novel and nontrivial ways to capture some of the elaborate forms of ad hoc polymorphism that arise in the formalization of both advanced mathematics and imperative program verification.



Sequential versus Continuous Models of Finite Types
Dag Normann, Oslo

Abstract: Inspired by previous work of Kleene (1959), Kreisel (1959) and Platek (1963), Scott (1967) designed a logic LCF for computing with functionals of finite types, together with a semantics for LCF. This semantics led him to the discovery of domain theory, a tool for creating denotational semantics of programs. Later, Plotkin (1977) reformulated LCF as a typed lambda-calculus PCF and showed that there is a sequential evaluation procedure for PCF that is adequate with respect to the Scott model. The sequential functionals form the minimal typed structure of extensional partial functionals that contains all partial functions on the integers and can serve as a model for PCF. It turns out that this model is significantly smaller than the Scott model, and for some years it was not well understood how they relate. In this talk we will survey the development of the computability theory of functionals of finite types to the extent that it is relevant to theoretical computer science. We will give a smooth introduction to the sequential functionals, and discuss how they fit into the Scott model and an alternative model due to Milner (1977). The last part of the talk is based on joint work with V. Yu. Sazonov.



Resizing rules - their use and semantic justification
Vladimir Voevodsky, Princeton

Abstract:

I will discuss a proposed class of user-controllable modifications to the universe assignment rules in polymorphic type theories, called "resizing rules". The necessity for such rules originally arose in the univalent approach to formalization, which uses a defined type [hProp] instead of the syntactic [Prop] as the type whose terms represent properties. To establish global impredicativity for [hProp] one has to introduce a rule which says that if a type [T] is in [hProp], i.e. if one is provided with a term of the type [isaprop T], then [T : U] is valid for any universe [U], independently of the original universe level of [T]. I'll show how to construct a model which justifies the consistency of this and some other resizing rules, and how these rules are used to construct well-behaved quotient types in Coq. Unlike axioms, resizing rules do not introduce new opaque terms, and it is conjectured that semantically consistent resizing rules do not affect the normalization properties of the theory.
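
In inference-rule form (our transcription of the sentence above, with U_i and U_j ranging over universe levels), the rule reads roughly:

    \frac{\Gamma \vdash T : U_i \qquad \Gamma \vdash p : \mathsf{isaprop}\ T}
         {\Gamma \vdash T : U_j}
    \quad \text{for any universe } U_j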

Regular Talks



A Type System for Positivity
Matteo Acerbi, Dominic Mulligan and Claudio Sacerdoti Coen

Abstract: 

The traditional presentation of the Calculus of (Co)Inductive Constructions (CIC) is based on two different syntactic checks. The first check, called guardedness by constructors/destructors, guarantees that every recursive function terminates, by ruling out functions that are not structurally recursive. The second check, called strict positivity, rules out those inductive type declarations that would otherwise be a source of non-termination.
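
As a reminder of what strict positivity rules out, here is a small illustrative example (in Haskell, which accepts the declaration; CIC would reject it): a datatype with a negative occurrence of itself lets one write a closed looping term without any explicit recursion.

    -- Bad occurs to the left of an arrow in its own constructor:
    -- the declaration is not strictly positive.
    newtype Bad = Bad (Bad -> Bad)

    selfApply :: Bad -> Bad
    selfApply (Bad f) = f (Bad f)

    -- A non-recursive program that nevertheless loops forever.
    omega :: Bad
    omega = selfApply (Bad selfApply)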

Syntactic checks are problematic for two distinct reasons. The first, more obvious one, is that they are not intensionally complete, since they recognize only a small, decidable subset of the terminating programs and admissible inductive types. The second and more critical one occurs when the calculus is extended with metavariables to represent incomplete (proof) terms. Indeed, syntactic checks do not commute with instantiation of metavariables. For instance, if a metavariable occurs in the body of a recursive function, it should only be instantiable with terms that do not contain recursive calls on non-smaller arguments.

Sized types, advocated by Giménez and popularized by Abel amongst others, overcome both problems. The idea is to use types to relate size information on the inputs and outputs of functions. Moreover, the context associated with each metavariable, which describes the set of variables that can occur in whatever the metavariable is instantiated to, also tracks size information in order to disallow non-admissible recursive calls.

In our talk we propose an analogue of sized types for the (strict) positivity condition. We also show an interesting example of a rank-2 predicative inductive type family that becomes admissible in our proposed type system but is not well formed in CIC. The example captures the fixpoint of monotone type constructors and occurs in the ``datatypes à la carte'' technique by Swierstra.



Model Theory of Logic-Enriched Type Theories
Robin Adams

Abstract: A logic-enriched type theory or LTT (pronounced `let') is a formal system consisting of two components: a type theory that provides the collection of mathematical objects to be reasoned about; and a separate \emph{logic} for stating and proving theorems about those objects. We can thus change one of these components without affecting the other; for example, we can add excluded middle to the logic without changing the reduction properties of the type theory.

They were introduced by Aczel and Gambino in their study of the relationship between set theories and type theories: the translations between the set theory CZF and the type theory $ML_1V$ are easier to study if we introduce an LTT (which they called ML(CZF)) as a half-way stage. Since then, I have studied many LTTs of different strengths, and shown that they are able to capture some schools of thought in the foundations of mathematics (such as Weyl's predicativism) arguably better than can be done with either a type theory or a predicate logic on its own.

In this talk, I shall present a plan for a programme of research into LTTs, together with some early results. I shall give the appropriate notion of model for an LTT, and prove soundness and completeness theorems. A model for an LTT consists of a category $C$ that interprets the type theory component, together with a model for the logic defined as in first-order model theory but using the objects of $C$ in place of sets.

LTTs are interesting as logical systems in their own right, but also provide a `bridge' between the worlds of type theories and predicate logics, which should hopefully allow results from one of these worlds to be more easily applied to the other. I will justify these comments by showing how LTTs may be used to give new type-theoretic proofs of two old results: that ACA0 is a conservative extension of PA, and that the functions provably total in PA are exactly the epsilon_0-recursive functions.



Initiality for Typed Syntax and Semantics
Benedikt Ahrens

Abstract: Initial Semantics aims at characterizing the syntax associated to a signature as the initial object of some category, yielding a recursion principle for defining functions on the syntax. For untyped syntax with variable binding, several initiality results were presented at the same time -- using different encodings of binding -- by Fiore, Plotkin & Turi (FPT), Gabbay & Pitts, and Hofmann. Later on, Hirschowitz & Maggesi (HM) and Zsidó provided initiality results based on the notion of representations of a signature in monads. Their results are strongly related to FPT via an adjunction between the corresponding categories. We extend HM and Zsidó's work in two directions.

Firstly, we prove an initiality result which accounts for translations between simply-typed languages over possibly different sets of types. To achieve this, we generalize Zsidó's notion of representation to allow object types to vary, yielding a "larger" category of representations, while preserving initiality of the syntax therein. A translation from this syntax to another term language over different types can then be given, via the recursion principle, as an initial morphism.

Secondly, we integrate some kind of semantics into initiality. We model semantics by considering syntax as a set of terms (indexed by a set of free variables) that is equipped with a preorder -- a reduction relation on those terms. To specify a reduction on the syntax associated to a signature S, we introduce the notion of S-inequation. We prove that, given a set A of S-inequations, the subcategory of representations of S verifying every inequation of A has an initial object.

Afterwards we combine both extensions to obtain translations between languages that are equipped with reduction relations, and present two examples of such translations: firstly, a translation of PCF, equipped with its usual reduction, into the untyped lambda calculus with beta reduction is given as initial morphism in the category of representations of PCF. Secondly, viewing propositions as types and proofs as terms, a translation between propositional logics can be considered as an initial morphism in a suitable category of representations. We explain which characteristics of those maps are ensured by our formalism, and how to express further desirable properties therein. Lastly, we present the formalization in Coq of some of our results.

NB: Preprints about the mentioned theorems and Coq code are available on my web page.



Abstract Structural Induction
Robert Atkey, Patricia Johann and Neil Ghani

Abstract: Structural induction is the basic tool of definition and reasoning in type theory. Computation proceeds by examining a piece of inductive data and dispatching work based on its structure. Further work is restricted to data strictly smaller than the input. Thus computation by structural induction always terminates, and is driven by the structure of the input data.

Building on category-theoretic accounts of structural induction, originally due to Hermida and Jacobs and recently extended by Ghani, Johann and Fumex, we have axiomatised the concept of structural induction to take it beyond basic induction over inductive data. A structural induction principle consists of a rule of inference (represented as a term in type theory), along with an associated computation rule.

This allows us to construct structural induction principles modularly from existing structural induction principles. Examples include lexicographic induction and complete induction. The modular construction of structural induction principles is related to the construction of recursion principles in functional programming.
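
As a small functional-programming analogue of this modular construction (a sketch with names of our own choosing, not the authors' development): complete (course-of-values) recursion on the natural numbers can be built from plain structural recursion by accumulating the list of all previous results.

    data Nat = Z | S Nat

    -- plain structural recursion
    foldNat :: r -> (r -> r) -> Nat -> r
    foldNat z s Z     = z
    foldNat z s (S n) = s (foldNat z s n)

    -- complete recursion, derived from foldNat: step receives the results
    -- for all strictly smaller numbers, most recent first
    cvRec :: ([r] -> r) -> Nat -> r
    cvRec step = head . foldNat [step []] (\prev -> step prev : prev)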

We are also investigating the notion of a category of structural induction principles, and the properties and structure of that category. This is work in progress.



Formal Proofs of Robustness for Watermarking Algorithms
David Baelde, Pierre Courtieu, David Gross-Amblard, Christine Paulin and Xavier Urbain

Abstract: We show how to achieve provable security in the domain of watermarking by adapting methods that are now common in cryptography. We obtained the first robustness results against arbitrary attackers for watermarking algorithms, and formalized them in Coq using the ALEA library.

Formal security proofs for cryptographic protocols can be obtained by establishing that attacking the protocol would amount to solving a problem commonly accepted as unfeasible, such as computing discrete logarithms in polynomial time. In that approach, the protocol is represented as a game where the opponent is the network, aware of all exchanged messages and able to perform arbitrary computations to break security. Such games are usually probabilistic, reflecting the fact that secret keys are chosen randomly. Computational resources may also be accounted for, and attackers are generally limited to certain algorithmic classes such as polynomial time. Various cryptographic protocols have been proved secure using this approach, and formal proofs were even provided in the Coq development Certicrypt.

While cryptography is about ensuring privacy, watermarking techniques are designed to protect ownership: the general goal is to mark data with a proof of ownership, and to be able to ensure that the mark cannot be altered by any transformation without losing the original data at the same time. Watermarking protocols may be complex and have diverse specifications, but they all rely in the end on the robustness of a non-removable embedding. Those embeddings can be specific to some kind of structured data such as images, but generic ones are also used, e.g., for marking databases. Unfortunately, the only existing proofs of robustness are of limited significance, since they only take into account known attacks. We obtained the first results establishing robustness against arbitrary attackers by adapting the methodology used for cryptographic protocols, highlighting key differences between cryptography and watermarking.

This work has been fully formalized in Coq. We used the ALEA library, which makes it possible to model probabilistic algorithms as functional programs using a probability monad, and to reason in a mathematically exact way about the induced probability distributions.
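
To make the probability-monad idea concrete, here is a toy discrete analogue in Haskell (our own sketch, not the ALEA library, which works with measures inside Coq): a probabilistic program denotes a weighted list of outcomes, and bind multiplies weights along each branch.

    newtype Dist a = Dist { runDist :: [(a, Rational)] }

    instance Functor Dist where
      fmap f (Dist xs) = Dist [ (f a, p) | (a, p) <- xs ]

    instance Applicative Dist where
      pure a = Dist [(a, 1)]
      Dist fs <*> Dist xs = Dist [ (f a, p * q) | (f, p) <- fs, (a, q) <- xs ]

    instance Monad Dist where
      Dist xs >>= k = Dist [ (b, p * q) | (a, p) <- xs, (b, q) <- runDist (k a) ]

    coin :: Dist Bool
    coin = Dist [(True, 1/2), (False, 1/2)]

    -- probability of an event under a distribution
    probability :: (a -> Bool) -> Dist a -> Rational
    probability ev (Dist xs) = sum [ p | (a, p) <- xs, ev a ]

For example, probability id (coin >>= \a -> coin >>= \b -> pure (a && b)) evaluates to 1/4.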



Certified Polynomial Approximation for Solving the Table Maker's Dilemma
Nicolas Brisebarre, Mioara Joldes, Erik Martin-Dorel, Micaela Mayero, Jean-Michel Muller, Ioana Pasca, Laurence Rideau and Laurent Théry

Abstract: The newly revised IEEE 754--2008 standard for floating-point (FP) arithmetic recommends that some mathematical functions be correctly rounded (roughly speaking, the system must always return the FP number nearest to the exact mathematical result of the operation). Requiring correctly rounded functions has a number of advantages: among them, it greatly improves the portability of numerical software, and it allows one to design algorithms and formal proofs of software that rely on this requirement. To be able to design fast programs for correctly rounded functions, we must address a problem called the Table Maker's Dilemma (TMD): we need to locate, for each considered function and for each considered FP format and rounding mode, the so-called hardest-to-round (HR) points. The naive method of finding these points (evaluating the function with large precision at each FP number) is far too costly to be practical.

Two different algorithms have been designed to find these HR points:

- the Lefèvre algorithm, based on a variant of the Euclidean GCD algorithm;
- the Stehlé--Lefèvre--Zimmermann (SLZ) algorithm, based on the lattice basis reduction algorithm LLL.

The processes that generate these HR points are based on complex and very long calculations (years of cumulated CPU time) that inevitably cast some doubt on the correctness of their results. In the French ANR project entitled TaMaDi, we thus undertake to fully reconsider the methods used to get HR points, with a special focus on their certification.

In this work, we focus on the formal validation of the polynomial approximation step that is common to both the Lefèvre and SLZ algorithms. For this purpose, we have recourse to Taylor models, which are state-of-the-art techniques for providing certified polynomial approximations. However, our approach is very specific, since we focus mostly on univariate functions, for which we want to calculate zillions of approximation polynomials with an accuracy of hundreds of bits. We also want to obtain these certified polynomials in a format that is well suited for inclusion in our certification chain for solving the TMD.

Furthermore, to compute the desired coefficients of these polynomials, we follow a recurrence-based approach that scales well for our applications. Indeed, most of the mathematical functions we want to deal with in the TaMaDi project are so-called D-finite functions, i.e., they satisfy a linear ordinary differential equation with polynomial coefficients, which implies that the coefficients of their Taylor expansions satisfy a recurrence relation.
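
As a tiny illustration of the recurrence-based approach (a sketch in Haskell with hypothetical names, not the TaMaDi code): exp is D-finite, since it satisfies y' - y = 0, so its Taylor coefficients obey (n+1) c_{n+1} = c_n, which yields them in exact arithmetic and in linear time.

    -- Taylor coefficients of exp at 0: 1, 1, 1/2, 1/6, 1/24, ...
    expCoeffs :: [Rational]
    expCoeffs = go (0 :: Integer) 1
      where go n c = c : go (n + 1) (c / fromIntegral (n + 1))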

In this talk, we will thus present our on-going formalization in the Coq proof assistant, using the CoqInterval library that is now based on the Flocq library.



Real Algebraic Geometry: A Formalization of a Quantifier Elimination Procedure for the Theory of Real Closed Fields
Cyril Cohen and Assia Mahboubi

Abstract: This work describes a formalization of the theory of discrete real closed fields in the Coq proof assistant, together with an important example of application. This theory captures the theory of both classical real numbers and real algebraic numbers. Real algebraic numbers form a decidable subset of the real numbers and have interesting computational properties. The theory is at the core of numerous effective methods in real analysis, including decision procedures for non-linear arithmetic and optimization methods for real-valued functions.

We formalized an abstract structure of discrete real closed fields and we achieved most of the formalization of a quantifier elimination procedure. We chose a procedure whose proof requires a large part of the theory of real closed fields. We now expect that this theory can be reused to certify efficient procedures.

In this talk, we present key points of the certification of the quantifier elimination procedure. In particular, we show how to implement and use pseudo-remainder sequences, Cauchy indexes and Tarski queries, using a combination of methods: combinatorics, polynomial analysis over real closed fields and some linear algebra. We present the way we decided to formalize key definitions and properties of the theory. This represents a rather large amount of formalized mathematics; we highlight the issues raised by the choice of the data structures representing the mathematical objects and by the organization of the code.
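
As a toy ingredient of the machinery mentioned above (our own Haskell fragment, not the authors' SSReflect code): counting sign changes in a sequence of values, the basic operation when reading Cauchy indexes and Tarski queries off (pseudo-)remainder sequences.

    -- number of sign changes in a sequence, ignoring zeros
    signChanges :: [Rational] -> Int
    signChanges xs = length (filter (< 0) (zipWith (*) ys (tail ys)))
      where ys = filter (/= 0) xs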



Ornaments: Organising the Zoo of Data-types
Pierre-Evariste Dagand and Conor McBride

Abstract: When programming in type theory, we discover a whole zoo of data-types. We have all met unary natural numbers, lists, vectors, finite sets and other similarly structured yet often exotic breeds. Despite their specificities, we notice a striking similarity between these data-types, with natural numbers appearing here as a common ancestor. In type theory, such a diverse evolution comes out of necessity: to be fit for purpose, our types have to be as precise as possible. From a reusability perspective, this is a disaster: these purpose-built data-types are too specific to fit into or benefit from a global library.

In this work, we study an organisation principle for the zoo of data-types. McBride's Ornaments [1] define a general principle for creating new data-types from old. We shall give a categorical presentation of this construction and show that it exactly captures our intuition of a ``lineage''. Our abstract treatment greatly simplifies the type-theoretic definition. Hence, we can explain the standard constructions (e.g., ornamental algebra, algebraic ornament) in simpler terms. Further, we are able to tap into the rich categorical structure of ornaments to uncover new programming artifacts.
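
The guiding example, rendered in Haskell for concreteness (a sketch; the development itself lives in type theory): lists ornament the natural numbers by decorating each successor with an element, and the fold of the associated ornamental algebra is the function that forgets the decoration (here, length).

    data Nat    = Z   | S Nat
    data List a = Nil | Cons a (List a)

    -- the "forget the ornament" map induced by the ornamental algebra
    ornForget :: List a -> Nat
    ornForget Nil         = Z
    ornForget (Cons _ xs) = S (ornForget xs)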

Besides giving a mathematical basis for our classification work, this presentation gives a hint on how one could semi-automatically lift operations from ancestor to descendant. Whilst lifting algebras is still work in progress, we have obtained some promising results for restricted classes of morphisms of data-types.

This work has been formalised in Agda [2], using Interaction Structures to model inductive families.

[1] ``Ornamental Algebras, Algebraic Ornaments'', Conor McBride, JFP
http://personal.cis.strath.ac.uk/~conor/pub/OAAO/LitOrn.pdf
[2] http://personal.cis.strath.ac.uk/~dagand/stuffs/contornament.tar.gz



On Vectorial Typing
Alejandro Díaz-Caro

Abstract: This presentation is based on my PhD thesis, which is currently under review. The objective of this thesis is to develop a type theory for the linear-algebraic λ-calculus, an extension of λ-calculus motivated by quantum computing. The first step was to compare this calculus with another algebraic calculus coming from linear logic. We have established strong connections between these calculi in terms of the possibility of simulations between them.

These extensions of λ-calculus encompass all the terms of λ-calculus together with their linear combinations, so if t and r are two terms, so is α.t + β.r, with α and β being scalars from a given ring. The key idea and challenge of this thesis was to introduce a type system where the types, in the same way as the terms, form a vectorial space and provide the information about the structure of the normal form of the terms. We have fully developed four type systems, with their subject reduction and strong normalisation proofs:

• The first one is the Scalar type system. This is an extension of System F which makes it possible to keep track of the scalars in the terms: if t and u have type T, then α.t + β.u has type (α + β).T (an inference-rule sketch of this rule is given after the list). The type system has some direct applications, such as the ability to determine when a given term will reduce to a barycentric (∑αi = 1) distribution of terms.

• The second one is the Additive type system. This is a type system for the additive fragment of the linear-algebraic λ-calculus, where if t has type T and r has type R, then t + r has type T + R. This system can be encoded in classic System F with pairs, relating the sums in the calculus to a special kind of pairs, which are commutative, associative and distribute over application.

• The third system is the λCA type system, an extension of Additive to the full calculus, where the scalars are restricted to positive real numbers. The interaction between scalars in the terms and sums in the types is handled by taking the floor of those scalars, i.e. if a term t has type T, then α.t has type ⌊α⌋.T = T + · · · + T. Such types provide an approximation of the scalars involved in the terms, and the resulting system can still be interpreted, via an encoding, in classic System F. In fact, Additive is an abstract interpretation of the full calculus.

• Finally, the Lineal type system. This is a combination of the previous three systems, which highlights the vectorial structure of the reduct of a term.
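
As announced in the first item, the Scalar typing rule can be sketched in inference-rule form as follows (our transcription of the sentence above; notation approximate):

    \frac{\Gamma \vdash t : T \qquad \Gamma \vdash u : T}
         {\Gamma \vdash \alpha.t + \beta.u : (\alpha + \beta).T}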

This presentation will include several examples of use and key ideas for future research.



Prototype Implementation of an Inhabitation Algorithm for FCL$(\cap, \leq)$
Boris Duedder, Moritz Martens and Jakob Rehof

Abstract: We present a prototype implementation of an inhabitation algorithm by \citet{RehofUrzyczyn11} for finite combinatory logic FCL$(\cap, \leq)$ (the applicative fragment of $\lambda$-calculus with intersection types and subtyping). The algorithm is given in the formalism of Alternating Turing Machines (ATMs) \citep{Chandra81}. Inhabitation for FCL$(\cap, \leq)$ was proven EXPTIME-complete in \citep{RehofUrzyczyn11}. The implementation is done in Prolog using the SWI-Prolog system described by \citet{Wielemaker10}. We exploit an idea which is similar to the simulation of an ATM by a Prolog program as proposed by \citet{Shapiro84}. The implementation not only decides the inhabitation problem but enumerates and graphically represents all valid inhabitants. We intend to discuss key points of the implementation and its application to several examples. In the long run we want to use the inhabitation algorithm for program synthesis with a combinator library. Here, the library corresponds to the type environment and the target type to the specification of the target program. The resulting inhabitant then yields the synthesized program.



Exploring the Reach of Hereditary Substitution
Harley D. Eades III and Aaron Stump

Abstract: The Tait-Girard reducibility method is a well-known technique for proving normalization of simply typed and polymorphic lambda calculi. This technique is, however, somewhat intricate, which makes it difficult to apply to new theories. Therefore an easier technique is of interest to the research community. The hereditary-substitution method invented by Watkins et al. is qualitatively simpler than proof by reducibility. For example, the proof of the type soundness theorem using reducibility requires a quantification over substitutions satisfying the typing context, but no quantification over substitutions is required for normalization by hereditary substitution. The hereditary-substitution method is also (at least nominally) proof-theoretically less complex than proof by reducibility. The central concept of reducibility is defined by recursion on types, introducing new quantifiers in alternating polarity. To formalize the argument, one thus needs a logic in which quantified formulas can be defined by recursion. In contrast, the central definitions for the hereditary-substitution method can be expressed without such rich constructions, and thus should be formalizable in much weaker theories. These advantages suggest that proof by hereditary substitution may be easier to apply to new theories, and simpler to formalize, than proof by reducibility. The drawback is that it is unclear whether the method can scale to richer type theories.

This paper contributes to understanding which type theories can be shown to be normalizing using this method. We investigate the reach of hereditary substitution by proving normalization of three type systems. The first two systems are extensions of Stratified System F (SSF), a type theory of predicative polymorphism studied by D. Leivant. We extend SSF first with sum types and commuting conversions, and then with types expressing equalities between terms. For the third system, we consider an extension of the Simply Typed Lambda Calculus (STLC) with types expressing equalities between types. We show how normalization by hereditary substitution can be applied to these different advanced typing features. While systems with these features have been analyzed by other methods before, the current work is, to our knowledge, the first to do so using the method of proof by hereditary substitution.
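
To fix intuitions, here is a minimal rendering of hereditary substitution for the simply typed lambda calculus (our own Haskell sketch, ignoring variable capture; it is not the formal development of the paper): substituting a normal form into a normal form re-normalizes any redex it creates, and termination is justified by the strictly smaller type at which substitution continues.

    data Ty = Base | Arr Ty Ty

    data Nf = Lam String Nf | Ne Ne     -- beta-normal forms
    data Ne = Var String | App Ne Nf    -- neutral terms

    -- hsub a x ty n: substitute the normal form a (of type ty) for x in n
    hsub :: Nf -> String -> Ty -> Nf -> Nf
    hsub a x ty (Lam y b)
      | y == x    = Lam y b             -- x is shadowed
      | otherwise = Lam y (hsub a x ty b)
    hsub a x ty (Ne n) = case hsubNe a x ty n of
      Left (nf, _) -> nf                -- head was x: result already re-normalized
      Right n'     -> Ne n'

    -- Left: the head variable of the neutral term was x; carries the
    -- re-normalized result and its type.  Right: the term stays neutral.
    hsubNe :: Nf -> String -> Ty -> Ne -> Either (Nf, Ty) Ne
    hsubNe a x ty (Var y)
      | y == x    = Left (a, ty)
      | otherwise = Right (Var y)
    hsubNe a x ty (App n m) =
      let m' = hsub a x ty m in
      case hsubNe a x ty n of
        Right n'                -> Right (App n' m')
        Left (Lam y b, Arr s t) -> Left (hsub m' y s b, t)   -- new redex: continue at the smaller type s
        Left (Ne n0,   Arr _ t) -> Left (Ne (App n0 m'), t)  -- substituted head is neutral: no redex
        Left _                  -> error "ill-typed input"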



Extension of General Logic
Patrik Eklund and Robert Helgesson

Abstract: The framework of general logics (Goguen, Burstall, Meseguer, 1984, 1989, 1991) abstracts logics into entailment systems and institutions, and covers among its examples, e.g., first-order logic and equational logic. In the abstract framework, terms are not present, as the category of signatures is treated as an abstract category that does not explicitly embrace operators. Sentences in the abstract sense are thus not built upon terms, and models do not invoke assignments.

In this paper we show how a substitution-oriented extension of the framework of general logics adds specificity to the abstract framework, while keeping the structure of generalized sets of sentences at an abstract level.

The advantage is that, on the one hand, we can provide inference calculi, and, on the other hand, we can fully incorporate algebras and related variable assignments. Signatures are concretely selected, so that the abstract category of signatures really is the ordinary category of (many-sorted) signatures. Terms are specifically constructed, and term monads remain in an abstract form, i.e., generalized term monads may be the classical term monad, or compositions of various monads with the term monad using distributive laws (Beck, 1969). Further, we are able to manage underlying categories other than just the category of sets. Thereby the category for managing variables may be more elaborate. The formal treatment of many-sorted terms provides an example where the underlying category of variables appears. Uncertainty and non-determinism may also be handled, either in the form of composed monads or by selecting appropriate underlying categories to capture these notions.
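
To make the term-monad view concrete, here is the classical term monad over a signature functor in Haskell (a sketch in our own notation, one instance of the abstract setting above): variables come from a, operations from f, and bind is simultaneous substitution.

    import Control.Monad (ap)

    data Term f a = Var a | Op (f (Term f a))

    instance Functor f => Functor (Term f) where
      fmap g (Var a) = Var (g a)
      fmap g (Op t)  = Op (fmap (fmap g) t)

    instance Functor f => Applicative (Term f) where
      pure  = Var
      (<*>) = ap

    instance Functor f => Monad (Term f) where
      Var a >>= k = k a                   -- substitute for a variable
      Op t  >>= k = Op (fmap (>>= k) t)   -- push the substitution under the operation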

Classical general logic is based on manipulation of sets of sentences only. In a monadic view, this is provided by the power set functor. Moreover, the partial order on the power set, provided by subset inclusion, is also used but is not recognized as part of the monad. Extending monads to include these partial orders gives rise to the notion of partially ordered monads. Our treatment is a further extension of general logics, where we identify suitable functors, monads and partially ordered monads for managing terms, sentences and programs.

This framework obviously also invites investigation of monadic structures in type theory, e.g., when making clear distinctions between terms and their related lambda terms (lambda abstractions). This is important, e.g., for functorial descriptions of applications, using terms and lambda terms respectively.



Infinite Sets that Satisfy the Principle of Omniscience in all Varieties of Constructive Mathematics
Martin Escardo

Abstract: Without postulating controversial axioms for constructive mathematics (such as choice, continuity, bar induction, fan theorem, Markov's principle, Church's thesis), we show that there are plenty of infinite sets X that satisfy the omniscience principle

for every extensional p : X -> 2, either p(x)=1 for some x in X or else p(x)=0 for all x in X

We don't postulate the axiom of extensionality, but we assume it when necessary. We need some amount of higher types, at least level 3, to develop the main construction, and the higher we climb the more omniscient sets of type 1 we get. We generalize the Grilliot-Ishihara trick in order to establish this, and ordinals naturally show up in the analysis of the kinds of omniscient sets we do get, and of those one can possibly get in Heyting Arithmetic with finite types.
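
A concrete instance of the phenomenon can be written down in Haskell (our own rendering; general recursion in Haskell stands in for the constructions of the talk): N∞, the naturals extended with a point at infinity and represented as decreasing streams of Booleans, has a selection function, and hence satisfies omniscience for total computable predicates.

    type Natinf = [Bool]                 -- decreasing infinite streams

    finite :: Int -> Natinf
    finite n = replicate n True ++ repeat False

    infinity :: Natinf
    infinity = repeat True

    -- eps p returns a witness whenever one exists: digit n stays True as
    -- long as p has failed on all of the finite elements 0 .. n.
    eps :: (Natinf -> Bool) -> Natinf
    eps p = go (0 :: Int)
      where go n | p (finite n) = repeat False
                 | otherwise    = True : go (n + 1)

    -- decides whether p holds somewhere on N∞ (for total p)
    exists :: (Natinf -> Bool) -> Bool
    exists p = p (eps p)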



Algebraic Foundations for Type Theories
Marcelo Fiore

Abstract: Over the past several years, I have been working on algebraic foundations for type theories. The general idea driving this research is to develop mathematical meta-theories for the algebraic modelling of both the syntax and the semantics of type theories, with the aim of using these model-theoretic frameworks to synthesise logical frameworks; see [1]. This line of investigation has been explored in detail in the context of simple type theories [2, 3]. In this talk, I would like to reexamine this work and consider extensions of it in two orthogonal directions, to incorporate dependent type theories and polymorphic type theories respectively. The associated mathematical development has applications to the formalisation of type theories and to dependently-typed programming.

References

[1] M. Fiore. Algebraic meta-theories and synthesis of equational logics. Research programme, <http://www.cl.cam.ac.uk/~mpf23/EqMetaLog.pdf>, 2009.

[2] M. Fiore and C.-K. Hur. Second-order equational logic. In Proceedings of the 19th EACSL Annual Conference on Computer Science Logic (CSL 2010), volume 6247 of Lecture Notes in Computer Science, pages 320--335. Springer-Verlag, 2010.

[3] M. Fiore and O. Mahmoud. Second-order algebraic theories. In Proceedings of the 35th International Symposium on Mathematical Foundations of Computer Science (MFCS 2010), volume 6281 of Lecture Notes in Computer Science, pages 368--380. Springer-Verlag, 2010.



PTS with Typed Equality and Explicit Substitutions
Daniel Fridlender and Miguel Pagano

Abstract: R. Adams proved the equivalence between functional PTS having an untyped notion of conversion and functional PTS with judgemental equality, called EPTS. This result was recently extended to semi-full PTS by Siles and Herbelin. In this presentation we comment on the progress of using the technique proposed by Adams, i.e. typed parallel operational semantics, to prove the equivalence between PTS and EPTS with de Bruijn indices and explicit substitutions. To our knowledge this is the first work considering such PTS in general. The proof of the correspondence is being formalised in Agda.



A Monoidal Category of Inductive Recursive Definitions
Neil Ghani, Peter Hancock, Conor McBride and Lorenzo Malatesta

Abstract: Induction recursion offers the possibility of a clean, simple and yet powerful core for the type system of a dependently typed programming language. At its crux, induction recursion offers the possibility of defining universes of objects (primarily, types) closed under given dependently typed operations. In more detail, induction recursion gives a system of codes, each of which represents a functor Fam D -> Fam D for a (potentially large) type D. The initial algebras of such functors are then universes whose codes are built up inductively at the same time as the recursive definition of their decoding function.
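
In Haskell-like notation (a caricature with names of our own choosing, ignoring size issues), the objects involved look as follows: an element of Fam d packages an index type with a decoding into d, and a code of induction recursion denotes an operation on such families.

    {-# LANGUAGE ExistentialQuantification #-}

    -- a family over d: some index type i together with a decoding i -> d
    data Fam d = forall i. Fam (i -> d)

    type FamOp d = Fam d -> Fam d

    -- the simplest such operation: ignore the input family and return the
    -- one-element family decoding to a fixed x
    constFam :: d -> FamOp d
    constFam x _ = Fam (\() -> x)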

We do not yet understand induction recursion as well as we might. Although we have a system for coding functors, we do not have codes for representing the natural transformations between them. Further, we lack an operation on codes for functors that reflects functor composition. These things are available for other natural classes of functors such as containers and indexed containers.

We will present recent work on induction recursion that makes good these two deficiencies. We define morphisms of inductive recursive codes and show they represent natural transformations. We prove that this representation is full and faithful, just as in the case of containers and indexed containers. We also provide a composition operator on inductive recursive codes which reflects functor composition. In short, we provide a monoidal category of inductive recursive codes and a strict monoidal, full and faithful embedding of this category into the functor category.



A Computational Proof of Dependent Choice Compatible with Classical Logic
Hugo Herbelin

Abstract: Choice is derivable from strong existential quantification (Σ-types) in Martin-Löf's intuitionistic type theory. Strong existential elimination is not compatible in general with a computational approach to classical logic, but it can still be accommodated with computational classical logic if one enforces a call-by-value normalisation semantics of proofs and restricts strong existential elimination to proofs that are locally intuitionistic.

If, in addition, we interpret countable universal quantification as an infinite conjunction that we evaluate not in call-by-value (because it would take infinite time) but in call-by-need, then the proof of countable choice from intuitionistic type theory can be reformulated in this logic with strong existentials and classical reasoning, yielding a computational proof of countable choice (namely ∀n∃xP(n,x) → ∃f∀nP(n,f(n))) compatible with classical reasoning.

This scales to dependent choice, and it suggests that if one extends PA^ω with strong elimination of existential quantification, one exactly captures the strength of PA^ω + dependent choice.



Logical Predicates as First-Class Citizens in LF
Furio Honsell, Marina Lenisa, Luigi Liquori, Petar Maksimovic and Ivan Scagnetto



Abstract: The Edinburgh Logical Framework LF was introduced both as a general theory of logics and as a metalanguage for a generic proof development environment. In this paper, we consider an extension of LF, called the Logical-Logical Framework LLF, that features predicates as first-class citizens that can be reified and passed as arguments, as a special kind of functional objects. These functional objects return the argument under the condition that it satisfies a logical predicate, or simply freeze the argument until this predicate becomes true. We study the language theory of LLF and provide proofs of subject reduction, confluence, and strong normalization. By way of example, we illustrate how special instances of LLF allow for encodings of side-conditions of Modal Logics in Hilbert style and a direct encoding of pre- and post-conditions on the input and output of a theory of functions. Our motivation for introducing LLF is twofold. First, the type system of LF is too coarse with respect to the "side conditions" that it can enforce on the application of rules. Second, almost all logics are encodable in LF (with a greater or lesser degree of difficulty involved, depending on the choice of logic) but never participate actively in the reductions. This last point requires further attention: by the Curry-Howard isomorphism, types are formulæ (theorems) and λ-terms are logical proofs of the validity of the formulæ (or proofs of the theorems). LLF moves the "periscope" of this 30-year-old principle in another direction: λ-terms can also be formulæ whose proof is not encoded in LLF itself but "externalized", via an external call, to a logical system from which we request the verification of the truth of the formula itself. Following Curry-Howard, one can view this externalization as an "oracle call", which will be typed with a suitable type representing the oracle itself.

Historically, the idea of having stuck reduction in objects and types in the setting of higher-order term rewriting systems with sophisticated pattern-matching capabilities was first introduced in Cirstea-Kirchner-Liquori's Rho-cube, in order to design a hierarchy of type systems for the untyped Rewriting Calculus, and was later generalized to a framework of Pure Type Systems with Patterns. This typing protocol was essential in the effort to preserve the strong normalization of typable terms. The idea underlying the Logical-Logical Framework LLF is the same as that exploited for the General Logical Framework GLF. However, there is an important difference between the two frameworks in the definition of predicates. Predicates in GLF are used both to determine whether β-reduction fires and to compute a substitution, while in the present paper they are truly first-class objects. A further attempt was made with the Constrained Logical Framework CLF by the same authors. The big difference between the two frameworks is that reduction was typed in CLF, while it is untyped in LLF, and that CLF relies on an infinite number of binders λP, each of which encoded one logical predicate, while the predicates in LLF are first-class objects declared in contexts: this latter point pays off directly when viewing LLF as a kernel for proof assistants, since we do not need to "parse" the metalanguage each time we add a new predicate.

Apart from encodings of Modal Logics, we believe that our Logical-Logical Framework could also be very helpful in modeling dynamic and reactive systems: for example, bio-inspired systems, where reactions of chemical processes take place only provided some extra structural or temporal conditions hold; or process algebras, where often no assumptions can be made about messages exchanged through the communication channels. Indeed, it could be the case that a redex, depending on the result of a communication, remains stuck until a "good" message arrives from a given channel, firing in that case an appropriate reduction (this is a common situation in many protocols, where "bad" requests are ignored and "good" ones are served). Such dynamic (run-time) behaviour could hardly be captured by a rigid type discipline, where bad terms and hypotheses are ruled out a priori.

Another interesting improvement is to be found in proving program correctness. It is well known that encoding pre- and post-conditions in LF is not so obvious: nevertheless, despite the theoretical difficulty, and driven by the importance of writing code which can be proven to be "correct", this decade has seen a plethora of software tools bridging the gap between proof assistants and prototype programming languages, featuring built-in Hoare-logic-inspired primitives (such as the Why pre-compiler, translating pre- and post-condition annotations into Coq proof obligations). Having the possibility to externalize the call to logical systems via a simple term application greatly simplifies this task.



Towards a Computational Justification of the Axiom of Univalence
Simon Huber and Thierry Coquand

Abstract: The Univalence Axiom recently proposed by Vladimir Voevodsky as an extension to Martin-Löf's Constructive Type Theory implies functional extensionality and that any isomorphic structures satisfy the same propositions---a property which fails in set theory. Although this axiom has a model in classical set theory using simplicial sets, it so far lacks a computational justification. As a possible first step in this direction, we present a basic extensional type theory with computation rules for equality proofs: our theory has a predicative type of propositions and a universe of types consisting of simple types only; instead of treating equality as inductively defined with reflexivity being the only introduction rule, we define equality by recursion on the type and add reflexivity as a new constant with computation rules. In this way, function extensionality follows from reflexivity at function types, and we obtain strong normalization and canonicity using the method of reducibility candidates. This approach can be regarded as an internalized variation of Gandy's interpretation of extensional type theory in intensional type theory. The approach is also closely related to the Observational Type Theory of Altenkirch, McBride, and Swierstra: they also take equality as defined by recursion on the type, but instead of equipping the reflexivity constants with computation rules, they gain canonicity from a clever distinction between data and proofs.



Economic Equilibria in Type Theory
Cezar Ionescu

Abstract: We begin the quest for verified economic models by formalizing the main concepts of economic theories in a conveniently chosen type theory. In particular, we'll examine the relationships between several kinds of Nash equilibria. An operational reading of the resulting formalization opens up interesting avenues for testing models based on such equilibria, and yields the first elements of a domain-specific language for economic models.



Comparative Study of Different Styles of Coinductive Logic Programming for Type Inference
Ekaterina Komendantskaya

Abstract: Coinductive logic programs (by Simon et al.) have been shown to be a convenient tool for type inference in several programming languages. In CSL'11, we proposed a novel kind of coinductive logic programs inspired by our recent coalgebraic semantics for logic programming. In this talk, I will show how this new kind of coinductive logic programs can be used for type inference, and will compare these applications with similar applications implemented using the coinductive logic programs of Simon et al. I will pay special attention to the differences arising from the fact that the former kind of coinductive logic programs performs sequential computations, while the latter performs concurrent ones.



Type Classes for Efficient Exact Real Arithmetic in Coq
Robbert Krebbers and Bas Spitters

Abstract: Floating point operations are fast, but require continuous effort on the part of the user in order to ensure that the results are correct. This burden can be shifted away from the user by providing a library of exact analysis in which the computer handles the error estimates. Previously, we provided a fast implementation of the exact real numbers in the Coq proof assistant. Our implementation improved on an earlier implementation by O'Connor by using type classes to describe an abstract specification of the underlying dense set from which the real numbers are built. In particular, we used dyadic rationals built from Coq's machine integers, which already yielded a 100-fold speed-up of the basic operations.

In this talk, we discuss various extensions of the implementation. First, we implement and verify the sine and cosine functions. Secondly, we create an additional implementation of the dense set based on Coq's fast rational numbers. Thirdly, we extend the hierarchy to capture order on undecidable structures, whereas it was previously limited to decidable structures. This hierarchy, based on type classes, allows us to share theory on the naturals, integers, rationals, dyadics, and reals in a convenient way. Finally, we obtain another dramatic speed-up by avoiding evaluation of termination proofs at runtime.

We have recently submitted an article to LMCS.



Extended Curry-Howard Correspondence: The Case of Loeb Logic
Tadeusz Litak

Abstract: Several modal type systems and corresponding lambda calculi have been investigated in the literature. These include, in particular, so-called lax logic---which, as discussed by Benton, Bierman and de Paiva, is the Curry-Howard counterpart of Moggi's monadic language for computational effects---and constructive S4, whose associated proof-term calculus has been used by, e.g., Davies and Pfenning to give an account of staged computation.

I am going to discuss the constructive variant of the Loeb logic, which can be obtained by adding to the constructive K4 calculus the rule "from []p -> p, infer p", and some of its extensions, such as the one obtained by adding the axiom p -> []p. Such systems have made several appearances in the type-theoretic literature, in particular in the work of Nakano ("A modality for recursion" and a subsequent TACS paper), in the work of Appel et al. ("A very modal model of a modern, major, general type system") and a subsequent line of papers (e.g. Benton and Tabareau); also, in a still unpublished work, Conor McBride proposed a Loeb modality in the context of functional programming to ensure productivity of unbounded corecursion.

However, to the best of my knowledge, not only is a clear-cut, definite account of the Curry-Howard correspondence for such system(s) still missing, but---more importantly---some of their most interesting properties known in the modal logic community are yet to be fully used in the type-theoretic context. The most important one I have in mind is the *unique fixed-point theorem*, proved by Giovanni Sambin in the mid-1970s. It works in both the constructive and the classical setting, and in the latter case it has been used in a series of papers by Visser, Van Benthem, and Alberucci and Facchini to show eliminability of fixed-point operators over the Loeb logic. I am not aware of similar applications of this theorem for constructive Loeb logic(s); in fact, some of the above-mentioned type-theoretic papers, such as those by Nakano, did add *both* an explicit fixed-point operator and a Loeb modality to the type system. One of the goals of my work is to investigate whether explicit fixed-point operators can also be eliminated over an intuitionistic base and, more generally, whether Sambin's celebrated result can throw some light on uses of Loeb-like modalities to reason about both inductive and coinductive properties of programs.

Concerning the Curry-Howard correspondence for the constructive Loeb logic and its variants, it appears that a satisfying account of it can be given, for example, by taking as a starting point a 1985 paper by Bellin on natural deduction for the classical version of the system. In fact, de Paiva and coauthors have already used this work for subsystems such as K and K4, although apparently not for the Loeb logic itself. I am also going to report on ongoing work on semantics in terms of both Kripke-style structures and cartesian closed categories (extended with a suitable endofunctor). Note that by Kripke-style semantics I mean semantics in the sense of the seminal Mitchell-Moggi paper, i.e., semantics for proof terms rather than just for types (propositions). Semantics in the latter sense for the constructive Loeb logic was already given in the 1970s by Ursini.

To sum up, this is very much a work in progress, which can diverge in unexpected directions. I am certainly looking forward to suggestions, feedback and criticism---both constructive and classical.



Type-Theoretical Semantics with Coercive Subtyping
Zhaohui Luo

Abstract: The type-theoretical semantics is the formal semantics of natural languages based on modern type theories.  The theory of coercive subtyping adequately extends the modern type theories with a notion of subtyping and plays a very useful role in making type theories more expressive for formal semantics.  We shall explain how coercive subtyping may provide useful tools for expressing various linguistic features, especially in formal lexical semantics.

The related references include:

[1] A. Ranta. Type-Theoretical Grammar. OUP, 1994.

[2] Z. Luo. Type-theoretical semantics with coercive subtyping. Semantics and Linguistic Theory, Vol. 20. 2010.

[3] Z. Luo. Contextual analysis of word meanings in type-theoretical semantics. Logical Aspects of Computational Linguistics. LNAI 6736, 2011.



Formalizing a Discrete Model of the Continuum in Coq from a Discrete Geometry Perspective
Nicolas Magaud, Agathe Chollet and Laurent Fuchs

Abstract: When dealing with geometric problems (geometric constraint solving, geometric modeling), one is ultimately faced with computations that involve computer representations of real numbers. Because of their important impact, studies of real numbers in computer science are numerous, and our purpose is not to surpass them but to reactivate an efficient point of view that has been forgotten for a while [RR96].

This point of view was developed in the eighties by J. Harthong and G. Reeb [Har89] and consists of a model of the continuum built over the integers, the Harthong-Reeb line. This model was at the origin of important developments in the field of Discrete Geometry [RR96]. At that time, however, the constructive content of this model was neglected.

In previous work [CWF+09a], it was shown that the Harthong-Reeb line satisfies the axioms for constructive real numbers introduced by Bridges [Bri99]. However, the Harthong-Reeb line construction is based on a nonstandard arithmetic of the integers that was not explicitly built. A first attempt at such a construction, based on the Omega-integers of Laugwitz and Schmieden [Lau83], was made by some of the authors with others in [CWF+09b].

The present work gives a first formalization of the Harthong-Reeb line using the Coq proof assistant. It can be seen as a lightweight counterpart of the seminal work on the formalization of exact arithmetic [GNSW07].

Our motivation is to make sure that the handwritten proofs given in [CWF+09a] do not contain subtle mistakes or imprecisions. This confidence problem is mainly due to the unusual mathematics we deal with: the arithmetic we handle lives in a nonstandard framework, while the axioms live in a constructive framework. Moreover, this formalization gives a better understanding of how concepts and proofs are related to one another.

An implementation in Coq of the Harthong-Reeb line based on the work of Laugwitz and Schmieden [Lau83] is also presented and analyzed. We currently use integer sequences, represented as functions, to denote the Omega-integers of Laugwitz and Schmieden. A further step would be to switch to a more efficient implementation using coinduction and its associated lazy evaluation mechanism.

Overall, we hope that this work can help in reasoning about and implementing numeric computations more reliably in geometric systems.


The Link between the Intensional Level and the Extensional one in the Minimalist Foundation
Maria Emilia Maietti

Abstract: A key advantage of adopting an intensional type theory, like Martin-Löf's type theory or Coquand's Calculus of Constructions, as a foundation for constructive mathematics is its twofold nature: the formal theory is at the same time a set theory apt for formalizing mathematical theorems and a programming language in which programs can be extracted from proofs. This latter aspect is crucial to provide evidence for the constructivity of the formal theory.

A main drawback in adopting an intensional type theory as a foundation for mathematics is that it lacks a direct representation of extensional concepts, like quotients or proof-irrelevance, and it is common practice to represent them by means of setoids.

In joint work with Giovanni Sambin in [1], we proposed giving the status of "extensional theory" to the tools used for the formalization of extensional concepts in intensional type theory. This led us to propose that a foundation for constructive mathematics should be a "two-level theory": one level, called "intensional", should be used as a programming language; the other, called "extensional", should satisfy the following conditions: 1) it should be closed under standard extensional constructs, in order to formalize mathematical proofs directly, and 2) it should be seen as an abstraction of the intensional one according to Sambin's forget-restore principle.

In [2] it was stated that, to satisfy the link between the two levels according to [1], it is enough to interpret the extensional level in the intensional one by means of a quotient completion over it. The two-level minimalist foundation in [2], extending that in [1], was built to fall under this notion.

In this talk, we describe the abstract properties of the quotient completion used to interpret the extensional level of the minimalist foundation in its intensional one. These properties have been devised in joint work with Giuseppe Rosolini in [3].

It is worth pointing out that to interpret the extensional theory over the intensional one, the quotient completion must be equipped with a suitable notion of coherent isomorphisms (in particular these are used to interpret the equality judgement between extensional sets and the judgements about their elements).

It is an open problem how to provide an abstract notion of these isomorphisms.

[1] M.E. Maietti, G. Sambin "Toward a minimalist foundation for constructive mathematics" in ''From Sets and Types to Topology and Analysis: Practicable Foundations for Constructive Mathematics", (L. Crosilla and P. Schuster eds.) OUP, 2005.

[2] M.E. Maietti "A minimalist two-level foundation for constructive mathematics" APAL, 160(3):319--354,2009

[3] M.E. Maietti, G. Rosolini ``Quotient completion for the foundation of
constructive mathematics'', 2011, submitted




An Equivalence between Small Induction Recursion and Indexed Containers
Lorenzo Malatesta, Thorsten Altenkirch, Neil Ghani, Peter Hancock and Conor McBride

Abstract: Induction Recursion (IR) is a scheme which formalises principles for introducing data types in Martin-Löf type theory. The essential feature of IR is the simultaneous definition of an inductive type and a function from this type into another one by structural recursion. The paradigmatic example of an inductive-recursive definition is Martin-Löf's universe à la Tarski: a set U of codes for small sets is generated inductively at the same time as a "decoding" function T, which maps each code to the corresponding set it denotes. The function T is defined by recursion on the way the elements of U are generated. A finite axiomatization of induction recursion has been given by Peter Dybjer and Anton Setzer; further development of the theory has been carried out by the same authors. In recent years the search for a categorical account of dependent types has led researchers to the notion of polynomial functor, or indexed container (IC); these have been identified as a good candidate for representing strictly positive families of data types. The aim of this work is to investigate the relationship between IR and IC in the special case of "small" inductive-recursive definitions. In detail we (i) introduce a category of small inductive-recursive definitions and their morphisms, (ii) recall the definition of the category of indexed containers, and (iii) prove the equivalence of these categories. This result can be read as a (weak) proof-theoretic reduction of small induction recursion to indexed inductive definitions, which highlights the fact that the real expressive power of IR lies within "large" IR.
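
For concreteness, the universe example mentioned above can be transcribed as follows (our notation), with U and T defined simultaneously and closed under, say, the natural numbers and dependent products:

    \mathsf{nat} : U, \qquad T(\mathsf{nat}) = \mathbb{N}

    \pi : (a : U) \to (T(a) \to U) \to U, \qquad T(\pi(a,b)) = \prod_{x : T(a)} T(b(x))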



On the Curry-Howard Correspondence for Constructive Modal Logic CK
Michael Mendler

Abstract: Modal type theories are lambda calculi enriched by modal types to capture and control contextual computational information. Special such type theories have been introduced to cope with computational side effects, strictness analysis, staged computations, resource constraints, meta-variables and higher-order abstract syntax, or distributed computations. The logics of inhabitation for these type systems correspond to various special theories of constructive modal logic. Most central among them are propositional lax logic PLL, CS4 and CS5. However, these are only a few singular cases in the rich lattice of constructive modal logics, which is itself an enrichment of the world of classical modal logics. In the classical world many systematic results are available about the correspondence between particular theories and particular classes of (Kripke) models. It seems a natural idea to embark on a similar correspondence theory for modal logics in the constructive spirit, which links constructive modal theories not only with classes of (constructive Kripke) models but at the same time with associated computational lambda-calculi in the spirit of Curry-Howard. We do not yet know what such a constructive correspondence theory will look like, but it will have to start from a base logic from which the space can be explored systematically. In this talk we discuss the logic CK and its associated lambda-calculus as such a basis. We hope that by coming to grips with CK a lot can be learnt about the relationships of derived theories such as PLL, CS4, CS5 and others that may become important for modal type theories in the future.



Rings with Explicit Divisibility Formalized in Coq
Anders Mörtberg and Cyril Cohen



Abstract: This work is concerned with the formalization of the theory of rings with explicit divisibility using the SSReflect language extension to the Coq system. The main motivation for this work is to have a suitable setting for implementing, and proving correct, the Smith normal form algorithm for computing homology groups of simplicial complexes from algebraic topology. This algorithm generalizes Gaussian elimination from fields to principal ideal domains (PIDs); the formalization of the algorithm itself is work in progress.

Rings with explicit divisibility are integral domains R such that for any a, b in R there is a method to decide whether a divides b and, if it does, to obtain an x such that b = xa. Standard examples of such rings are Z and k[x] where k is a field. Classes of rings with this property include greatest common divisor rings (GCD rings), Bézout rings and Euclidean rings. The first two are non-Noetherian analogues of structures from classical mathematics, but the formalization of the Smith normal form algorithm requires some form of Noetherianity. Hence we have implemented constructive PIDs, which extend Bézout rings with a constructive approximation of Noetherianity.
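As a hedged illustration of the definition above (our own Agda sketch with hypothetical names, not the authors' SSReflect interface), explicit divisibility can be packaged as a divisibility test that, when it succeeds, also returns the witness x with b = x * a:

    open import Agda.Builtin.Equality using (_≡_)

    data Maybe (A : Set) : Set where
      nothing : Maybe A
      just    : A → Maybe A

    record Σ (A : Set) (B : A → Set) : Set where
      constructor _,_
      field
        fst : A
        snd : B fst

    -- A carrier with a multiplication and an explicit divisibility test.
    -- (Ring axioms are omitted to keep the sketch short.)
    record ExplicitDivRing : Set₁ where
      field
        R       : Set
        _*_     : R → R → R
        divides : (a b : R) → Maybe (Σ R (λ x → b ≡ (x * a)))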

All of these structures have been formalized, together with some of their theory; for example, we have formalized the inclusions in the chain:

Euclidean rings \subset constructive PIDs \subset Bézout rings \subset GCD rings

So, by instantiating the Euclidean ring structure, a ring automatically inherits the properties of the structures displayed above. We have instantiated the Euclidean ring structure (and hence the other three) with implementations of Z and k[x] where k is a field. This means that once the formalization of the Smith normal form algorithm is completed, we will be able to apply it to matrices over these rings.



Elimination Principles for Initial Dialgebras
Fredrik Nordvall Forsberg, Thorsten Altenkirch, Peter Morris and Anton Setzer

Abstract: There are two equivalent well-established approaches to model the semantics of ordinary inductive data types. In Martin-Löf Type Theory, each set X comes equipped with an eliminator which at the same time represents reasoning by induction over X and the definition of recursive functions out of X. A more categorical approach models data types as initial T-algebras for a suitable endofunctor T : C -> C.

There are, however, other meaningful forms of induction -- such as induction-induction -- which do not fit into this framework. To accommodate these, we generalise the theory to (F, G)-dialgebras for suitable F, G : C -> D, where C and D have finite limits. (An (F, G)-dialgebra (X, f) consists of an object X in C together with a morphism f : F(X) -> G(X) in D.) The equivalence between initiality and the elimination principle still goes through in this more general setting, and we get a uniform proof for ordinary induction, induction-recursion, induction-induction...
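A minimal Agda sketch of the central notion (ours; for simplicity both categories are taken to be types-and-functions, and F, G are given only by their object parts): an (F, G)-dialgebra is a carrier X together with a structure map F X → G X.

    record Dialgebra (F G : Set → Set) : Set₁ where
      field
        Carrier : Set
        str     : F Carrier → G Carrier

    -- Ordinary T-algebras arise as the special case G = identity.
    Algebra : (Set → Set) → Set₁
    Algebra F = Dialgebra F (λ X → X)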



Mechanizing Metatheory without Typing Contexts
Sungwoo Park

Abstract: When mechanizing the metatheory of a programming language, one usually needs a large number of lemmas proving structural properties of typing judgments, such as permutation and weakening. This paper proposes a simple technique for removing all such lemmas, thereby greatly reducing programming effort. First it syntactically distinguishes between bound variables (with corresponding binders) and free parameters (without corresponding binders). Then it eliminates typing contexts by converting all bindings in typing contexts to annotations of free parameters. This technique is based on the view that entailment relations, such as typing judgments, are just syntactic tools for displaying only the hypotheses and conclusion of a hypothetical proof while hiding its internal structure.
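A minimal sketch of the general idea (our own, for a simply-typed fragment with hypothetical constructor names; the paper works with System F_sub): bound variables are de Bruijn indices, while free parameters carry their type as an annotation, so no typing context is needed to look up their types.

    open import Agda.Builtin.Nat using (Nat)

    data Ty : Set where
      base : Ty
      _⇒_  : Ty → Ty → Ty

    data Tm : Set where
      bvar  : Nat → Tm        -- bound variable (index into enclosing lambdas)
      param : Nat → Ty → Tm   -- free parameter, annotated with its type
      lam   : Ty → Tm → Tm
      app   : Tm → Tm → Tm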

In order to measure the effect of eliminating typing contexts, we formalize System F_sub from the POPLmark challenge in the Coq proof assistant in a number of different ways. An analysis of our Coq developments shows that eliminating typing contexts produces a more significant reduction in both the number of lemmas and the count of tactics than cofinite quantification, one of the most effective known ways of simplifying mechanizations involving binders. Our experiment with System F_sub suggests three guidelines to follow when applying the technique of eliminating typing contexts. We also report a case study of applying the technique to System F extended with linear typing contexts.



The Principal Subtyping Theorem and the Subtyping Hierarchy of Lambda Terms
Andrew Polonsky



Abstract: As is known, the principal type theorem for simple types can be generalized to recursive types by turning off the "occurs check" in the Hindley--Milner type inference algorithm: this gives any lambda term a recursive type with a "minimal" amount of recursion. The present paper proves the existence of principal subtypings, which provides a further generalization by relaxing equality between types to a mere partial order: subtyping. This gives (simple) recursive types an extra piece of structure. In particular, it allows us to characterize pure lambda terms by how much subtyping they require in order to become typable. Elementary properties of the resulting \emph{Subtyping Hierarchy} are investigated.
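As an added illustration of the occurs-check remark (ours, not taken from the paper; \mu is the recursive type binder), consider the self-application \x.xx:

    x : a  and  x x : b  generate the constraint  a = a -> b,
    which the occurs check would reject; read instead as a recursive type
    equation, it yields the recursive typing  \x.xx : (\mu t. t -> b) -> b.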



A Characterization of Reduction 1-Cycles in Infinitary Lambda Calculus
Andrew Polonsky and Jörg Endrullis

Abstract: A classic theorem of Albert Visser states that Omega = (\x.xx)(\x.xx) is the only (finite) lambda term which reduces to itself in exactly one step of beta reduction. We investigate the structure of such terms in the infinitary setting, where term trees are allowed to be infinite. Our result gives a general scheme of how such "1-cycles" are constructed, starting from an arbitrary term B and an increasing family of "periodic positions" of B, which correspond to those subterms that are syntactically identical with B. Outside of this scheme, there are three exceptional 1-cycles: Omega, I^oo = I(I(I(...))), and the term QQ with Q=\x.xQ.
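For concreteness (our own unpacking, not part of the paper's statement), each of the three exceptional terms returns to itself after exactly one beta step:

    Omega = (\x.xx)(\x.xx)  ->  (\x.xx)(\x.xx)  = Omega
    I^oo  = I(I(I(...)))    ->  I(I(...))       = I^oo    (contracting the head redex)
    QQ    = (\x.xQ)Q        ->  QQ                        (since (xQ)[x := Q] = QQ)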



The Role of Second-order Quantification in Soft Type Assignment Systems
Aleksy Schubert, Jacek Chrząszcz and Ken-Etsu Fujita

Abstract: The soft type assignment systems STA, STA_+, and STA_B characterise, by means of reduction of terms, computation in the complexity classes PTIME, NP, and PSPACE, respectively. All these systems are inspired by linear logic and include polymorphism similar to that of System F. It is a natural question what impact the second-order quantification has on the expressive power of the systems. We present expressibility results for the systems in the case where the second-order universal quantifier is present in full, where it is limited to an ML-like type language, and where polymorphic quantification is omitted. We then present an initial study of the expressibility in the case where the existential quantifier is used instead of the universal one.



Computing in a Boolean Topos
Arnaud Spiwack

Abstract: In a dependent type theory, setoids (pairs of a type and an equivalence relation on it) are a useful way to organise programs, as they enrich types with the ability to form quotients. A little known fact, though, is that in a suitably impredicative type theory (such as Coq with the -impredicative-set flag, where the sort Set is seen as the sort of propositions) they form a topos. In other words, the sort of propositions forms a setoid (up to equivalence) and there are strong existentials. A first important result is that the setoid of propositions cannot support classical logic: this would imply an isomorphism between the setoid of booleans and that of propositions, but since our functions are all programs, it would entail the decidability of the truth of propositions.
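For readers unfamiliar with the basic notion, here is a minimal Agda sketch of a setoid (our illustration only; the abstract itself works in Coq with -impredicative-set): a carrier together with an equivalence relation on it. Morphisms between setoids are then programs on carriers that respect the relations.

    record Setoid : Set₁ where
      field
        Carrier : Set
        _≈_     : Carrier → Carrier → Set
        ≈-refl  : ∀ {x} → x ≈ x
        ≈-sym   : ∀ {x y} → x ≈ y → y ≈ x
        ≈-trans : ∀ {x y z} → x ≈ y → y ≈ z → x ≈ z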

Now, topoi offer a standard way, sometimes called sheafification, to produce new topoi. This is where the fun begins, because a sheafified topos is built from a subcollection of the objects of the original one, with the same morphisms but a different logic. In particular, we can choose the new logic to be classical. So, following a textbook, we build a topos whose objects are setoids (albeit not all of them) and whose morphisms are equality-respecting programs, but whose logic is classical.

This topos, which we shall explore, is obtained, as one might expect, by a continuation-passing-style transformation of a sort. Continuation passing style does not work out of the box for setoids, because the setoid (A=>E)=>E (with E the empty set) has at most one element, so this observation should already count as a nice one. But the aim of all this is rather to look at how this construction coming from standard mathematics can be relevant for programming. Note, on this matter, that topoi support the usual type constructions (including dependent products and sums) and have the ability to form quotients; this makes most functional programs interpretable in any topos. In addition, a sheafified topos preserves the notions of elements and (n-ary) functions from the original one. Hence we have indeed built an honest topos of setoids and equality-respecting programs.



A User-extensible and Safe Alternative to the Conversion Rule using VeriML
Antonis Stampoulis and Zhong Shao

Abstract: VeriML is a programming language that combines an ML-like core language with first-class support for terms of a higher-order logic, like propositions and proof objects. Logical terms retain their typing information, enabling the programmer to write dependently-typed procedures that manipulate them. This leads to tactics that specify, in their type, what input terms they expect, and what their output is: e.g. a tactic can specify that its output is a proof object for its input proposition.
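A minimal Agda sketch of the typing discipline described above (our illustration with hypothetical names; VeriML's actual syntax and logic differ): a tactic whose type promises a proof object for exactly the proposition it receives, or failure.

    postulate
      Prp   : Set          -- propositions of the object logic (hypothetical)
      Proof : Prp → Set    -- proof objects, indexed by the proposition they prove

    data Maybe (A : Set) : Set where
      nothing : Maybe A
      just    : A → Maybe A

    -- A tactic either fails or returns a proof of the proposition it was given.
    Tactic : Set
    Tactic = (P : Prp) → Maybe (Proof P)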

We have recently used this language to revisit the issue of the conversion rule in type theories. This rule, as it exists in logics like CIC, renders propositions that are equivalent up to evaluation implicitly identical. Thus proofs in such a logic do not need to witness that equivalence, leading to simpler and smaller proofs. Still, the equivalence-checking procedure becomes part of the trusted core of the logic, both in its metatheory and in its implementation. Recently, extensions of the class of propositions that are implicitly identified have been proposed: for example, the CoqMT framework considers certain arithmetic simplifications as part of the conversion rule. These extensions require enlarging the trusted core of the logic further.

Our approach is to replace the conversion rule by explicit calls to VeriML procedures that perform the equivalence checks. These procedures are typed so that they return a proof object explicitly witnessing the equivalence. By the type safety of the language, such proof objects are guaranteed to exist as long as the evaluation of the procedures succeeds, even if they are not actually generated. We are thus led to a user-extensible replacement for the conversion rule which allows for a simpler logic and a smaller trusted core, while retaining the space savings. The proof certificates generated in this way can be checked under varying levels of trust: 1) the calls can be evaluated without generating proof objects, trusting the implementation of the computational language; 2) all equivalence-checking calls can be evaluated fully, and their results checked for validity; 3) the calls can be evaluated fully by the proof producer, and the receiver can check their results, in case running user-specified code is undesired.

We show how a basic equivalence checking procedure for beta- and iota-equality can be written; how it can be extended with a decision procedure for equality with uninterpreted functions; and how simple arithmetic simplifications can be performed as part of it.



Decision Problems in Intersection Logics
Pawel Urzyczyn

Abstract: This talk will discuss the decision (inhabitation) problems for some intersection logics (intersection type assignment systems). In particular, it will be shown that the inhabitation problem is decidable (in non-elementary time) without the rule E/\ and without subtyping, and exponential-space complete without the rule I/\ and with subtyping.



Context-dependent Computation using Comodels of Theories in Type Theory
Tarmo Uustalu

Abstract: Moggi introduced the very powerful idea that effectful computation can be modelled and analyzed in terms of monads. Plotkin and Power later advocated a considerably more fine-grained approach based on models of theories. The same options are available for modelling the dual phenomenon, contextual computation, as it occurs, e.g., in cellular automata and computations on streams and trees: here one can apply comonads or, alternatively, comodels of theories. I will discuss the pragmatics of both possibilities from the point of view of dependently typed functional programming as available in the Agda language. Since this is very much about language constructs and libraries for programming and reasoning with inductive and coinductive types and families, I will also analyze the implications that different designs of language support for coinductive types have for my project.
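As a small Agda sketch of the comonadic reading of context-dependent computation (our example, not the talk's development), consider streams: extract reads the value at the current position, and duplicate exposes, at every position, the whole context, i.e. the stream from that position onwards.

    record Stream (A : Set) : Set where
      coinductive
      field
        head : A
        tail : Stream A
    open Stream

    extract : {A : Set} → Stream A → A
    extract s = head s

    -- Defined by copatterns; productivity is evident from the definition.
    duplicate : {A : Set} → Stream A → Stream (Stream A)
    head (duplicate s) = s
    tail (duplicate s) = duplicate (tail s)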
