To celebrate UNESCO World Logic Day, the Australasian Association for Logic will host a Southern Summer Logic Day. The event will take place on Zoom (contact Guillermo Badia at g.badia@uq.edu.au for the Zoom link). There will be invited presentations by Leonid Levin, Patrick Blackburn, Rod Downey, Jack Copeland, Nox Cowie and Yuri Gurevich. The date and time will be Tuesday, 13 January 2026 at 23:00 (UTC) (note that in Australia and New Zealand this will be Wednesday, 14 January).
Timetable (in AEDT, Wednesday 14 January):
Leonid Levin: 10:00AM - 11:10AM
Yuri Gurevich: 11:25AM - 12:30PM
Lunch break
Jack Copeland & Nox Cowie: 1:30PM - 2:30PM
Break
Rod Downey: 3:15PM - 4:15PM
Break
Patrick Blackburn: 4:30PM - 5:40PM
ORGANIZERS
Guillermo Badia (University of Queensland, Australia)
Sasha Rubin (University of Sydney, Australia)
Leonid Levin (Boston University)
Usual math sets have special types: countable, compact, open, occasionally Borel, rarely projective, etc. They are described by a single Set Theory formula with parameters unrelated to other formulas. Exotic expressions involving sets related to formulas of unlimited complexity appear mostly in esoteric or foundational studies.
Recognizing the internal (formula-specified) and external (based on the parameters in those formulas) aspects of mathematical objects greatly simplifies foundations. I postulate that external sets (not internally specified, treated as the domain of variables) are hereditarily countable and independent of formula-defined classes, i.e. carry only finite algorithmic Information.
This allows the elimination of all non-integer quantifiers in Set Theory sentences, with seemingly no need to change almost anything in mathematical papers, only to reinterpret some formalities.
Slides can be found here: https://www.cs.bu.edu/fac/lnd/expo/sets/.
Yuri Gurevich (University of Michigan)
I begin by extracting lessons from the development of
logic, from antiquity to its heyday, to the humbling realization that
logic can’t be the sole foundation of AI.
Then I examine how generative AI works.
Finally, I discuss three entangled objectives for logic in the age of AI.
Direct Contribution: While AI exhibits impressive reasoning, its coherence is statistical, not deductive. This invites logic to complement
learning through scaffolding (enforcing normative constraints),
auditing (detecting inconsistencies), and governance (tracking commitments).
Foundations: Clarifying vague foundational notions like intelligence, understanding, and alignment.
Scope Extension: One radical extension is for logic to become a “science of convincing reasoning”.
Rod Downey (Victoria University of Wellington)
Relativization is a technique which saw its birth in set theory and
in computability theory, notably in Turing’s 1939 paper, where Turing
defined what it means for a set X to be A-computable. Classically, the
collection of A-computable sets is defined to be {B | B ≤T A}.
Relativization in computability theory has been a longstanding love affair,
notably allowing effortless proofs of results about the jump operator
from results about the halting problem, for example. These effortless results
also generated a number of false conjectures, such as the homogeneity
conjecture.
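As an illustration of this pattern (ours, not part of the abstract): the diagonal argument showing the halting problem unsolvable relativizes verbatim. Equipping every machine in the proof with an oracle A turns the halting set into the jump A', and the very same diagonalization yields the relativized result:

```latex
% Unrelativized: the halting set K = { e : \varphi_e(e)\downarrow }
% is not computable, i.e. \emptyset <_T K.
% Relativized: give every machine access to oracle A; the same
% diagonalization, word for word, proves the jump theorem:
\[
A' = \{\, e : \varphi_e^{A}(e)\!\downarrow \,\}, \qquad
A <_T A' \quad \text{for every oracle } A.
\]
```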
Computational complexity theory has a more turbulent relationship
with relativization, particularly since the classical results of Baker,
Gill, and Solovay, which show that there are oracles A and B such that P^A ≠ NP^A
and P^B = NP^B. Their conclusion is that methods that relativize are not
sufficient to settle the P =? NP question. The Baker-Gill-Solovay results
spawned a huge cottage industry of results showing that similar separation
questions could not be settled by methods that relativize. This caused
computational complexity theorists to turn to methods such as arithmetization,
algebraization, etc.
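One half of the Baker-Gill-Solovay theorem can be illustrated with a standard observation (not spelled out in the abstract): a sufficiently powerful oracle collapses the relativized classes.

```latex
% Take B to be any PSPACE-complete set. The oracle is then strong
% enough to absorb nondeterminism:
\[
\mathrm{NP}^{B} \subseteq \mathrm{PSPACE} \subseteq \mathrm{P}^{B},
\qquad \text{hence} \qquad
\mathrm{P}^{B} = \mathrm{NP}^{B} = \mathrm{PSPACE}.
\]
% The separating oracle A, by contrast, must be constructed by a
% diagonalization against all polynomial-time oracle machines.
```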
In this talk, I will analyse relativization in complexity theory and argue
that it has been wrongly applied: the current techniques are only
concerned with partial relativization. This argument resurrects the hope
that there might be some method of using oracles to settle questions such
as P =? NP.
Jack Copeland & Nox Cowie (University of Canterbury)
In his doctoral dissertation, “Untersuchungen über das logische Schließen” (Investigations Into Logical Deduction), Gentzen created two formalisations of logical deduction: natural deduction and sequent calculus. The classical natural deduction system
NK can be obtained from the intuitionistic system NJ by adding a double negation
elimination inference rule. On the other hand, the intuitionistic and classical sequent
calculi LJ and LK are differentiated not by the addition of a rule, but by the number of
formulae that can be contained in the right-hand side of a sequent. In an LJ-derivation,
the right-hand side of a sequent can contain at most one formula; the classical LK is
obtained by lifting this restriction. This distinction between LJ and LK is of a notably
different nature than that between NJ and NK: the first being purely structural, and
the second involving the explicit addition of a classical inference pattern. How does
the structural feature of right multiplicity embody the classical principle of double
negation elimination? We demonstrate how the classical strength of double negation
elimination emerges from the structural feature of right multiplicity by examining the
relationship between derivations in natural deduction and sequent calculus and, in particular, applications of their respective implication and negation inference rules. As a
result, we clarify how right multiplicity constitutes a counterpart to the principle of
double negation elimination.
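As a minimal sketch of the phenomenon the abstract describes (our illustration, not the authors'), double negation elimination becomes derivable in LK exactly at the step where two formulas appear on the right of a sequent:

```latex
% Derivation of double negation elimination in LK.
% The step marked (*) places two formulas on the right-hand side
% of the sequent, which is forbidden in LJ.
\[
\dfrac{\dfrac{\dfrac{A \vdash A}
                    {{} \vdash A, \neg A}\;(\neg R)\;(*)}
             {\neg\neg A \vdash A}\;(\neg L)}
      {{} \vdash \neg\neg A \to A}\;({\to}R)
\]
```

In LJ the (*) step is unavailable, which is why NJ must instead add double negation elimination as an explicit rule.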
Patrick Blackburn (University of Roskilde)
“Quasi-Propositions and Quasi-Individuals” is a short paper which first
appeared in 1968 as Paper XII in the first edition of Arthur Prior's book Papers on
Time and Tense. Despite its brevity, it is important both logically and philosophically.
Logically, Prior uses what he calls “quasi-modalities” to build a small “egocentric
logic” for reasoning about taller and shorter; as we shall see, this is perhaps the
earliest example of what computer scientists now call description logic. Moreover,
partially inspired by his earlier work on the tense logic of special relativity, he shows
how to hybridize this logic by defining a totalizing (universal) quasi-modality and
quantifying over “people-propositions” (quasi-individuals); that is, he quantifies over
nominals. The philosophical discussion is equally interesting: the paper contains
Prior's first critical reflections on his earlier claims, boldly stated in “Tense Logic and
the Logic of Earlier and Later” (Paper XI in the first edition of Papers on Time and
Tense), on the significance of hybridization.