The Logic & AI Seminar is a weekly event on logic in artificial intelligence and the theoretical aspects of computer science at Imperial College London. It usually takes place on Tuesdays at 4pm in room 218 of the Department of Computing's Huxley Building. As well as hosting established speakers, we strongly encourage PhD students and postgraduates to speak.
Logic & AI Seminar
Logics for topology via non-standard rules
In this talk I will give an overview of some recent developments on modal logics arising from topological considerations. These logics have a binary modality and admit axiomatizations via Gabbay-style non-standard rules.
FO-definable Constraint Satisfaction Problems
Many natural computational problems, such as satisfiability or solving systems of equations, can be expressed in a unified way as constraint satisfaction problems (CSPs). We study infinite CSPs defined by finitely many first-order formulas. We determine the decidability status of FO-definable CSPs depending on whether the number of allowed types of constraints and/or the number of tuples in each constraint are allowed to be infinite.
Joint work with Bartek Klin, Eryk Kopczyński, Sławek Lasota and Szymon Toruńczyk.
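As a point of contrast with the infinite, FO-definable CSPs studied in the talk, the finite case can be illustrated with a minimal backtracking solver. This is an illustrative sketch only (the function and variable names are my own, not from the talk), shown here on graph 3-colouring, a standard example of a CSP.

```python
from itertools import product

def solve_csp(variables, domains, constraints):
    """Backtracking search for a finite CSP.

    constraints: list of (scope, relation) pairs, where scope is a tuple
    of variable names and relation is a set of allowed value tuples.
    Returns a satisfying assignment as a dict, or None.
    """
    assignment = {}

    def consistent():
        # Check every constraint whose scope is fully assigned.
        for scope, relation in constraints:
            if all(v in assignment for v in scope):
                if tuple(assignment[v] for v in scope) not in relation:
                    return False
        return True

    def backtrack(i):
        if i == len(variables):
            return dict(assignment)
        var = variables[i]
        for value in domains[var]:
            assignment[var] = value
            if consistent():
                result = backtrack(i + 1)
                if result is not None:
                    return result
            del assignment[var]
        return None

    return backtrack(0)

# Example: 3-colouring a triangle plus one pendant vertex.
colours = {"r", "g", "b"}
neq = {(a, b) for a, b in product(colours, colours) if a != b}
edges = [("x", "y"), ("y", "z"), ("x", "z"), ("z", "w")]
solution = solve_csp(
    ["x", "y", "z", "w"],
    {v: sorted(colours) for v in ["x", "y", "z", "w"]},
    [(e, neq) for e in edges],
)
print(solution)
```

The talk's setting generalises this picture: the constraint relations are defined by first-order formulas and may be infinite, which is where the decidability questions arise.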
Using Java Pathfinder to Reason about Agent Systems
In this talk, Franco will present how Java Pathfinder (JPF) can be used as a tool to support the verification of practical instances of multi-agent systems models. In particular, Franco will show how JPF can act as a bridge between a MAS specification formalism such as Brahms and verification tools such as SPIN and Prism. Franco will introduce some technical details of the JPF implementation and an evaluation of the performance on practical scenarios, including an assessment of situational awareness and an assessment of workload.
Pandora's Problem - a technique from search theory to deal with uncertainty and cost
In this talk, I will introduce a problem from search theory called Pandora's Problem. The problem deals with optimizing reward while searching through boxes with uncertain prizes, which can be uncovered at a certain cost. Pandora's Problem not only has a wide range of possible applications, it also admits a surprisingly elegant optimal solution. I will provide some intuition behind the solution and then show an application of this technique in the form of a negotiating agent that can adapt itself to the user. In such a negotiation setting, it is essential for the agent to understand the user's preferences without exposing them to elicitation fatigue. I will discuss a new model, using Pandora's Rule, in which a negotiating agent may incrementally elicit the user's preferences during the negotiation. This yields an optimal elicitation strategy that decides, at every stage of the negotiation, how much additional user information to extract at a certain cost.
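The "elegant optimal solution" mentioned above is Weitzman's index rule: each box gets a reservation value z solving cost = E[max(X - z, 0)], boxes are opened in decreasing order of z, and search stops when the best prize found exceeds every remaining reservation value. The sketch below (my own illustration, not the speaker's implementation) computes the reservation value for a discrete prize distribution by bisection.

```python
def reservation_value(prizes, probs, cost, lo=0.0, hi=None):
    """Weitzman reservation value: the z solving cost = E[max(X - z, 0)]
    for a discrete prize distribution given as parallel lists."""
    if hi is None:
        hi = max(prizes)

    def expected_excess(z):
        return sum(p * max(x - z, 0.0) for x, p in zip(prizes, probs))

    # expected_excess is decreasing in z, so bisect for the root.
    for _ in range(200):
        mid = (lo + hi) / 2
        if expected_excess(mid) > cost:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A box paying 100 with probability 0.5 (else 0), opened at cost 10:
# 10 = 0.5 * (100 - z)  =>  z = 80.
z = reservation_value([0.0, 100.0], [0.5, 0.5], 10.0)
print(round(z, 6))  # → 80.0
```

Pandora's rule then says: open this box as long as no prize of at least 80 is already in hand.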
Verification of resource-bounded multi-agent systems
Properties of multi-agent systems are often verified using Alternating-Time Temporal Logic (ATL). ATL models are concurrent game structures, where we can consider joint actions and strategies by groups of agents acting against the environment or against other agents to enforce some outcome or to maintain some invariant. Natasha talks about resource logics, which extend ATL. In the models of resource logics, actions can produce or consume resources, and the languages of resource logics can express not just the existence of strategies but also their resource cost. There are many flavours of resource logics, and depending on subtle differences, their model-checking problems are either decidable or undecidable. Natasha tries to give some intuitions for the reasons behind this decidability or undecidability, and discusses the suitability of various flavours of resource logics for modelling different kinds of multi-agent systems.
Defining Privacy is Supposed to be Easy
Formally specifying privacy goals is not trivial. The most widely used approach in formal methods is based on the static equivalence of frames in the applied pi-calculus, basically asking whether or not the intruder is able to distinguish two given worlds. A subtle question is how we can be sure that we have specified all pairs of worlds to properly reflect our intuitive privacy goal. To address this problem, we introduce a novel and declarative way to specify privacy goals, called "alpha-beta privacy", and relate it to static equivalence. This new approach is based on specifying two formulae alpha and beta in first-order logic with Herbrand universes, where alpha reflects the intentionally released information and beta includes the actual cryptographic ("technical") messages the intruder can see.
Then alpha-beta privacy means that the intruder cannot derive any "non-technical" statement from beta that he cannot derive from alpha already. We describe by a variety of examples how this notion can be used in practice. Even though alpha-beta privacy does not directly contain a notion of distinguishing between worlds, there is a close relationship to static equivalence of frames that we investigate formally. This allows us to justify (and criticize) the specifications that are currently used in verification tools, and obtain partial tool support for alpha-beta privacy.
Complexity and Expressiveness of Abstract Dialectical Frameworks
Abstract dialectical frameworks (ADFs) are a logic-based formalism for representing knowledge about abstract arguments and relationships between them. In this talk, I will give an overview on the capabilities of ADFs. Firstly, I present results on the computational complexity of important reasoning tasks associated with ADFs, for example deciding the acceptability of arguments. Secondly, I report on ADFs' expressiveness, that is, what kinds of interpretation sets they can express. I also compare the capabilities of abstract dialectical frameworks to those of their close relatives, abstract argumentation frameworks and normal logic programs.
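One way to make the ADF formalism concrete: each statement carries an acceptance condition, a Boolean function of the statements linked to it, and a two-valued model is an interpretation that agrees with every acceptance condition. The brute-force enumeration below is my own small illustration of this semantics (not code from the talk), shown on the classic mutual-attack example.

```python
from itertools import product

def two_valued_models(statements, conditions):
    """Enumerate two-valued models of an ADF: assignments v with
    v(s) == C_s(v) for every statement s.

    conditions maps each statement to a Boolean function of the
    full assignment dict (its acceptance condition).
    """
    models = []
    for bits in product([False, True], repeat=len(statements)):
        v = dict(zip(statements, bits))
        if all(conditions[s](v) == v[s] for s in statements):
            models.append(v)
    return models

# Mutual attack: a is accepted iff b is not, and vice versa.
models = two_valued_models(
    ["a", "b"],
    {"a": lambda v: not v["b"], "b": lambda v: not v["a"]},
)
print(models)  # two models, accepting exactly one of a and b
```

The complexity results in the talk concern reasoning tasks over exactly such semantics, where naive enumeration like this is exponential in the number of statements.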
Logic programming with preferences and priorities (I would rather not, thank you)
There is a great deal of work in AI and in logic programming on preferences and priorities. The terms are used interchangeably, and have a wide range of meanings, sometimes purely technical, sometimes not. The state of the art is surprisingly disappointing. Systems produce different answers to the same examples, but apart from vague appeals to intuition, authors are usually unable to provide any reason why their treatment should be preferred over another. With very few exceptions the question is not even addressed. No system produces the correct answer in all cases. I will attempt to explain what "correct answer" means in this context, or at least some of the properties it should exhibit, and I will sketch an approach we have been groping towards that looks promising. There are a few loose ends. Prioritised rules have been applied in two very different use cases: to determine what is the case (or what an agent believes to be the case), and to determine what should be the case (or what an agent intends to make the case). We are interested in both applications, and in the extent to which the underlying principles differ for the two cases.
(joint work with Richard Evans)
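To fix intuitions about what "prioritised rules" means here, one very simple (and deliberately naive) reading is: a defeasible rule may fire only if no higher-priority rule has derived the complementary conclusion. The sketch below is my own toy illustration of that reading on the Tweety example; it is emphatically not the approach of the talk, whose point is precisely that such treatments need principled justification.

```python
def derive(facts, rules):
    """Forward chaining with prioritised defeasible rules.

    A rule (priority, body, head) fires when its body holds and the
    complement of its head has not already been derived (higher-priority
    rules are tried first, so they win conflicts).
    Literals are (atom, truth_value) pairs.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for _, body, head in sorted(rules, key=lambda r: -r[0]):
            atom, val = head
            if (all(lit in derived for lit in body)
                    and (atom, not val) not in derived
                    and head not in derived):
                derived.add(head)
                changed = True
    return derived

# Penguins are birds; birds fly (low priority) unless
# they are penguins (high priority).
rules = [
    (1, [("bird", True)], ("flies", True)),
    (2, [("penguin", True)], ("flies", False)),
    (3, [("penguin", True)], ("bird", True)),
]
out = derive({("penguin", True)}, rules)
print(("flies", False) in out, ("flies", True) in out)  # → True False
```

Even this tiny scheme makes contestable choices (e.g. blocking on any previously derived complement), which is the kind of unargued design decision the talk takes issue with.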
Engineering Agent Applications in QuLog
QuLog is an integrated higher order logic+function+action rules programming
language. It has evolved out of many years of teaching and using a multi-threaded
Prolog system (Qu-Prolog) for multi-agent applications. It is a state-of-the-art flexibly
typed rule language for agent software engineering.
A QuLog agent is a multi-threaded process in which each thread typically executes a non-terminating, recursively defined action that queries a BeliefStore of dynamic facts using rule-defined relations and functions - the declarative knowledge of the agent. Its action rules are its completely separate behavioural knowledge for programming its activity threads. They enable an activity thread to manipulate the agent's dynamic belief facts shared by all its threads, to fork new activity threads, and to communicate with other agents and applications.
QuLog relation rules are more declarative than Prolog clauses, having explicit universal and existential quantifications in rule bodies. Relation calls can have expression arguments, including set and list comprehensions.
Communication between QuLog agent processes and Python, Java or C/C++ processes is via Pedro, a high-level message router supporting both addressed and publish/subscribe communication. The Pedro publish/subscribe feature provides some of the functionality of an agent matchmaker.
Queries received by an agent can be evaluated by writing a type- and mode-safe meta-interpreter.
The presentation of QuLog will overview its key features, emphasising the use of its
action rules to program multi-tasking communicating agents.
Tarek R. Besold
A Casual Cognitive Stroll Through the Non-Classical Reasoning Garden(s)
Analogy and analogical reasoning are among the most studied representatives of a family of non-classical forms of reasoning that work across different domains.
In the first part of the talk, I will briefly introduce general principles of computational analogy models (relying on a generalization-based approach to analogy-making) and will take a closer look at Heuristic-Driven Theory Projection (HDTP) as an example of a theoretical framework and implemented system. HDTP computes analogical relations and inferences for domains which are presented in many-sorted first-order logic languages, using a restricted form of higher-order anti-unification to find common generalizations encompassing structurally shared elements common to both domains. The system presentation will be followed by a few reflections on the "cognitive plausibility" of the approach, motivated by theoretical complexity and tractability considerations.
The second part of the talk will discuss an application of HDTP to modeling essential parts of concept blending processes, a current "hot topic" in Cognitive Science. Here, I will sketch an analogy-inspired account of concept blending, developed in the European FP7-funded Concept Invention Theory (COINVENT) project, combining HDTP with mechanisms from Case-Based Reasoning.
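The anti-unification at the heart of HDTP generalises Plotkin's first-order least general generalization (lgg): differing subterms of two input terms are replaced by shared fresh variables, so repeated differences map to the same variable. The sketch below shows only the first-order case, with terms encoded as nested tuples; HDTP's restricted higher-order version is richer than this (the encoding and names here are my own).

```python
def anti_unify(t1, t2):
    """Least general generalization of two first-order terms.

    Terms are constants (strings) or tuples (functor, arg1, ...).
    Differing subterm pairs map to shared fresh variables, so e.g.
    f(a, a) and f(b, b) generalise to f(X0, X0), not f(X0, X1).
    """
    table = {}  # (subterm1, subterm2) -> variable name

    def go(a, b):
        if a == b:
            return a
        if (isinstance(a, tuple) and isinstance(b, tuple)
                and a[0] == b[0] and len(a) == len(b)):
            # Same functor and arity: recurse into the arguments.
            return (a[0],) + tuple(go(x, y) for x, y in zip(a[1:], b[1:]))
        # Mismatch: reuse the variable for this pair if seen before.
        if (a, b) not in table:
            table[(a, b)] = f"X{len(table)}"
        return table[(a, b)]

    return go(t1, t2)

print(anti_unify(("f", "a", "a"), ("f", "b", "b")))  # → ('f', 'X0', 'X0')
```

In HDTP this kind of shared generalization is what captures the structural commonality between source and target domains, from which analogical inferences are then projected.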