Invited Talks

Interpretable Artificial Intelligence

Interpretable artificial intelligence (IAI) refers to techniques whose decisions can be trusted and easily understood by humans. Unlike ‘black box’ approaches, IAI can be used to implement a social right to explanation, i.e. to explain why an AI system arrives at a specific decision. The technical challenge of explaining AI decisions is known as the interpretability problem. One possible approach to it is to carefully design and develop AI with respect to a formal syntax and semantics. In this talk, we will introduce the basics of computational logic (the notions of syntax, semantics, and proof theory) and its relationship to knowledge representation formalisms. We will also investigate the standard inferences and relate their computation to the transparency of AI decisions.
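To give a flavour of how a formal syntax and semantics make inference transparent, the following is a minimal sketch (not taken from the talk; all names such as `holds` and `entails` are illustrative) of propositional entailment decided by model enumeration: when an inference fails, the procedure returns a countermodel, which itself serves as the explanation.

```python
from itertools import product

# Syntax: formulas as nested tuples, e.g.
# ("and", ("var", "p"), ("not", ("var", "q")))

def atoms(f):
    """Collect the propositional variables occurring in a formula."""
    if f[0] == "var":
        return {f[1]}
    return set().union(*(atoms(sub) for sub in f[1:]))

def holds(f, model):
    """Semantics: evaluate a formula in a model (a dict var -> bool)."""
    op = f[0]
    if op == "var":
        return model[f[1]]
    if op == "not":
        return not holds(f[1], model)
    if op == "and":
        return holds(f[1], model) and holds(f[2], model)
    if op == "or":
        return holds(f[1], model) or holds(f[2], model)
    raise ValueError(f"unknown connective: {op}")

def entails(premises, conclusion):
    """Standard inference: the premises entail the conclusion iff no
    countermodel exists. Returns (True, None) on success, or
    (False, countermodel) where the countermodel explains the failure."""
    vs = sorted(set().union(atoms(conclusion),
                            *(atoms(p) for p in premises)))
    for values in product([False, True], repeat=len(vs)):
        model = dict(zip(vs, values))
        if all(holds(p, model) for p in premises) \
                and not holds(conclusion, model):
            return False, model
    return True, None

# Modus ponens: {p, p -> q} |= q, encoding p -> q as (not p) or q.
p, q = ("var", "p"), ("var", "q")
implies = ("or", ("not", p), q)
print(entails([p, implies], q))  # (True, None): the inference is valid
print(entails([implies], q))     # (False, {'p': False, 'q': False})
```

Here the explanation of a decision is not a post-hoc approximation but a semantic object, a model, produced by the inference procedure itself, which is the kind of transparency a logic-based design affords.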