The panel will address the following questions:
Can an AI discover, revise, and reject its own axioms, as well as the inference schemata, heuristics, and implementations used to reason over those axioms?
The word "reasoning" encompasses a wide range of activities for intelligent humans. Which of these have been omitted or neglected in AI in the past, but could potentially be subjected to some form of axiomatization and automated reasoning?
What would it take to do reasoning at "all of science" scale?
When is formal axiomatization indispensable to precise and broad-ranging reasoning — and when is it a hindrance? To what extent does this depend on the nature of the axiomatization?
Is an explicit context mechanism, and an axiomatization of context such as the Cyc MicroTheory system, necessary for reasoning in AI?
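To make the last question concrete, the sketch below (a hypothetical Python illustration, not Cyc's actual implementation or API) shows the core idea behind a microtheory-style context mechanism: assertions are true only within a named context, and contexts inherit assertions from more general parents, loosely analogous to Cyc's genlMt hierarchy, so the same query can receive different answers in different contexts.

```python
# A minimal sketch (hypothetical, not Cyc's actual API) of a
# microtheory-style context mechanism: each fact is asserted inside a
# named context, and contexts inherit facts from more general parents,
# loosely analogous to Cyc's genlMt links.

class Microtheory:
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)  # more general microtheories
        self.assertions = set()       # facts asserted directly in this context

    def assert_fact(self, fact):
        self.assertions.add(fact)

    def holds(self, fact):
        # A fact holds here if asserted locally or in any ancestor context.
        return fact in self.assertions or any(p.holds(fact) for p in self.parents)

base = Microtheory("BaseKB")
base.assert_fact(("isa", "Bat", "Mammal"))

novel = Microtheory("DraculaNovelMt", parents=[base])
novel.assert_fact(("isa", "Dracula", "Vampire"))

science = Microtheory("ModernScienceMt", parents=[base])

print(novel.holds(("isa", "Dracula", "Vampire")))    # True: true in the fiction
print(science.holds(("isa", "Dracula", "Vampire")))  # False: not asserted here
print(science.holds(("isa", "Bat", "Mammal")))       # True: inherited from BaseKB
```

The panel question is whether this kind of contextualization must be made explicit and axiomatized, as in Cyc, or whether reasoning in AI can leave it implicit.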