In this welcome note, I will address three things: how this workshop came about, the thought process behind the program, and some ground rules.
Let us start with some background on how this workshop came about.
At the May 2024 National Science Board meeting, Ken Forbus made the point that there are three big revolutions happening in AI today: Deep learning, knowledge graphs and automated reasoning. He argued that there are plenty of resources going into deep learning, and we need to re-balance our AI portfolio and invest in "knowledge graphs for common sense reasoning" and "human-like learning and reasoning methods".
Independently, back in 2021, Doug Lenat had the idea that he wanted to disseminate the knowledge engineering expertise he had built over the years. He wanted to do so by setting up a Knowledge Axiomatization Institute that would teach knowledge engineering at every level, from high school to computer science undergraduates and above. He pitched the idea of such an institute to the National Science Foundation.
Chaitan Baru, at the US National Science Foundation, had heard the pitch from Doug and was also listening in to the National Science Board meeting. He put these ideas together with his own work: he has been pioneering an initiative called the Proto Open Knowledge Network, through which a use-inspired knowledge network is being built with about $26.7M in funding from NSF.
Through Chaitan, NSF invited me to explore the feasibility of creating a Knowledge Resource that would do two things. First, it would serve as a hub for research, training and education on knowledge graphs and automated reasoning. Second, it would host repositories of knowledge based on open-source data.
The use of the phrase "Knowledge Axiomatization" in the name of the workshop is inspired by Doug, and we kept it as a homage to his pioneering vision in this area.
We use the word "Translational" in the name of the workshop because Chaitan is in the Directorate for Technology, Innovation and Partnerships. One of their missions is to translate technology advancements into practice. We are, however, not limited to discussing only translational activities at the workshop.
Let me now turn to the second topic and give you insight into the thought process behind the design of the program.
We have kept the first half of the first day to get acquainted, and to set some goals and expectations.
We propose to situate the opening discussion in the context that "curated knowledge" matters for AI.
A natural question that arises is what do we mean by knowledge?
We are using the phrase "Knowledge Axiomatization" in the title of the workshop.
That phrase suggests that logic-based knowledge is included, but our view of knowledge is much broader.
Knowledge could be in a constraint network, an integer program, or a hierarchical task network, just to name a few.
The key characteristic is that it is curated and verified by humans.
The next question one may ask is what is this knowledge about?
Is it world knowledge? Common sense knowledge? Domain-specific knowledge? Encyclopedic knowledge?
For the purpose of the workshop, we are using an all-inclusive term to refer to knowledge: "Foundational Knowledge". Foundational knowledge is knowledge that is useful and applicable across multiple use cases and domains, not limited to a single use case.
The bulk of the workshop is devoted to talking about foundational knowledge: How do we specify it? How do we reason with it? How do we acquire it? How do we measure the outcomes?
Many in the field are concerned that all the great work that has been done on KR&R and knowledge engineering is no longer being taught to the next generation. As a result, there are not many people who know the material, and in many instances, it is being reinvented. We have a session with the theme of modern education on knowledge axiomatization.
Our final session is to synthesize the outcomes of the discussions over two days, and chart out a plan for action. We want to answer two questions: What should TIKA do? How should it be organized?
Let me now turn to the third topic of some ground rules.
We have only two simple additions to AAAI's standard code of conduct.
Let us refrain from criticizing traditional symbolic AI. Let us figure out what makes the most sense in the current context and move it forward.
Let us refrain from criticizing LLMs. Let us learn from what works really well with this new technology, and see whether there is a way for us to leverage it.
With that I thank you all for coming, and let us have a great workshop!