【Topic 2: Quantum Intelligence. Subtopic 2.3: Artificial Intelligence Virtual Machine (AIVM)】
During 2017-2019, a team of engineers and I designed an Artificial Intelligence Virtual Machine (AIVM). This VM consists of three components: an IO virtual machine, a knowledge-base virtual machine, and an abstract virtual machine (all of them implemented in software). Since the AIVM is a very abstract concept that is not easily understood, I do not intend to describe it in this introduction. It can, however, be understood through an implementation built on Natural Language Processing (NLP), Complex Adaptive Systems (CAS), Multi-Agent Systems (MAS), and ontology programming. Most of these technologies have deep learning capabilities, but they still fall within the so-called AI 1.0 category.
[Conversation between machines and humans]
NLP is simply one format of the CAS's user interface (UI), the one that operates semantically; other formats include graphical interfaces, audio, and video. We usually refer to user input with "qualified" semantics (the semantics must be well-defined, requiring a moderate amount of symbol conversion) as a "SPEC".
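The idea of qualifying user input into a SPEC can be sketched in a few lines of Python. This is a minimal, hypothetical illustration only: the field names, keyword table, and rejection rule are assumptions for the example, not part of the AIVM design.

```python
from typing import Optional

# Fields a "qualified" SPEC must contain (illustrative, not from the AIVM spec).
REQUIRED_FIELDS = {"genre", "protagonist", "setting"}

# A moderate keyword-to-symbol conversion table (illustrative).
KEYWORD_MAP = {
    "martial arts": ("genre", "wuxia"),
    "swordsman": ("protagonist", "swordsman"),
    "ancient china": ("setting", "ancient_china"),
}

def to_spec(user_input: str) -> Optional[dict]:
    """Convert free text into a SPEC dict; return None when the semantics
    cannot be fully qualified (a required field is missing)."""
    text = user_input.lower()
    spec = {}
    for keyword, (field, symbol) in KEYWORD_MAP.items():
        if keyword in text:
            spec[field] = symbol
    return spec if REQUIRED_FIELDS <= spec.keys() else None

# A fully qualified request yields a SPEC; an underspecified one is
# rejected rather than guessed at.
print(to_spec("Write a martial arts story about a swordsman in ancient China"))
print(to_spec("Write a story about a swordsman"))
```

The point of the sketch is the rejection path: input whose semantics cannot be qualified is refused, not silently completed.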
Now we are going to talk about AI 2.0, or generative AI. If you like martial arts novels, the picture below shows the power of NLP with generative AI. The martial arts novel fragment in the picture was written by GPT-2 (Generative Pre-trained Transformer 2). If you provide some simple "SPEC" information, you too can have the machine automatically generate the Chinese martial arts novel you want. NLP based on deep learning is a rapidly developing research field that can bring conversation quality close to that of an ordinary person. However, if you look closely at this fan-page image, you will see that the sentence quality is poor, and the logic and story structure are weak.
In November 2022, OpenAI took the lead in releasing ChatGPT, based on GPT-3.5, which developed NLP to a remarkably polished level and can conduct serious conversations. It not only passes the traditional Turing Test (that is, neither grammar nor semantics reveals that a machine is talking); it can seem more knowledgeable than a hundred professors, conversant "in astronomy above and geography below," as the Chinese idiom goes. The technology uses so-called Transformer deep learning to turn knowledge on the Internet into training datasets and then converses with humans by generating text in sequence. GPT-3.5 can already perform semantic analysis of conversation content. The fly in the ointment is that GPT has a fundamental hallucination problem: it often conjures up unreal answers to questions about which it lacks knowledge. Traditional semantic analysis methods take a different path: for example, Boolean networks can organize the "ideographic symbols" (tokens) in conversations into large hierarchical semantic networks, and ontology programming can process a Basic Formal Ontology (BFO) with Python-based ontology tools to build a knowledge base whose knowledge items are interconnected. This method has no hallucination problem at all, but it does not handle large data easily, and it is hard to generate a BFO automatically. Can traditional semantic analysis be combined with GPT to remove the hallucination problem? This is still at the research stage.
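The contrast between the two approaches can be made concrete with a toy hierarchical semantic network: tokens linked by is-a relations, with knowledge items attached to nodes. All class names and facts below are illustrative assumptions, not a real BFO; a production system would use a proper ontology toolkit. The key property is that a query outside the network returns nothing rather than a fabricated answer.

```python
# A toy hierarchical semantic network in the spirit of the traditional
# approach: tokens connected by is-a links, facts attached to tokens.

class SemanticNetwork:
    def __init__(self):
        self.parents = {}   # token -> parent token (is-a hierarchy)
        self.facts = {}     # token -> list of known fact strings

    def add_is_a(self, child, parent):
        self.parents[child] = parent

    def add_fact(self, token, fact):
        self.facts.setdefault(token, []).append(fact)

    def lookup(self, token):
        """Collect facts for a token and all of its ancestors.
        An unknown token yields an empty list -- never a hallucination."""
        collected = []
        while token is not None:
            collected += self.facts.get(token, [])
            token = self.parents.get(token)
        return collected

net = SemanticNetwork()
net.add_is_a("swordsman", "warrior")
net.add_is_a("warrior", "person")
net.add_fact("person", "is mortal")
net.add_fact("warrior", "trains in combat")

print(net.lookup("swordsman"))   # inherits facts up the hierarchy
print(net.lookup("dragon"))      # unknown token: empty list, no guessing
```

This also hints at the stated weakness: every link and fact here had to be entered by hand, which is exactly what does not scale to large data.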
[How to solve problems with the machine]
In recent years, Google, OpenAI, Microsoft, and Facebook have continued to improve GPT technology. In the future, ChatGPTx.0 will not just use words to express NLP; it will also "use its brain" to solve problems, producing intelligent answers or creative solutions. Traditionally, these goals have been pursued by using ontology programming to interconnect the knowledge items behind NLP, and by using Complex Adaptive Systems and Multi-Agent Systems to process this network. However, it is still unknown how far a future ChatGPTx.0 problem solver can go, and which of its shortcomings will need to be compensated for by traditional methods.
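The multi-agent side of this traditional approach can be sketched minimally: several agents each handle the kind of subproblem they are competent in, and a coordinator merges the partial answers. The agent roles, skill labels, and blackboard-style loop below are illustrative assumptions, not the AIVM's actual MAS design.

```python
# A minimal multi-agent sketch: agents with narrow skills, plus a
# coordinator that routes each subproblem to a capable agent.

class Agent:
    def __init__(self, name, skill):
        self.name = name
        self.skill = skill  # the kind of subproblem this agent can solve

    def try_solve(self, subproblem):
        kind, payload = subproblem
        if kind != self.skill:
            return None
        # Toy "solution" standing in for real reasoning.
        return f"{self.name} solved {kind}: {payload}"

def coordinate(agents, subproblems):
    """Blackboard-style loop: each subproblem goes to the first agent
    that can handle it; unsolved parts are reported, not invented."""
    results = []
    for sub in subproblems:
        for agent in agents:
            answer = agent.try_solve(sub)
            if answer is not None:
                results.append(answer)
                break
        else:
            results.append(f"unsolved: {sub[0]}")
    return results

agents = [Agent("Planner", "plan"), Agent("Calculator", "math")]
tasks = [("plan", "outline chapter"), ("math", "2+2"), ("art", "draw cover")]
for line in coordinate(agents, tasks):
    print(line)
```

As with the semantic network, unsolved subproblems are surfaced explicitly, which is the behavior one would want a future ChatGPTx.0 problem solver to preserve.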