I had been searching for answers to an old question that had bugged me for years: is there anything like 'Chinese logic'? By Chinese logic I mean odd reasoning patterns that do not seem to follow what in the West is called 'logical thinking'. There is plenty of that in the world.
The question, after the laughs and the jokes, was more or less animatedly debated on various online discussion groups, including some frequented by notable scholars. Many answers, especially those from clever mathematicians, seemed to suggest that all logical systems are based on FOL (first-order logic), and if it's not FOL, it is not logical.
Hm. Admittedly I am no logician, but having travelled far and wide and lived in different countries, I was under the impression that people in different parts of the world reason somewhat differently (whether in FOL or not, frankly, I could not care less), and I wondered: if so, why?
One particular online search yielded a book entitled 'Chaotic Logic', by Ben Goertzel.
Chapter six starts with a brief exposition of the Sapir-Whorf hypothesis, which explains (proves?) how 'language shapes categories', and reminds readers that Lakoff also specifically addresses that point. Section 6.1.2 (to my joy) is entitled Chinese and Western Modes of Thought, and it actually provides additional arguments in favour of a Chinese logic hypothesis. This made me laugh (in a good way), and I felt that maybe my suppositions about Chinese logic were not that outrageous. My quest by now had drifted. Who is this researcher who does not think of AI narrowly like others I have come across before? Are there others like him? What school of thought do they belong to? It turns out that Goertzel, in addition to being an AI researcher, a book writer, an entrepreneur and a family person, is also an aspiring immortal.
A Wikipedia entry says he is a member of the Order of the Cosmic Engineers.
Well, I am rather open-minded (being myself the founder of the Order of the Masters of the Universe, I am in no position to raise eyebrows).
It also looks like Goertzel is leading the research effort in AGI, which stands for 'Artificial General Intelligence', something I had not heard of before. The AGI community is holding its third annual conference in Lugano later this week. The conference was scheduled to have a keynote speech by Ray Solomonoff, pioneer of Machine Learning, founder of Algorithmic Probability Theory, father of the Universal Probability Distribution, and creator of the Universal Theory of Inductive Inference, who sadly passed away last November. A memorial Solomonoff lecture is scheduled, and I hope to be there.
I thought I should meet these legendary AGI folks, and before doing so it seemed a good idea to try to get a better understanding of what AGI is all about, so I dug in. Ben kindly answered the following questions by email, edited for legibility.
Can you provide some insights into the programme of the conference?
AGI is a subset of the broad field of AI, focused on making systems that have the same sort of capability for learning, self-understanding and generalization that humans possess. Much of the work presented will be theoretical in nature. Significant progress is being made in joining together the abstract mathematical theory of intelligence with practical work being done building intelligent systems such as humanoid robots, and this theoretical unification should lay the groundwork for exciting future progress in the field. Regarding what may be most immediately interesting to the layperson, we will have a demonstration of some intelligent robots that learn from experience, from Juergen Schmidhuber's lab at IDSIA; some demonstrations of machine learning software developed at Google, which is used inside some of their commercial applications to supply greater linguistic intelligence; and some demonstrations of intelligent virtual dogs in an online virtual world, which possess the ability to learn and communicate in simple language, and will be embodied in a children's game launching in early 2011.
Can you summarise briefly how AGI differs from narrow AI? Is it research methods, paradigms, implementation, philosophy?
AI is a broad field encompassing many kinds of intelligent computer systems, but over time it has come to focus mainly on highly specialized, task-specific intelligent systems, i.e. "narrow AI". AGI is a subset of the broad field of AI, focused on making systems that have the same sort of capability for learning, self-understanding and generalization that humans possess. Examples of practical AGI systems would be:
- a robot that could go to preschool or college
- an online question-answering system that could answer questions with real understanding
- an artificial scientist that could come up with its own research ideas, do the research, and write a paper on it
Designing a robot that learns like a human would be great, but surely sending it to school would only result in unnecessary fees? That does not sound very practical from where I stand.
Of course a robot college student is not that useful, it's just an example of a criterion for AGI. If you could make a robot college student, you could also make a robot scientist that would do everything a real scientist does and better...
An online Q/A system that works would be helpful, like an intelligent knowledge base, right? I can see the use for that -
"Ask Jeeves" didn't work -- current question-answering systems are very crude and display "understanding" only in very limited domains to which they were specially customized.... There are no computer programs that can carry out halfway intelligence conversations, not even in simple roles like librarian or customer support agent...
In terms of 'results' what can we say is an achievement?
There are many proposed tests for "human-level AGI", such as:
- ability to pass a university class
- ability to pass the third grade
- the Turing Test (ability to fool judges that it's a human in a conversation)
- ability to write and publish a high-quality scientific paper based on its own ideas and work
However, evaluating **interim progress** toward these grand goals is a difficult problem and a source of contention within the field, and is in fact the topic of the final workshop at the conference, the "AGI Roadmap" workshop.
What would be the best examples of AGI that we can look at now? Or is it still all work in progress?
It's all work in progress. But the example demonstrations mentioned above are about as good as anything...
What is the common denominator of AGI approaches? I mean, why would the examples you mention fall under AGI; what makes them so?
The common denominator is the ability to transcend particular specialized domains, and to transfer what is learned in one domain to help understand a different domain. To do this in an everyday human context turns out to be a subtle thing, and to require such phenomena as an understanding of self and others.
In what way does AGI achieve the above, that AI did not?
Well, no one has yet achieved human-level or transhuman general intelligence in a machine. However, the **goal** of AGI research is different from the **goal** of most AI research. If your goal is to make a machine that can spot fraudulent credit card transactions, drive a car, or win at chess, then probably that is what you're going to create -- a machine that can do **just that** and nothing else. The original founders of AI didn't know enough to draw a distinction between narrow-AI and AGI ... they thought that one could work toward AGI via narrow-AI.... They didn't understand that one could create a champion chess program that would operate in such a non-human-like manner and lack general intelligence...
Many AI researchers **still** think that way -- they still think that by creating highly specialized systems solving particular problems intelligently, one can work toward human-level AGI. But most of us who classify ourselves as "AGI researchers" think differently: we think that to achieve human-like AGI, one is going to have to directly and concentratedly work toward that goal, rather than working toward specialized applications...
You still have not said much about 'how' general intelligence is achieved. Are there specific design features, for example, that would characterise an AGI system in terms of architecture, or something....
Different researchers have different theories of how to make an AGI ... Some look to neuroscience, some to psychology, some to comp sci and math for inspiration ... There is no consensus on what is the right path; this is a topic of the conference.
How do you see the role of the web in relation to AGI? How do you think we can make the web more intelligent?
AGI and the Web, hmmm? Well, if you think about Tim Berners-Lee's "Semantic Web" notion, you can see that it probably won't be realizable till something resembling human-level AGI has been created. The Semantic Web is based on the idea that every Web page will be marked up with structured meta-data describing the meaning of its contents: e.g. prices will be marked up as prices in appropriate currencies, opinions will be marked with their holders and their strengths, etc. If this markup were there, it would allow the Web to be searched, organized and utilized in all sorts of new ways. But the reason the idea has not caught on is that people who make Web pages don't want to be troubled to add all that markup data to the pages they write. What is needed is an automated (AI) system to read pages and add the semantic markup. But Google and many other companies have tried to achieve this with narrow-AI natural language processing technology, and succeeded only to a very limited extent. Some sort of AGI with deeper understanding of the meaning of the text seems to be required. Of course, from the AGI's point of view, once it can read the text on the Web with understanding, it will go from being an ignorant baby mind to the most knowledgeable mind on the planet...
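The markup idea Goertzel describes can be made concrete with a small sketch. The snippet below is a hypothetical illustration in Python (the field names and structure are my own, not any actual Semantic Web standard such as RDF) of the difference between the raw text a machine sees today and the structured metadata the Semantic Web would attach to it:

```python
# A raw sentence as it appears on a Web page today: software sees only a
# string, with no notion of what the numbers or opinions inside it mean.
raw_text = "The camera costs 299 USD and reviewers think it is excellent."

# The same content with hypothetical Semantic Web-style annotations: the
# price is typed as a price in a specific currency, and the opinion
# carries its holder and a strength, so software can search and compare.
annotated = {
    "text": raw_text,
    "annotations": [
        {"span": "299 USD", "type": "price", "amount": 299, "currency": "USD"},
        {"span": "excellent", "type": "opinion",
         "holder": "reviewers", "strength": 0.9},
    ],
}

# With the markup in place, structured queries become trivial: find every
# price on the page, regardless of how the surrounding prose is phrased.
prices = [a for a in annotated["annotations"] if a["type"] == "price"]
print(prices[0]["amount"], prices[0]["currency"])  # 299 USD
```

The hard part, as Goertzel notes, is producing annotations like these automatically from free text, which is exactly where narrow-AI language processing has fallen short.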
AGI 2010, University of Lugano 5-8 March