Chalmers & The Singularity

In his article about the Singularity, David Chalmers briefly traces the history of the concept of "singularity" and then gives a clear, precise and almost exhaustive description of some of AI's possible outcomes. Although I mainly agree with what is said in the article, I have some different background assumptions.

Different assumptions:

First, I believe that the most complex forms of intelligence are the ones we share with most other living beings. If we are to create programs that emulate natural intelligence, it will be very difficult to construct a robot that emulates a horse or even a turtle. Perceiving and acting on a world so diverse, full of dangers, food, mates, obstacles, and myriads of different species and objects, seems much more "intelligent" than making a calculation or measuring the speed of two falling bodies. In other words, I believe the word "intelligence" is misapplied if animals don't fall under it.

Second, it seems to me that humans, individually, are not much smarter than most mammals. Left alone in a forest, a human baby would seem no more intelligent than any other primate. What makes us genuinely smarter is our ability to absorb and make use of the vast set of strategies, of successful algorithms, that our ancestors left us as their most important legacy. It is this legacy that allows us to behave differently, to look smarter than other species. We are indebted to symbolic language and to the ability to imitate the behavior of others. Language has allowed our species to replicate whatever works, be it in society's ways of acting or in physics and astrophysics.

Third, I also believe that most of our intelligent behavior is due to the fact that we are in constant contact with the physical world, which helps us by correcting wrong expectations and by providing a "working memory" far vaster than we could access through the brain alone.

Fourth, it also seems to me that we are already artificial intelligences, you and I: created by "programmers" hundreds of thousands of years ago and systematically improved over the eons. In societies and families where there is freedom of thought and of speech, there are many more algorithms to choose from, and each of us has the ability to build himself by choosing, from an almost endless pool of different algorithms and strategies, what he thinks is best. He or she then tries them out, sees what works better, experiments with different combinations, and so on, until satisfied with his or her "self". We are artificial systems in which the I sometimes plays the part of the main programmer, although in closed societies, where fear, guilt and superstition predominate, there is considerably less freedom to choose. The ability of the I to choose its own algorithms also depends on personal cognitive abilities and on the emotional capacity to stand apart. For some of us, whatever works is just fine: if we find a way to deal with life that gets us money, friends and all the other practicalities, then many of us won't think further, even if many mysteries remain unaccounted for in one's perspective.

Fifth, given that the world is already full of artificial intelligences, namely current humans, it will be very difficult to expand these artificial intelligences beyond the biological constraints they now face. This can be attempted in a variety of ways: eugenics, artificial implants, the use of technology, books and culture, or the creation of independent technological artifacts that can maintain themselves. This last avenue seems to me, given my assumptions, the most time-consuming of all, although the most promising one if we are speaking on a time scale of thousands of years. For now, it seems much more plausible that the advance of the existing artificial intelligence will be accomplished by incrementally improving the systems that already exist (ourselves). And this is already being done: from the internet to the PDA, we are devising a range of systems that greatly improve our global intelligence. These systems work with us, not against us or in competition with us. Like many species in nature, we have a symbiotic relationship. In the long run things may change, but not in the next few hundred years.

Sixth, I do not think that a computer simulation of a human brain, even if successful, could bring much intelligence to the world. Given my assumption that humans are individually unintelligent, I do not think such a simulated being could bring much novelty into the world. He would probably feel at a loss if his ability to trade information with other humans were hampered by his digital state. Even if we could run "him" at higher speed, we would only get a faster human, in the sense of a more stressed-out one, not a more intelligent one. So I don't put much hope in whole-brain simulations. I think that intelligence will continue to "amplify", to use Vinge's (1993) term, using enhanced memory, information processing, virtual simulators (like The Sims), and the dissemination of information (some people call it "piracy"). Eventually, biological interfaces with the physical world may also help us augment our life expectancy, which might in turn help amplify intelligence.

Chalmers' article:

Concept of singularity

Chalmers begins with the recent history of AI and of the concept of "singularity", introduced by Vinge (1993), and its trajectory over the last decades. Vinge himself attributes the notion to von Neumann, via Ulam's 1958 tribute. Chalmers identifies an earlier concept of "intelligence explosion", introduced by I. J. Good (who himself grounds the concept in other sources); see his 1965 article. Good concludes that, due to this intelligence explosion, "the first ultraintelligent machine is the last invention that man need ever make".

Chalmers distinguishes the intelligence explosion thesis from the "speed explosion" thesis, based on Moore's law. He says that Solomonoff was the first to set out this speed explosion thesis (Solomonoff 1985). Chalmers also notes that the speed and intelligence explosions are logically independent (we can have one without the other), but that they "work particularly well together". When both are brought together, we can imagine an advance in AI so rapid that we cannot predict its future development; in that sense we have something akin to a singularity in physics, a point beyond which prediction fails.
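
To make the speed explosion thesis concrete, here is a toy arithmetic sketch (my illustration, not Chalmers' own numbers): suppose the first doubling of processor speed takes 2 years of design work, and that, because the designers themselves then run twice as fast, each subsequent doubling takes half the previous time. The total time for infinitely many doublings is then 2 + 1 + 1/2 + 1/4 + ... = 4 years, a finite-time "singularity" in the mathematical sense.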

It is difficult for me to conceive of a speed advance that is not accompanied by an advance in artificial intelligence. If we have new technologies, increased memory and processor speed, then even with the same kinds of software we could do more (be more intelligent). PDAs, for instance, allow us today to be better connected, more organized, and to carry tons of information with us at all times. This is an advance in general intelligence, our intelligence, but artificial nevertheless. On the other hand, if we conceive of artificial intelligence as requiring artificial hardware, then it is also difficult to understand how intelligence could "explode" without a large increase in the speed of the hardware that supports it.

Of course, if we accept the view that modern man is already an "artificial" entity, an "artificial man", then an intelligence explosion is something we have already seen in the past, in some western cultures and, in the more distant past, in other cultures as well. We can also see how such an explosion comes about and which social conditions allow it to occur. One of them, of course, is not an increase in the speed of our brains. It is, in that case, an improvement in the software, the knowledge available to human societies: knowledge used to educate children, create ideals and values, define appropriate behaviors for men, women, children and all that surrounds us, design technologies, understand the world, and so on. This improvement is almost completely independent of the capabilities of the individual brain, since, on the one hand, it can be distributed across different brains (specialization) and, more importantly, it can be stored outside of brains (written materials, vinyl records, photographic plates, etc.). So it might be that the intelligence explosion was in part connected to the memory explosion we saw when writing and the printing press came into use. In that case it is not an increase in speed, but it is nevertheless an increase in "hardware" capabilities.

Physical limits of the singularity

The basic idea is, of course, that intelligent creatures can generate even more intelligent creatures, leading to a continuous increase in intelligence. This, however, does not seem obvious, because man seems to be the first creature on the planet able to use its natural intelligence to design more intelligent successors. Many animals are intelligent, and yet they cannot share their intelligence with other animals; at best they may hope that their descendants or partners will be as smart as they are, but not much more than that.
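
The recursive idea can be put in a toy form (my sketch, with an assumed fixed improvement factor that Chalmers' proportionality reasoning only gestures at): if a system of intelligence I can design a successor of intelligence at least 1.1 × I, and that successor can do the same, then after n generations we have at least 1.1^n × I, which grows without bound. Everything hinges on that factor staying above 1, which is exactly what the structural obstacles noted at the end of these notes (diminishing returns, limits in intelligence space, failure of takeoff) call into question.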

Kinds of intelligence: motor (equilibrium, etc.)

    • Chalmers on the Singularity, good references and clarity.

      • "biggest bottleneck on the path to AI is software, not hardware"

      • "Perhaps the most important remaining form of resistance is the claim that the brain is not a mechanical system at all, or at least that nonmechanical processes play a role in its functioning that cannot be emulated. If these processes cannot be emulated or artificially created, then it may be that human-level AI is impossible." This might be valid regarding the "self" / "I", but not regarding the cognitive capabilities of the self, which are already improved with technological resources (both memory (pcs), senses (glasses and hearing aids) and reasoning (calculators))

      • "It must be acknowledged that every path to AI has proved surprisingly di.cult to date. The history of AI involves a long series of optimistic predictions by those who pioneer a method, followed by a periods of disappointment and reassessment. This is true for a variety of methods involving direct programming, machine learning, and artificial evolution, for example."

      • Alan Perlis has suggested “A year spent in artificial intelligence is enough to make one believe in God”.

      • It is worth noting that in principle the recursive path to AI++ need not start at the human level. [...] So in principle the path to AI++ requires only that we create a certain sort of self-improving system, and does not require that we directly create AI or AI+. In practice, the clearest case of a system with the capacity to amplify intelligence in this way is the human case [language is clearly this capacity for self-improvement; in this perspective, the "path to AI" started directly with the invention of language, a system that can self-improve in the sense that, in the long run, it is always augmenting its intelligence.]

      • Many or most of these [cognitive] capacities are not themselves self-amplifying

      • [Artificial intelligence: we are it! We are artificially intelligent creations living in the bodies of primates, of primitive humans. From primitive humans to doctors, nurses and football players. There is no break on the way to further AI, but continuity: from oral traditions (the Odyssey), to science, to the printing press, small steps in the creation of an artificial man, which we already are and have been for centuries!]

      • [capacities that worked: memory, language (the ability to imprint specific states of mind in others), mathematics, logical thinking, clarity!]

      • Structural obstacles: limits in intelligence space, failure of takeoff, diminishing returns.