Chalmers' article on the singularity

In his article about the Singularity, David Chalmers briefly traces the history of the concept of "singularity" and then gives a clear, precise and almost exhaustive description of some of AI's possible outcomes. Although I mainly agree with what is said in the article, I have some different background assumptions.

He begins with the recent history of AI and the concept of "singularity", introduced by Vinge in 1993. However, the concept has a longer history: Vinge himself attributes the notion to von Neumann, through Ulam's 1958 tribute. Chalmers identifies an earlier concept of "intelligence explosion" introduced by I.J. Good (who himself draws on other sources); see his 1965 article. Good concludes that, due to this intelligence explosion, "the first ultraintelligent machine is the last invention that man need ever make".

Chalmers distinguishes the intelligence explosion thesis from the "speed explosion" thesis, which is based on Moore's law. Chalmers says that Solomonoff was the first to put forward this speed explosion thesis (Solomonoff 1985). Chalmers also says that the speed and intelligence explosions are logically independent (we can have one without the other), but that they "work particularly well together". When both are brought together we can imagine an advance in AI so rapid that we cannot predict its future development; so, in a way, we have something akin to a singularity in physics, in the sense that it is not predictable.

For me it is difficult to conceive of a speed advance that is not accompanied by an advance in intelligence, unless it is somehow not reflected in new technologies: increased memory, processor speed, etc. PDAs, for instance, allow us today to be better connected, more organized, and to carry vast amounts of information with us at all times. This is an advance in general intelligence, our intelligence, but an artificial one nevertheless.

Physical limits of the singularity.

The basic idea is, of course, that intelligent creatures can generate even more intelligent creatures, leading to a continuous increase of intelligence. This, however, does not seem obvious, because man seems to be the first creature on the planet able to use its natural intelligence to design more intelligent successors. Many animals are intelligent, and yet they cannot share their intelligence with other animals; at best they may hope that their descendants or partners will be as smart as they are, but not much more than that.

Kinds of intelligence: motor (balance, etc.)

    • Chalmers on the Singularity, good references and clarity.

      • "biggest bottleneck on the path to AI is software, not hardware"

      • "Perhaps the most important remaining form of resistance is the claim that the brain is not a mechanical system at all, or at least that nonmechanical processes play a role in its functioning that cannot be emulated. If these processes cannot be emulated or artificially created, then it may be that human-level AI is impossible." This might be valid regarding the "self" / "I", but not regarding the cognitive capabilities of the self, which are already improved with technological resources: memory (PCs), the senses (glasses and hearing aids) and reasoning (calculators).

      • "It must be acknowledged that every path to AI has proved surprisingly difficult to date. The history of AI involves a long series of optimistic predictions by those who pioneer a method, followed by periods of disappointment and reassessment. This is true for a variety of methods involving direct programming, machine learning, and artificial evolution, for example."

      • Alan Perlis has suggested: "A year spent in artificial intelligence is enough to make one believe in God."

      • "It is worth noting that in principle the recursive path to AI++ need not start at the human level. [...] So in principle the path to AI++ requires only that we create a certain sort of self-improving system, and does not require that we directly create AI or AI+. In practice, the clearest case of a system with the capacity to amplify intelligence in this way is the human case." [Language is clearly such a capacity for self-improvement; in this perspective, the "path to AI" started directly with the invention of language, a system that can self-improve in the sense that, in the long run, it is always augmenting its intelligence.]

      • Many or most of these [cognitive] capacities are not themselves self-amplifying

      • [Artificial intelligence: we are it! We are artificially intelligent creations living in the bodies of primates, primitive humans. From primitive humans to doctors, nurses and football players. There is no break between us and further AI, but continuity: from oral traditions (the Odyssey), to science, to the written press, small steps in the creation of an artificial man, which we already are and have been for centuries!]

      • [Capacities that worked: memory, language (the ability to imprint specific states of mind in others), mathematics, logical thinking, clarity!]

      • Structural obstacles: Limits in intelligence space - Failure of takeoff - Diminishing returns.