Different assumptions

First, I believe that the most complex forms of intelligence are the ones we share with most other living beings. If we are to create programs that emulate natural intelligence, it will be very difficult to build a robot that emulates a horse or even a turtle. Perceiving and acting on a world so diverse, full of dangers, food, mates, obstacles, and myriad different species and objects, seems much more "intelligent" than making a calculation or measuring the speed of two falling bodies. In other words, I believe the word "intelligence" is misapplied if animals don't fit within it.

Secondly, it seems to me that humans, individually, are not much smarter than most mammals. If left alone in a forest, a human baby would seem no more intelligent than any other primate. What makes us smarter is our ability to absorb, and make use of, the vast set of strategies, of successful algorithms, that our ancestors left us as their most important legacy. It is this legacy that allows us to behave differently, to look smarter than other species. We are indebted to symbolic language and to the ability to imitate the behavior of others. Language has allowed our species to replicate whatever works, be it in society's ways of acting or in physics and astrophysics.

Third, I also believe that most of our intelligent behavior is due to the fact that we are in constant contact with the physical world, which helps us by correcting wrong expectations and providing a "working memory" far vaster than we could access through the brain alone.

Fourth, it also seems to me that we are already artificial intelligences, you and me, created by "programmers" hundreds of thousands of years ago and systematically improved over the eons. In societies and families where there is freedom of thought and of speech, there are many more algorithms to choose from, and each of us has the ability to build ourselves by choosing, from an almost endless pool of different algorithms or strategies, whatever we think is best. We then try them out, see what works better, experiment with different combinations, and so on, until we are satisfied with our "self". We are artificial systems in which the I sometimes plays the part of the main programmer, although in more closed societies, where fear, guilt and superstition predominate, there is considerably less freedom to choose. The ability of the I to choose its own algorithms also depends on personal cognitive abilities and on the emotional capacity to stand apart. For some of us, whatever works is just fine: if we find a way of dealing with life that gets us money, friends and all the other practicalities, then many of us won't think further, even if there are many mysteries unaccounted for in our perspective.

Fifth, given that the world is already full of artificial intelligences, namely current humans, it should not be very difficult to expand these artificial intelligences beyond the biological constraints they now face. This can be done in a variety of ways: eugenics, artificial implants, the use of technology, books and culture, or the creation of independent technological artifacts that can maintain themselves. This last avenue seems to me, given my assumptions, the most time-consuming of all, although also the most promising if we are speaking on a time scale of thousands of years. For now, though, it seems much more plausible that the advance of the existing artificial intelligence will be accomplished by incrementally improving the systems that already exist (ourselves). And this is already what is being done: from the internet to the PDA, we are devising a range of systems that greatly improve our global intelligence. These systems work with us, not against us or in competition with us. Like many species in nature, we have a symbiotic relationship. In the long run things may change, but not in the next few hundred years.

Sixth, I do not think that a computer simulation of a human brain, even if successful, would bring much intelligence to the world. Given my assumption that humans, individually, are not very intelligent, I do not think such a simulated being could bring much novelty into the world. It would probably feel at a loss if its ability to trade information with other humans were hampered by its digital state. Even if we could run "him" at higher speed, we would only get a faster human, in the sense of a more stressed-out one, not a more intelligent one. So I don't put much hope in such brain simulations. I think that intelligence will continue to "amplify", to use Vinge's (1993) term, through enhanced memory, information processing, virtual simulators (like The Sims), and the dissemination of information (some people call it "piracy"). Eventually, biological interfaces with the physical world will also help us augment our life expectancy, which might in turn help amplify intelligence.