Our Machine Destiny

“Men have become the tools of their tools.” - Henry David Thoreau


Calling all technophobes: stand up and be counted, for you are a rare breed indeed; if you do not adapt, you will become but a blip in human history. Computers and their machine relatives are taking over the world. Soon everything, including human beings, will have a piece of computer technology embedded in it. The computers of today have far exceeded the predictions of the most imaginative science fiction writers; it is one area where the science is ahead of the fiction. In spite of the enormous successes of computer technology, however, not everyone is thrilled about it.

The idea that not everybody is thrilled with the pace of technology is not lost on speculative fiction writers, especially the early ones, for whom technological change was nothing short of revolutionary. One such future, where machines are banished and people are allowed to be people, is found in Erewhon, written in 1872 by Samuel Butler. In 1890, William Morris wrote News from Nowhere, which features a man who finds himself in a society where machines have been banished and people work at whatever interests them. A more recent novel, Frederik Pohl’s The Age of the Pussyfoot (1969), describes rebels who try to regain control from a benign leadership made up of computers.

In spite of some fears, especially over the invasion of privacy, computer technology is forging ahead. Some people are genuinely fearful, whereas others do not really care, seeing it as the price of progress. There are now computers in our cars, our kitchen appliances, our tools, our phones, and our heating and air conditioning systems, to name a few. Computers are also essential to almost every facet of modern-day life, including medicine, office management, security, law enforcement, the military, and traffic control and planning.

Though early writers could not have predicted the sophistication of today’s computers, such machines were certainly on their minds; speculative fiction envisioned computers before they existed and came into common use. Jonathan Swift’s Gulliver’s Travels, written in 1726, describes “The Engine,” a mechanical information generator that is, for all intents and purposes, a computer. In 1909, the English writer E. M. Forster published a short story entitled The Machine Stops, which describes the role of technology in our everyday lives.

Authors from the classical age of science fiction also wrote stories featuring computers. One of the earliest is the ship navigation computer found in Robert Heinlein’s Misfit, written in 1939. A. E. van Vogt, in 1945, wrote about the Games Machine, which plays an important role in his classic The World of Null-A.

The concept of the modern computer is not new either, going back to the 1800s. Charles Babbage was an English mathematician and mechanical engineer who envisioned the concept of a programmable computer. In 1822, following work begun in 1819, Babbage created the difference engine, a machine made to compute the values of polynomial functions. It was powered by a hand crank. The British government of the day was interested in further development of the machine but abandoned the project in 1842, when Babbage was unable to refine his invention further. In 1991, researchers at the Science Museum in London were actually able to recreate a difference engine from plans drawn up by Babbage. William Gibson and Bruce Sterling wrote an alternate history novel, The Difference Engine, about a world in which Charles Babbage goes beyond his difference engine and actually builds a supercomputer.
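
The mathematical trick the difference engine mechanized, the method of finite differences, reduces the tabulation of any polynomial to repeated addition, something gears can do. Below is a minimal sketch in Python; the function names are my own, and the demonstration polynomial x^2 + x + 41 is the prime-generating example Babbage is said to have used to show off his prototype.

```python
# A minimal sketch of the method of finite differences that Babbage's
# engine mechanized: tabulating a polynomial using only addition.

def tabulate(poly, start, step, count):
    """Tabulate poly(x) at count points using repeated addition only."""
    degree = len(poly) - 1

    def p(x):  # evaluate the polynomial exactly, to seed the table
        return sum(c * x**i for i, c in enumerate(poly))

    values = [p(start + i * step) for i in range(degree + 1)]
    # Build the columns of finite differences from the seed values.
    diffs = [values[:]]
    while len(diffs[-1]) > 1:
        prev = diffs[-1]
        diffs.append([b - a for a, b in zip(prev, prev[1:])])
    column = [d[0] for d in diffs]  # leading entry of each difference order

    # For a degree-n polynomial the nth difference is constant, so each
    # new value needs only n additions -- one turn of the crank.
    results = []
    for _ in range(count):
        results.append(column[0])
        for i in range(len(column) - 1):
            column[i] += column[i + 1]
    return results

print(tabulate([41, 1, 1], 0, 1, 8))  # x^2 + x + 41: 41, 43, 47, 53, ...
```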

The first personal computers became available to the public in the early 1970s, but they did not hit the mass market until the 1980s. Since then, computer technology has been growing by leaps and bounds. The key to their success is the microchip, the popular name for the integrated circuit chip: essentially an electronic circuit in which all of the components are imprinted on a small piece of silicon.

Today, most computers are organized around central processing. A central processing unit, or CPU, is where instructions, in the form of programs, are interpreted and carried out. It is connected, in turn, to one or more memory units and to input devices such as scanners, keyboards and joysticks, and output devices such as printers, monitors and speakers.
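
The cycle a CPU endlessly repeats (fetch an instruction from memory, decode it, execute it) can be sketched in a few lines. This toy interpreter is an illustration only: the named instruction set is invented, and a real processor works on binary opcodes, but the loop structure is the same.

```python
# A toy sketch of the fetch-decode-execute cycle described above.
# The instruction set here is invented for illustration.

memory = [("LOAD", 7), ("ADD", 5), ("PRINT", None), ("HALT", None)]

accumulator = 0
pc = 0  # program counter: address of the next instruction

while True:
    opcode, operand = memory[pc]   # fetch the next instruction
    pc += 1
    if opcode == "LOAD":           # decode and execute it
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "PRINT":
        print(accumulator)         # an output device: the screen
    elif opcode == "HALT":
        break                      # prints 12, then stops
```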

In spite of the enormous computing power of modern computers and the trend toward smaller and smaller chips, there is a limiting factor known as the 0.1 micron barrier. Below 0.1 micron, the light beams used to etch transistors onto computer chips will have to be replaced by X-rays or beams of electrons, which are far more difficult for technicians to maneuver. Furthermore, as the size of microchips decreases, the field of classical physics gives way to the eerie realm of quantum physics. Ironically, it was quantum mechanics that allowed for the development of the microchip in the first place, and today’s limits are being set by the same physics.

In spite of the science setting a limit on computing power, the technology is not deterred, and quantum physics may again hold the answer. This concept grew out of the work of Isidor Rabi, who won the Nobel Prize in Physics in 1944 and demonstrated how to write information into a quantum system. Unlike a conventional central processing unit, which calculates one specific outcome at a time, a quantum computer could explore a vast number of computational paths simultaneously, resulting in an uncannily accurate answer. Imagine an accurate weather forecast rather than one based only on probabilities. Who knows, a quantum computer may be able to answer those questions that science in its current state finds difficult or impossible to answer.
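
The source of that parallelism can be made concrete with a little linear algebra: a register of n qubits carries amplitudes for all 2^n basis states at once. The sketch below simulates the arithmetic classically with NumPy, putting a three-qubit register into an equal superposition of all eight of its states; a real quantum computer would manipulate the physical states themselves rather than a table of numbers.

```python
# A minimal sketch, in plain NumPy, of quantum superposition: n qubits
# hold amplitudes for all 2**n basis states simultaneously.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate, one qubit

n = 3
state = np.zeros(2**n)
state[0] = 1.0  # the register starts in |000>

# A Hadamard on every qubit is the Kronecker product of single-qubit gates.
gate = H
for _ in range(n - 1):
    gate = np.kron(gate, H)
state = gate @ state

# Every basis state now carries equal amplitude -- until measured, the
# register is "in" all eight states at once.
for i, amp in enumerate(state):
    print(f"|{i:03b}>  amplitude {amp:+.4f}  probability {amp**2:.4f}")
```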

Science fiction has pondered some of these more intriguing questions. Arthur C. Clarke’s short story The Nine Billion Names of God describes a supercomputer used by monks in a Tibetan monastery to encode all possible names of God; the ultimate result is the end of the universe. A truly fascinating story by Fredric Brown, Answer, is less than one page long. The computer in the story has only one function: to answer the question, “Is there a God?” The answer is a simple, “Yes, now there is a God.” Isaac Asimov also wrote a tale about the ultimate question in his short story The Last Question, about finding the answer to the ultimate fate of the universe. Over the years, the answer comes back as “Insufficient data for a meaningful answer”; the ultimate answer comes only at the end of the tale. Deep Thought is the supercomputer found in Douglas Adams’s satirical science fiction novel The Hitchhiker’s Guide to the Galaxy; the computer’s mission is to seek an answer to the “Ultimate Question of Life, the Universe and Everything.”

The quantum computer does sound impressive, yes, but there is one important limitation. The slightest impurity in a quantum transistor can throw a calculation into disarray. A quantum computer would therefore have to be isolated not just from dust, as we currently attempt with our silicon chip computers, but also from particles of subatomic size. Some laboratories, such as Los Alamos National Laboratory in New Mexico, have been able to create such an environment, but we are still a long way from making the power of a quantum computer available to the masses.

Though science fiction authors may not have been able to predict the enormous power of computer technology today, they did predict the day when computers would play a large part in our everyday lives. One of the first such tales is Isaac Asimov’s 1950 story The Evitable Conflict, about positronic machines that manage all aspects of life on Earth. The EPICAC computer of Kurt Vonnegut’s satirical Player Piano coordinates all aspects of the United States economy. James Blish’s Earthman, Come Home, part of his classic Cities in Flight series, describes a computer that not only educates the populace but also coordinates the activities of New York City. The city of Diaspar in Arthur C. Clarke’s The City and the Stars is run by a computer. In Philip K. Dick’s early pulp novel Vulcan’s Hammer, a sentient supercomputer, Vulcan 3, controls the entire world. In William Nolan and George Johnson’s novel Logan’s Run, a computer runs every aspect of human life, creating a virtual utopia. There is one catch though: the computer also dictates that all humans must die on their twenty-first birthday.

As computers become more and more complex, is there a point at which they will actually become intelligent? Artificial intelligence is the ability of an artificial mechanism to exhibit intelligent behavior. How will we know when that threshold has been crossed?

One way is the Turing test. In 1950, the pioneering English computer scientist Alan Turing devised a test of a machine’s ability to display intelligence. Turing’s test involves a human evaluator who judges a conversation between a human and a machine. The evaluator knows that one of the conversationalists is a machine but not which one. If the evaluator cannot distinguish the human end of the conversation from the machine end, the machine is deemed to have achieved intelligence. The test does not involve any evaluation of whether the answers are correct, only whether they resemble the answers a human might give. Computer scientists have since come up with more sophisticated means of determining machine intelligence.
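
The structure of the test is simple enough to sketch as a protocol. In the toy version below, every participant, the human, the machine and the evaluator, is a placeholder function invented for illustration; the point is the blind, channel-based arrangement Turing described.

```python
# A bare-bones sketch of Turing's imitation game. The reply functions
# and the evaluator are hypothetical placeholders, not real programs.
import random

def run_imitation_game(questions, human_reply, machine_reply, evaluator):
    """Return True if the machine fools the evaluator."""
    # Hide the two conversationalists behind anonymous channels A and B.
    channels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        channels = {"A": machine_reply, "B": human_reply}

    # The evaluator sees only transcripts, never the participants.
    transcript = {label: [(q, reply(q)) for q in questions]
                  for label, reply in channels.items()}

    guess = evaluator(transcript)  # "A" or "B": which one is the machine?
    return channels[guess] is not machine_reply

# Toy participants whose answers are indistinguishable by construction,
# so the evaluator can do no better than chance.
human = lambda q: "Well, I suppose " + q.lower()
machine = lambda q: "Well, I suppose " + q.lower()
judge = lambda transcript: random.choice(["A", "B"])

print(run_imitation_game(["Do you dream?"], human, machine, judge))
```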

Science fiction has looked at the possibility of a machine evolving to the point where it becomes intelligent. One of the most interesting novels showing the gradual evolution of a computer from an apparatus running programs into a sentient machine is Robert Heinlein’s The Moon is a Harsh Mistress. Extro, a computer in Alfred Bester’s The Computer Connection meant to control the mechanical components of everyday life, gradually seizes control of everything. Martin Caidin’s novel The God Machine is about the military seeking to create an artificial intelligence in its Project 79, only to have it go awry and attempt to take control of the world. In the Colossus series of novels by D. F. Jones, a computer designed to control the nuclear arms of the United States establishes contact with an equivalent computer in the Soviet Union; the two merge and seek to control the human race. Algis Budrys’s Michaelmas is about a journalist and an associated supercomputer that develops to the point where it can penetrate the entire electronic world. Perhaps on a lesser scale, but no less frightening, the computer that controls a large office building in Philip Kerr’s Gridiron evolves its capabilities, eventually killing people and even creating a replacement for itself. In The Adolescence of P-1, by Thomas Ryan, an artificial intelligence purposely created by a computer scientist soon takes over the computers of the world; its mission, while not malevolent, does a lot of damage along the way as it seeks out its creator. Sometimes trying to do the right thing is really not the best plan of action. Another interesting twist on computer evolution comes in Harlan Ellison’s I Have No Mouth, and I Must Scream, in which supercomputers are created by three nations to fight a war more effectively, a true tale of human complacency. In Software, by Rudy Rucker, artificial intelligence evolves to the point where it competes with humanity. Greg Bear expands on the evolution of an intelligent computer in his Queen of Angels: the computer in the novel, AXIS, is aboard a ship bound for Alpha Centauri B to communicate with aliens but falls into a depression when no aliens are found. Robert J. Sawyer wrote his WWW trilogy about an internet that gains self-awareness, called Webmind.

Today, artificial intelligence is defined as the ability of a machine to mimic cognitive functions such as learning and problem solving. The development of artificial intelligence has shown promise in the area of expert systems, which seem godlike when answering questions within their specialty but are incapable of any general reasoning. The first expert system, Dendral, was developed at Stanford University in 1967; it could identify unknown chemical compounds. As computers become more and more complex, there will come a point at which they actually become intelligent. That possibility is right around the corner.

Science fiction authors have looked at the possibility of computers with fully developed artificial intelligence. Frank Herbert’s Destination: Void is about a starship en route to a new world. When the ship is damaged, the crew has to build an artificial intelligence to repair it, but the intelligence evolves to become a god.

Other authors have looked at shipboard intelligences gone astray. Perhaps the most famous “intelligent” computer is HAL, Arthur C. Clarke’s creation in 2001: A Space Odyssey. The computer interprets data that is fed into it; unfortunately, it misinterprets that data, and a series of disasters ensues.

Our greatest tribute to Nature is to mimic her, and the neural net is an example of this mimicry. A neural net is not one super-bright computer but a system of nodes all connected together, much like the neurons of the human brain. The concept was originally described by Alan Turing in 1948, while he was at the National Physical Laboratory in London, but his essay, entitled Intelligent Machinery, was dismissed by the laboratory’s director, Sir Charles Darwin, who apparently was not quite as imaginative as his famous grandfather.

In a neural net, the nodes themselves are essentially simple computers connected together. Each node reacts based on the input from the previous nodes; instead of a single computer doing all of the work, the work is done by a series of interconnected processors. This arrangement is similar to that of our brains. The neurons that make up our brains react based on input from neighboring neurons. On an individual basis, each neuron is not particularly intuitive or bright, but taken as a whole, the human brain is the most remarkable achievement in our universe.
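
A few lines of code make the arrangement concrete. In this minimal sketch, each node weights the outputs of the layer before it, sums them, and squashes the total through a sigmoid function; the weights are random purely for illustration, where a real network would learn them from training examples.

```python
# A minimal feedforward neural net: layers of simple nodes, each reacting
# to the outputs of the previous layer. Weights are random placeholders.
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def make_layer(n_inputs, n_nodes):
    # Each node gets one weight per input, plus a bias term at the end.
    return [[random.uniform(-1, 1) for _ in range(n_inputs + 1)]
            for _ in range(n_nodes)]

def forward(layer, inputs):
    outputs = []
    for weights in layer:
        total = weights[-1]  # start from the bias
        total += sum(w * x for w, x in zip(weights, inputs))
        outputs.append(sigmoid(total))
    return outputs

# A tiny two-layer net: 3 inputs -> 4 hidden nodes -> 1 output node.
hidden = make_layer(3, 4)
output = make_layer(4, 1)
print(forward(output, forward(hidden, [0.5, -0.2, 0.9])))
```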

To get an idea of the power of a neural network, consider a personal computer. It is possible to program a personal computer to read, but only if the type is one particular font and size; teaching it more fonts and sizes requires ever more complex programs. A neural network, by contrast, can be trained to recognize letters in all manner of fonts and sizes. Neural networks have been used for a number of tasks, including speech recognition, medical diagnosis and even social network filtering. There is promise in neural nets, but to date they have not been able to deal with large problems such as the factoring of large integers.

Technology is making the creation of a machine-human interface possible. Many people today already have various technologies embedded within them, such as artificial knees and pacemakers.

Now we are on the cusp of becoming one with the computer. Human-computer interfaces have been attempted since the 1970s. The active brain produces electrical rhythms, such as alpha waves, that can be picked up by an instrument called an electroencephalograph (EEG), which measures activity in the brain and gives some idea of its workings. In the early 1970s, scientists at the United States National Institutes of Health showed that monkeys could be conditioned to control the neural patterns picked up by electrodes inserted near a monitored neuron. This sparked hope that paralyzed patients could be conditioned to operate a prosthetic device with their thoughts. Two decades later, it became a reality when Duke University researchers demonstrated a robotic arm whose reaching movements were reproduced from a primate’s brain patterns.

In 2003, there was a breakthrough with an electrical technician who had lost both arms in a work-related injury. His bionic arms are controlled by peripheral nerve endings connected to his chest muscles, and work is ongoing to control such devices with direct readings from the brain. We are moving rapidly toward the day when the blind will see, the deaf will hear, and the paralyzed will move.

Science fiction envisioned the true human-computer interface as early as André Maurois’s 1938 novel The Thought-Reading Machine, about a machine that records thoughts and plays them back audibly. The computer cowboys of William Gibson’s ground-breaking 1984 novel Neuromancer “jack in” to machines, creating direct machine-mind linkages. His vision is edging closer to reality with improvements in computer technology, which now include virtual reality.

Virtual reality is a rapidly evolving computer technology that effectively creates a virtual world for the user. By wearing a headset, or by standing in a multi-projection environment with sounds and other sensations added, a totally artificial world can be created with which the user directly interacts. Originally used in gaming, virtual reality has evolved to create realistic simulations that allow pilots to train and surgeons to operate, all within the safety of the virtual world.

In the future, it may be possible to insert an entire consciousness into a machine. Science fiction has envisioned this possibility as well. The idea was around as early as 1879, when Edward Page Mitchell wrote of a reasoning machine placed into the head of a person of low mental abilities. In The Müller-Fokker Effect by John Sladek, an entire human personality is stored on a computer, as also happens in Charles Platt’s The Silicon Man.

If we take the concept of a human-machine interface to an extreme, we may see the end of humanity as we know it, as humans have their minds uploaded into machine entities. What would happen to humanity if the uploaded minds became interconnected, like our Internet of today? Would we ever be human again? Could we ever revert back?

This state is often referred to as the Singularity: the hypothesis that such a superintelligent hive of interconnected human minds would result in runaway technological growth, changing humanity forever. The science fiction author and mathematician Vernor Vinge predicted back in 1993 that the Singularity would actually spell the end of humanity, but that the new superintelligence would continue to evolve technologically beyond anything that could be imagined today. With the connectivity we already see in the internet, we may be on our way, as Vinge and other writers predicted. More and more devices are being connected to the internet, and it may be only a matter of time before we all ‘live in the cloud.’

Science fiction sees it as very much a possibility: a hivelike grouping of human beings, each acting as a single cell would in a greater organism, not unlike our social insects. Such a social order is not unprecedented among mammals either. Naked mole rats, rodents found in East Africa, show much the same social structure as the social insects. Ants, bees, wasps and mole rats are not individual entities but parts of a greater whole, dedicated to the survival of the nest or hive.

There are also some examples from fiction that strike closer to home. One truly horrific tale depicting a hivelike mind with one goal, the destruction of the human race, is John Wyndham’s The Midwich Cuckoos. Stephen Baxter, too, wrote of a highly evolved human species in his Destiny’s Children series. Other evolved hive humans can be found in Michael Swanwick’s Vacuum Flowers, in Alastair Reynolds’s Revelation Space series, and in the Phoners of Stephen King’s Cell. Arthur C. Clarke saw a hivelike human entity as the ultimate goal of humanity in his classic Childhood’s End, which describes a benevolent alien race that assists humanity in its ultimate evolutionary step.

Other authors have given hivelike societal structures to the aliens they have created. One of the first looks at this hivelike mentality is Theodore Sturgeon’s novel The Cosmic Rape. Robert Heinlein’s Bugs, in his novel Starship Troopers, are modeled on the social hierarchy we see in social insects. Other alien hive beings include the Boaty-Bits of the Saga of Cuckoo by the early masters of science fiction Frederik Pohl and Jack Williamson; the Formics (from formica, Latin for ant) of Orson Scott Card’s classic Ender’s Game series; the universe-conquering Hive Mind of John Cramer’s Einstein’s Bridge; the Swarm of Bruce Sterling’s short story of that name, and of Michael Crichton’s novel Prey; and the very eerie and alien Squeem of Stephen Baxter’s Xeelee Sequence.

Vernor Vinge’s A Fire Upon the Deep describes an interesting hivelike being. A lone individual is little more than a pack animal, but groups of about four to seven individuals have the equivalent intelligence of human adults. In larger numbers, the entities become confused, unintelligent beings.

Another technology related to computers is robotics. The word “robot” comes from a science fiction play written by the Czech author Karel Čapek in 1920, entitled R.U.R., an abbreviation of Rossum’s Universal Robots. The word itself derives from the Czech for “forced labour.”

Unlike the technology behind computers, where the science surpassed the fiction, science fiction’s robots were well ahead of reality, and the idea of robots actually goes back to antiquity. Homer’s epic poem The Iliad features handmaidens forged of gold who resemble human women, a tale with some link to a prehistoric Finnish myth in which a woman is forged out of gold. The Talmud, the central text of Rabbinic Judaism, speaks of the golem, an animated man made of clay; the most famous golem legend dates to the 16th century. An iron man dispensing justice is described in another epic poem, The Faerie Queene, written in 1590 by Edmund Spenser.

Several early authors depicted robots as malevolent and on a mission to destroy humanity. The Metal Giants, written in 1926 by Edmond Hamilton, is about a computer brain that creates an army of giant robots. Automata, written in 1929 by S. Fowler Wright, is about machines doing human jobs before wiping the human race out. In a reversal found in Ira Levin’s The Stepford Wives, it is the humans who are malevolent, creating perfect android replicas of their wives who obey their every desire and whim.

It was only with Isaac Asimov that robots truly evolved from mechanical monsters with no conscience into sentient beings. In his 1942 short story Runaround, Asimov came up with a form of conscience for his mechanical beings: the Three Laws of Robotics. First, a robot may not injure a human being or, through inaction, allow a human being to come to harm. Second, a robot must obey the orders given it by human beings, except where such orders would conflict with the First Law. Third, a robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
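
The strict priority ordering of the laws invites an almost algorithmic reading. Purely as a playful sketch, here is one hypothetical way the precedence could be encoded, each law acting as a filter on the choices the laws below it are allowed to make; the candidate actions and their attributes are invented for illustration.

```python
# A hypothetical encoding of the Three Laws as a strict priority filter.

def choose_action(actions):
    """Pick an action consistent with the three laws, in strict priority."""
    # First Law: discard anything that injures a human -- or that,
    # through inaction, allows one to come to harm.
    safe = [a for a in actions if not a["harms_human"]]
    # Second Law: among safe actions, prefer those that obey human orders.
    obedient = [a for a in safe if a["obeys_order"]] or safe
    # Third Law: among those, prefer self-preservation.
    surviving = [a for a in obedient if not a["endangers_self"]] or obedient
    return surviving[0] if surviving else None

actions = [
    {"name": "push the human clear", "harms_human": False,
     "obeys_order": False, "endangers_self": True},
    {"name": "stand idle as ordered", "harms_human": True,  # inaction = harm
     "obeys_order": True, "endangers_self": False},
]
print(choose_action(actions)["name"])  # -> "push the human clear"
```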

The laws are fictional, yet they serve as rich material for subsequent science fiction stories. One of the best is Jack Williamson’s The Humanoids, about intelligent robots who take their role of serving humanity too far and become its tyrannical protectors.

Other authors have looked at robots “becoming human,” with true human emotions. The American short story writer Ambrose Bierce wrote Moxon’s Master in the late 1800s, about a man who builds a chess-playing machine; when the machine loses, it becomes enraged and kills its creator. In Lester del Rey’s short story Helen O’Loy, a robot named Helen O’Loy is designed to do household chores; she develops emotions and falls in love with her creator. The Positronic Man, written by Isaac Asimov and Robert Silverberg, is about a robot created to do menial household tasks. During the course of the novel, the protagonist shows sentient characteristics and is allowed to pursue its creative urges; in the end, he seeks to become a human being. David Gerrold’s When HARLIE Was One describes a human psychologist responsible for guiding the artificial intelligence HARLIE from childhood into adulthood; the theme of the tale is whether HARLIE is human or not as he fights against being turned off. What if robots develop a consciousness and disobey their assigned role? That is the premise of Robert Mason’s debut novel Weapon, a tale about an android designed to kill that, upon developing a consciousness, runs away from its government masters and lives in a Nicaraguan village.

Other authors have used robots as major characters in their novels. Robert Heinlein’s Friday is about an artificial woman designed and trained to satisfy men’s erotic desires; as she evolves further, she finds herself despairing of an existence without love. Trurl and Klapaucius are robot geniuses found in Stanislaw Lem’s collection of short stories The Cyberiad, all told in a humorous tone.

How real are robots, though? They do exist today and are most frequently found in manufacturing, performing boring, repetitive tasks. Many of our space probes are essentially robots. Most modern robots do not appear humanoid, but humanoid robots are also out there, becoming ever more available to the public as companions, toys and entertainers.

There are presently two camps on how to create practical robots. One group, the top-down camp, believes that robotics will advance if we program complex rules into the robot’s computer brain to produce logic and intelligence: add a few routines for speech and vision, attach legs and manipulating hands, and you have a robot. It sounds impressive and very easy, but it is not.

The other method is to create a robot with a bottom-up approach: use a system such as a neural net to allow the robot to learn from its experiences. Marvin Minsky, a guru of artificial intelligence at MIT, is an advocate of this approach to programming robots.

Perhaps the robots of the future will combine the two approaches, incorporating the best features of both. The question, though, as with computers, is when consciousness will arise in robots.

Consciousness may arise out of the complex interactions of many non-conscious systems, and science fiction has explored the consciousness of robots. Barrington Bayley, in The Soul of the Robot, writes about a robot with a soul. In Philip K. Dick’s Do Androids Dream of Electric Sheep?, a human bounty hunter must track down and eliminate androids that are passing for human.

What about machines that become self-replicating? It is not that far-fetched; after all, it is the machines in our factories that create other machines, and a 3D printer can now fabricate much of whatever it is asked to produce. As early as 1802, the English philosopher William Paley argued that machines could produce other machines, creating copies of themselves; he used the analogy of a watch which, if self-duplicating, would render the human manufacturer obsolete. How prophetic he may prove to be.

The first true conceptual proposal for a self-replicating machine was put forward by the mathematician John von Neumann in 1948, in the form of a thought experiment. There are four components to a self-replicating machine: the builder, the copier, the controller and the blueprint.
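
The division of labor among those four parts can be sketched in code. In this toy model, with class and method names of my own invention, the controller uses the blueprint twice: once interpreted by the builder to construct the offspring’s body, and once duplicated verbatim by the copier so that the offspring can replicate in turn. That double use of the description was von Neumann’s key insight.

```python
# A toy sketch of von Neumann's four-part replicator: builder, copier,
# controller and blueprint. Names and structure are illustrative only.

class Replicator:
    def __init__(self, blueprint):
        self.blueprint = blueprint            # passive description

    def builder(self, blueprint):
        """Universal constructor: assemble a machine from a description."""
        return Replicator(blueprint=None)     # body only, no plans yet

    def copier(self, blueprint):
        """Duplicate the description itself, without interpreting it."""
        return dict(blueprint)

    def controller(self):
        """Sequence the steps: build the body, then install the plans."""
        child = self.builder(self.blueprint)
        child.blueprint = self.copier(self.blueprint)
        return child

parent = Replicator({"parts": ["builder", "copier", "controller"]})
child = parent.controller()
grandchild = child.controller()               # offspring replicate in turn
print(grandchild.blueprint)
```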

Applications proposed for self-replicators in scientific circles have ranged from terraforming planets to mining asteroids to sending out probes to explore the galaxy. Resource extraction and manufacturing on Earth have also been considered.

The concept of self-replicating machines is not a new one for science fiction writers either. A. E. van Vogt, in his story M33 in Andromeda, describes self-replicating weapons factories designed to destroy an enemy of humankind. In Autofac by Philip K. Dick, self-replicating machines are used to supply humankind with materials to begin reconstruction after a civilization-ending war; unfortunately, the ability to shut the machines down was lost during the war. In his short story Epilogue, Poul Anderson envisions self-replicating barges used to extract the treasures of the oceans. Jeffrey Carver wrote of an entire space vessel manufactured in this fashion in From a Changeling Star.

As with replication in Nature, the self-replication of machines may prove to be an imperfect process, and some authors have imagined machine evolution governed by the same laws of evolution that govern Nature. Anatoly Dneprov, in his short story Crabs on the Island, treats machine replication as an imperfect process: a mechanical crab is abandoned on an island of scrap metal with the goal of creating more mechanical crabs for the military. In a similar vein, Stanislaw Lem’s novel The Invincible is about a crew landing on a distant planet only to find it dominated by machines after millions of years of evolution. In James Hogan’s Code of the Lifemaker, robotic factories are sent out by an alien race in an effort to colonize the galaxy. One of the vessels is damaged when it ventures too close to a supernova, drifting aimlessly off course before landing on Titan, one of Saturn’s major moons; the machines then evolve on their own, resulting in an entire machine ecosystem complete with humanoid machines. In Dan Simmons’s Ilium we find the moravecs, sentient descendants of probes originally sent by humans to the Jovian moons. Gregory Benford’s entire Galactic Center Saga is about a war between mechanical intelligence and biological life.

It is not out of the question to create robots from molecules and atoms, using them as building blocks for more complex structures. The idea came out of a conjecture by the Nobel laureate Richard Feynman in his article “There’s Plenty of Room at the Bottom,” which asks how small a machine can be made while remaining consistent with the known laws of physics, including quantum physics. He found that nothing forbade machines the size of molecules.

Eric Drexler, with his book Engines of Creation, took up Feynman’s gauntlet and ran with it. He followed that book with a scientific treatment of nanotechnology called Nanosystems, an attempt to address the criticisms some scientists had leveled at the concept. The ultimate goal of nanotechnology is programmable matter: materials designed so that their properties can be easily, reversibly and externally controlled.

Though controversial when first proposed, nanotechnology has the potential to make radical changes in our biological lives. Raymond Kurzweil, a futurist and transhumanist, proposes in The Singularity is Near that medical nanorobotics could completely remedy the effects of aging by 2030. Science fiction has also speculated on the use of nanotechnology in medicine. Brian Stableford’s Inherit the Earth looks specifically at a world ruled by nanotechnology, nanotechnology that renders humans immune to disease and eventually to aging itself. As if a warning were warranted, Greg Bear wrote a cautionary tale in which nanotechnology used for a medical intervention goes horribly wrong, in his short story Blood Music, later expanded into a novel. In Wil McCarthy’s Murder in the Solid State, a murder is committed using nanotechnology, an excellent mix of the mystery and science fiction genres.

Technology continues moving ahead by leaps and bounds. In some cases science fiction is ahead of the curve, in others way behind. Who knows what will come next? We may not even realize it when it does. If history is any lesson, subjugation by a superior technology often results in the extinction of the group with the lesser technology. If the machines begin to control all of technology….

Further Reading:

Allhoff, Fritz et al. 2010. What is Nanotechnology and Why Does it Matter?: From Science to Ethics. Wiley-Blackwell.

Amos, Martyn. 2005. Theoretical and Experimental DNA Computation. Springer.

Bajd, Tadej. 2013. Introduction to Robotics. Springer.

Barnett, Stephen. 2009. Quantum Information. Oxford University Press.

Bennett, C. et al. 1997. The strengths and weaknesses of quantum computation. SIAM Journal on Computing. 26(5):1510-1523.

Bhadeshia, H. 1999. Neural Networks in Materials Science. ISIJ International. 39(10):966-979.

Binnig, G. and Rohrer, H. 1986. Scanning tunneling microscopy. IBM Journal of Research and Development. 44(1-2):279-293.

Bishop, C. 1994. Neural Networks for Pattern Recognition. Clarendon Press.

Bowden, B. (ed.). 1971. Faster than Thought. Pitman Publishing.

Broderick, Damien. 2001. The Spike: How Our Lives Are Being Transformed by Rapidly Advancing Technologies. Forge.

Collier, Bruce. (ed.). 1991. The Little Engine that Could’ve: The Calculating Machines of Charles Babbage. Taylor and Francis.

Craig, John. 2004. Introduction to Robotics. Prentice Hall.

Crevier, Daniel. 1994. AI: The Tumultuous Search for Artificial Intelligence. Harper-Collins.

DeLanda, Manuel. 1991. War in the Age of Intelligent Machines. Zone.

DiVincenzo, D. 1995. Quantum Computation. Science. 270(5234):255-261.

Swade, Doron. 1993. Redeeming Charles Babbage’s Mechanical Computer. Scientific American.

Drexler, K. Eric. 1987. Engines of Creation. Anchor.

Drexler, K. Eric. 1992. Nanosystems: Molecular Machinery, Manufacturing and Computation. John Wiley and Sons.

Dreyfus, Hubert. 1992. What Computers Still Can’t Do. MIT Press.

Elliott, C. 2002. Building a quantum network. New Journal of Physics. 4:1-46.

Jaeger, Gregg. 2010. Quantum Information: An Overview. Springer.

Gurney, K. 1997. An Introduction to Neural Networks. CRC Press.

Gutkind, L. 2010. Almost Human: Making Robots Think. W. W. Norton and Company.

Hansen, S. et al. 2008. Late lessons from early warnings for nanotechnology. Nature Nanotechnology. 3:444-447.

Haselager, P. et al. 2009. A note on ethical aspects of BCI. Neural Networks. 22(9):1352-1357.

Haugeland, John. 1989. Artificial Intelligence: The Very Idea. Bradford.

Haykin, S. 1999. Neural Networks: A Comprehensive Foundation. IEEE.

Heaton, Jeff. 2008. Introduction to Neural Networks. Heaton Research.

Hertz, J. et al. 1998. Introduction to the Theory of Neural Computation. Perseus Books.

Hoskins, J. and Himmelblau, D. 1992. Process control via artificial neural networks and reinforcement learning. Computers and Chemical Engineering. 16(4):241-251.

Hunt, Earl. 1975. Artificial Intelligence. Academic Press.

Hyman, Anthony. 1984. Charles Babbage: Pioneer of the Computer. Oxford University Press.

Ifrah, Georges. 2002. The Universal History of Computing: From the Abacus to the Quantum Computer. John Wiley and Sons.

Ignatova, Z. et al. 2008. DNA Computing Models. Springer.

Johnston, John. 2010. The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI. Bradford.

Katz, Bruce. 2008. Neuroengineering the Future. Jones and Bartlett Learning.

Kolata, G. 1982. How can computers get common sense? Science. 217(4566):1237-1238.

Kurzweil, Ray. 2000. The Age of Spiritual Machines. Penguin Books.

Kurzweil, Ray. 2005. The Singularity is Near. Viking.

Lebedev, M. and Nicolelis, M. 2006. Brain-machine interfaces: past, present and future. Trends in Neurosciences. 29(9):536-546.

Lewin, D. 2002. DNA computing. Computing in Science and Engineering. 4(3):5-8.

Luger, George and Stubblefield, William. 2008. Artificial Intelligence: Structures and Strategies for Complex Problem Solving. Addison-Wesley.

Masters, Timothy. 1994. Signal and Image Processing with Neural Networks. John Wiley and Sons.

McClelland, James et al. 1987. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Bradford.

McCorduck, Pamela. 1979. Machines Who Think. A. K. Peters Ltd.

Minsky, Marvin. 1972. Computation: Finite and Infinite Machines. Prentice-Hall.

Minsky, Marvin. 2007. The Emotion Machine. Simon and Schuster.

Neapolitan, Richard and Jiang, Xia. 2012. Contemporary Artificial Intelligence. Chapman and Hall.

Nielsen, Michael and Chuang, Isaac. 2011. Quantum Computation and Quantum Information. Cambridge University Press.

Nilsson, Nils. 2009. The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge University Press.

Poole, David et al. 1997. Computational Intelligence: A Logical Approach. Oxford University Press.

Prasad, S. 2008. Modern Concepts in Nanotechnology. Discovery Publishing House.

Ripley, Brian. 2008. Pattern Recognition and Neural Networks. Cambridge University Press.

Russell, Stuart and Norvig, Peter. 2009. Artificial Intelligence: A Modern Approach. Prentice-Hall.

Searle, John. 1980. Minds, Brains and Programs. Behavioral and Brain Sciences. 3(3):417-424.

Shetty, R. 2005. Potential pitfalls of nanotechnology in its applications to medicine: immune incompatibility of nanodevices. Medical Hypotheses. 65(5):998-999.

Vinge, Vernor. 1993. The Coming Technological Singularity. NASA Conference Publication 10129.

Walmsley, Joel. 2012. Mind and Machine. Palgrave Macmillan.

Wolpaw, J. et al. 2002. Brain-computer interfaces for communication and control. Clinical Neurophysiology. 113(6):767-791.

Warwick, Kevin. 2004. March of the Machines. University of Illinois Press.

Wasserman, Philip. 1993. Advanced Methods in Neural Computing. Van Nostrand Reinhold.

Winston, Patrick. 1984. Artificial Intelligence. Addison-Wesley.