GF2045(2013) Proceedings: Ray Kurzweil

Lincoln Center, New York City, June 2013

Global Future 2045: Towards a New Strategy for Human Evolution, Congress Proceedings

Immortality by 2045

Ray Kurzweil

Director of Engineering at Google

Abstract: The 21st century will be an era in which the very nature of what it means to be human will be both enriched and challenged, as our species breaks the shackles of its genetic legacy and achieves inconceivable heights of intelligence, material progress, and longevity. The paradigm shift rate is now doubling every decade, so the twenty-first century will see 20,000 years of progress at today’s rate. Computation, communication, biological technologies—DNA sequencing, for example—brain scanning, knowledge of the human brain, and human knowledge in general are all accelerating at an ever faster pace, generally doubling price-performance, capacity, and bandwidth every year. Three-dimensional molecular computing will provide the hardware for human-level ‘strong’ AI well before 2030. The more important software insights will be gained in part from the reverse-engineering of the human brain, a process well under way. While the social and philosophical ramifications of these changes will be profound, and the threats they pose considerable, we will ultimately merge with our machines, live indefinitely, and be a billion times more intelligent... all within the next three to four decades.

Transcript

Thank you. It's a pleasure to be with you. I particularly enjoy conferences on the future. I think I had some influence on the last part of that title, and I'll talk to you a little bit about 2045 and what I think will happen. I think a lot of you have heard me talk about exponentials, so I'll try to clarify a few subtleties about them, but I'll share with you some recent thoughts I've had about how the brain works, how we can recreate it functionally, and how we can use that to make ourselves smarter. We're doing that already – I felt like part of my brain went on strike during that one-day SOPA strike – these brain extensions are making us smarter. We're the only species that actually extends our reach. When we couldn't reach that fruit on a higher branch, we created a tool to extend our physical reach. We've already extended our knowledge – a kid in Africa with a smartphone can access all of human knowledge with a few keystrokes – and we will ultimately literally expand the very basis of our intelligence, which is our neocortex, and I'll tell you a little bit about the neocortex.

So this is what I wanted to cover. Are there any questions on any of this? So I don't really have to present this; I think it's all pretty obvious, at least to this group. But let me just clarify a few things on exponentials. Many of you have seen this graph. This is the first graph I did, because in 1981 I wanted to time my own inventions. I realized that timing was critical to being successful as an inventor, and the inventors whose names you recognize (like my new boss Larry Page) were in the right place with the right idea at the right time. I did some early-stage mentoring and investing, and I get a lot of business and technology plans, and my key advice to these young ventures is to make your invention relevant for the world that will exist when you finish the project. If you have any doubt that the world three years from now will be quite different than it is today, think about the world three or four years ago: most people didn't use social networks, wikis, or blogs. Sounds like ancient history; that wasn't so long ago.

And we're continually accelerating the pace of progress. Our first invention, which is a communication technology – spoken language – took hundreds of thousands of years to evolve. We then noticed that stories were drifting from storyteller to storyteller, so we wanted a permanent record. We invented written language. That went a lot faster, only tens of thousands of years. We wanted more efficient ways of producing written language. The printing press took 400 years to reach a mass audience. The telephone reached a quarter of the American and European population in only 50 years. The cell phone did that in seven years. As I mentioned, social networks, wikis, and blogs took three years. We now see major changes in platforms and designs and communication and computation technologies in just one year's time. So I actually make a discipline of this: for example, I have an e-reader company, and we write down all the technical parameters that affect our business – display resolution, the price-performance of telecommunications, mobile computing – and we write down what those will be like one year from now, two years from now, three years from now. And even though I've been doing this for 30 years, I'm amazed: whoa, mobile computing will be that different in two years? You really have to go through the exercise.

My thinking is still linear; we all have linear brains. That's why our brains evolved: to predict the future. But the kinds of challenges we had thousands of years ago were linear. That animal we were tracking in the wild didn't speed up as it went along. And that's the primary difference between myself and my critics. They're looking at the world linearly. Halfway through the genome project, the critics were saying, “You know, I told you this wasn't going to work. Halfway through a 15-year project – seven and a half years – you've finished 1% of the project, so it's going to take 750 years.” My response was, “No, we're almost done, because 1% is only seven doublings from 100%.” Indeed, it was done seven years later.
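
As an aside, the doubling arithmetic behind that response is easy to check (a minimal sketch in Python, using the talk's own numbers):

```python
# At 1% complete, with capacity doubling every year, 100% is only
# about seven doublings away: 1% -> 2% -> 4% -> ... -> 128%.
progress, years = 0.01, 0
while progress < 1.0:
    progress *= 2  # one annual doubling
    years += 1
print(years)  # -> 7, matching "done seven years later"
```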

But in 1981, I began to examine this question because I saw that the inventors who were successful were in the right place with the right idea at exactly the right time. And so I began to collect data. I didn't think I would find anything, but very surprisingly, I found that certain things are very predictable, and the predictable trajectory is exponential, and the things this applies to are the fundamental measures of information technology. Fields like health and medicine, which were not information technologies until recently – yes, we used computers to track the information, but basically it was hit or miss – progressed linearly. It was still pretty useful. Life expectancy was 37 two hundred years ago.

But at any rate, I had this graph. This is price-performance of computing, the amount of computation you can get for the same cost, in calculations per second per constant dollar. And this is a logarithmic scale: every level is 100,000 times greater than the level below it, so we're adding powers of 10 to what we're measuring. So this represents a trillions-fold increase even by 1981. And people say it's Moore's Law, and a lot of people equate Moore's Law with this exponential growth, which I call the Law of Accelerating Returns. Moore's Law is just one of many examples of this phenomenon. Moore's Law had actually only been under way for seven years when I did this. It has to do with shrinking the size of components on an integrated circuit. It was the fifth, not the first, paradigm to bring exponential growth to computing, and it won't be the last. The sixth paradigm is already under way. If you speak to people like Justin Rattner, CTO of Intel, he'll show you the sixth paradigm already working in prototype form: self-organizing three-dimensional molecular circuits. He predicts that'll take over in the teen years, well before we run out of steam with the fifth paradigm, which is Moore's Law.

But look at how smooth a trajectory that is; it's really pretty remarkable. People sometimes ask, “Gee, if it's so inexorable, why don't we stop working so hard? We can all just sit and relax and let it happen.” And then it wouldn't happen. So what is predictable is the human passion to create that next exponential leap, and my new boss – the first boss I've ever had, actually – likes to challenge people to do “10X” projects, meaning make it 10 times better, not 10% better. It is kind of a bold company on that basis, but that is the passion we have. If something is operating at a million, we don't try to make it a million ten. We try to make it 2 million. And the other major point is that this applies to many different phenomena. All of electronics is covered: you could buy one transistor for a dollar in 1968; you can buy ten billion today, and they're better because they're faster, because they're smaller. The cost of a transistor cycle has been coming down by half every year. That's a 50% deflation rate.

But we actually outpace that deflation, in that we consume more than twice as much every year. There's been 18% growth in every form of information technology in constant currency, despite the fact that you can get twice as much of it each year for the same cost. And economists worry about deflation – we had massive deflation during the Great Depression – but we actually outpace it. In fact, this is actually the source of all economic growth, because if you look at all of the non-information-technology industries, they're shrinking.
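
The arithmetic connecting those two figures is worth making explicit (a minimal sketch; the 0.5 and 1.18 factors are the talk's numbers, the rest is just algebra):

```python
# Revenue = units consumed x unit price. If price halves each year but
# constant-currency revenue still grows 18%, consumption must grow by:
price_factor = 0.5      # 50% annual deflation per transistor cycle
revenue_factor = 1.18   # 18% growth in constant currency
units_factor = revenue_factor / price_factor
print(units_factor)     # -> 2.36, i.e. "more than double" each year
```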

Now this is bits shipped of memory. I have 50 consumption graphs like this. We more than double our consumption each year, and the reason for that is that innovation is enabled by this increasing price-performance. Why weren't there social networks eight years ago? Was it because Mark Zuckerberg was still in junior high school? No, it was that people tried, and they found that they couldn't allow their users to download a picture. It just wasn't cost-effective until about three or four years ago. Search engines weren't effective until the late 90s. I wrote in the 80s, in my first book, The Age of Intelligent Machines, that the Internet, which was then called the ARPANET, would grow into this World Wide Web with this massive amount of information, that we would need search engines to be able to find the information, and that those would become effective because we would have the computational and communication resources needed by the late 90s. What you could not predict is that it would be these couple of kids in a Stanford dorm, with a late-night dorm-room challenge, who would take over the world of search. They had a good idea, and they had it at exactly the right time. Maybe that was accidental, but it was a bit of bravado that – well, the idea was “how do we know that a scientific paper is important?”

Well, Einstein's paper on special relativity has been referenced by other papers millions of times, so we can say it's an important paper. We want to do the same with websites: we can rank websites based on how many other websites link to them, and we can actually assess the quality of those links by how many websites link to those sites, and so on – the same method we use in evaluating scientific papers. But then you would have to reverse all the links on the Internet, and Larry Page said, “Oh, you know, I can do that on my notebook computer.” It seemed kind of incredible, but they did it in a very short period of time, and they created a search engine with that ranking algorithm. It's called the PageRank algorithm (there's some controversy as to whether it has to do with web pages or Larry Page), but it spread down the hallway, spread through the whole dorm, spread through Stanford, spread to other colleges, and I think it's spread a little bit beyond that. But they were in the right place at the right time. And you couldn't predict that that project – because there were probably 50 projects to develop a search engine around that time – would be the one that prevailed.
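
For readers who want to see the idea concretely, here is a minimal power-iteration sketch of PageRank (a toy illustration with an invented three-page link graph, not Google's production system):

```python
# Each page's rank is fed by the ranks of the pages linking to it,
# iterated until the scores settle (power iteration).
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Invented toy web: page "c" has the most inbound link weight.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(links))  # "c" ends up with the highest rank
```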

So not everything is predictable, but the fact that the price-performance to provide search engines would be there, and that the need would be there – that you could predict, and I did predict it in The Age of Intelligent Machines. Time magazine ran this cover story; they wanted a particular computer they had covered on the graph, and it's right on the curve. And this is a curve I laid out 30 years ago, in 1981, and I've laid it out to 2045, at which point, based on conservative estimates of the amount of computation we need to functionally simulate a human brain, we'll be able to expand the scope of our intelligence a billion-fold. And that's so unfathomable that we borrowed this metaphor from physics. The metaphor is not infinity, because actually exponentials don't reach infinity. It may seem infinite from our current perspective, but it stays finite. The metaphor is the event horizon around the singularity.

In fact, in physics, in theory the point in a black hole is infinite, but actually quantum mechanics doesn't allow an infinite value of mass or energy or gravity. It simply exceeds any level that we can measure, and it's enough gravity to keep all the information within the event horizon, so it's hard to see beyond the event horizon. We can use our imagination to talk about what it would be like to fall into a black hole, and similarly, borrowing that metaphor, we can use our imagination to talk about what it will be like past the singularity, but it's hard to see beyond that event horizon. Communication technology – the number of bits we move around wirelessly, from Morse code over AM radio a century ago to 4G networks today – is another logarithmic graph that represents a trillions-fold increase, but notice how predictable a trajectory that is. Internet data traffic: the first few points show the number of hosts in the early 80s, when it was called the ARPANET, from which I predicted this World Wide Web by the late 90s. The graph on the right is the same data on a linear scale, and that's how we actually experience it. So to the casual observer it looked like “whoa, the World Wide Web, a new thing that came out of nowhere,” but you could see it coming. And biology – we could talk a lot about that.

There's a lot of discussion about the future of biology here. I think the key point to understand is that there is a grand transformation under way. Biology, health, and medicine were not information technologies. Drug development – it was called drug discovery – would basically just systematically go through 10,000 compounds and try them all out and see which one lowers blood pressure, which one kills HIV. We couldn't actually design these tools. We now actually understand biology as the software it really represents. We have 23,000 software programs inside us, and they evolved thousands of years ago, when conditions were very different.

Many of you have heard me talk about the FIRKO experiment, a fat insulin receptor gene knockout, where they turned off the fat insulin receptor gene in animals at the Joslin Diabetes Center, and so these animals could eat a lot but remain slim, because they knocked out the gene that causes calories to keep being accumulated by your fat cells. And these animals actually got the benefit of caloric restriction – they lived 20% longer – while not actually restricting their calories. You heard from Martine Rothblatt. I've worked with her on her board since that company, United Therapeutics, went public over a decade ago. It's devoted to pulmonary hypertension, a terminal disease, and it's a very heroic story, actually. Her daughter had this disease at age six; your life expectancy as a six-year-old with it is one year. She is now a thriving 23-year-old, and United Therapeutics makes that therapy available. But they're trying to actually cure this disease.

So, we take lung cells out of the body. The disease is caused by one missing gene, so we add the gene in vitro – which doesn't trigger the immune system, one of the problems with early forms of gene therapy – replicate the cells a million-fold, and inject them back into the body, where they go through the bloodstream; and this has actually cured the disease in human trials and is undergoing more human trials. There's tremendous work being done now with stem cells. Just a couple of weeks ago there was a news item in the New York Times: a young girl had a damaged windpipe, and it was threatening her breathing – she ultimately would not have been able to breathe – so they used high-resolution scanning to scan her throat, designed a virtual windpipe on a computer using computer-aided design, printed out that windpipe in biodegradable material on a 3-D printer, used a 3-D printer to populate the biodegradable material with her stem cells, grew a windpipe with her DNA in a laboratory dish, and installed it surgically. Now she's fine. This has been done for years with tracheas. It's been done with more complex organs like kidneys in animals, and that'll be coming to humans soon.

Martine probably talked about a project to grow lungs in laboratories; those have been successfully surgically implanted in pigs, which actually have a very similar pulmonary system to humans. My father had a heart attack in 1961 and damaged his heart. He could hardly walk. This is true of 50% of all heart attack survivors: they have a damaged heart. It used to be that there was nothing you could do about it, but now you can take your stem cells and actually rejuvenate your heart while it's beating, and I've talked to people who could hardly walk and are now normal through this therapy.

These are all examples of basically treating our biology as software and reprogramming it. And the key point is that now that it's an information technology, it's progressing exponentially. I mentioned that the genome project itself, which is the enabling factor for this revolution, was exponential. And that has continued: that first genome, even though the project sped up, cost a billion dollars; you can now get a genome done for a few thousand dollars. Every other aspect of biology has advanced similarly, basically doubling in power every year. So these technologies are now 1,000 times more powerful than they were when the genome project was completed a decade ago, and these therapies are now coming into clinical practice. They will be 1,000 times more powerful again in a decade, a million times more powerful in 20 years. Somewhere between 10 and 20 years from now there is going to be a tremendous transformation of health and medicine. There are already fantastic therapies – I could talk a long time about them – to overcome heart disease, cancer, and really every other disease, including neurological diseases, based on reprogramming this outdated software.
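
Those multipliers are just annual doubling compounded (a quick check of the arithmetic):

```python
# Doubling every year compounds to roughly 1000x per decade:
print(2 ** 10)  # -> 1024, "1,000 times more powerful" in 10 years
print(2 ** 20)  # -> 1048576, "a million times" in 20 years
```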

But the key point is that it's now exponential. It used to be linear. That was still useful: I talked to some junior high school students not too long ago and said that if it hadn't been for the progress we've made, you would all be senior citizens, because life expectancy was 20 a thousand years ago. We've quadrupled it in a thousand years; we've doubled it in 200 years. This will go into high gear between 10 and 20 years from now. In probably less than 15 years we'll reach that tipping point where, because of ongoing scientific progress, we add more time than is going by.

Three-dimensional printing is turning the world of physical things into information. Three years ago, if I wanted to send you a music album or a movie or a book, I'd send you a FedEx package. Now I can send you an email attachment. I could also send you this violin and this guitar as email attachments, and you could print them on your three-dimensional printer. The materials cost a few pennies per pound, so it's really almost free. The value is in the design, and this is expanding exponentially, in that the precision is getting finer and finer at an exponential rate – basically a rate of 100 in 3D volume per decade. It's now a few microns.

To really begin to revolutionize a major sector of manufacturing, we need submicron resolution and somewhat lower costs. Costs are coming down, and we'll have both of those things within a few years. This is one of those revolutions we can see coming, and as the resolution continues to get finer and finer, it will cover a broader and broader array of manufacturing.

By 2020 we'll be able to print out clothing. I've been saying that now for a few months, but actually last week, in our newsletter, we had an announcement of a project that can already print out clothing – experimentally, but I think well before 2020 we'll be able to do that. So people say, “Ah, that's going to be the end of the fashion industry. You can download all these free, open-source designs, many of which will be really cool, and print out your clothing at a few pennies per pound – that's going to be the end of fashion as an industry.” But let's look at some industries that have actually gone through that already, like music, movies, and books. Those industries have not gone away. In fact, their revenues have gone up because of the ease with which these products can be delivered and promoted.

So, the business models have been regularly destroyed, there's still tremendous thrashing about business models, and it's not clear what will prevail. But if you look at the revenues of publishing, music, and movies, they've continued. So there are millions of free songs, books, and movies that you can download legally for free; you can have a very good time with these free media products; but people still spend money for Harry Potter, or the latest blockbuster, or songs from their favorite musical artist. You have an open-source market, which actually provides high-quality products – it's a great leveler and really provides them to anyone regardless of their ability to pay – coexisting with the proprietary market, and the proprietary markets actually continue to thrive.

In education, there's a major revolution coming. You've got hundreds of thousands of kids now in Africa taking courses from MIT, Stanford, Carnegie Mellon, and Oxford for free. Some people say, “Oh, well, OK, it's interesting, but there are only a few subjects available, it's only in English, it's only at the college and graduate level.” But they said the same kind of thing about e-books five years ago. That's when I started my e-book company, and e-books were kind of a fringe thing: yeah, maybe if you go to the beach you don't want to carry a print book, but people really like paper books, and most material is not available, and the reading experience isn't very good. Well, all of those things were overcome, and the significant majority of books now are e-books. The same thing is going to happen with education, and it's going to happen with physical products. We're already getting to the point where we can print out electronics. There's already a wide range of materials: plastics, ceramics, glass, metal. This ring I have is a simple example, but you can print out very complex items. This will revolutionize the world of physical things and make it an information technology.

So, let me talk about thinking, because I've been thinking about thinking for 50 years. I wrote a paper, actually, 50 years ago – well, I started it when I was 14 and finished it when I was 15 – and it described how the brain worked. And I really did it based on my observation of brains and what they could do. And I described the brain as basically a very powerful pattern recognizer – actually a set of recognizers. We have a lot of modules, each of which can recognize a pattern. And that's the strength of human intelligence. Even back then, which was 1962, 1963, computers were better than humans at doing logical thinking. But we had great depth of pattern recognition. And I described how we did everything – including composing music, for example – as a pattern recognition exercise. And these little modules are organized in hierarchies, so we recognize simple patterns, and then those are inputs to higher-level pattern recognizers, and we have an elaborate hierarchy, and we build that hierarchy with our own thinking. That's what this paper said 50 years ago. I wrote a program based on the principles I described to find those patterns in musical melodies and then write original music using the same kinds of patterns as the original songs. So I fed in Mozart and Chopin, and it would write music that sounded like the work of a student of those composers, and that's how I got to meet President Johnson. I actually took that program on I've Got a Secret, which I think is before most of your time. Maybe a few of you remember it.

And I've been thinking about this and working in the field of trying to emulate human intelligence – artificial intelligence – for the last 50 years, and about a year ago it became apparent to me that we were getting the information to actually confirm these theories. So this book actually says the same thing that my paper said 50 years ago, except now we have the neurological evidence for it. These are logarithmic graphs of different types of brain scanning – invasive, noninvasive, nondestructive. We can now see into a living human brain with enough precision to see individual interneuronal connections being formed and see them firing in real time. You can see your brain create your thoughts, and we can see your thoughts create your brain, which is really the secret of human intelligence.

A lot of interesting research came out just as I was sending the book off. Five times I promised the publisher I was going to send it the day after tomorrow, and then some new dramatic research would come out, and I'd say, “No, I've got to cover this.” He said, “You're going to miss the Christmas season; you've got to get me this book.” But one of those pieces of research was this: OK, you've all heard of plasticity, and that we create the connections in our brain as a result of our thinking, but there are units of about 100 neurons where there is no plasticity. The connection pattern is fixed for life; there are no changes within that group of 100 neurons in the neocortex. And that cell structure of 100 neurons is repeated about 300 million times. So we have 300 million of these modules. The connections between the modules have complete plasticity; those we create with our own thinking.

So, these modules are the key units. That's one of the problems I've had with neural net technology: it bases the key unit on a mathematical model of a neuron, with all the connections being formed and weights being computed, based on feedback, for every connection between every neuron. But actually you have a more complex entity that is able to recognize a pattern, and that's fixed – that we're born with. It's the connections between these more complex units that are based on our own thinking. And I described the algorithm that takes place within the unit: it basically looks like a hidden Markov model that can recognize a pattern with a certain amount of flexibility, and the hierarchy basically looks like a hierarchical hidden Markov model, which is technology I helped pioneer in the 80s and 90s. What we were not able to do in the 90s was actually build the hierarchy from the thinking of the system itself. We had to fix the hierarchy initially. So, for example, in speech recognition with a small amount of natural language understanding, we had one level for acoustic states, one level for phonemes, one level for words, one level for simple syntactic structures, and that was fixed. What the brain does is actually learn all of this from its own experience.
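
To make the hidden Markov model idea concrete, here is a minimal sketch of the forward algorithm scoring a sequence against a toy two-state model (the states and probabilities are invented for illustration; real speech systems are far larger):

```python
# Forward algorithm: how likely is an observation sequence under an HMM?
# It tolerates variation because every state path contributes probability.
def forward(observations, states, start_p, trans_p, emit_p):
    # alpha[s] = probability of the observations so far, ending in state s
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[prev] * trans_p[prev][s] for prev in states)
                    * emit_p[s][obs]
                 for s in states}
    return sum(alpha.values())  # total likelihood of the sequence

# Invented toy model: a pattern that tends to rise, then fall.
states = ["rise", "fall"]
start_p = {"rise": 0.8, "fall": 0.2}
trans_p = {"rise": {"rise": 0.6, "fall": 0.4},
           "fall": {"rise": 0.1, "fall": 0.9}}
emit_p = {"rise": {"up": 0.7, "down": 0.3},
          "fall": {"up": 0.2, "down": 0.8}}
print(forward(["up", "up", "down"], states, start_p, trans_p, emit_p))
```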

I'll show you how this works and talk about some of the implications of the ongoing Law of Accelerating Returns for AI, and then for our own thinking. But let me first tell you the story of the neocortex, which goes back 200 million years. A new class of animals evolved, or appeared, on the planet about 200 million years ago, called mammals, and these early mammals were rodent-like creatures with a new brain structure called the neocortex. Neocortex means “new rind,” and it literally was a covering around their small, walnut-sized brains. It was the size of a postage stamp and as thin as one, it covered the brain, and it was smooth. But it provided a new form of thinking that animals without a neocortex didn't have. It had this hierarchical structure; they could change the hierarchy on the fly; and the patterns recognized at each level had some flexibility, so they could recognize a pattern even if it was a little different – part of it occluded, some distortion – it would still be recognized. Their skills were also organized in the neocortex in this hierarchical fashion, so they'd have certain strategies for how to evade a predator, for example, or how to find food. But if that didn't work, they could experiment and try different approaches, and if one worked, they could then reinforce it inside their brains.

Animals without a neocortex couldn't do any of those things. They had fixed behaviors which did not change very easily. Now non-mammalian animals could learn a new skill, but not in the course of one lifetime. In the course of maybe 1000 lifetimes using normal biological evolution, they would evolve a new fixed behavior. That was perfectly fine because 200 million years ago the environment was changing very slowly. It would take thousands of years for there to be a significant environmental change, and over those thousands of years they would evolve a new set of behaviors to cope with that new environment.

And that worked fine for over 100 million years, but then something happened 65 million years ago. We call it, today, the Cretaceous extinction event: a sudden, cataclysmic change in the environment that happened in days or weeks. We think it has to do with a meteor. We see geological evidence of this all around the globe: if you dig down to the layer of rock representing 65 million years ago, geologists will tell you that it records a very chaotic, sudden, violent change in the environment, and we see that evidence everywhere around the globe. Animals without a neocortex could not evolve their behaviors in days or weeks; it would take thousands of years, and so that's why we call it an extinction event.

That's when the dinosaurs went extinct, and hundreds of thousands of species went extinct very quickly. And that's when mammals overtook their ecological niche and became dominant and – to anthropomorphize biological evolution – said, “Ah, this neocortex is pretty good stuff,” and began to grow it. Mammals got bigger, their brains grew even faster than their bodies, and the neocortex grew even faster than the brains. If you look at that image there, you'll recognize the convolutions and fissures of the human brain. That's basically to give it more surface area. You see a little bit of it in cats and dogs – there's some beginning of these convolutions – but a primate brain is very convoluted.

So, it's still a thin structure. If you took the human neocortex and stretched it out, it would be a flat structure about the size of a table napkin and about as thin as one. But because these convolutions and ridges are so profound, it's actually now 80% of the weight of the brain, and it's where we do our thinking. And we still have that old brain; that's the other 20% of the weight. Actually, the old brain is more than 20% of the neurons: the cerebellum, which used to be responsible for a lot of our behavior (though a lot of that's been taken over by the neocortex), has a larger number of neurons, but they're very small neurons. But the neocortex is where we do our thinking.

So, we have the amygdala, which is programmed to put out a whole cascade of hormones and so on if it gets a signal that there's something to be afraid of, some danger. It has an ancient program, putting out different hormones to prepare us for fight or flight. But our amygdala no longer has to make the decision as to what to be afraid of. Your boss enters the room, and whether that triggers laughter or fear, that's up to the neocortex. The old brain provides our basic drives – sexual conquest, aggression – but those can be sublimated by the neocortex into writing a poem, or a computer program, or a book, or organizing a conference on the future.

So, our neocortex is the great sublimator, and it's where we do our thinking. I'll show you how it's organized and then link that to the Law of Accelerating Returns. But first, one of the interesting pieces of research on the connections between one module and another: the connection pattern is actually already there, but it's not connected. It's really like the streets and avenues here in Manhattan – there's a cross-network between all of these modules. And so if the neocortex decides, “OK, the output of this pattern recognizer should feed into that one, because there's a higher-level pattern, so we've got to connect these two,” it will find a street and an avenue that connect them and then make a final connection between those axons.

So, this network, this two-dimensional axon network, is already there. We actually have twice as many of these connections when we're born as we do as adults; half of them die out because they're never used. These are basically connections in waiting. And another interesting piece of research has to do with the complete interchangeability of the neocortex.

Now, you might think that the frontal cortex, which is associated with higher-level thinking like irony and beauty and love and humor, must have structures and algorithms much more complex than the ones that recognize the edge of this stage, or the crossbar in a capital A. They're actually completely the same. The difference is where they stand in the hierarchy. And a very dramatic piece of research that came out (that was the last time I held up the book for the publisher) asked what happens to V1 – the area the optic nerve spills into, generally associated with the very first stage of processing of visual images, very low-level primitive features like the edges of objects or the crossbar of a capital A – in a congenitally blind person. It's not getting any visual images. Does it just sit there, thinking, “Well, maybe we're going to get visual images again some day, so we'd better be ready”? Or does it do something else? It actually gets harnessed by the frontal cortex to help it with the very high-level features of language, like humor and irony – really at the opposite extreme of the continuum of complexity of features. This shows complete interchangeability: V1, which has generally been regarded as a very primitive part of the neocortex, can do the same things that the frontal cortex does, dealing with these high-level features. And we also know from strokes that if one area gets knocked out – let's say the fusiform gyrus, which, when the flow of information operates normally, is the area that recognizes faces – then these people can't recognize faces, but they learn to do that again by basically relearning it in another area of the neocortex.

The physical similarity of the neocortex was actually noticed, around when I first wrote that paper 50 years ago, by the neuroscientist Vernon Mountcastle. He examined, just visually, in autopsies, all the different regions of the neocortex, and he saw that it all looked the same – the same arrangement of neurons, the same connection patterns – and he said, “Neocortex is neocortex,” because it was all the same. And neuroscience research can now actually see that: it shows this complete interchangeability of different areas of the neocortex.

Now, what happened in humans that let them talk? One significant event was the emergence of mammals and of the neocortex on earth; then the Cretaceous extinction event, which led to the dominance of mammals and to the very rapid evolution of mammalian brains and the growth of the neocortex; and then something else significant happened: humanoids came along. And we have this big forehead. If you look at other primates, they have a slanted brow. They don't have this big forehead; they don't have the frontal cortex, which is basically a greater quantity of neocortex. As I mentioned, it's not qualitatively different, and other animals have enough neocortex to provide problem-solving for the basic challenges of being that animal. We could cope with our biological needs too, but we now had an additional quantity, and that additional quantity was the primary enabling factor for us to develop language, and art, and science, and technology, and conferences. No other species has done that – we don't know of conferences on the future held by any other species – or developed an evolutionary process of toolmaking.

Yes, we notice that chimpanzees can sharpen a stick and use it as a tool to forage for food, but it's not advanced enough for them to create the next generation of tools using those tools. We basically went beyond that threshold where we could create this whole evolutionary process of toolmaking. We can create language, and we can create structures with language, and we build one layer of knowledge upon another. Only humans do that. Only humans have art, and science, and technology, and basically this additional quantity of neocortex was the enabling factor for that.

So, how it works is: we have 300 million of these little pattern recognizers, and they're organized in hierarchies. To give you a very simple example – by the way, there's a lot of redundancy, so you have more than one module recognizing important patterns like the crossbar of a capital A; you actually start out with a very large number and prune it back in order to free up more neocortex to learn new things – I've probably got hundreds of little modules that recognize the crossbar of a capital A, and that's all each one cares about. A beautiful song could play, a pretty girl could walk by; it doesn't care. But when it sees the crossbar of a capital A, it gets very excited, it says “crossbar!”, and it sends out a high probability on its output axon. That goes up to a higher level of abstraction, where the next module might recognize a capital A, because it's also getting other topological features from other recognizers, and it may go “capital A!”, and that goes up to a higher level that might say, “Oh, the printed word Apple!” If that recognizer is getting signals for A-P-P-L but not E, it'll say, “Well, there's a good chance we're going to see an E, because I've seen every other part of this pattern,” and it sends a signal down to all the E-recognizers saying, “I think an E is on the way; be on the lookout for it.” Those E-recognizers will lower their thresholds; they may see some smeared thing that ordinarily they wouldn't call an E, but now they say, “Oh, good enough,” and send up an E signal, because it was expected.

So, information goes up and down the hierarchy in this way. And it can go sideways, by going up and then down another branch; this is how information flows in the neocortex. Each of these modules is about 100 neurons, and it's the connections between the modules, as I mentioned, that we create with our own thinking. And we can create one conceptual level at a time.
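
Here is a minimal sketch of that up-and-down message passing – the top-down “expect an E” signal lowering a lower-level recognizer's threshold. The class and numbers are invented for illustration, not a model of real cortical circuitry:

```python
# Toy recognizer with top-down priming: a word-level module that has
# matched "A-P-P-L" tells the E-recognizer to accept weaker evidence.
class Recognizer:
    def __init__(self, name, threshold):
        self.name, self.threshold = name, threshold
    def fire(self, evidence):
        # evidence is a value in [0, 1]; fire if it clears the threshold
        return evidence >= self.threshold
    def prime(self, factor=0.5):
        # top-down expectation from a higher-level module
        self.threshold *= factor

e_recognizer = Recognizer("E", threshold=0.8)
smeared_e = 0.5                      # a smudged glyph: weak evidence
print(e_recognizer.fire(smeared_e))  # False – normally rejected

e_recognizer.prime()                 # "I think an E is on the way"
print(e_recognizer.fire(smeared_e))  # True – "oh, good enough"
```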

So, I've been watching this in my 20-month-old grandson. He's been successfully laying down one level after another. Children have a lot of virgin neocortex that's not being used. That's why they can learn language so easily. By the time we're 20 we've used it up and we need to actually forget something old to learn something new. Very often that doesn't have to be an entire subject. It can be just the redundancy that we don't need. I may be able to get along with just 100 recognizers for a crossbar in a capital A whereas maybe I needed ten thousand when I was 20.

So, I've freed up that capacity to learn new material. Some people are better at that than others. Some people are very happy with the information they have in their neocortex and don't want to learn anything new. Being open to new information is a critical part of creativity.

So, let me make one last point, and then leave some time for questions: What's going to happen in the future? I mentioned that, you know, these devices we carry around, our “brain extenders,” they're basically extenders into the cloud.

So, you know, this phone is several billion times more powerful per unit of currency, per constant dollar, than the computer I used as a student, but that's not even the main event. It's really a gateway to the cloud. If I do a complex search or a language translation, or I ask Siri or Google Now a question, it goes out to the cloud. It doesn't take place in that little rectangle. And the cloud is pure information technology, and it's doubling in power every year.

So, this is basically a gateway from my brain to the cloud, but it has to be mediated by my fingers and this clumsy interface. We'll ultimately put these gateways in our brains. I mentioned the shrinking of technology – another exponential trend; we saw it in the shrinking feature size of three-dimensional printing, improving at a rate of 100 in 3D volume per decade – and we're doing the same thing with the size of our devices. People are already putting computers into the body and connecting them into the brain, like Parkinson's implants. Those are progressing exponentially: you can now actually connect them to hundreds of points in the brain, and their power is growing exponentially. They're already very small, the size of a pimento, so they can be placed with minimally invasive surgery – but it's still surgery. They'll eventually be the size of blood cells.

By the 2030s, we'll be able to send them into the bloodstream. We'll be able to augment our immune system with intelligent nanobots that can detect and cure disease beyond what our natural immune system can do – our immune system, for example, doesn't recognize cancer; it thinks that's you – but more importantly, they will go inside the brain and basically be a gateway to the cloud, just as we have gateways to the cloud that we hold in our hands today. And so if I'm walking along and someone is coming, and I want to impress that person with some clever thought and my 300 million pattern recognizers aren't going to do it, I'll be able to access a billion or ten billion modules in the cloud – just the way today, if you need 10,000 computers for three seconds to do some complex operation like a language translation, you can access them in the cloud. And the cloud is pure information technology.

So, it's doubling in power every year, and we'll basically expand our neocortex. There are other advantages: it'll be backed up, we'll be able to download new skills into silicon or synthetic neocortex, and we'll basically expand our thinking. And remember what happened the last time we expanded our neocortex, by developing these large foreheads: that additional quantity was the enabling factor for a grand qualitative leap – creating language, art, and science.

So, what's going to happen the next time? Well, imagine asking a pre-language humanoid, “Gee, this is exciting – you've got this additional neocortex; what's it going to be like when you invent language, and art, and science?” Of course, he wouldn't be able to answer the question. We can actually understand the question now, and we can answer it, at least by analogy. We'll be able to make a grand qualitative leap as we expand our neocortex.

And, as a concluding comment, that's what I'm doing at Google. I gave a copy of this book to Larry Page in July, and he asked me to join Google and basically create a synthetic neocortex based on the ideas in the book, and use it to understand natural language, so that the Google computers can do search based not just on words. It's remarkable how well that works, but it doesn't really understand concepts: across all the billions of web pages and billions of book pages it examines, it's looking for sequences of words and word patterns, not actually understanding the semantic content. So we're developing technology that can actually understand the content of all that material.

A good example of that is IBM's Watson, which got a higher score than the best two human players put together. It answered, for example, this query in the rhyme category, “a long, tiresome speech delivered by a frothy pie topping,” and correctly responded, “What is a meringue harangue?” It got its information by reading Wikipedia. It was not programmed fact by fact by the engineers; it read 200 million pages of natural language documents. It doesn't understand those pages as well as you or I do: it might read one page and say, “Ah, there's a 56% chance that Barack Obama is president.” You could read that page and, if you didn't happen to know that ahead of time, conclude that there's a 98% chance that Obama is president. So you did a better job of reading that page, but Watson has read more pages – a lot more: 200 million pages, all of Wikipedia and many other natural language encyclopedias and extensive databases of natural language text. It understood them well enough, and it has a good Bayesian reasoning system, to reason across, let's say, the hundred thousand pages that deal with Barack Obama's presidency and conclude that overall there's a 99.99% chance that Barack Obama is president. And it did better than the best two human players on a very broad task.
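
As a rough sketch of how many weak page-level readings can add up to near-certainty, here is naive-Bayes log-odds pooling over invented numbers (not IBM's actual system, which is far more sophisticated):

```python
import math

# Each page gives weak, roughly independent evidence (e.g. 56%) that a
# statement is true; in log-odds form, the evidence simply adds up.
def pooled_probability(page_probs, prior=0.5):
    log_odds = math.log(prior / (1 - prior))
    for p in page_probs:
        log_odds += math.log(p / (1 - p))  # one page's contribution
    return 1 / (1 + math.exp(-log_odds))

# Invented numbers: pages that are each only 56% sure Obama is president.
print(pooled_probability([0.56] * 10))  # -> ~0.92 after ten pages
print(pooled_probability([0.56] * 40))  # -> ~0.9999 after forty
```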

So, that's the task; that's what we're trying to create. We've got some ideas for how to go beyond Watson – extend it beyond 200 million pages to the 10 billion pages on the web and the 10 billion pages in all the books in the world, and actually understand all of that – and then you'll be able to talk things over with it using natural language. It will understand on a deep level what you need, and it will actually be an intelligent assistant to you.