SIM, Emulation, Structure & Function

Transcript of the interview conducted with Dr. Randal A. Koene by Adam A. Ford at Fitzroy Gardens, Melbourne, 16 August, 2012. (The interview was part of the events surrounding the Singularity Summit Australia 2012.)

This article has now been reposted at H+ Magazine: http://hplusmagazine.com/2012/09/18/sim-emulation-structure-function/

The interview is available on YouTube at: http://youtu.be/VpNtCsQDrjo

SIM: Substrate Independent Minds

SIM stands for Substrate-Independent Minds. It is the notion that you can take the functions that are going on inside of our brain that produce mind and have them carried out in different kinds of implementations, just like you could have computing code running on different types of platforms.

It relates closely to two other terms. It relates to the concept of Mind Uploading, which means taking the functions of the mind and moving them to another substrate, and it relates to a specific approach, the most conservative one that we have currently, called Whole Brain Emulation: the notion that you need to actually emulate the mechanisms, the functions of neurophysiology and neuroanatomy, at a detailed level to get mind functions to work.

Differences between Simulation and Emulation

It's not really a strict difference. It is a matter of how we choose to use the terminology in this field, and because we like to make a distinction we make it using the words simulation and emulation. By simulation we mean building a model that is a general model of how some piece of mind somewhere could work. So, it's an average of what you would find in different animals or different people, whereas an emulation is a very specific reconstruction of neural circuitry such that you get the same function, the same exact activity that you would find in a specific case and in one specific piece of circuitry.
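To make the distinction concrete, here is a minimal, hypothetical Python sketch; the neuron counts, the parameter values, and the choice of membrane time constant as the parameter are all invented for illustration, not taken from the interview. The point is only that a simulation draws on population averages, while an emulation keeps the measured parameters of one specific circuit.

```python
# Toy illustration of the simulation/emulation distinction described above.
# All numbers and neuron counts are invented for the example.

import statistics

# Suppose we had measured membrane time constants (ms) from several animals:
measured_taus = {
    "animal_A": [18.2, 21.5, 19.9],
    "animal_B": [22.1, 17.4, 20.3],
}

# SIMULATION: a generic model built from the *average* of what we find
# across animals -- a plausible circuit "somewhere", not any one in particular.
avg_tau = statistics.mean(t for taus in measured_taus.values() for t in taus)
simulated_circuit = [{"tau_ms": avg_tau} for _ in range(3)]

# EMULATION: a reconstruction of *one specific* circuit, neuron by neuron,
# aiming to reproduce the same activity as that particular piece of tissue.
emulated_circuit = [{"tau_ms": t} for t in measured_taus["animal_A"]]

print("simulation uses one averaged parameter:", simulated_circuit)
print("emulation keeps each measured parameter:", emulated_circuit)
```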

Why pursue a conservative approach to SIM?

Well, it's really that we don't have a choice at the moment. We don't have the choice to use a non-conservative approach, because we don't understand enough about the higher levels of mental functioning to be able to do something like a top-down approach, or to create models of mind that are very optimized, that contain different, converted versions of how mental functions could be implemented.

All we have right now is 100 years of neuroscience looking at the very bottom levels: looking at what types of neurons are there, what types of synapses are there, how do you identify them, and how do you measure from them. That's the kind of information we have and the sort of knowledge we can use. That's why we take the most conservative approach.

Advances in Technology to Aid Neuroscience

On the one hand it'll help make a difference in understanding what we find in the brain, because being able to reconstruct the circuitry is still a step removed from understanding what that circuitry is doing. The understanding step will be aided by all the knowledge we acquire in those decades.

On the other hand, knowing more about the brain in that sense of having better tools will also allow us to make better implementations of mental functions. We will be able to come up with more efficient solutions and solutions that are able to do more.

Preserving Identity: An Exploratory Problem

It's also an exploratory problem. Again, it's this question of: we have a system that's fairly unknown and we want to draw a line around it somewhere and say, “this is the part we're going to describe, because we think that's the one that contains the interesting effects.”

Then, if you find that this hasn't captured the interesting effects, there are two areas you can look at. You can look at the resolution: maybe you're not including enough of the signals that you should be looking at. Or maybe you're not including enough of the scope: maybe you are not taking into account as much as you should be. So, maybe we should be doing simulations of the cerebral cortex plus the spinal cord plus the nervous system in our gut.

But possibly not. I think it's a good place to start to just say: let's look at the brain, the part that is inside of our head, and look at simple signals like spikes to begin with, and then work from there.

Resolution

We really don't know yet how precise that resolution needs to be. All we know is that the brain has a certain resolution, because we have individual neurons. We don't just have big clusters that are indistinguishable, where something's happening. So, there is a probability that each of these individual neurons, and what it is doing separately, counts; that it matters.

My tendency would be to think that it is a good idea to get down to that resolution, but we really don't know. It might be that we can just treat clusters as a single unit.

Grasping Structure & Function

That's really a big problem, and as I described in my talks, this is a matter of grasping structure and of grasping function. The structural part is something that I think we will be able to solve much sooner than the functional one, because there are many more tools appearing these days to deal with what we call connectomics: being able to acquire the morphology of the system and the structure, the connections between all the different neurons.

On the functional side, we're not that good yet. We can get a global idea of activity in the system at a lower resolution using devices like MRI, or we can look at a sort of pin-prick within the system by taking electrode readings at various locations.

Then there are a few new technologies, such as something called a molecular ticker tape or wireless neural probes, that are being developed and that should be able to give us a much higher resolution and register from many more neurons at the same time. But these don't exist yet; they are in development, though if I've understood correctly, the first versions of the molecular ticker tape have just become available for testing, so that should be really interesting.

I think this is all happening within the next five to ten years. It's happening really quickly, because a lot of the attention in the tool building area of neuroscience has been focused on that now. It's very clear among those people who understand this problem and understand about system identification in neural circuitry that we have a lot of the tools we need for structure, although those can still be improved and perfected, and we need way more in the area of functional recording.

So, this is happening very quickly, and it includes developments that we've seen over the past couple of years, like the development of optogenetics, which makes it possible to select specifically which neurons are on and which are off, so that you know which ones you'll be recording from.

SIM: How many synaptic connections do we need to model?

Well, this is the same as with the question of resolution in cells. Again, we really don't know. It's a matter of finding out empirically, testing this. So, you start with a model that assumes some things. We either assume that we need all of them, and then we try to map them all in a small piece of neural circuitry, or we assume that we can just measure the strength of connectivity between two neurons and represent that as a single connection, no matter how many synapses are actually there, and use that. I think we're going to be testing both of those approaches and see which one works best.
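As a concrete illustration of those two representations, here is a small, hypothetical Python sketch. The synapse strengths and the simple summation used to collapse them into one effective weight are assumptions made for the example, not a claim about how the aggregation would actually be done.

```python
# Toy illustration of the two connectivity representations discussed above.
# Synapse strengths are invented numbers; real values would have to come
# from reconstruction or physiological measurement.

from collections import defaultdict

# Detailed representation: every synapse between neuron "a" and neuron "b"
# is kept as its own entry (here, three separate synapses).
synapses = [
    {"pre": "a", "post": "b", "strength": 0.12},
    {"pre": "a", "post": "b", "strength": 0.05},
    {"pre": "a", "post": "b", "strength": 0.08},
]

# Reduced representation: collapse all synapses between the same pair of
# neurons into a single effective connection weight (here, by simple summation).
effective = defaultdict(float)
for s in synapses:
    effective[(s["pre"], s["post"])] += s["strength"]

# The pair ('a', 'b') now maps to the sum of its individual synapse strengths.
print(dict(effective))
```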

The Human Connectome Project

There are actually a number of these projects now. There is the big one in Europe, that's true, and there are a few others that are popping up, although some of them are not funded as well.

I think it's just a sign of the fact that the connectome only really came into the general understanding of neuroscience back in 2008; that was the first time it really popped up as a topic. This isn't that long ago, but four years later it's a hot topic everywhere, and everyone wants to develop tools and extract data and understand more about the human connectome and represent it.

One thing that I find a bit of a problem is that, because it's a hot topic and because it means that funding is going into it, people are jumping on the bandwagon and changing the definition of what the connectome is.

When the connectome was originally approached, it meant getting all of the detailed ultrastructure of the brain: where the dendrites head, where the axons go, where the synapses are. But now there are a lot of projects that do things like diffusion tensor MRI, which is really just a way of looking at large pathways, at large bundles of nerves that are heading through the brain. So, it's not the same resolution at all, and that's also being called connectomics. I guess that may also be useful, but it is diverting some of the funds that should really be used for the more detailed ultrastructure work.

What has happened in the last 10 years to move us closer to SIM?

There are very many things in the last ten years that have moved us closer. They relate directly to the different parts of what we call the roadmap for whole brain emulation.

We already talked a little bit about data acquisition on the structural side and on the functional side, and we see that on the structural side, where the connectome was concerned, a lot has happened even just over the past five years. On the functional side, a lot is happening right now.

The rest of the road map, if we look at what's needed there, for example, computation, emulation platforms, things like neuromorphic hardware, we see that a lot has happened there as well. We've seen big programs appear like the DARPA SyNAPSE project, which is specifically about creating a neuromorphic chip, something that would work in a similar fashion as neurons do.

And then, on the other side, when it comes to doing experiments and testing hypotheses, over the past few years we've seen increasingly detailed models of neural circuitry. For example, in 2011 there was this wonderful set of papers by Briggman et al. and Bock et al. that showed at the ultrastructure level that you can reconstruct something and then say something about function. So, it's really a proof of concept of Whole Brain Emulation.

And we've also seen work being carried out in the neuroprosthetic area. Take, for example, Ted Berger's work, which has certainly been going on over the entire span of the last decade, and has been figuring out how to do system identification in neural tissue and create chips that carry out the same function as the tissue that used to be there.

So, we've seen work in the hypothesis testing area and in how you build these models. We've seen work in the emulation sphere, making platforms on which it could run. And we've seen work on the data acquisition side. We haven't seen as much work in the last ten years on integrating all of those things, but I guess it's only natural that that happens last. That's one of the things we get to now.

Integration

A symposium that I've organized in conjunction with the Annual Meeting of the Society for Neuroscience, which is being held in New Orleans this year, in October, is specifically about that. We're trying to gather the people working in these different projects and talk about integration of their data.

We're also taking this forward by presenting all of the parts of the road map, and the main players in it, at the Global Future 2045 Congress (GF2045) that's going to be happening in New York City in 2013.

Those are just some examples of the efforts that we're making to do this integration step, but of course that's really the main work of carboncopies.org: keeping track of a road map, understanding the big picture, and dealing with the integration of the various projects.