The Restless Brain
10/09/14 | 59m 10s | Rating: TV-G
Marcus Raichle, Professor, Washington University School of Medicine, St. Louis, argues that the essence of the function of the brain involves information processing for interpreting, responding to and predicting environmental demands.
>> As most of you who have had to put up with me know, I've introduced about 57 speakers in this seminar series. And many of them are very, very accomplished people, like the former president of the National Academy of Sciences or the current head of the American Association for the Advancement of Science, but Marcus Raichle is at the very pinnacle of his field. And for someone like me to introduce someone like Marcus is a genuinely humbling experience. His accomplishments are phenomenal, and he's still going strong. So Marcus has a medical degree as well as a bachelor's degree from the University of Washington, and then he came all the way across the country to Baltimore City Hospitals, and then he worked at Johns Hopkins, and then he worked his way through various medical institutions, including the United States Air Force School of Aerospace Medicine in San Antonio, Texas, and finally he worked his way to Washington University in St. Louis, Missouri. >> Woo!
LAUGHTER
>> And Marcus is actually a professor of radiology, neurology, neurobiology, biomedical engineering, and psychology. He's been appointed a professor at Wash U in each of those fields. He has received many honors. He was elected in 1992 to the Institute of Medicine, in 1996 to the National Academy of Sciences, and in 1998 to the American Academy of Arts and Sciences. And in 2014, he shared the Kavli Prize in neuroscience, a prize which is of particular interest to us because the Kavli Foundation is sponsoring this seminar series. So we're very pleased that they chose Marcus, and he is recorded on your announcements as a Kavli Foundation lecturer, which was something that the Kavli Foundation demanded for each of the people that they were supporting, but he's the only one so far who has been recorded as a Kavli Prize winner. He has published 244 papers. Thank goodness he numbered them.
LAUGHTER
And 143 book chapters. And just to close out the introduction, since 1976 he has been the oboe and solo English horn player in the St. Louis Civic Orchestra.
LAUGHTER
Marcus.
APPLAUSE
>> Thanks, Rod, for that nice introduction. It's a pleasure to be here, for sure. A challenge to give a talk to such a diverse audience, so I hope there's enough for most and something for everybody in what I have to say. I want to talk about what I call the restless brain. The whole idea here is that your brain is very busy. In fact, it's maximally busy virtually 24/7. And yet we've given very little thought to what that really means. And so I want to explore that in a whole variety of different ways. But one of the things that I'm going to be using along the way, of course, comes from my background. I've been in the brain imaging business virtually since it began some 40-odd years ago. And so, just to be sure that we're all on the same page here, I will refer particularly to positron emission tomography at some point in this, and for those of you who are not familiar with it, this is essentially an imaging technique applicable to humans that does in vivo tissue autoradiography. So we can measure all sorts of metabolic and circulatory parameters in the brain, and I will refer to them. I'm not going to get into the details of how we do it, but just to let you know that you will see data from such a machine. And then the more familiar tool that I'm sure you all know is magnetic resonance imaging, and we use that extensively. And, of course, all of these had their birth in the early 1970s, when the top image on that screen, X-ray computed tomography, was invented by Sir Godfrey Hounsfield and introduced the whole modern era of imaging the body and the brain. This has produced not only a revolution in the practice of medicine and how we interrogate the human body in health and disease, but with the sister techniques of PET and fMRI, we've gotten very much more information than I could ever have imagined when I went to Washington University 42 years ago. So that's just a bit of background, but without much in the way of details.
Now, the other thing that's important maybe to keep in mind, as some of you I'm sure know a lot about this, others of you may not be familiar, but I will be talking about imaging data and showing changes in how the brain expresses its activity using these techniques. And, again, it's probably helpful to tell you just a tad up front about how those signals arise. I'm not going to get into much detail about this, but there are some basics; I don't want to leave too many people behind here. And what we've known, actually, for a very long period of time is that if we present a stimulus, say we give a grating like this to you as you lie in a scanner, this is like a blockbuster stimulus to the visual system. So we expect something to happen, and we expect a change in the blood flow to this area of the brain, which you see right here. And it was always assumed that when you did this, the reason the blood flow went up was simply because your brain needed more oxygen. After all, if they took the oxygen out of this room, in about 10 seconds you'd all be unconscious. So it seemed reasonable that if your brain does more work, you would need more oxygen. It turns out that isn't the case, actually. You have plenty of oxygen in your blood. And this bloody thing is very sensitive.
LAUGHTER
So it may get ahead of me, but bear with me. So what happens here is that the oxygen consumption of the brain doesn't really change very much, and as a result of that, you get more oxygen in the area where the brain changes its activity. Now, PET tells us about blood flow, but when we're looking at MRI, we're looking at these changes in local oxygenation. And very nicely, a number of years ago, following on a long history, every time you think we've really discovered something new, you find out that somebody actually had noticed it earlier on. And the basis of fMRI, the basic idea, not the imaging, came out of Michael Faraday, because he was interested, as you know, in the properties of molecules and how they would behave in a magnetic field. And in the course of his work, he actually looked at dried blood. And by god, it didn't have any effect on a magnetic field, and he wrote in his lab notebook, "should try liquid blood."
LAUGHTER
And it was Linus Pauling actually, in the year I was born, 1937, who for some reason or other had found this. He must have been a voracious reader, because Faraday kept detailed records. And he did look at liquid blood and found that, by gosh, if blood is non-oxygenated, it disrupts the magnetic field, and if it's oxygenated, it doesn't. Well, MRI is a magnetic field. So it was Seiji Ogawa at the Bell Laboratories who knew this history and knew the physiology, and he did the experiment that is shown here. And what he did was put a little rodent in a high-field magnet, and, of course, the little rodent brain took oxygen out of the blood. You can see these little black lines, and those are deoxygenated veins. Then he just switched the animal to breathing 100% oxygen, and the veins disappeared. So that is the foundation of fMRI. It's more sophisticated, but you need to know that much. So this led to the development of fMRI and the so-called blood oxygen level dependent signal, or the BOLD signal, which is ubiquitous today in studies of the human brain. It's hard to escape its presence. So I just wanted you to know that little bit about the background to the techniques that I will talk about here today, because I'm not going to spend more time on the details. So what I want to talk about now is this issue of the restless brain. This particular picture in front of you is a rather nifty one. It is taken with a combined PET and MRI machine, which allows us to look at the metabolism of the brain, in this case its use of glucose, and also the structural components of the body. And this is just a person like many of you in this audience lying quietly in a scanner doing nothing, but awake. And you can't possibly miss the fact that the most active organ in that body is that brain. It's really dramatic. And if you want to put actual numbers on this, it's 2% of the body's weight and 20% of its energy use.
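Those last figures are easy to sanity-check with back-of-the-envelope arithmetic. Assuming a typical resting metabolic rate of about 1,700 kcal/day for the whole body (that rate is an assumption for illustration, not a figure from the talk), a 20% share lands right in the neighborhood of the 15 watts quoted below:

```python
# Sanity check of the brain's power budget (2% of body weight, ~20% of energy use).
# Assumption: whole-body resting metabolic rate of ~1700 kcal/day.
KCAL_TO_JOULES = 4184            # 1 kcal = 4184 J
SECONDS_PER_DAY = 24 * 60 * 60

resting_kcal_per_day = 1700
body_watts = resting_kcal_per_day * KCAL_TO_JOULES / SECONDS_PER_DAY
brain_watts = 0.20 * body_watts  # the brain's ~20% share

print(f"whole body: {body_watts:.0f} W, brain: {brain_watts:.0f} W")
```

The result, roughly 16 watts for the brain against about 80 watts for the body at rest, is why the comparison with a laptop is apt.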
It's much higher in the developing brain, and I'll briefly come to that much later on. So the big question on the table here, then, is, what is responsible for this cost? I should also point out, and this is something that computer scientists who are trying to figure out how to do computations efficiently ought to pay attention to, how the brain does it, because that brain runs on about 15 watts. And so that's vastly cheaper than even this laptop in front of me. So it's incredibly efficient, but relative to the body it's quite expensive. So the question is, what accounts for this cost? And this centers around, again, a set of issues that have been around in neuroscience for a very long period of time. And that is, fundamentally, how do you think about how the brain actually works? One view is probably best associated with Sir Charles Sherrington, in which the idea is, if you will, that your brain is sitting there waiting for you or somebody in a white coat to come along and ask you to do something. It's basically reflexive. So that when we look at the images that we see, it's, if you will, a reflexive response to the incoming stimulus. And obviously this kind of viewpoint, which permeates research at all levels, whether you're recording spikes with electrodes or putting somebody in a scanner or everything in between, has been the dominant way to look at it. The alternative, which was actually put forth by one of his students, T. Graham Brown, was, wait a minute, it isn't that way at all. The brain is actually active all of the time. It has a complete set of programs and responses that it can bring to bear on the issues at hand, and it does so in a predictive mode. Now, that idea is much harder to test, because what you're asking is, what's it doing when you're not observing anything? So I'll come to that issue, but the first question is, is this even a worthwhile debate? Should we care about this?
And so I want to ask the question, essentially, how do you adjudicate that? What sort of measures? And I'm just going to use two things: cost and connectivity. Connectivity, how much of the real world are we connected with? So let's deal with the cost first. We've actually had the answer to this for a long time, despite the fact that the question has not really been addressed until more recently. Right after the Second World War, Seymour Kety and many of his colleagues, but Seymour in particular and Carl Schmidt, came up with a way in which, if you put catheters in the jugular bulb and the femoral artery, you could sample what went into the brain and what came out of it, and, therefore, you could measure the blood flow and everything that it used. So you could actually measure the cost of running it. And what came out of that was that it uses about 20% of the budget of the body. So these people then proceeded to do the obvious thing. They said, well, what if we asked you to do something, gave you a difficult mental arithmetic test? So this is the effect of mental arithmetic, but keep in mind we're measuring the whole cost, the actual energy cost of running the brain. They asked the individuals to do a difficult mental arithmetic task, and lo and behold, as they write here, there were no changes in blood flow or oxygen consumption, which is the main measure of ATP consumption in the brain. No change at all. So, in a sense, we've had an answer to this question in terms of cost since 1955. So the only conclusion one can make from this, and there's other evidence you can bring to the table, is that the changes that we spend an awful lot of time, all of us, looking at are really quite cheap, actually. They are a few percent of the actual running cost of the brain, and we can compute that rather accurately in most instances. So that's part of the thing. The other deal is I think we all have this sense that we're really connected to the world around us. I look out at you and the details are amazing. We could talk about this in bits per second, or whatever it is in terms of information.
And likewise, if you're still awake, you can see me and hear me, and it seems very real, as if a movie were playing out in your brain somewhere. The question is, is that really what's happening here? Well, we again, surprisingly, have information that speaks to this. We know from work by Charlie Anderson and Dave Van Essen and others that about 10 to the 10th bits per second, that's a lot of stuff, not all of it but a lot of it, gets to your retina. But only about 10 to the sixth bits per second gets to that way station called the lateral geniculate nucleus, and only about 10 to the fourth reaches the cortex. Now, if you can do the math in your head, that's one part in a million. One part in a million. If we took this room and we compressed it to that degree, the question is, what exactly is the brain seeing? I used to say that it's impoverished data, but maybe the word compression, in the modern computer parlance, is a better way of thinking about it. You go from a TIFF image to a JPEG. You're doing something to something here, but the brain is left with a major problem of reconstructing the actual reality of what is out there. This is seriously counterintuitive, at least to me. But these are the facts, and so, as we think about how the brain works, we have to consider these kinds of observations. And again, it's one of those things that people have thought about before. William James, I love this quote, if you're ever hard up for a quote, haul out the Principles of Psychology, by god. His definition of attention is as good today as it was when he uttered it in 1890. But he says that "Enough has now been said to prove the general law of perception, which is this: that whilst part of what we perceive comes through our senses from the object before us, another part, and it may be the larger part, always comes out of our own head." Sobering idea. But that was 1890, and for those of you who feel a lot has happened and we surely must know more, I always like this quote from Vernon Mountcastle. For those of you who know him, he was one of the preeminent neurophysiologists of our time. He's still alive but was most active in the last century. And he said, and I knew Vernon, so it was a statement which surprised me, but when he says it, it gives it added import, he said, "Each of us believes himself to live directly within the world that surrounds him, to sense its objects and events precisely, and to live in the real and current time. I assert that these are perceptual illusions. Sensation is an abstraction, not a replication, of the real world." So this puts the brain, in my humble opinion, in a very special role in terms of predicting the world in which we live. I would have to conclude that the activities of the brain are mainly intrinsic and ongoing and largely non-conscious. If you actually extend the analysis to the bandwidth of conscious awareness, it's less than a hundred bits per second. So your intuitions about how this brain is actually working are at a fairly minimal level, actually. And again, these are somewhat difficult ideas, if you haven't thought about this before. But if we are to get a realistic handle on how this organ works, we're going to have to deal with this. So the question now, and we're right back where we were a little bit ago, is, well, okay, if that's the deal, how do we study the brain? If what we're deeply interested in is what it's doing in preparation to do something, if you will. Or when you're seemingly doing nothing. Well, here's the way, and again, serendipity in science, which is part of the fun of it, plays a huge role. It sure has in my life. And so here's a typical example, these are actual brains.
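The bandwidth bookkeeping above is worth making concrete. The figures are the order-of-magnitude estimates cited in the lecture, and a few lines of arithmetic reproduce the "one part in a million" compression, plus the even steeper drop down to conscious awareness:

```python
# Order-of-magnitude information rates for the visual system, as quoted in the talk.
retina_bps = 10**10       # bits/second arriving at the retina
lgn_bps = 10**6           # bits/second reaching the lateral geniculate nucleus
cortex_bps = 10**4        # bits/second reaching visual cortex
awareness_bps = 100       # rough bandwidth of conscious awareness

print(f"retina to cortex:    1 part in {retina_bps // cortex_bps:,}")
print(f"retina to awareness: 1 part in {retina_bps // awareness_bps:,}")
```

Nothing here is measured; these are the cited estimates, and the point is only the ratios.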
And there's a task state here. I don't expect you to ferret it out of that picture. And there's a resting state here. There is an actual difference, and, as I indicated, it's pretty doggone small. And we are clever at finding it, as we do in this case. But in the course of the work that we were doing in St. Louis, we had been looking at simple sensory stimuli, and the control state for a simple sensory stimulus is no sensory stimulus. So essentially a resting state, because when you get to more complicated experiments, where we're working with language or memory or learning and all this kind of thing, control states become very, very sophisticated. But if the light is on, the only alternative is the light is off. And so we just continued to do this. We collected this sort of thing. And then we began to notice, I can't even remember when I first noticed this, that, by god, every time we did an experiment where you engaged in a particularly challenging, non-self-referential, novel task, areas of the brain decreased their activity. The part of it that I most often noticed was this area on the posterior medial surface of the hemisphere, the posterior cingulate and precuneus. And I created a file, because I hadn't a clue as to what was going on here. The file was entitled MMPA, for medial mystery parietal area.
LAUGHTER
And it kept showing up on the scans. And then Gordon Shulman, who was in the lab and still is, actually, had a very different agenda. He was interested in finding the brain's attention system. So he put together nine PET experiments, there was no fMRI at this time, 134 people, looking for commonalities related to attention. He never found them, and that's another story, because we've since found them. But what became very apparent was this common theme in the configurations shown here of medial prefrontal, medial parietal, and lateral parietal. It was there every time you did this. And the immediate response to this, which is interesting when you present novel findings, was pushback: obviously we had a badly controlled experiment. What do you mean, putting somebody in the scanner, not asking them to do anything, they must be thinking heaven only knows what. It's hard to imagine, however, that 134 people would all be thinking the same thing. But we really wrestled with this, and then it occurred to us that by this point we knew the definition of an activation, an increase in activity. I actually showed it to you. Blood flow goes up, the oxygen consumption doesn't. We can look at that ratio. So you could ask the simple question, do these areas exhibit that property when you're doing nothing? And the answer to that was no. And it's a bit of a long story, and it's detailed in a major paper we published in 2001. And it was called a default mode of brain function, and the analogy here was just like, you know, you get your computer and it has the default settings on your programs. Well, the brain has a default setting. That was the kind of simple-minded idea. The then-chairman of neurology, Bill Landau, accosted me one day and said, that's a banking term.
LAUGHTER
But anyway, it took hold. And what was interesting, this particular network, although the article focused on this network specifically, became known as the brain's default mode network. That wasn't our idea, but it seems to have stuck. So there we have it. An enormous amount of work has been done on this particular thing. About 3,000 papers have been written about it at this point in time, and they're all over the map. And I don't pretend to go through that for you. But it's fair to say that this is not unique to the human. There are remnants of this, and clear evidence of it, in the monkey. There is evidence of it in the rat and probably in the mouse, although the critical elements are related in part to differences in anatomy. For example, in the rat, the lateral parietal parts are in primary sensory cortex. So it says something about how these brains evolved. But the critical thing to take away is that it seems to be playing an absolutely central role in the way the brain is organized. But there's much yet to be learned. These are my two graduate students, Anish Mitra and Tyler Blazey. We actually went to the literature and did a graph theory analysis of the citation properties of this particular network. And again, I won't belabor it, but there's all sorts of speculation about what's going on here that I will just leave to another day. I've often said, and I say it here again today, that while this has generated a lot of interest, certainly on our part and that of others, if we had been left with that as the only entree to the potential organization of the intrinsic activity, I think we might still be struggling with this story. And there was an observation that actually had a very interesting antecedent history, unrelated to imaging or anything of the sort, but related to the fact that oxygen levels in the brain do fluctuate.
And when I first entered this field many years ago, this was a focus of attention, and the question was whether the capillaries of the brain were opening and closing. It was just a big, big debate. Nobody cared about how the brain was organized in those circles. And it was brought to light by Bharat Biswal and his mentor Jim Hyde at the Medical College of Wisconsin. As I understand the story, Jim told Bharat that one of his chores was to figure out what this noise in the BOLD signal was. Well, it was a fluctuation in oxygen levels, obviously, as I've indicated. So if you look at this kind of data, as he and we all have, this is one of my former graduate students. It's an fMRI study. And each line is one minute, so there's five minutes here. And if you look at those pictures, you can see the change in color as we go across each minute. So something is going on in here. And if you do what Bharat did, this happens to be our data, but if you do that and you follow it in time, you come up with this curious fluctuation. It's not an oscillation. It's got several frequencies in it, but there she is. And by god, if that isn't noise, I don't know what it is. And, of course, the way we treated it was just to average it out. Well, what he did was to say, by god, is this correlated with anything? And while this is our data, this is what he observed. You're not moving a muscle, and yet your motor system, your somatic motor system, is talking to itself internally. And if you follow this logic, you can then march around the brain, and what caught our attention was when Mike Greicius out at Stanford did this little trick in the default mode network. And by god, there she was. You're not doing anything. You're just lying there. And so you can go around the brain and do this kind of work, something that to me is just striking.
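The logic of Biswal's observation, correlating a seed region's spontaneous BOLD time course with every other voxel, can be sketched on synthetic data. Everything below (the signal model, the voxel counts, the helper name `seed_correlation_map`) is illustrative, not the published pipeline:

```python
# A minimal sketch of seed-based resting-state correlation on synthetic data.
# Voxels that share a slow spontaneous fluctuation correlate with the seed;
# pure-noise voxels do not.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 150, 500   # e.g. 5 minutes sampled every 2 seconds

# A slow shared fluctuation (a centered random walk stands in for the signal)
shared = rng.standard_normal(n_timepoints).cumsum()
shared -= shared.mean()

data = rng.standard_normal((n_voxels, n_timepoints))   # background noise
network_voxels = np.arange(0, 50)                      # voxels in one "system"
data[network_voxels] += 0.8 * shared                   # they share the slow signal

seed = data[0]   # stand-in for a motor-cortex seed time course

def seed_correlation_map(data, seed):
    """Pearson correlation of the seed time course with every voxel."""
    d = data - data.mean(axis=1, keepdims=True)
    s = seed - seed.mean()
    num = d @ s
    den = np.sqrt((d**2).sum(axis=1) * (s**2).sum())
    return num / den

rmap = seed_correlation_map(data, seed)
print(f"mean r inside network:  {rmap[network_voxels].mean():.2f}")
print(f"mean r outside network: {rmap[50:].mean():.2f}")
```

In real data the seed would be the averaged BOLD time course of a region such as left motor cortex, computed over preprocessed, motion-corrected volumes; the sketch only shows why regions sharing a slow spontaneous fluctuation "light up" together.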
To suddenly realize that you don't have to ask somebody to do something to see the general anatomy of the major systems that have become our good friends and acquaintances as we've gone about mapping the human brain. And you can go back even into lesion behavior, wherever you want to go. So this was really quite a phenomenon. I must say it has produced a paradigm shift in the imaging world, because with the many properties that this has and the applicability to all sorts of populations, from newborn children to people with various diseases, it offers tremendous opportunities. But, as with everything we do, you take two steps forward but you sometimes leave a few things behind. When we did image subtraction with task studies, we saw these beautiful pictures, but most of what the brain was doing was left on the cutting room floor. Now we come along and we take a time-varying signal, but we look at its correlation structure across the brain. And the minute you do that, you've converted that signal into a spatial representation of it, and time is no longer present. It's now a spatial map. So you might ask yourself, well, wait a minute, these are systems, we know them well, but surely there must be a conversation going on here among them. The brain is integrated, for Pete's sake. So how might we think about that? We do see this from a developmental point of view. I'll come to the moment-to-moment things in a minute. But these relationships do change in interesting ways, which suggests that it isn't cast in stone. We had the opportunity a number of years ago, as part of the MacArthur Law and Neuroscience program, to study incarcerated juveniles in the New Mexico prison system. That is, image them. This was work that Kent Kiehl had pioneered. And in the course of doing this, we insisted that they get resting state data, because we were interested in the issue of self-control, or impulsivity. Obviously these young folks had gotten themselves into deep trouble being there.
And so the question was, could we find anything in the anatomy of the brain that gave us a sense of how it's organized relative to this very important question? And so we used a relatively sophisticated algorithm that really lets the brain generate the hypothesis for you, because we had no idea how to think about this, and we shouldn't constrain our approach by that. And I won't get into the details; you can read the paper if you'd like. But what came out of this was that there are two areas in the brain that have to do with motor output and planning. We know this from a variety of other studies. And it seemed that the way this was connected up made a difference in terms of whether you were impulsive or not. And the issue here relates to two systems in the brain that talk to this. One is the default mode network, which, if you read anything about it, is very self-referential in its surface appearance. And the other is the dorsal attention and the anterior and posterior control systems. So in the majority of you, who I trust have good self-control and are not impulsive, if we were to look at your brain and ask how these areas are connected, it would look like something on the top there. But interestingly enough, if you were impulsive, it's completely the other way around, such that it is the internal, self-referential default mode network that is directly talking to this system, and the control systems are anti-correlated with that. So what emerged out of this was a developmental story. So over a long time period, there is this change in relationships among these systems. There is a give and take among them, and there's much evidence to talk about there, but this is, I think, one of the examples that I rather like, because it relates to a clear behavior that is deeply important in our development. But if you want to know more, of course, you can read the paper.
It's rather detailed, and there is a control group of normal children here, which suggests that the problem with impulsivity in this population has more to do with delayed development than with something fundamentally wrong with the brain itself. The other example is a little closer in time; that is, things that happen more regularly. Now, one of the interesting things with regard to the default mode network is that part of it is kind of a here-today, gone-tomorrow companion. And that is the medial temporal lobe hippocampal system. Some people see it, other people don't. And in the study that we did in collaboration with Giulio Tononi and his group here, we were interested in a lot of aspects of sleep. We asked a similar sort of question, and that is, just looking at the brain, does it matter how your brain is hooked up in the morning as compared to the evening? And this took a lot of care to be sure you slept well, that we monitored your sleep. We had all the right stuff, thanks to Giulio and others advising us. And what's very interesting here is that when you wake up in the morning after a full night's sleep, your medial temporal lobe and hippocampal formation is not talking to your cortex. On the other hand, when you get to the end of the day, it is discretely talking to the posterior part of the default mode network. As if to say that something about the accrual of information over the course of the day alters the relationships among these systems. In this case, in normal people, just as a diurnal variation in how things are hooked up. What is the difference here? How is information being transferred? How we think about this, I think, remains to be determined, but it begins to give you a sense of the richness of using these approaches. But what we haven't talked about is the moment-to-moment issues here. In other words, okay, fine and dandy, over years as you develop things are changing, morning and night things are changing, but what about moment to moment?
And as I pointed out here, the problem we have is that we've simply discarded time. The moment-to-moment timing here. This was something where I just thought to myself, there's got to be a way to get at this. But we have a system with fMRI that samples the brain every couple of seconds. It doesn't have great temporal resolution. And yet we're talking about the possibility that there may be timing differences in this system that are very small. So one way of looking at resting state fMRI is to think about it like the wings of a bird. And that's kind of how we assumed it worked. That is, one part of the brain is talking to another as if the parts are going up and down simultaneously like this. What I'm about to tell you is that that is not what is happening. And, in fact, there is a beginning and an end within systems and across systems, and the question is, how do you get at that? I almost hesitated to use this slide, and I won't give you the mathematics behind being able to make this prediction, but I can guarantee you it is accurate. And it just surprised the heck out of me. You can take the time difference between these. Man, this thing is really sensitive. I can just touch this. I'm going to do that. The timing differences here, and this was the work of one of my graduate students, Anish Mitra. He was an advanced math major at Stanford, so that played a big role in this. But it's the ability to appreciate a small difference in a timing curve. And, actually, yeah, so we did it. And if you want to read about how we did it and the mathematics, that's fine. But I want to give you the way that this has led our thinking. So this is a new map of the common systems of the brain that I've been talking about. So just to point out a couple that we've talked about: one is the default mode network, this is the dorsal attention, and then there are others in between.
And what you have here now is the voxel representation of those systems, in terms of parts of those systems that are early and parts that are late. So the blue areas are early, and the red areas are late. And we're talking about a second or two. So it is as if there is an arrow of time moving through each of those systems. So in the default mode network, it's early in the back and late in the front, and in the dorsal attention network, it's the other way around. If you're hearing this for the first time and you're into this, you're an aficionado of the field, you'll say it's all blood vessels. I'll just stipulate that's not what's going on here and leave it at that. And we can argue about that later. But it's also the case that this is going on not only within these systems but across them. So you can end up with a picture of the temporal dynamics of this system in which there are areas where activity begins and areas where it ends up. So this is, if you will, the arrow of time of the brain. And this is where we were when we published a paper just six or eight months ago. But the issue is, how does this arise in a more complex way? Because this is the average of the story. And it turns out that within this very complex matrix of data are actual threads, as we've come to call them. Little compartments where activity is moving in different directions. So that if you, just to back up, look at something like this, it's hard to imagine those systems that come out, like the default mode network, when you start breaking it up like this, because what I'm showing you appears to be just orthogonal to those pictures. But what's really intriguing about this is that you can start without any preconceived notions of those systems and go from data that looks like this, through this, and end up with that. So we have moved to a much more micro level than I ever would have imagined in terms of how these systems are being developed and put together.
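The kind of sub-sample timing recovery described above can be illustrated with a toy version: cross-correlate two slowly fluctuating signals sampled every 2 seconds and interpolate the correlation peak. The published approach (lagged cross-covariance analysis) is considerably more careful; the signal model and the `estimate_lag` helper here are illustrative assumptions, not the actual method:

```python
# Toy demonstration: a lag smaller than the sampling interval (TR) can be
# recovered by interpolating the peak of a cross-correlation curve.
import numpy as np

TR = 2.0                        # seconds between fMRI samples
t = np.arange(0, 600, TR)       # 10 minutes of data

true_lag = 0.6                  # the second "region" lags by 0.6 s (< one TR)
rng = np.random.default_rng(1)
phase = rng.uniform(0, 2 * np.pi, 5)
freqs = rng.uniform(0.01, 0.1, 5)   # slow fluctuations, ~0.01-0.1 Hz
signal = lambda tt: sum(np.sin(2 * np.pi * f * tt + p)
                        for f, p in zip(freqs, phase))

a = signal(t) + 0.1 * rng.standard_normal(t.size)
b = signal(t - true_lag) + 0.1 * rng.standard_normal(t.size)

def estimate_lag(a, b, tr, max_shift=3):
    """Cross-correlate at whole-sample shifts, then refine the peak location
    with parabolic interpolation for sub-sample precision."""
    shifts = np.arange(-max_shift, max_shift + 1)
    cc = np.array([np.corrcoef(np.roll(a, s), b)[0, 1] for s in shifts])
    k = cc.argmax()
    y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)   # parabola vertex offset
    return (shifts[k] + frac) * tr

print(f"estimated lag: {estimate_lag(a, b, TR):.2f} s (true {true_lag} s)")
```

The point of the toy: even though the sampling interval is 2 seconds, the shape of the cross-correlation curve localizes its peak to a fraction of a sample, which is why sub-second lags are recoverable from slow BOLD fluctuations.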
Now, how that happens and why it moves in these particular ways is a major issue to address, but when you realize the complexity of this thing, you also realize not only that it's very rich in terms of how the brain behaves, but that there's a ton of room for mischief here. So that when we think of the complex diseases that we're concerned with, whether it be depression or autism or schizophrenia, we're not talking about a big lesion somewhere. We're talking about sophisticated differences in the organizational structure of a system like this. And what I really find remarkable is to think that even from my perspective, and I'm not suggesting we have the only way of doing this, that with care and thought and imagination and great students and all those great things, the richness of this has only begun to reveal itself to us. So you could say, well, what could be the underlying neurobiology? When you get down to something like this, now we're talking about true changes in excitability of the system. And some of you may know about up and down states. These are classical changes in the membrane properties of cells. And a paper that I've loved for many years by Bert Sakmann shows that if you have these changes in membrane potential, which are occurring spontaneously, and they were originally described in sleep and now they're seen in awake animals and probably in humans, these are transmitted at great distances and reflected in the ongoing subthreshold electrical activity of the brain. In this case, they had a mouse and they were recording the up and down states in the hippocampus and they were seeing correlated activity in field potentials in the parietal cortex, interestingly, with the same delay as I was talking about in this lag structure of the brain. So we're a long way from putting all this together, but I think the richness of this and the potential is great. So just a word about spontaneous activity again. 
I think the best thing, and these are just a couple of papers that are out there, one from our group and others, but one of the mistakes that bothers me a lot is that when we think about the BOLD signal, oftentimes we're thinking about a vascular metabolic signal that somehow or other is a sluggish representation of the fast activity of the brain. And I would strongly submit to you that that is not the case. That it is accurately representing the slow activity, most likely subthreshold activity, of the brain, which is probably most of what the brain is actually doing. That's a mouthful, I realize. And so I'll let you digest it without getting too detailed about it. And you might say then, well, wait a minute, we've got frequencies, we've got these very slow things that are going on, 0.1 hertz, and then we've got things all the way up to gamma activity. And one of the things that emerges out of a literature that's been around for a long time is this idea of how these frequencies are correlated in space and time. So these are different frequencies here, and imagine that this gray thing is your BOLD signal. So it's kind of like the underpinnings of excitability and organization of the system, and all these other things, including spiking, line up with this, with the different phases, and it can be different in different places and different times. And interestingly enough, and I don't think you can see it but it's there, there's a black line in this work which is the behavior. And we've known for a long time that human behavior varies with this interesting property of one over f, and it's probably related to this. I'll read it like this. This is George Bishop. This is kind of putting it in perspective. He said, in general, it is not necessary to infer that each individual impulse traveling up a fiber from the retina arrives as a unit impulse in the cortex and registers there as such. And, of course, we know it's not unit. It's about one part in a million. 
Rather, we would look upon the cortex as being in constant activity, the physiological activity of the whole network of neurons bearing some direct relationship to the present state of the animal's complex behavior, which is sometimes referred to as its mental state. 1933. There have been a whole host of papers since then, and the metaphor I rather like, I like boats and I'm on the water a lot, is waves. So you can think of the things that interest you, which might be spikes, as like white caps on a wave. Or if you want to be really special about it, it's called a Kelvin-Helmholtz instability, but to me it's a white cap. So it's this underlying activity that really sets the stage for the way the system operates. I won't go through any of this other than to suggest that what surprises me, going from George Bishop through all of these papers, is how much information is actually out there on this general topic and how little it is discussed in major neuroscience circles in terms of its importance. This has been talked about and dealt with by all sorts of people from various different perspectives, and I think it's time we try to put some of this together. You might say, well, how do we think about the BOLD signal? I'll just offer you my favorite example, because what I'm suggesting through all of this is that the brain is seriously in the prediction business, creating models of the world in which we live in order to accommodate us. This is a paper by David Heeger and his group published a number of years ago, a classic experiment, and I was shown some very interesting results of experiments being done here that are similar to this. So you're put in a scanner and you're told that when you see that grating on the screen, you're to push a button, and you're given a cue when it's going to come, and then the grating comes and you push a button and guess what? If you look at the visual cortex, you see that. 
However, there are catch trials here in which you get a cue but you don't get the stimulus. And lo and behold, what does the visual cortex do? The same thing. So this, again, keep in mind, is a BOLD signal. This is not spikes or whatever. But if that doesn't convince you the brain is in the prediction business, there is a long history of this. This is called the contingent negative variation, electrical phenomena that completely recapitulate that idea and that are very old. They've been around for years. And I won't, again, burden you with this. There would be a question, and I'm coming around the clubhouse turn here, could there be a biochemical part of this story? So often when we think of neuroscience, we stop at the membrane of a cell or the membranes of a synapse, and we're talking about electrical potentials and the moving of ions and everything, and the stuff in the cell, these mitochondria and all this soup and enzymes and stuff, is just there to give a hand to all this extra work that these cells have to do. The question is, are we missing something here? And one part of the story that actually underpinned Bharat Biswal's observation was an observation made at that early time, before Bharat was a graduate student, and published some years later by Vern, which is the fact that if you're looking with sensitive electrodes in an animal at homologous areas of the two hemispheres, what's utterly striking is that there are changes in redox potentials that covary between the homologous hemispheres. So now you're talking about a fundamental biochemical process of a cell. And the question is, do they affect the excitability? Years ago, we did experiments where we manipulated this, not even thinking about this, and the answer is yes, it does. We were thinking about how to control a blood vessel. I'm rethinking that experiment. 
But the thing that really kind of nailed this for me in terms of concept was a paper that appeared in Science in 2012 from Martha Gillette's lab in which they were looking at circadian rhythms and the redox potentials of the cells that govern them. And what she demonstrated was that it was not the electrical properties per se, but the redox potentials of the cells, that were the clocks running this. And she ends up with this statement that I think we need to take seriously when we try to broaden our horizons about how the brain works. She said, "Energetic fluctuation in the central nervous system has been considered to be a consequence of neuronal activity. However, our study implies that changes in cellular metabolic state could be the cause, rather than the result, of neuronal activity. Crosstalk between energetic and neuronal states bridges cellular state to systems physiology." And lo and behold, there is a vast literature on the properties of metabolic systems that fluctuate just exactly this way, and the poster child of this whole story is glycolysis. So you might say, well, jeepers, what's glycolysis? It's the stuff that shows up like coal to the furnace. It comes up and you throw it in the furnace and it gets burned up. And I often use this slide, and I'm not trying to insult anybody, but I've often thought that if I did a little quiz at Neuroscience, handed out little cards, and asked, what has glycolysis done for you today? The answer would almost for sure be, it produced a ton of ATP. What did you think? But it turns out, and this is a wildly oversimplified diagram of it, I hesitated to bring anything more detailed, this is an exquisitely engineered biochemical process called glycolysis. Before that little glucose molecule ever gets to the furnace, that is the Krebs cycle, oxidative phosphorylation, whatever you wish to call it, it is deeply involved in biosynthesis and specifically involved in membrane processes and membrane pumps. 
But biosynthesis is a major issue here. This is what oncologists cut their teeth on, because cancer cells, when they decide to make that leap to become a cancer cell, shift their metabolism, or enhance their glycolysis, called the Warburg effect. And when you think about it, it's very clever. It's been extensively studied in cancer biology, but surely it was not given to us so we could have cancer. In all likelihood, in an organ that is renewing itself in minutes, hours, days, and weeks, it has to be playing a role in the brain. So we asked ourselves, and this has become a very major part of my research agenda, to understand what we know about this. And I'm going to give you a whiz-bang trip through this as we head towards the finish line here. One of the startling features of glycolysis in the brain, that is, glucose consumption when there's plenty of oxygen to burn it up completely but it never happens that way, and the references to all this are in the little -- here if you want them, is that it's highly non-uniform. So areas of the frontal control systems and the default mode network are highly glycolytic, and the medial temporal lobe and the cerebellum are very non-glycolytic. So this was a big surprise when we discovered it. In fact, we looked at it multiple ways to be sure we hadn't messed it up. And this is related, as I show here on the right, which I find really interesting, to the fact that you can take data from genetics and begin to probe the cell biology of this, orienting yourself to it through the human metabolic data. And again, the details I won't get into, but I threw this up here just to indicate that's where we're going with this. As I mentioned, this is the response to an increase in activity, and the thing I didn't mention was the fact that when you increase activity in the brain, glucose consumption goes up and oxygen consumption doesn't. And you have to ask yourself why. And it probably relates to both ion pumping and the potential for biosynthesis. 
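The oxygen-glucose mismatch just described can be put in simple numbers. Complete oxidation of one glucose molecule consumes six O2 molecules, so the oxygen-glucose index (CMRO2 divided by CMRglc) would be 6 in a fully oxidative brain; measured resting values fall below that, and activation pushes the index down further because glucose use rises more than oxygen use. A minimal sketch with round, illustrative numbers (the specific rates and percentage changes below are assumptions for the example, not measurements):

```python
def oxygen_glucose_index(cmro2, cmrglc):
    """Moles of O2 consumed per mole of glucose.

    Complete oxidation of one glucose consumes six O2, so an index of 6
    means fully oxidative metabolism; anything below 6 means some
    glucose is handled glycolytically despite available oxygen
    ("aerobic glycolysis").
    """
    return cmro2 / cmrglc

def aerobic_glycolysis_fraction(cmro2, cmrglc):
    """Fraction of glucose uptake not accounted for by oxidation."""
    return 1.0 - oxygen_glucose_index(cmro2, cmrglc) / 6.0

# Illustrative resting whole-brain rates (arbitrary but matched units);
# real values vary by region, subject, and method.
rest = aerobic_glycolysis_fraction(cmro2=160.0, cmrglc=30.0)
print(f"resting aerobic glycolysis: {rest:.0%}")        # ~11%

# During focal activation, glucose use can rise much more than oxygen
# use -- here, hypothetically, glucose +30% versus oxygen +5%.
act = aerobic_glycolysis_fraction(cmro2=160.0 * 1.05, cmrglc=30.0 * 1.30)
print(f"activated aerobic glycolysis: {act:.0%}")
```

The point of the toy calculation is only the direction of the effect: any activation in which glucose consumption outpaces oxygen consumption drives the index below 6 and the glycolytic fraction up.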
Glycolysis was shown a number of years ago to go down disproportionately in slow-wave sleep. And there's much thought about its role in reorganizing synaptic structures. Giulio Tononi here has been a pioneer in this work. One of the stunning things is that the metabolism in the developing brain at age 10 is twice what it will be in you as an adult. And a very large proportion of that is aerobic glycolysis, at a time when you're building new synapses and pruning them and the like. There is a very nice genetic correlate that has been published of how this is managed. It isn't as though you put synapses in there, trim them up with your scissors or your genes or whatever you want, and then there you are. But rather, it's a dynamic equilibrium between proliferation and elimination which continues throughout your life, giving every reason to think that glycolysis has to be playing a role. Interestingly enough, and you probably can't see the details, but just notice there's a big bar in here. This was a marvelous experiment done by Peter L. Madsen many years ago in which he looked at a very difficult task and was measuring whole-brain metabolism, and he noticed that when you increased the activity, you got just what we get with the various imaging techniques, but what was really interesting was that after you completed the task there was a period of many hours in which aerobic glycolysis was up. Again, suggesting that biosynthesis was ongoing here. And if you then take this, and you take this idea that this is, stay back there, regional, you can develop a collaboration, as we did with the Allen Institute in Seattle, and you can combine their genetic data and ask the question, what genes migrate with differing levels of aerobic glycolysis? 
And what you find is that the areas of the brain with high aerobic glycolysis express genes, and in this little diagram, those are genes, there are thousands of genes in this little picture here, related to plasticity. So it's a way of combining really divergent levels of analysis that didn't require us to do the genetics or the Allen people to do the metabolism, but was a real collaboration. And the areas that had a high glucose use and oxygen consumption had a very different metabolic genetic underpinning. And, finally, the thing that really struck us was what you see if you take a look at the brain, which we can now do in patients with Alzheimer's disease, and, as you know, one of the characteristic features of that disease is amyloid deposition. Amyloid plaques. So we can look at this with PET, and thousands of these scans are being done around the world now, and lots in our lab. And if you look at that, and you look at the map of aerobic glycolysis, and you look at the map of those plaques, they overlap. And it turns out that amyloid proteins are a regular constituent of the response to changes in activity. And they are followed in that response by changes in aerobic glycolysis, work that's being done in Dave Holtzman's lab in St. Louis. But then, astonishingly to me, was to take a look at the distribution of genes related to specific cell types and ask how they relate to the deposition of amyloid. And we did this first in humans, making the great leap of faith that you could take genetic data that was generated at Stanford and apply it to human data generated in St. Louis, and you could be very skeptical of this, but we found that it was related, remarkably, to a particular interneuron, that is, the VIP interneuron. And if you then go to the transgenic mouse and do this same experiment, you find that this is even more dramatic. That little cell has this marvelous property of inhibiting inhibitory interneurons, on the one hand, through GABA. 
So when you want to increase the gain on the system and do more work, it plays a key role. But it also transmits VIP to the astrocyte, where the only glycogen in the brain exists, and VIP is a potent stimulus to glycogenolysis, and when you combine that with norepinephrine, you get a multiplicative effect. So it sits at this interface between activity and the metabolism that supports it. And here you find that activity is responsible for the deposition of the amyloid, and we've got a potential biochemical and cell biology way of thinking about it. And I guess what I'm trying to illustrate here is not that I'm a cell biologist, I'm not, but that you move these things forward by the way in which you collaborate with people from different disciplines, because that's critical in tackling the difficult problems. So, in summary, the organization of the brain is intrinsic, as captured in space, time, and metabolism. The functions are intrinsic and largely nonconscious. It's taking care of business for you. This intrinsic activity forms the basis of what I would loosely call prediction. And it's based on this remarkably compressed data. So there's really an interest in understanding, and it's something I harp on, that the BOLD signal is not just a ubiquitous representation of everything going on in the brain. It is highly specialized in what it's telling you. So here's the picture. And the metaphor that I find very useful in thinking about it is this one. That for far too long we have looked at the brain as if we were looking at an iceberg, paying attention to what we can see and ignoring what is, in this case, under the water. The majority of what the brain is doing is deeply important to our understanding of it, and it has for far too long, in large measure, been ignored. 
And I think the way forward is one in which there are multiple levels of analysis, from the imaging stuff that I and others do, to the neurophysiology, to the cell biology, to the genetics, and everything in between, and all of the techniques. And while new techniques will be valuable, I think we also owe it to ourselves to even better understand the techniques that we have. It isn't always the technique, it's the people that are using it. Thank you very much. It's been a pleasure.
APPLAUSE