Stuart Firestein on Why Ignorance and Failure Lead to Scientific Progress – Episode #14

Steve and Corey speak with Stuart Firestein (Professor of Neuroscience at Columbia University, specializing in the olfactory system) about his two books Ignorance: How It Drives Science and Failure: Why Science Is So Successful.

Corey: This is Manifold. Our guest today is Stuart Firestein, professor of neuroscience at Columbia University, where he specializes in the olfactory system, and Stuart is the author of two books — the first, Ignorance: How It Drives Science; the second, Failure: Why Science Is So Successful. Welcome to Manifold, Stuart.

Stuart: Well thanks, Corey, it’s a pleasure to be here.

Corey: And I’m Corey Washington.

Steve: And I’m Steve Hsu.

Corey: Stuart, I’d like to start off with a quote that you have in your book. You start one of the chapters with this quote from Isaac Asimov: “The most exciting phrase to hear in science is not ‘Eureka!’ but ‘Hmm, that’s funny.’”

Stuart: Yeah.

Corey: What do you mean by that?

Stuart: Well, I think the most interesting thing that you can find in science is something you can’t understand. So of course we do experiments, and we hope that we find some great result, and then I guess we would yell “Eureka” and that’s fine. But really what’s often most interesting is when you do an experiment and it doesn’t work, which is sort of the idea of failure, if you will, that failure is the way to get to the sort of deepest kind of unknown, if you will, the unknown unknown — which unfortunately was made most recently popular by Donald Rumsfeld, but that’s not who first said it or came up with it. It first appears — I did a lot of research on this, because I couldn’t stand the idea of quoting Donald Rumsfeld about it — it first comes up in an epic poem by D. H. Lawrence, published in 1917, called “New Heaven and Earth,” sort of about the transition between this plane and the next plane or something like that. It’s a bit overly romantic for me, but he talks about his hand reaching out and touching the unknown, the real unknown, the unknown unknown, and that is the deepest kind of ignorance or mystery that we have, what we don’t even know we don’t know. And the way into that, I think, is by failure. So when an experiment doesn’t work out the way you think, then now the game is on, if you will, you’re onto something more interesting than what you originally thought you even had.

Corey: What I enjoyed about your books as a former philosopher is that it’s history and philosophy of science written by a scientist, and it comes off as a lot more realistic than the familiar texts you get from philosophers, which tend to be idealized and high-level. You get into the very kind of apparently mundane features of science, like writing grant proposals and weekly meetings, and I think it brings out a sense of how science is actually done. Am I right in thinking that that’s how you wanted to depart from the previous literature?

Stuart: Well yes, I mean at least partly, because I’m not a philosopher or a historian, at least I’m not trained as one. I like to hang out with them, I like to read philosophy of science, I like to read history of science, but I don’t consider myself really an expert in either of those places or trained as one. What I do have, however, is the daily experience of being a scientist, and I suppose enough of an interest in the philosophy and history behind it, enough of a belief that the philosophy and the history of science can add to the way we do science today and should be considered a part of it, that I like to think there’s this little niche in there that I can fill.

Corey: You actually came to science late, Stuart, and there’s a lot of talk about wedding art and science, or that something is an art, not a science, but you were actually a kind of artist for a long time. Could you tell me how you came to it and explain one of the statements you make in the book, where you say that being a production manager is actually pretty good training for science?

Stuart: Yes, so I worked in the theater for many years, almost 15 years I guess, directly out of high school, because in those days — not to overly date myself — but in those days, if you wanted to work in the professional theater, there were no university courses, no university programs in professional theater training — as there are now, by the way, but there weren’t then — so the way you learned the theater was to apprentice yourself to people whose work you thought was interesting, or wherever you could get a job in some cases, and learn the theater by an apprenticeship. And so that’s what I did: I didn’t go to college out of high school, I kind of ran away with the circus, if you will, and I worked initially schlepping scenery around and going out and buying props and all that sort of stuff, hanging lights. Eventually I learned a little bit of lighting design, so I did that, because that led to being an assistant stage manager and then a stage manager, and then finally the opportunity to direct, which is what I wanted to do and finally did as a career. But the remark that you’re referring to is that I feel that it was sort of the ideal way to learn things, and I think it still is the way we should teach things, even in science, at least at a certain level, which is this kind of apprenticeship system where you have responsibilities. So the nice thing about being an assistant stage manager especially, even better than a stage manager, is that the assistant stage managers just sort of do whatever comes up, which could be anything from running to get coffee, to fixing a costume that ripped at an inopportune moment or, who knows, anything that comes up. You’re just the stage manager’s assistant. But the great thing about that is most of the time you don’t have very much to do, so you can sit and watch, and you can watch a director work, you can watch the production come together, you can watch the way actors work, what they do with each other, and you’re nonetheless involved deeply in the production, you’re being paid by these people, you have a role there, people come to you for things. So you’re involved but also you have a lot of time to watch, you have a lot of time to see what some people do that you think is right and what other people do that you think is wrong. And I still think that’s the best way to learn how to do things. And I think it’s true in science, too. I think that’s what graduate education is. We call it school, we call it graduate school, but I don’t think it’s school at all. I think it’s work, it’s the beginning of a job, it’s an apprenticeship.

Corey: One of your books, Ignorance, and in a sense also Failure, originated with a course you taught, where you would invite people to come discuss things that they did not know but would like to know in their field.

Stuart: Yes.

Corey: I assume that part of that discussion also included cases, perhaps, where something did not work but may have led to another discovery for them.

Stuart: Yes, also.

Corey: What I’d like to ask both you and Steve is whether you can give us examples, from your personal work or even from your field, of cases that really struck you, cases where you think failure or ignorance has driven research ahead.

Stuart: Well, I’ll give you a current example, one that we’re working on right now in the laboratory, that’s turned out to be a very big change for us. So several years ago a woman named Linda Buck, then a postdoc working in Richard Axel’s lab here at Columbia University, discovered this huge family of genes. Now these genes encoded the olfactory receptors in our noses, so these are proteins that are in cells in your nose that are able to detect odors by binding the odor molecules. Odors are very small molecules, they’re typically made up of carbon, hydrogen, and oxygen, and they may have anywhere from, I don’t know, 10 to 50 atoms of carbon, hydrogen, and oxygen put together in different shapes, in different ways, and with slightly different properties. And these receptors are able to bind them in a kind of a lock-and-key mechanism, where the receptor is like a lock and the odor is a key, and if it fits into a receptor it activates the receptor, and then your brain says, oh well, that must have been flowers, or citrus, or woody something or another, or grass, or something like that, whatever sort of odor there is. When Linda discovered these receptors it really altered the field, it really changed the field. It was revolutionary because it was a large, large family of these receptors, a lot of them. Human beings have about 500 of them, elephants have 4,000, and other animals have somewhere in between that number, so it’s somewhere between 1 and 5% of your whole genome devoted to these receptors. So we thought, well, this is going to be the basis for understanding how it is we’re able to recognize so many different odors and so many different kinds of odors — I mean, there are thousands, maybe hundreds of thousands, maybe even a million or so, different odors out there that we can recognize and more or less discriminate between. It’s kind of a puzzle how we did that. So the thought was, well, we’ll just have all these receptors and then we’ll try different odors on each receptor in some experimental preparation where we can isolate each receptor, and then we’ll see which receptors bind which odors and which odors bind to which receptors, and there’ll be some sort of a code, a kind of what we call a combinatorial code, where a fruity odor will bind receptors 1, 52, 201, and 305, whereas a sweaty odor will bind receptors 5, 8, 25, 200, 300, etc. etc., and there’ll be different patterns of receptors that are activated by different odors, and that’s how we’ll understand the code. That all seemed to be pretty good and that started 25 years ago, and we’ve all been working on that program. Then we recently were able to make use of a new recording technique that enabled us to look at a large number of cells all at once in a tissue, rather than cell by cell or receptor by receptor. And we thought, well, the way we really smell things in this world, of course, is not one odor at a time, but in complicated blends or mixes: so even a rose, whose smell you might think of as a single odor, actually has about seven different molecules that make up that rose smell, and when we smell those seven molecules together, that’s really the smell of a rose. Coffee, for example, has over 780 molecules in it, each of which has some odor and adds something to the coffee odor.
Now it really only takes about 30 or 35 of those molecules to fool people into thinking it’s coffee, and that’s what Nestle and other food companies know about, but nonetheless it still takes a blend of odors. So we thought, well, we’ll try blends of odors rather than single odors on these cells and see what happens. And what we learned was — to make this sort of simple — much to our surprise, rather than seeing different cells activated by all these different odors in the blend, we saw that some cells actually had their activity suppressed by some of the odors in the blend, so that apparently odor molecules can act not only as activators of cells, but also to suppress the activity of cells that might respond to another odor in the mixture. So now it becomes much more complicated, and now we actually know less than we knew before, I would say, so we’ve managed to increase our ignorance significantly just in the last couple of years.
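As a purely illustrative aside, the shift Stuart describes can be put in a few lines of Python. The receptor indices and binding rules below are invented for the sketch (they echo the example numbers Stuart uses, not real data): in the old combinatorial-code picture a blend simply activates the union of its components' receptor sets, while in the newer picture a component can also suppress receptors another component would have activated.

```python
# Toy sketch only: invented receptor indices and binding rules, not real olfactory data.

# Hypothetical single-odor activation sets (which receptors each odor turns on)
activates = {
    "fruity": {1, 52, 201, 305},
    "sweaty": {5, 8, 25, 200, 300},
    "woody":  {8, 52, 77},
}

# Hypothetical suppression sets: receptors an odor blocks without activating them itself
suppresses = {
    "fruity": {8},      # e.g., the fruity molecule occupies receptor 8 and silences it
    "sweaty": set(),
    "woody":  {305},
}

def naive_blend_code(odors):
    """Old picture: the blend's code is just the union of each component's receptors."""
    code = set()
    for odor in odors:
        code |= activates[odor]
    return code

def blend_code_with_suppression(odors):
    """Newer picture: components can also veto receptors activated by other components."""
    blocked = set()
    for odor in odors:
        blocked |= suppresses[odor]
    return naive_blend_code(odors) - blocked

blend = ["fruity", "sweaty", "woody"]
print(sorted(naive_blend_code(blend)))             # [1, 5, 8, 25, 52, 77, 200, 201, 300, 305]
print(sorted(blend_code_with_suppression(blend)))  # [1, 5, 25, 52, 77, 200, 201, 300]
```

The point of the toy is just that, once suppression enters, the blend's pattern is no longer predictable from the components' individual patterns, which is why the coding problem gets harder rather than easier.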

Corey: And where does this lead, Stuart? Once you make this discovery, what’s the next grant proposal?

Stuart: Well, the next issue then is of course to try and figure out how the… well, first of all to understand the extent to which this goes on, to which there is both enhancement and suppression of responses at the very first level of detecting an odor, which is the cells in your nose, and then to understand how the brain can make sense of that, because that’s a whole new problem for the brain. So that’s very different than what we thought it would be, which was modeled on color vision, right? So color vision, you have three “receptors,” if you will, for different wavelengths of light, three cells that are sensitive to three different wavelengths of light — red, blue, and green wavelengths, that’s what they look like anyway to us — and then you see all these different colors, shades, and hues, thousands of them, maybe even millions of those, by some combination of the activity of the red, the blue, and the green. And that’s enough, three is enough, and you just mix and match them. Any particular wavelength of light will activate a little bit of red, a whole lot of green, and very little blue, or something like that, and that will turn out to be yellow. I don’t know exactly, I’m just making that up, but that’s how we see different colors. So we thought odors would work the same way, but now they don’t because, for example, what we now see about odors, if we applied that to vision, it would be as if a photon of red light not only activated a red photo receptor in your eye, but it also suppressed a green receptor, and that doesn’t happen, that never happens, the brain never gets that kind of information in the visual system. So now we have to really rethink how it is we would encode this other sort of stimulus. These chemical stimuli can’t just be like the way vision has figured out how to do it, or even audition, where you combine a whole lot of different pitches and frequencies and make a complicated sound out of it.
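To make the vision analogy concrete, here is a rough sketch of the trichromatic scheme Stuart is contrasting with olfaction. The Gaussian tuning curves and peak wavelengths are simplified placeholders rather than measured cone sensitivities; the point is only that a single wavelength maps to a purely additive pattern of three cone activations, and nothing in the stimulus pushes a cone's response below zero the way an odor in a blend can suppress a receptor.

```python
# Illustrative only: toy cone tuning curves, not measured spectral sensitivities.
import math

CONE_PEAKS_NM = {"blue": 440.0, "green": 535.0, "red": 565.0}  # rough peak sensitivities
TUNING_WIDTH_NM = 50.0                                         # made-up tuning width

def cone_responses(wavelength_nm):
    """Toy activation (0 to 1) of the three cone types for a single wavelength."""
    return {
        cone: math.exp(-((wavelength_nm - peak) / TUNING_WIDTH_NM) ** 2)
        for cone, peak in CONE_PEAKS_NM.items()
    }

# Every wavelength yields some non-negative mix of the three activations, and color is
# read out from that three-number pattern. Stuart's olfaction result would be as if a
# photon could also actively silence one of the cones, which the retina's receptors
# never do.
for wl in (470, 520, 580):
    print(wl, {cone: round(r, 2) for cone, r in cone_responses(wl).items()})
```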

Steve: So Stuart, in the case of vision, if I just put another processing layer behind where the optic nerve — sorry, where the activation count from the rods and cones feeds in, then I could get the effect you wanted. I could have red signals acting to decrease the interpreted strength coming out of the blue sensor or something like this. So in your case, is it clear that it’s actually the sensors that can subtract from each other, can have negative effects on each other, or could it be a second layer that’s doing this?

Stuart: So you’re absolutely right. Not only could you do that in vision, but vision does do precisely that. There are all sorts of mechanisms of inhibition and suppression which enable your retina and your brain to sharpen up the image. And there are many cases, for example there is something called — it’s a little complicated so I won’t go into the details — but there is something called red-green color opponency, and one of the results of that is, if you think about it, you have never seen the color reddish green. We actually can’t see that color, because we use similar circuitry for both red and green and they oppose each other at some level. And so there are certain little bits of color in the spectrum that we actually can’t see because of that. So yes, that absolutely occurs in the visual system, and it occurs in the auditory system, and it occurs in every other sensory system we know. But it appears that this is not the case in olfaction, because we’re recording directly from the primary sensory neurons. It would be like recording directly from the rods and cones. In our preparation, the brain isn’t even there.

Steve: Got it. So it seems like you’re detecting specifically some really chemical effect where, okay, I have these chemicals around, and they actually inhibit firing of one of the particular types of sensors.

Stuart: Right, and that shouldn’t have been unexpected in a way, because we know that these sorts of receptors, the ones we use in our nose to detect odors, are very similar to other receptors we use elsewhere in our body, in particular in our brain, for example, to detect neurotransmitters like dopamine or serotonin or epinephrine, etc., adrenaline, those sorts of things. And we know many drugs are directed at those receptors to block them, the most famous of them, I guess, being called beta blockers. So a beta blocker is a drug: it’s a chemical, a small molecule that sits in the receptor for adrenaline, the so-called beta adrenergic receptor — that’s why it’s called a beta blocker — and it sits in that receptor and stops adrenaline from activating it. It therefore lowers your blood pressure. So we know that that sort of thing should be going on. I guess what’s surprising to us is that it’s so extensive, we see quite a lot of it. I mean, even in a three-odor mix, we can see as much as 20% suppression and even 20% enhancement. And the tricky thing, if you work this all the way through, is that we’re only using, let’s say, a three-odor mixture, and we see that one of the odors in the mix inhibits one of the other odors in the mix at some of its receptors; but if we had a different set of odors or just changed one of the odors in this mix of three, then we would see a whole different pattern of suppression and enhancement. And the question is, how does the brain know what’s in the mixture before it knows what’s in the mixture, if you see what I mean. It’s really kind of a conundrum, in a way, how the brain can figure out what’s in that mixture, because that mixture has very specific kinds of abilities. The components of that mixture are very specific for certain receptors, and if you just change one or two things in that mixture, you would change not only the cells that were activated, but also the cells that were being suppressed.

Steve: Is there a paradox in principle here, or just that it’s pretty complicated? So in other words, if I have some set of inputs and some nonlinear function connecting those inputs to some output that your brain can then process, is there some reason to think that couldn’t be how it’s operating?

Stuart: No. I mean, there can’t be a paradox here because we do smell things. We smell these blends. We’re quite good at discriminating a rose from the manure that it grows in, obviously, so it can’t in the end be a paradox. It’s just, we can’t figure out at the moment how the brain manages this, what is effectively — I don’t want to get too technical here — but what is effectively a very high-dimensional stimulus versus other stimuli that tend to be low-dimensional. So vision is a very low-dimensional stimulus. For example, color vision is only a one-dimensional stimulus: the only variable dimension for color vision is wavelength, whether it’s red or blue or green or something in between that. Sound is the same thing. It’s frequency, it’s one dimensional. Even the visual scene, when you look at a visual scene it’s only two dimensions, because your retinal tissue is a flat tissue on the back of your eyeball, and so it takes three-dimensional information in but it reduces it to two dimensions, and your brain has to figure out how to add the third dimension back in. We actually make up the third dimension in our brain by a process that, to my knowledge, we still don’t really understand very well. So even a complicated visual field still only varies in two dimensions for the brain to interpret. But olfaction varies in multiple dimensions, a very high-dimensional stimulus. There are lots of different chemicals, and those chemicals have many different properties. They vary along many chemical dimensions, if you will. They have different numbers of atoms, different molecular weights, different shapes, different ways they’re put together, they can be saturated or unsaturated, all sorts of organic chemistry terms that we needn’t get into here, but they can come in many, many flavors, so it’s a very high-dimensional stimulus. And then the question is, how does the brain understand or encode or perceive, build a perception out of a high-dimensional stimulus, because the brain itself is a very low-dimensional organ. It’s three dimensions basically.

Steve: So from the viewpoint of vision research and machine learning, a visual field is actually a very high-dimensional input, because for example, in principle, each pixel has an intensity and if you like three different colors, and so the actual enumeration in terms of data of all the information that’s coming in through your visual field is actually classically referred to as a super high-dimensional megapixel-type data set. I’m wondering if you were to record output from these cells, I assume that’s also a high-dimensional output from the sensor, it seems like what you want to do is build a high-dimensional nonlinear function that takes the information about the chemical distribution and intensities or concentrations, and then map that to a high-dimensional output from the sensors.
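A minimal sketch of the kind of model Steve is gesturing at: learn a nonlinear map from a vector of odorant concentrations to a vector of receptor responses. Everything here is synthetic; the "true" response function, with its activating and suppressing weights, is invented just so the little network has something to fit, so this illustrates the modeling idea rather than any real olfactory data or published method.

```python
# Synthetic sketch: fit a nonlinear map from odorant concentrations to receptor responses.
import numpy as np

rng = np.random.default_rng(0)
N_ODORANTS, N_RECEPTORS, N_SAMPLES, HIDDEN = 20, 50, 2000, 64

# Invented ground truth: sparse activating and suppressing weights, then a saturating
# nonlinearity. This stands in for the unknown biology being modeled.
W_act = rng.uniform(0, 1, (N_ODORANTS, N_RECEPTORS)) * (rng.random((N_ODORANTS, N_RECEPTORS)) < 0.10)
W_sup = rng.uniform(0, 1, (N_ODORANTS, N_RECEPTORS)) * (rng.random((N_ODORANTS, N_RECEPTORS)) < 0.05)

def true_response(conc):
    return np.tanh(np.maximum(conc @ W_act - conc @ W_sup, 0.0))

X = rng.exponential(1.0, (N_SAMPLES, N_ODORANTS)) * (rng.random((N_SAMPLES, N_ODORANTS)) < 0.2)
Y = true_response(X)

# Tiny two-layer network trained by plain gradient descent to approximate the mapping.
W1 = rng.normal(0, 0.1, (N_ODORANTS, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_RECEPTORS)); b2 = np.zeros(N_RECEPTORS)
lr = 0.5

for step in range(2001):
    h = np.tanh(X @ W1 + b1)          # hidden layer
    pred = h @ W2 + b2                # predicted receptor responses
    err = pred - Y
    if step % 500 == 0:
        print(step, round(float((err ** 2).mean()), 5))
    g_pred = 2 * err / err.size       # gradient of mean squared error
    g_W2 = h.T @ g_pred;  g_b2 = g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)
    g_W1 = X.T @ g_h;     g_b1 = g_h.sum(axis=0)
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2
```

Whether a fit like this counts as understanding is exactly the question Corey raises next: the model can reproduce the mapping without telling you the principles behind it.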

Corey: But then you don’t understand that, Steve. You may have a model of it, but it doesn’t give any clear idea of what’s actually happening. It’s clear we actually have a theory of color vision: we know about opponency, we know about center-surround for visual systems, we have a set of principles. But simply modeling it with a neural network…

Steve: Well, you might ultimately actually learn the way that the input data is processed by the sensor, so your model might actually describe how the sensor is processing the information that comes in.

Corey: It might match it, but I’m not sure it will lead to any understanding. I don’t want to get too far in the weeds on this, but…

Steve: Well it’s very similar, I think, to what happened just recently with face recognition. So in primate systems, which they can monitor, they found a neural net structure which — in terms of first breaking it into primitive features and then combining those primitive features perhaps in a nonlinear way — is very similar to what human-made in silico face recognition does. And so that was, to me, quite an amazing result: they showed that evolution had produced, and basically had to produce — for information-theoretic reasons — the same architecture for processing it. Now you could say we don’t actually understand it, because the model is pretty complicated, but on the other hand, I would say our understanding of face recognition went up as a consequence of this research.

Stuart: Yes, I think that’s absolutely true. But of course, to go back to our ignorance-failure theme, it raises just as many questions as it answers. It just raises better questions.

Steve: Yes, I wanted to ask you in what sense the olfaction example you gave is an example of failure. I mean in the sense that, oh, we had a kind of wrong model, or oversimplified model inspired by vision for how olfaction works, and then we realized that’s not actually what’s happening and there’s a richer system at play here. To me, that’s all positive progress in science. Is the failure part of it that people were stuck for a long time because they were clinging to this oversimplified model? Or in what sense is it failure?

Stuart: Well, I think in this case, the sense that it’s a failure is that yes, we had what we thought was a pretty good explanation for how olfaction works, and we just had to fill in the details. But it turned out that filling in the details was never going to give us actually an explanation, or filling in the details we thought were necessary was never indeed going to give us an explanation or a deeper understanding of how olfaction encoded the stimulus.

Steve: Got it.

Stuart: So in that sense, I think it’s a failure. There are technical failures in olfaction, which have become problematic, of course, as well. I mean, we can’t express these receptors in systems that allow us to use high-throughput screening. There are all sorts of technical issues as well that one could talk about as failures that really are obstacles, but then they often create interesting ways of working around those obstacles.

Steve: But it sounds like if you have an initial hypothesis, and it turns out that’s not correct and you need to broaden it out, that’s an example of failure for you.

Stuart: I would say that’s one example, sure.

Corey: So Steve, do you have any thoughts from physics?

Steve: Well, I’m glad we had this conversation because I wasn’t sure exactly what you meant, Corey, what you were looking for when you asked for an example of failure in my discipline, which is theoretical physics. The one that came to mind, I’m not sure it fits exactly into Stuart’s classification, but let me try it on you, and then it’s an interesting historical story anyway. So the Nobel Prize for quantum electrodynamics was given to three theorists, Tomonaga, Schwinger, and Feynman, and it’s a much celebrated period of work really, which laid the foundations for something called quantum field theory, which is the basis for essentially all modern theories of fundamental physics right now. And it was a very interesting story, because there wasn’t anything actually wrong with the original theory of how, say, charged objects, like electrons, interact with photons — that’s fundamentally how electromagnetism works — nothing fundamentally wrong with the theory that people had, actually going back to the earlier work by people like Heisenberg or early quantum mechanics, people of an earlier generation. But when people tried to do detailed calculations, they kept getting infinities. They kept getting answers like, oh, the cross section for this to scatter off that is infinity, or the quantum corrections to the charge of the electron are infinite. And so people just didn’t know mathematically what to do with these infinities. It took a huge amount of both developing a lot of mathematical tools, and then also actually refining concepts of what’s actually happening with these particles, what’s the difference between a particle and a field — all kinds of things like that had to be invented by literally the smartest people in the field, because everybody else just couldn’t understand what they were saying for years. But finally they figured it out, they could do calculations that actually gave finite answers and agreed with experiment, and then gradually the rest of us were taught, after it had been discovered by these geniuses, how to actually understand what they had done, and it’s led to a kind of modern understanding of how particles and fields interact. And so that was a failure in a sense — it was a failure, I think, of understanding: it wasn’t that new laws of physics were developed or new particles were discovered, it was that the existing theory was just too complicated for basic human brains to understand and had to be figured out by really, really good human brains and then re-explained to the rest of us.

Corey: Is it possible, Steve, that that was a case of failure that people at the time just didn’t realize was a failure?

Steve: People definitely knew that if they tried to do calculations like, okay, let’s try to be really careful and include the quantum effects for how an electron scatters off a photon, then the calculation would give kind of nonsense. They would get infinities or inconsistent answers, and it turns out they were just not doing the calculation properly. And it took people like Feynman and Schwinger to figure out how to do the calculation properly, and once they did, they got reasonable answers that agreed with experiment.

Stuart: So I would consider that a really good example of the way science progresses by failure, or even more importantly, the way failure is a crucial part of science. If it worked out every time, if every experiment worked out or every theory just worked out right from the beginning, every good idea we had or seemed to have worked out, I don’t think we’d have that much faith in the process. If we thought it was infallible, you know what we think of infallible processes: they’re no good, they’re just authoritarian usually. I think that one of the important things to recognize is that the failure part of it is integral to the process of science. It doesn’t actually matter if later on, if retrospectively, it led to some insight or this or that. It often does, and that’s great. But you could say, well, wouldn’t it be better if we just had the insight right off the bat, instead of wasting 10 years or 15 years doing stuff that didn’t really matter until we had the insight of why things were failing? But I don’t think you could do that. I mean, I don’t think that’s correct. That’s not part of the process.

Steve: I definitely want to agree with your view that failure is a necessary step, because your existing theory has to fail in order for you to then develop the new and better theory, and one of the problems we have in particle physics right now is that the theories are too good. There are essentially no experimental observations which disagree with the theoretical predictions, and so we keep building these bigger and bigger accelerators or looking at more and more exotic phenomena in the hopes of finding evidence for what we call new physics or physics beyond the standard model of particle physics, and so far we’re in the situation where we don’t have such signals. We had hoped that this big $10 billion Large Hadron Collider would provide such signals, but so far they’re absent.

Stuart: Yeah, so I agree that’s kind of a problem. I mean, you really don’t want to come to an abrupt conclusion, where suddenly everything seems right.

Corey: So this is a discussion we had with one of our previous guests, Sabine, how do you pronounce it?

Steve: Sabine Hossenfelder.

Corey: Hossenfelder, yes.

Stuart: Yeah, who wrote a book.

Corey: Yeah, who thinks that particle physics is experiencing a crisis presently, because it’s actually not getting the kind of negative signals that you might expect to get.

Steve: Yeah, not enough failure — in the connection between theory and experiment, not enough failure — and that failure, I think you would agree, signals some interesting new stuff. And so she’s written a book describing the current situation.

Stuart: Yeah, so I guess I largely agree with her. I read the book quite some time ago actually now, but I think she also kind of attacks physicists for being too wedded to the idea of elegance and beauty, for treating that as the only way of judging a theory, and she argues that they should forget that to some extent. They should get themselves involved in some very ugly mathematics or ugly theories.

Steve: Yeah, in fact her claim that that’s true is based on a failure. So people imagined, based on aesthetic ideas, new physical models which would encompass the models that we currently have, but they were more beautiful, and then those led to some predictions for exciting new physics or phenomena that we would see at this current generation of colliders. And it’s turned out we haven’t seen those, those predictions have not been confirmed, so these “more beautiful models” turn out not to describe nature. And so she regards that, I regard that as a failure. There’s a kind of sociological observation that the field was so taken by these aesthetic principles that tens of thousands of theoretical physics papers have been written basically by my generation and the previous one of theoretical physicists, and those papers turn out not to describe reality. Of course, on the other hand, theories are very cheap compared to experiments, so maybe it’s okay.

Stuart: So why not try as many as you can.

Steve: Yeah.

Stuart: [laughs] Yeah, it’s not as difficult.

Corey: You’ve got a similar example in your book, Stuart, actually, about the current theory of the cause of Alzheimer’s disease.

Stuart: Well yeah, this is a problem where we… It’s a slightly different kind of a problem, but it’s related, I suppose, where we too often go off chasing after some fad because of a single or a couple of results that look very promising, and they all seem like oh, this would be the perfect answer, this surely looks like how it ought to work. But how can I say this… It’s subtle — and I’m sure most scientists would claim they don’t think this way at all — but it is a very subtle and, unfortunately, dangerous thing — what’s the best way to put this — it’s as if you believe in intelligent design when you do things like this, when you see something that looks so reasonable as a way that a biological system would work that you begin to accept it because of its reasonable looking-ness, and that has this sort of whiff of teleology about it. Biology still operates in this kind of curious teleological way that physics and chemistry gave up much, much longer ago.

Steve: Stuart, maybe define teleology for our audience.

Stuart: Oh yeah, so this is the idea that you can understand the way something works because of its purpose. The famous example, I suppose, is a watch: if you find a watch in the woods or something like that, there’s no doubt that it’s different than a stone or a rock of some sort or dirt, because it appears to have a purpose, so therefore, it was probably designed by somebody or some intelligence for that purpose. And so that’s what intelligent design works on to a large extent. But the idea that things have a purpose, that things happen for a purpose was an old… You know, Aristotle believed that stones moved or things rolled downhill because that was their purpose, that’s what they wanted to do, as it were — that they were imbued with this sort of purpose-driven motivation.

Steve: But when you criticize biologists for use of teleology, I think maybe they’re not usually intelligent-design adherents, so I think maybe what you mean is they’re assuming evolution designed it for a particular purpose?

Stuart: Well yeah, so of course it’s sort of subtle, and no self-respecting biologist would say he believed in intelligent design, at least today. So that’s true. But it comes closer than you like, because evolution doesn’t have any design and purpose in mind either. I mean, evolution doesn’t have any purpose in mind. That’s a misunderstanding that’s all too common about evolution, that it optimizes things, it’s always looking for an optimal solution. It’s not looking for anything. It’s a random process with a feedback loop.

Steve: In the case of Alzheimer’s — I don’t know actually what mechanism, maybe Corey will explain — but is the idea that, oh they have this neat hypothesis, and it’s sort of justified by some kind of evolutionary story that says, oh yeah, nature ended up with this mechanism and it’s very neat, and that’s how we can explain Alzheimer’s?

Stuart: Well, that happened. That’s not the case with Alzheimer’s, but that is the case with many things. I mean, this is why I think we should all be suspicious of evolutionary psychology or whatever they call it, these sort of post hoc explanations of why humans like to have wars or whatever it is human sociology is supposed to be doing, because we evolved to do this. We didn’t evolve to do anything, in my opinion, we just evolved. All sorts of things come along with that, and human beings are definitely the product of evolution, but they’re not only the product of evolution, so you can’t just say all of our psychology is due to evolution. In the case of the Alzheimer’s business, it was a finding that was sort of elegant. I mean, there’s this substance called beta amyloid, which is found in normal brains, but was also found to be highly prevalent in Alzheimer’s brains, and particularly in the so-called plaques that an Alzheimer’s brain has all over it, which are thought to be the cause of Alzheimer’s. Of course, nobody even knows whether these plaques are actually the cause of Alzheimer’s. It’s just that that’s what it looks like, because when you open up somebody’s brain post mortem who had Alzheimer’s, you see that their brain looks like a mess because it’s full of these tangles and plaques. And then they saw that these tangles and plaques were made of this substance called beta amyloid, and so everybody thought, well, that must be the cause of Alzheimer’s, and if we could just figure out how to down-regulate the expression of beta amyloid or break it all up somehow or another, we’d have a cure. But that turns out not to be the case at all. Beta amyloid seems to be an emergent phenomenon of Alzheimer’s, but not a causative phenomenon.

Steve: So maybe this is a case of jumping too fast from correlation to causation?

Stuart: Yes, which I think happens a lot, and the way we jump from correlation to causation is that we put a purpose in there. It looks like it should be right.

Corey: It’s also driven by the fact that you seem to have this very natural explanation, which seems to have driven a lot of theoretical physics. It’s a beautiful, simple theory, and you can do research on it, and you can publish lots of papers on it, so there’s a kind of institutional support.

Stuart: Yes, yes, so in science we tend to be very, in my opinion, non-pluralistic. We tend to have a belief, I suppose, that there will be one answer — which there’s no reason to believe is absolutely going to be the case, it seems to me, but we tend to believe that — and so we chase after one answer after another, one at a time, which is not in my opinion necessarily the best way to do things. That would be the case with Alzheimer’s. I think there’s a good example in cancer research now. Immunotherapy for cancer has been around since the late 1940s as an idea in cancer treatment, and it was largely dismissed. Well, there were all sorts of reasons. There were one or two trials that didn’t go well and people died from them, but people die from chemotherapy and radiation all the time as well. But I think to a large extent it was the medical industrial profession, if you will, that made the decision that surgery and either chemotherapy or radiation would be the treatments that we would use, and immunotherapy was just kind of dismissed for years and years. And now we’re finding that immunotherapy is a very powerful way to treat many cancers and holds out a great deal of hope for treating cancers that were otherwise incredibly resistant to treatment. But for 40 or 50 years we paid no attention to it, we just let it simmer away on some back burner somewhere, and everybody traipsed after the same thing. There’s an old Chinese saying: one dog barks, and 100 dogs bark at that dog, and then somebody says wow, what’s going on? Nothing’s going on. They’re just barking at each other.

Corey: It sounds like in that case, you think they didn’t take the proper lesson from the failure of those trials. They simply walked away.

Stuart: Yes, I think that’s true. I think we too often walk away from failures. Now, it’s always a little different when you’re talking about medicine, that’s true, because lives are at stake. It’s a little different than theoretical neuroscience or theoretical physics or even experimental physics or neuroscience, where it may be nice that things worked out, but it’s rarely going to kill somebody or save somebody. Whereas in medicine, you do have to make those sorts of decisions. But that doesn’t mean that we’re making them the right way, and in point of fact, we weren’t helping people by lopping tissues out of them and subjecting them to significant amounts of radiation or toxic chemicals, when immunotherapy could have been developed. I’m sure this is somewhat more controversial in most people’s minds, but I feel the same way about nuclear energy and climate change. I think if we hadn’t been scared off of nuclear energy, inappropriately so, in the 1970s, then we would have developed nuclear energy into a much better and more sophisticated technology that would be helping to solve the problem of climate change now. But instead we’re stuck with a form of nuclear energy that is dangerous, because it uses 1960s or ’70s technology.

Steve: Yeah, I agree with you on the nuclear energy observation. I mean, there’s a weird path dependence here where people got very scared of it, perhaps irrationally, and it is potentially a very good contributor to solving the climate change problem, but we’re not really open to considering it now.

Stuart: No, I blame it all on Jane Fonda, to tell you the truth. [laughter] Well you know, she made that movie called “The China Syndrome.”

Steve: Yeah… there was also Chernobyl… [laughs]

Stuart: Yes, so there was Chernobyl and there was Three Mile Island, and it’s true. But since then there’s been the Exxon Valdez and the Deepwater Horizon thing — I mean, it’s not like carbon is the safest thing around. People talk about nuclear waste — terrible, terrible — well there’s carbon waste too, but instead of parking it in an underground vault in the desert, we’re parking it in the atmosphere. That’s a bad idea, it seems to me. So I guess my plea is that we’re not pluralistic enough. This goes back a little bit to Occam’s razor, which I think is an interesting bit of science that people misinterpret or interpret wrongly or use wrongly. Occam’s razor is this notion that the simplest explanation is likely to be the correct one, which I think is not at all true. I think this would go along with what’s her name?

Corey: Sabine.

Stuart: Sabine’s book, yes. I was once in a situation… I know a professional magician named Mark Mitton, who was also a philosophy major in college at Swarthmore, so he’s a very thoughtful magician as well, and very interested in science and mathematics and so forth. And we were having a discussion one night and Occam’s razor came up, and Mark’s first response was, oh yeah, Occam’s razor is the magician’s best friend, you know. And I thought, well now wait a minute, that can’t be right. It can’t be the magician’s best friend and an important principle in scientific investigation and judgment of data, because magicians are trying to fool you and science supposedly is trying not to fool you. I think he was right though. I think this constant idea that the simplest explanation is the better one can get us in a lot of trouble. Sydney Brenner, who died a few weeks ago, a great molecular biologist, he had not only Occam’s razor but he invented something called Occam’s broom, and Occam’s broom was this broom that you use to take outlier points and sweep them under the rug, so it makes your theory look a little simpler and prettier.

Corey: So I think a lot of the problem here is just that what appears simple is highly subjective, and this seems to be what magicians are taking advantage of. With simple [unintelligible] in our visual system, our conceptual system is not an objective feature of nature.

Steve: I would also say that it may be that in our ordinary lives, and maybe in certain areas of science, Occam’s razor works quite well. But when you have a kind of adversary that’s trying to exploit your weaknesses and thinking, i.e. a magician who designs tricks, obviously they’re going to exploit whatever thinking tool that you use. And evolution is what makes biology so complicated, and so often in biology I think you can’t assume systems are simple because they’re very, very complex evolved systems. And so Occam’s razor in biology, I think you should trust probably much less.

Stuart: Yes. Yeah, I think that’s quite right.

Corey: Before we leave this topic, Stuart, I want to come back to your TED Talk, and you said something which I think summarizes your complicated view about creativity and scientific discovery, but that also got you a fair amount of both attention and criticism: you said that what scientists spend most of their time doing is farting around — and I take it that’s a technical term.

Stuart: Yes. [laughs]

Corey: What do you mean by that?

Stuart: Well, I think very few that I’m aware of practice what we’re all taught is the scientific method, which is this very recipe-driven mechanism or method, if you will, for making discovery supposedly, which is to make an observation, come up with a hypothesis, test the hypothesis, make new observations, revise them, and so forth. And I do think we use the scientific method after we’ve made a discovery, I just don’t think we use it to make discoveries. I think discoveries are made in all sorts of different ways by all kinds of different processes, and there is no single recipe for it. And the idea that there’s a recipe gives a wrong and distorted view of science, which is a much messier, and should be much messier process than just simply following a simple set of rules, or any kind of a recipe. And I’ll note that that scientific method, that set of rules doesn’t anywhere really specify the importance of things like intuition, or counterintuition even, or inspiration, or surprise, or creativity, or thinking in new and different ways. It doesn’t really explicitly talk about any of that. Maybe it’s wrapped up in coming up with a hypothesis or something, but that’s not very helpful if it doesn’t say it explicitly. And so I think it tends to remove all of those things from science that are crucial to it. I mean, having worked in the theater, if you will, which I suppose is all about creativity and all the rest of that — that’s not entirely true either, there’s a lot of workaday work to do in putting the production together — but I don’t find science as a daily practice to be any less creative or any less reliant on inspiration or intuition than I found my work in the theater to be, so… Or teamwork, for that matter: one of the nice things about the theater is you work as a group of people, and in science you do the same thing. And so there is no, I think, simple method for doing it. I think what we do is we do fart around — we don’t just casually fart around, if you will. We thoughtfully fart around.

Steve: When I teach Physics 101 — and that’s typically teaching to people who aren’t going to become scientists, but you want them to have some appreciation of science — then it’s important for them to understand that you have a hypothesis, and then you have to gather data to decide whether there’s evidence in favor or against your hypothesis. And of course that elides this whole issue of how do you come up with the hypotheses that are worth investing time and effort into, and that’s what I think you’re describing. The space of possible hypotheses is so large that we need intuition, we need hints, little accidents that happen in the lab that sort of stimulate us to formulate a hypothesis. All that is the art of science, which we don’t really teach when we formalize “what science is” to the non-practitioner, but absolutely for the practitioner the world looks, I think, as the way you described.

Stuart: Yes, but don’t you think we do the so-called non-practitioner a disservice? These are the people I worry about the most. I mean, I know there’s a lot of talk now about STEM education, and are we educating the next generation of scientists, and are we getting them… and I think all in all we don’t do such a bad job of educating scientists. And on the one hand you hear we don’t have enough scientists, and then every other week in Nature there’s some editorial about we’re making too many Ph.D.s and there are no jobs for them, and so nobody really seems to know whether we’re making enough or too few or too many scientists. But I will say the people that really worry me are the people who will not become scientists, will not go to graduate school or any kind of postgraduate work in science, and for whom their last formal interaction with science will be an introductory science course in the university meant for non-scientists, or an AP course maybe they had in high school, which I feel will give them a largely distorted view of science. And then when they come back to us and say wow, I thought you guys knew this, because you told us science was these facts, and you told us there was this method for doing it, and now you’re telling me that no, that’s not how it works, and we’re 87% sure of this or 95% sure of that, but we need you to stop driving your car around because we’re sort of sure about it. And then people say well, let me know when you’re really sure, because that’s the science that I thought I was paying for.

Steve: I think it’s important to teach people, to get across the idea that you test your hypotheses with evidence, and of course, even that can lead to this notion that you could be less or more certain that a particular hypothesis is true. But I think the other thing that — and I agree with you that we should enrich the education of students, even if they’re not going to become practitioners, by going deeply into at least one scientific discovery and showing what did the actual scientists do, how many years were they “farting around” or lost in the woods or wrong, and then how did it actually happen, and how credit really is assigned to one guy but it’s really 10 different people, including some women who deserve… I think all those things should be done. But there is a sort of separate thing of what is just the purely logical sense of we formulate hypotheses and then we try to gather evidence for or against them, versus what is the human experience like.

Stuart: Yeah, so I certainly agree with that. And the notion of evidence, as you know, is a complicated issue all on its own. Philosophers continue to debate, and historians and other classicists continue to debate about what the nature of evidence is, not to mention legal scholars and all the rest of that — and today especially, we’re living in a world where evidence that we understand, or we think we understand what evidence is, suddenly there’s other people who feel that evidence is quite different than that, who consider what we would consider a conspiracy theory [to be] perfectly good evidence for beliefs. A lot of people I think subscribe to the idea, or would subscribe to the idea that common sense is a good source of evidence, but I think common sense is a disaster most of the time. It’s a disaster at least partly because it sounds like a reasonable place to go for authority, for a view that you can trust because it’s the wisdom of the elders or the ancients, or the wisdom of the crowd and all the rest of that, but it’s often wrong obviously.

Corey: Stuart, you actually have a pretty radical view that I think goes far beyond what Steve’s proposing for how to teach science. And you think we should not just focus on say a single line of research, of single results showing where it came from and how long it took to get there, but we should actually immerse ourselves in these past theories that are now known to be wrong, and that’s the way to go. Is that correct?

Stuart: It’s at least partly correct, yes. I think there’s a great deal of value in teaching a lot of things that failed and why they failed. I don’t mean to be overly controversial here, but I honestly believe this, I think we should teach intelligent design in science classes. I think we made a huge mistake letting intelligent design become a religious issue instead of a scientific issue, because before Darwin virtually every scientist — more than likely every scientist who we consider to be a great scientist and whose work we appreciate, from Galileo to Newton to Kepler and Faraday and Maxwell and all these characters, and Lavoisier in chemistry and Harvey in physiology — they were almost all surely entertaining some version of intelligent design. I mean, they’d almost have to. If they believed that they could figure the universe out, they must have entertained also the idea that the universe was probably a rational place, because otherwise how would you have any hope of figuring it out, and that that rationality was due to a creator of some sort, that there was a design, and we could learn that design, we could figure it out, we could reverse engineer it.

Steve: I think if you were a physicist pre-Darwin, without evolution as your organizing principle, you had to sort of believe there was some intelligence or something behind it in order for there to be recognizable rules by which the universe operates. And I should point out that as physicists, we don’t rely on evolution to explain the basic laws of the universe. So secretly we’re thinking, well, there are these simple rules, mathematical rules maybe, that govern how the universe operates, and we don’t really know where those come from. They could come from some intelligent “creation.” We just don’t know.

Stuart: Yes, it wouldn’t be an unreasonable hypothesis, if you will. Now, it’s quite hard to gather evidence for that, and so you can just continue gathering correlative evidence in the sort of, what are they called, the anthropocentric view of the…

Steve: Anthropic principle?

Stuart: Anthropic principle, sorry.

Steve: Yeah, and there’s also an increasingly popular view that we might actually be living in a simulation, and then of course there’s somebody who designed that simulation that we live in.

Stuart: Yes, right. [laughs] There’s a great line by Douglas Adams in one of The Hitchhiker’s Guide to the Galaxy books about if anybody ever figures out how to explain the universe, it will instantly disappear and be replaced by something even more inexplicable. And then there’s a second theory that says this has already happened.

Steve: By the way, I like your idea of keeping a little bit of intelligent design in the curriculum, because I feel that intelligent design is not as easily dismissed as most biologists think it is. So for example, imagine that you had a very subtle god in the universe, who just biased probabilities a little bit every now and then — maybe made the timescale required to evolve an eye a little bit faster by biasing some random events in our favor — it would be very hard for us to exclude that, we only have one example. And so what we think of as the rate of advance due to natural selection is only due to the historical record, and that could have been secretly manipulated by “the god.” So I think actually philosophically, most people who reason about this are not actually even reasoning correctly.

Stuart: No, I think you’re absolutely right, and there is now this sort of knee-jerk reaction that it’s just a religious issue, and I don’t think that’s true. I think there’s a legitimate reason for teaching this as science and understanding why it is or is not valuable to science and what part of it is. Maybe an easier example, to me at least, is phrenology. So phrenology, believing that the bumps on your head can predict something about your personality or other intellectual and personal traits, is now a completely discredited idea, but it was practiced for some 40 or 50 years as a legitimate science, up to the mid-1800s. There were journals, there were people trained in it, there were schools of it, there were experiments done, and it was considered quite a legitimate science. Now we know that there’s no relationship whatsoever between bumps on your head and whether or not you are creative or criminal or something like that — but importantly, two very foundational principles of modern neuroscience developed by virtue of phrenology. So phrenology was the first time that people believed and made a case for the fact that all personality traits and intellectual traits resided in the brain — that love did not come from the heart, that anger was not in the spleen, that you don’t have gut decisions about fear and things like that, that everything in fact is in the brain. And the second idea is that things are localized in the brain. Now, that’s what modern fMRI is largely based on, and we’re now recognizing that localization may not be the be-all and end-all of the brain, but there certainly are certain things that are localized in parts of the brain. We know that lesion studies can help us understand strokes and all of these other sorts of neurological things. So here’s phrenology, a science that’s completely discredited and a failure, and yet it really is the source of two foundational principles in modern neuroscience. I think that’s an important process for people to understand, rather than just a whole bunch of neuroscience facts that they’re going to forget anyway.

Steve: At least the bumps they were focused on were bumps on the head. [laughter]

Stuart: Yes. There were other people doing other bumps.

Corey: So we have I think a choice to make at this stage. Stuart, this has been a great conversation. There’s actually a lot more we’d like to talk to you about. So my thought is maybe we cut our discussion at this point right now, and hope you’ll come back to talk more about how science education should go, and the idea that there should be a narrative behind what’s taught in the classroom, and also a lot of interesting comments you have about the NIH killing creativity.

Stuart: Ah yes. [laughs] I’ve changed my opinion about that a little bit, by the way, but I still think it’s true — I mean largely — but that would be an interesting conversation to have, I certainly agree, and probably far more than we have time for now.

Corey: All right, with Steve’s agreement?

Steve: Yeah, let’s stop. I can’t resist just saying I didn’t know about your view on NIH, but I kind of agree with it. I find that for NIH-funded people, the term grantsmanship assumes so much importance in their careers — much more so than I would say in physics and other areas — that it’s very alarming. So it’s this complex thing you have to learn how to navigate, and I can totally understand how it kills creativity, because you just have to worry about how you’re going to get your next NIH grant, and it’s terrible.

Stuart: Yes, and it is a silly process. I mean, I’ve come to be a little bit more — just very quickly, because this doesn’t have to go into this podcast, we could develop this more — I’ve come to be a little bit more understanding about it, because I think one of the things you do have to do sometimes is take the perspective of other people. So the problem is that the people who run NIH, the administrators who run NIH, we think that they should be serving us. We think that the purpose of NIH is to fund research and to promote research, and so we think the administration should be serving us. But really, the administrators of NIH think their job is to get Congress to put more money into this — and that is their job, I agree — and they think the best way to do that — whether it’s right or wrong — they think the best way to do that is to talk to Congress like they’re idiots…

Steve: That may be the right way to do it. [laughter] Maybe they’re right, I don’t know.

Stuart: … and say all our research is hypothesis driven, we don’t just let these guys fart around, everybody uses the scientific method, it’s really strong and it’s intense and all that, and that’s how they get money. Because if they went to Congress and said look, we need some more money to let our scientists fart around, because when they fart around good things happen, it’s not likely they’re going to get funding.

Steve: I think NIH could try the other tack, because for example, now there are famous things like 20% time at Google, where they let the engineers spend 20% of their time on some completely offbeat creative project, and it’s turned out to be extremely valuable. So you might be able to convince congressmen that farting around is something to be supported.

Stuart: Right, but like any organization, there’s a great deal of inertia in the administration. But you’re absolutely right. Things could change and things could be made better, and it would make the whole grant-getting process more sensible.

Corey: You actually speak to the head of research at Google in one of your books, about trying to reform the publication process. Let’s leave that as a teaser.

Stuart: Okay.

Corey: And let’s pick this up, hopefully soon, on our next podcast with you Stuart.

Steve: Great.

Stuart: Great. Well, this has been fun.

Corey: Absolutely. Thanks for your time, Stuart.
