Jamie Metzl on Hacking Darwin: Genetic Engineering and the Future of Humanity – #22
Steve: Thanks for joining us. I’m Steve Hsu.
Corey: And I’m Corey Washington, and we’re hosts of Manifold.
Steve: In our discussion with Jamie, we got onto a side track because he had done his PhD dissertation on the genocide in Cambodia. This is a little bit off the main focus of the episode, which was his book Hacking Darwin. So, we’ve moved that content to the very end of the podcast, so you don’t have to listen to it unless you’re actually interested in it, but we do reference it a couple of times in our discussion. So, that’s why we’re putting this note at the beginning of the podcast.
Steve: Corey, our guest today is Jamie Metzl, an old friend of mine. Jamie comes as close as anyone I know to being a renaissance man. He does so many different things. He is a technology futurist, a geopolitical expert, and also a writer. He’s a senior fellow at the Atlantic Council, and formerly the Asia Society’s executive vice president. I’m just getting started here.
Steve: He is a former White House fellow. He held positions in the Clinton administration, both the National Security Council and the US Department of State, and also previously with the Council on Foreign Relations. He is a graduate of Brown University. He has a Harvard Law School JD, and he has a PhD from Oxford in Southeast Asian history.
Steve: Now, his writing covers both fiction and nonfiction. He’s written two science fiction novels, Genesis Code and Eternal Sonata. Today, what we’re mainly going to focus on is his new nonfiction work called Hacking Darwin: Genetic Engineering and the Future of Humanity.
Steve: Jamie, welcome to our show.
Jamie: Thanks so much, Steve. Thrilled to be here with you.
Steve: Back to genetics and genetic engineering. So, Jamie, what caused you to get interested in that particular area, and when did that happen?
Jamie: Sure. So, I talked about my background with Cambodia. When I was 18, I worked in a refugee camp in Thailand with Cambodian refugees. Then I worked for two years as a human rights officer for the UN in Cambodia. All of those experiences made me feel, one, that we all have a tremendous responsibility to try to solve these problems because when you see people living in civil wars or living in refugee camps, it’s just horrible. It’s unacceptable.
Jamie: My second realization is that you could spend your entire life working in refugee camps, and you wouldn’t fix anything because refugees are at the bottom of the stream. So, the top of the stream is making smarter political decisions, so that we don’t have these terrible crises.
Jamie: So, that was what set me on the path to government. After I graduated from law school, you mentioned that I was a White House fellow, my first job was on the National Security Council working for a really great guy, mentor, and now a very close friend named Richard Clarke.
Jamie: This was in 1997-1998. At that time, Dick was telling everybody who would listen, which was pretty much nobody, that terrorism was this huge and fundamental threat to the United States, and that we had to be serious about it, and we had to go after this obscure terrorist organization called Al Qaeda and, particularly, its leader, Osama bin Laden. All these people were saying, “Oh, this is dead. The Cold War is over. He’s looking for a new job,” and he was really ignored.
Jamie: Of course, Dick’s prescient memo was on George W. Bush’s desk the day that 9/11 happened. Dick had a whole plan, never realized, for what we could have done potentially to prevent it.
Jamie: So, even before 9/11, Dick always used to say that if everyone in Washington was focusing on one thing, you can be sure that there’s something much more important that’s being missed. So, for him, it was terrorism and cyber. For me, as I looked around the world, I saw these little data points that told me the story, at least in my mind, that the genetics and biotech revolutions were going to fundamentally change our world, and not that many people were thinking about that. So, I started educating myself. I’m a voracious reader, reading everything I could, tracking down people I thought I could learn from, and talking with them.
Jamie: When I was ready, I started writing articles on the national security implications, the potential national security implications of the genetics revolution, and then a crazy, eccentric congressman, still in Congress, named Brad Sherman gave me a call. He’d read one of my articles, and he said, “This is so important. You’re the only person talking about it. I want to do hearings based around your article, this one article. Will you be the lead witness and help me organize the hearing?”
Jamie: So, I did that, and then was doing a lot more writing and speaking. I felt like Dick in the ’90s, like this is such an important issue, but I’m not breaking through. There were a few experts who were listening. I published articles in wonky journals like Foreign Affairs. So, I felt like I needed to reach a broader audience. That was what led me to write my two near-term sci-fi novels, Genesis Code and Eternal Sonata.
Jamie: When I was on my book tours describing the underlying science that went into the stories, when I explained the science in ways that regular people could understand, I could just see their eyes widening. They’d heard the words, but they hadn’t heard the story of what this revolutionary science is and what it meant to them.
Jamie: It was then that I realized that I needed to write a book for everyone, the nonfiction story of the genetics revolution, where it came from, where it is now, and where it’s heading, but not some kind of wonky book that people would read the way people used to take castor oil, because it’s probably good for you even though you don’t want to do it. I wanted a book that you could take to the beach, that you could read on the subway, that you’d be excited to read. So, that’s what I’ve tried to do in Hacking Darwin.
Steve: So, both of your sci-fi novels, I think, have genetic engineering components. Is that right?
Jamie: Correct. Yeah. Genesis Code is about a US-China genetics arms race, and Eternal Sonata focuses on the science of extreme human life extension.
Steve: So, you’ve covered both the fictional narrative approach to it, and then also a more science fact-based version of it. Let me read a couple of sentences from the dust jacket of Hacking Darwin, and then you can react to it.
Steve: “Genetic engineering isn’t some far-off fantasy. It’s arriving faster than most of us understand or are prepared for. When we can engineer our future children, massively extend our life spans, build life from scratch, and recreate the planet and animal world, should we? At the dawn of the genetics revolution, our DNA is becoming as readable, writable, and hackable as our information technology, but as humanity starts retooling our own genetic code, the choices we make today will be the difference between realizing breathtaking advances in human wellbeing, and descending into a dangerous and potentially deadly genetic arms race.”
Steve: So, I think you hit on almost all the important aspects of this topic. Any thoughts since you wrote that?
Jamie: Well, lots. I mean, I certainly stand by everything in the book, but, or I should say and, the science is moving forward so rapidly. I’m just now finishing the edits for the paperback version, which is coming out in April 2020. It’s just incredible, this one year of science, the rate of this. I mean, it’s hard to imagine now, at least for people in this world, that it was only 2012 that the CRISPR-Cas9 gene-editing system was essentially invented, and it was six years after that that the world’s first gene-edited human babies were born, six years from this abstract concept-
Steve: Why did it take so long? Oh, sorry.
Jamie: Exactly. No, no, but that’s the thing. I mean, that’s the essence of all of this, the speed of change, and that’s why I tell everybody that if you’re looking historically for how long it takes for technologies to emerge, you are by definition being too conservative. All of these technologies are a super convergence of technologies, and they’re all leaning against each other, and propelling each other forward. We have more people who are literate, educated, connected to the world of knowledge, connected to each other than ever before, and all of those figures are just going up and up and up. And think about how different parts of the world figured out copper, bronze, or whatever thousands of years apart.
Jamie: So, imagine if, at first, wherever in the world they figured out copper first, they just sent an email to everyone else on earth saying, “Hey, just figured out copper. Here’s how to do it.” That’s a 2,000-year developmental jump for some parts of the world. Then that’s the starting place. Then the next day, somebody figures out bronze, boom! Emailed to everybody, and we’re doing that in every field around the world, and it’s unbelievable. So, where this is heading is just beyond in many ways what our very practical brains are designed to follow.
Steve: Soon after the first CRISPR results came out, Corey here was leading an effort at MSU to build our own gene-editing core on campus. So, we have a gene-editing core up and running here, and I don’t know what the latest statistics are, but plenty of modified mouse models and other species as well have been produced there.
Corey: I think it’s pretty obvious to everyone who’s engaged in biology that CRISPR was huge, and it, in fact, didn’t come out of the blue. There were zinc fingers and TALENs, which were optimized just a couple of years before, but you’re right, there was an acceleration, and there has been acceleration over time.
Corey: I want to push back because I think I agree with part of what you’re saying, but I think it’s complicated. In some ways, technology, at a gross level, is clearly accelerating, but there’s a long history of predictions of technological developments from discoveries that never happened or took much longer to happen than we expected. You yourself discussed stem cells. Remember, just after stem cells were discovered, there was all this hype about curing a huge range of diseases with this new tech. We’d be injecting them into our brains and our bone marrow, and all the chronic conditions, neurodegenerative conditions we had would be cured within a few years. That turned out not to be true.
Corey: The same thing happened with the Human Genome Project, that initial burst of enthusiasm about identifying all of these diseases that had a genetic basis. That proved far more difficult. I think you have a line in your book by a scientist who was having a mea culpa, saying we confused understanding with, I don’t know, clinical practice or clinical effectiveness. So, it’s complicated. I agree that there’s a broad trend of acceleration, but it turns out to be very, very hard to predict actual developments from scientific discoveries. I’d just like to hear your reaction to my-
Jamie: Yeah, I know. I totally agree, and there’s even a whole field of research on the hype cycle, and what it looks like. There’s some kind of groundbreaking discovery. Everybody says, “Wow! This is it. It’s going to change everything.” Then we get too excited, and then it turns out that it’s complicated, and it’s slow and painstaking. Then people say, and you can just insert any technology, “God, we thought this robotics thing was …” or just insert anything, “What a dud!”
Jamie: Then along the way, after this disappointment, it turns out there’s really a there there, and it ends up building and building and building, and eventually, for many of these technologies, over time it becomes even more revolutionary than people in the early stages may have thought. That hype cycle really applies across lots of technologies, just because life is always more complicated than our simplifying narratives would like it to be. I still stand by this general thesis that if you get billions of people interconnected, solving problems, nobody has to solve a problem that’s already been solved.
Jamie: So, you’re just optimizing human brain power in a way that just never happened before in our history. It’s not like we’re just imagining, whatever, some unimaginable technology. We are laying foundations upon which many, many things will be built. Will they be exactly like we are imagining? No, but will there be other things that we can’t imagine? Yes.
Jamie: So, for me, I guess, as a self-declared futurist, because there’s no governing body of futurism, nobody played music and gave you that little paper. The thing is that you try to have as much intellectual rigor as possible in making predictions based on your analytic framework and try to figure out, “Well, what’s real and what just sounds good but doesn’t have the potential to be real?” Life is always more complicated than our narratives would suggest.
Steve: I think if you look at older science fiction, and I’d like to actually get your reactions later, Jamie, to some of the more famous science fiction novels or TV shows that incorporated genetics, the possible ways in which our civilization and humanity could be changed by genetic engineering have been explored quite a bit conceptually. But I think what’s different at this moment in time is that I would say there’s almost no chance this won’t happen. I’m highly confident that in the next 10 years we will see really significant impacts on society and human life from genetics. I just want to ask Corey whether he thinks that that’s still too optimistic, a 10-year timescale.
Corey: You think there won’t be?
Steve: There will be.
Corey: There will be.
Steve: I think there will be.
Corey: I’m pretty skeptical of that, but what I’d like to say is I think it’s important for all of us to try to get a little feedback on our own predictions. I actually have been keeping track of my predictions on this topic. So, what we-
Steve: Mine are published.
Corey: Okay. Well, have you been looking at whether they come true or not?
Steve: Yeah. They’re right on scale. They’re right on time. I mean, I predicted that we would get the first accurate complex trait predictor. I predicted that in about 2014. It happened actually a little earlier than I suspected. I have a prediction for when we’ll be able to do cognitive ability with some accuracy. That’s another five years.
Corey: That’s slightly different, I think. That’s not … I agree that’s on the line between research and, I wouldn’t call it engineering, I wouldn’t call it technology, but I’m asking about actual improvement in lifespan. That’s a different thing.
Steve: Oh, I don’t have any predictions about that.
Jamie: Let me jump in because I think defining the terms is actually really important. So, the reason why I used the word in the title of my book Genetic Engineering is I’m really thinking of the broad category. So, when I tell people I’ve written a book on human genetic engineering, nine people out of 10 will say, “Oh, you mean CRISPR?”
Jamie: I say, “No, no. I don’t mean CRISPR. What I’m saying is imagine that there’s a pie that’s genetic engineering, and in that pie, there’s a slice that’s gene-editing, and in that slice, there’s a sliver that’s CRISPR.”
Jamie: So, Steve, with the work that he’s been doing, he and others, on polygenic scoring, there is a real world application. Now, there are a few of them. I mean, the simplest one is just people getting their direct-to-consumer genetic information, and that’s already starting to happen. The second is this migration into IVF and embryo screening, and that’s also, it’s very early days, but that is starting to happen. Then there’s the thing of gene-editing embryos, and everybody knows the world’s first two gene-edited babies were born in China last year. A third has very, very likely already been born, although no one seems to know for sure.
Jamie: So, I think these things are happening. All of the categories that I write about in my book in broad terms are things that have already happened, and the question is how long is it going to take? What is the adoption curve going to look like? It’s not a binary. It’s not a yes or no because, yes, it has already happened.
Steve: Yeah. Going back to my question to Corey, I think we had in mind different types of impacts. So, 50% increase in human longevity or something. That isn’t what I was thinking of would happen in 10 years, but I meant things that are at least in my mind very substantive, but in your mind might have been like, “Oh, these are not really very impactful things.”
Corey: No. We should be precise about what we’re talking about because I think that’s where a lot of the fudge factor happens.
Steve: Yes. Absolutely.
Corey: Predictions are suitably vague, and then they get interpreted retrospectively as accurate.
Steve: So, you need some rigorous definitions of what you mean.
Steve: Yeah. Jamie, give us one example that you’re confident will come to fruition in the next 10 years.
Jamie: I’ll give a bunch. One is that we’re going to see increased use of IVF and embryo screening by a much broader set of potential parents. So, right now, a little under 2% of people in the US are having kids through IVF. It’s about 5% in Japan, and around 10% in Denmark and Norway. I’m pretty confident that those percentages are going to go up because taking conception outside of the human body just allows us to apply science to the process of baby-making.
Jamie: I also think that we’re going to change, little bit by little bit, but it’s going to add up, the way we think about what a human is and what human potential is. Right now, a kid is born, and we say, “Wow! This kid, they could be a great mathematician or maybe they’ll win the Olympics in the 100 meters.” Most kids aren’t going to win the Olympics in the 100 meters. Take me, for example. I like to run. I do extreme sports. There’s nothing I could do to win the 100 meters with the biology that I have. I think we’re going to have to recognize that we are going to have relevant information, not entirely predictive, but probabilistic information that’s available to us that’s real. That’s going to change the way we think about fate and potential, and even what a person is.
Jamie: So, that, I’m pretty confident, is going to happen over 10 years. Right now, as I said, two, maybe three gene-edited babies have been born. My guess is in 10 years, there will be 20,000, 30,000, very, very big numbers. It’s not going to be … I talk with George Church from time to time about this. He thinks we’re going to make 10,000 changes. I don’t think that, but I think there’ll be single gene mutation changes to lots of pre-implanted embryos, either to reduce some kinds of risks or to provide some kinds of enhancements, and it’s going to be impossible to categorize what is a therapy and what is an enhancement.
Jamie: So, I’m pretty confident that these things are going to change, and none of them is going to happen in these huge steps. It’s just going to be, as I write about in the book, a bunch of gradual steps, each very logically emanating from the last.
Steve: So, on a 10-year timescale, I counted three predictions from you. I agree with all of them. Now I just want you and Corey to fight.
Corey: Look, I’d give you a pass on the first prediction because I think that’s a very high probability event already. We already know that IVF is out there. We know that people are doing screening for certain kinds of conditions, and that’s going to expand. So, that’s almost not a prediction. I don’t want to be too harsh on you, but that’s almost not a prediction.
Steve: It’s okay. These are just category things, right?
Corey: I want to come back to the gene-edited question. I think that’s a pretty substantive one. The other is the prediction on traits. The gene-edited one is really interesting because that’s going to require some infrastructure. It’s not going to be a Chinese scientist allegedly doing this undercover, although we, in fact, don’t know whether he did it undercover or with the support of the Chinese government. You’re going to have to have hospitals. You’re going to have to have regulatory bodies, or at least have the regulators look the other way in these cases. So, I think that’s an issue of public policy and the question of whether people actually try to enforce these bans or not.
Jamie: There are no bans.
Corey: You’re right. There are no bans. There are these discussions of bans, right? Yeah. The question is, is this going to turn into anything real or is it just a lot of hot air?
Steve: Right. Let’s take just that one because you have another one after that.
Steve: Let’s take that one. So, I would say the infrastructure needed is basically what you have at any medium or large IVF clinic. That’s actually where He Jiankui did it. So, the infrastructure requirement is relatively low, and there are many countries, so many different regulatory situations. So, even if 90% of them ban it outright, there’ll still be at least a handful of countries where it’s legal. So, so far I think your comments don’t preclude it happening. I think to me the biggest variable is-
Corey: No, I’m not arguing it’s not going to happen. I’m just saying I think a lot of other attendant changes will have to occur.
Steve: Okay, but 10,000, by the end of 10 years, 10,000 will have been born, 20,000 will have been born. Does that seem too aggressive to you or-
Corey: No. Honestly, I don’t actually know how you’d assess that even to an order of magnitude at this stage, but I’m hoping that Manifold goes on for 10 years, we’ll have Jamie back, and we will check. That’s extremely important.
Steve: Okay. So, you’re on the fence. You think 10,000 could be way up.
Corey: Yeah. 10,000 is plausible, right?
Steve: Oh, it’s plausible. Okay.
Corey: It’s plausible. I mean, 2,000 is plausible. 20,000 feels-
Steve: Too high.
Corey: Too high. Yeah.
Steve: I’m in the same ballpark as you. I think the main variable to me is what is the main benefit that you can identify from, say, a single-
Corey: So, that’s what I wanted to ask. I was going to ask Jamie what traits do you think will be engineered over the next 10 years.
Jamie: Yes. So, “engineered” is a strong word. Just in my general philosophy, when I think, “Well, what are …” There are these very complex systems that we don’t always fully understand, and then there are different categories of intervention, and in very crude terms, I will say there are categories of intervention that don’t require a complete understanding of the systems that are being manipulated and ones that do.
Jamie: So, for the applications where you don’t really have to fully understand the system, those are much easier for us to do. So, for example, with the microbiome, I mean, there’s this whole effort to fully understand the microbiome and how it functions. It’s so massively complex. It’s really tough to do. We also know that without fully understanding the microbiome, you just give a fat mouse a fecal transplant with some skinny mouse’s fecal matter and somehow the fat mouse gets skinny. It doesn’t require a complete understanding. That’s why when I talk about IVF-
Corey: It’s not just mice. This phenomenon has been observed in people, too.
Jamie: Yes, yes, yes. So, shifting that to genetic technologies, that’s why I think that the real driver of these technologies is going to be IVF and embryo screening, because we’ll just have more and more information forever, and that’s why the work on polygenic risk scoring, or just polygenic life scoring, is so important. We won’t need to have a complete understanding, and if you’re selecting from among 15 pre-implanted embryos, you just have a lot of information. Someday, using induced stem cells, you may be choosing from 10,000 or a million pre-implanted embryos, and we’re going to be able to drive a lot of change through that.
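[A note on the statistics here: the advantage of selecting among more embryos can be illustrated with a toy simulation. This is purely illustrative; it assumes each embryo's predicted score is an independent standard-normal draw, which ignores the reduced variance among genetic siblings and the imperfect accuracy of real predictors.]

```python
import random
import statistics

def best_of_n(n: int, trials: int = 20000) -> float:
    """Average score of the top embryo when selecting the best of n,
    modeling each embryo's predicted score as an independent
    standard-normal draw (a toy model, not a real genetic model)."""
    rng = random.Random(0)  # fixed seed for reproducibility
    return statistics.fmean(
        max(rng.gauss(0, 1) for _ in range(n)) for _ in range(trials)
    )

# The expected maximum grows slowly (roughly like sqrt(2 ln n)),
# so more embryos help, but with diminishing returns.
for n in (1, 15, 100):
    print(f"best of {n}: mean top score ~ {best_of_n(n):.2f} SD")
```

Under this toy model, the best of 15 draws sits well above the mean of a single draw, but going from 15 to 100 embryos adds much less than the first jump did, which is why the move to large embryo pools matters less per embryo than it first appears.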
Jamie: So, then after we’ve done the embryo selection, provided that it’s safe and proven, and it’s certainly not there yet, I’m part of the World Health Organization international advisory committee on human genome editing. It’s a mouthful. We’ve put out a very strong statement saying anybody who gene edits a human embryo designed to be taken to term is acting unethically at this time. That doesn’t mean that’s always going to be the case.
Jamie: So, I think that the interventions that we’re going to do, as I said earlier, if and when they’re safe, are going to be single gene mutation changes that eliminate some kind of harm, and that kind of harm will often be dominant single gene Mendelian disorders, or they will be benefits where there’s a single gene mutation that confers some kind of advantage, whether it’s virus resistance or something else. There’s a long list.
Jamie: Actually, George Church probably does the best job of maintaining this list of these single gene mutation changes that could be potentially targets. That’s where [inaudible 00:26:28] I mean, that’s how he got to the CCR5.
Jamie: So, I think it’s going to be … It won’t be engineering whole traits because most of our meaningful traits are genetically complex. For those traits, we’re going to use probabilistic embryo selection. There will be relatively small and discrete numbers of single gene mutation changes. I think that’s what’s going to be the growth area.
Steve: So, just to summarize, I think we maybe all agree that to get to that 20,000 number 10 years from now, there will have to be some countries in which it’s legal, there will be some beneficial single gene, single edit mutations that people find are desirable, and then the off target safety issue will be largely solved by improvements or further testing of CRISPR. Those are the three ingredients, and then 20,000 is not a crazy number. Okay.
Jamie: Yeah, yeah. I agree. So, for the off target, I mean, off target has been a huge issue, but when you look at the speed with which that problem is being solved, it’s not fully solved yet, but it’s getting more and more and more precise.
Steve: I was on a panel this summer with a University of Chicago molecular biologist named Bruce Lahn, who claimed to me that, actually, already from what he knows about mice, the off-target problem is a very minimal risk right now for many choices of vector. He thought it would be solved for sure in the relatively near term.
Corey: Just to add a little context with a fact from your book, Jamie: you state that on average, as a man ages, 2.9 mutations occur in his sperm cells each year, and it’s been realized that that’s the main thing that is happening pretty much from the time you’re 18 on. If we’re worrying about off-target effects, which might be one off-target edit in a whole genome from CRISPR, a guy like me who waited till age 47 to have kids looks like I’m putting-
Steve: You’re a criminal. Well, I think you raised a great point, Corey, because, okay, again, for our audience, what he’s pointing out is that older men have a higher error rate in the production of their sperm, so they get more mutations in their sperm. You could compare the added risk to your progeny from waiting a year to have the kid versus doing the CRISPR edit, right? I think that’s the analogy you’re trying to make.
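[A quick back-of-the-envelope version of this comparison, using the 2.9 mutations-per-year figure cited from the book, counting from age 18, and assuming, purely for illustration, one off-target edit per CRISPR intervention (a number not taken from the source):]

```python
MUTATIONS_PER_YEAR = 2.9   # de novo sperm mutations per year of paternal age (figure cited above)
BASELINE_AGE = 18          # age from which the extra mutations are counted
OFF_TARGET_EDITS = 1       # assumed off-target edits from one CRISPR intervention (illustrative)

def extra_sperm_mutations(paternal_age: int) -> float:
    """Extra de novo mutations in sperm relative to an 18-year-old father."""
    return max(0, paternal_age - BASELINE_AGE) * MUTATIONS_PER_YEAR

# Compare accumulated paternal-age mutations against one hypothetical off-target edit.
for age in (25, 35, 47):
    extra = extra_sperm_mutations(age)
    print(f"age {age}: ~{extra:.0f} extra mutations "
          f"(~{extra / OFF_TARGET_EDITS:.0f}x one off-target edit)")
```

On these assumptions, a 47-year-old father's sperm carries on the order of 80-plus extra de novo mutations relative to an 18-year-old's, dwarfing a single off-target edit, which is the scale mismatch Corey is pointing at.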
Corey: That’s right. Well, it’s partly that, but it’s also the question of just what’s the overall risk that we’re concerned about. We seem very concerned about risk in CRISPR for off target effects. We’re not actually focusing on just natural.
Steve: No. Absolutely. Right.
Corey: To be concrete, right? These are not abstract discussions. It’s quite well-known that as men get older, rates of autism in their children go up. In fact, they start going up by around the mid-30s.
Steve: Right. We even know a phenotypic effect of those extra mutations is higher risk of autism.
Corey: Exactly. Yeah.
Steve: I think you’re making the point that people, they don’t think consistently about risk. So, familiar risks, they tend to discount. Then if somebody comes with, “Hey, new treatment,” then the bioethicists look at it extremely critically and they say, “Yeah, but what about this tail risk?” and then your point is that we’re already dealing with tail risks of that size or larger in just men waiting longer to have kids.
Corey: That’s right. That’s true for risks across the board, whether a number of people dying from storms, lightning strikes, mass shootings, et cetera, et cetera.
Steve: Car accidents, yeah.
Corey: Terrorism events.
Jamie: Yeah. Yup. Exactly. Nature has an error rate. So, if nature, whatever that is, was perfect, we wouldn’t even be having this … We’d probably still be-
Corey: We wouldn’t be evolving.
Jamie: … but nature has an error rate. So, I mean, it’s the rational conversation about self-driving cars that we’re not able to have. We have a million people dying from car accidents every year. If we convert entirely to self-driving cars, and only 900,000 people die, well, that’s a victory, but people don’t see it that way. So, we don’t just need to do better than nature, we need to do hugely better than nature to have very prevalent adoption of any technology.
Corey: One of my favorite stats regarding auto fatalities is the rise in fatalities after 9/11, because after 9/11, a lot of people stopped flying and started driving everywhere. It turns out the increase in auto fatalities during that time was greater than the number of people killed on 9/11.
Corey: So, more people died basically by trying to avoid planes then.
Steve: So, I don’t mind when average people, or even policymakers, who typically are not statistically trained, think inconsistently about risks and utility, but when “bioethicists” think inconsistently about utility, that does bother me, because they often make arguments in which they’re super, super critical about any new technology, but they don’t do the correct calculation comparing it to risks that we already deal with in our everyday life.
Corey: I wouldn’t single out bioethicists as opposed to anybody else pushing policy of a certain kind. Most policymakers, most people advocating some kind of policy, have a blind spot. Often, most people have blind spots.
Steve: Oh, I’m not disagreeing. I’m just saying I expect a higher … I have a higher bar for people in the academy who are purporting to actually advise society on what should be done.
Steve: I don’t know. I just do.
Jamie: Well, they’re getting paid.
Steve: Yeah. They’re getting paid for it.
Corey: People get paid for all sorts of stuff.
Steve: So, now, back to the third prediction 10-year timescale, you were going to comment on it.
Corey: So, the first thought that popped into my head is, again, although the technologies categorically differ, this is not something that’s new to us. Remember, East Germany had incredibly successful Olympic teams for years and years without real genetic information about people, but they could look at kids early on, run them through obstacle courses and all sorts of athletic tests, and, well before the kids came of age, they would be able to pick out who was potentially a very, very talented athlete.
Corey: I think it’s quite plausible that you could get this information very early on from an embryo. No doubt about it. The question is how widespread this will be and whether people would be acting on it.
Jamie: Yeah. Exactly. I think that’s a great point. My novel Genesis Code is really about this, where the US learns that China has a secret genetic enhancement program where they’re identifying kids who have certain capabilities and enhancing those capabilities using gene editing, and then placing them in the equivalent of their Olympic sport schools, but for science and math and engineering and all sorts of things, and then winnowing those people to find out who are the champions among champions.
Jamie: So, certainly, just the phenotypic expression of life does allow us to identify who has the potential to be good at something, but as I write about in the book, how many potential Mozarts are languishing in refugee camps in Syria? So, I think that with this knowledge, we’re at least going to be able to identify potentially much larger pools of people who could be Einsteins, and some of them may not turn out to be Einsteins, but for societies that decide to organize themselves around that model of how human potential could be realized, this will be very appealing. It’s already starting to happen.
Jamie: I mean, Kazakhstan is screening their potential athletes to see who goes where, and the Russian military has announced that they’re doing genome sequencing of recruits to try to use that information to figure out who goes into what function. I mean, it’s very rudimentary now, but it won’t always-
Steve: I think that when it comes to state selection for talent in very, very narrow things, like, say, finding the tallest kid to be on your national basketball team, it is clear that genetic technologies will actually do better than what the old Soviet system or the old Chinese system was doing, like taking an X-ray of your hand to see how long your various bones were, things like that. You can do better than that now.
Steve: One of the things I was thinking of when Jamie mentioned the impacts of better genetic technology on society, and on the way society thinks about life and fate, is that already, with the cognitive predictors we have, you can predict upward and downward social mobility within a family or within a group of people based on DNA alone. So, once the general public becomes aware that this is actually possible, after willful obfuscation by the powers that be, I think it’ll change the way people view the dynamics of society: how fair is society, how much is hardwired in at the beginning. These are all really fundamental questions people have been grappling with since the dawn of time, but we’re getting closer and closer to being able to actually answer them. That could have a sociological impact, I think.
Corey: I just want to say that the governments you mentioned, Jamie, don’t exactly warm the heart: Kazakhstan, Russia, East Germany. I guess I mentioned East Germany. I’m wondering whether this is the kind of technology that gets exploited by essentially anti-democratic, authoritarian governments before it actually moves into the mainstream, if it does and if it should. But you have to admit, we’re not in great company when we say that these are the leading states in this area.
Jamie: That’s one possibility, but well beyond what the Chinese government, the Russians, or the Kazakhs may well do, the primary driver of the adoption of this technology is going to be so aggressive that it’s going to be extremely difficult for any political system to resist: parents. Once parents believe this technology provides a benefit, they are going to demand it, and if they can’t get it, they’re going to either go where they can get it or organize and force their governments to adapt to their wishes.
Jamie: The first tier of this will be people who are carriers of single-gene mutation diseases and disorders. Here in the United States, basically every disease has its community. Some of them are extremely well-organized, extremely political, extremely powerful, and they’re going to demand this. I have a lot of friends in Korea. In Korea, they have a national law requiring cram schools, which are these extra education schools people go to, to close at 10:00 PM, because people were having their seven- and eight-year-old kids going to these cram schools past midnight every single night to prepare for college entrance exams they wouldn’t take for another decade.
Jamie: I asked my friend, who had 12 tutors coming to his house every week, “If you could screen embryos to select one for implantation that has higher IQ, would you do it?” He looked at me like I was an idiot. It was an unimaginable question, because what is the alternative to doing it? Not doing it? It was almost unimaginable.
Jamie: So, yes, there will be states pushing this, and yes, there will be regulatory questions about where whatever it is is legal, illegal, or a gray area, but once it’s safe, or people think it’s safe and beneficial, parents are going to demand it.
Steve: I want to quote a line from your book, paraphrasing, not exactly quoting: China, by 2020, wants to sequence 50% of all newborns, and is investing nine billion dollars over the next 15 years in this project, with no privacy concerns. What implications does this have for US policy, in your view?
Jamie: Absolutely huge. For all of us, we feel, very rightly, that privacy is a personal issue. If somebody comes into my house and snoops around, I feel violated. If somebody gets into my email account and reads my messages, I feel violated. If I send my mouth swab to 23andMe and they sell, for example, my personal information to GSK, I feel violated. Yet, there is the basic fact that for everybody to benefit from genetic technologies, we need big datasets. We need pools of massive numbers of people’s genotypic and phenotypic information. You can ask the big experts in this field how big the databases need to be, but it doesn’t really matter, because we are going to get databases in the tens and hundreds of millions, and then billions.
Jamie: There will be a right answer to the question of what amount of privacy creates a competitive advantage for a society. You can imagine societies where everybody has 100% complete and absolute control of their personal genetic information, and every use of it, every research study, everything, would need to be approved by them. In that case, the people in that society would have lots of protection. They just wouldn’t have really any innovation in the field of understanding complex genetics.
Jamie: You can imagine the other end of the spectrum, where there are no privacy protections, and the researchers and the government have access to everything. On one hand, that could unlock a lot of research and a lot of ability to decode the secrets of the genome, but you could imagine, “Well, maybe people might revolt because they would feel that their genetic information was being manipulated and used against them.”
Jamie: So, if you say no privacy and complete individual privacy are the two ends of a spectrum, over time we’re going to figure out where the optimal place on that spectrum is, but the communal answer to that question and each person’s perceived individual answer may be different. That could mean that societies where individuals are more empowered end up losing some of their national competitiveness in the name of individual rights, while societies without those kinds of privacy protections, and China is certainly a good example, though you could go even more extreme, like North Korea, are potentially going to have some kind of competitive advantage. This isn’t an abstract question.
Jamie: I mean, the way we’re going to test this, just like what we were talking about before, is by who gets more gold medals. In 20 years, there’ll be a genetic Amazon, and it’s going to be based somewhere, and the answer to this question will determine where it’s based.
Steve: So, Jamie, I’m curious. In the current environment, where policymakers in Washington are really obsessed with China as a strategic competitor, are you sensing some potential for action on the US side, given what China might do in this area?
Jamie: Yeah. We’re already seeing it. You know this, Steve, through your work with BGI. iCarbonX acquired PatientsLikeMe, which seemed like a relatively benign thing: a little company that connected different communities of people organized around mostly single-gene mutation diseases. But now that acquisition has been blocked by CFIUS. Marco Rubio and some others have been making sounds about how genetic data has become a national security resource. So, yes, this is very much in the sights of our intelligence services and our elected officials, and it’s only going to become more so.
Steve: Yeah. I was aware of CFIUS blocking what seemed to be a pretty benign acquisition, but I have not yet seen any really meaningful investment on the part of the US government in advancing the science on our side of things. That’s what I have my eyes peeled for.
Jamie: Well, we do have great investment in the basic sciences. Certainly, China is investing a lot, but all in all, the US still tops the world by a long shot in investment in the basic sciences. That’s why no universities in China, on the whole, can even remotely compete in these areas with our best universities.
Jamie: But then we move toward application, and in the world of applications, China is much more aggressive than the United States. The primary near-term application isn’t going to be reproduction or direct-to-consumer genetics; it’s this transition from generalized to precision to predictive medicine and healthcare.
Jamie: So, China is certainly all in, and they’re able to move much more quickly than we are. In a way, they benefit from economies of scale: they have to move toward algorithmic medicine in China because they just don’t have enough doctors, and they need to find a way to decentralize care. In the United States, where we’re spending an obscene percentage of our GDP on healthcare, we have so many built-in stakeholders that it creates a level of conservatism that’s really difficult to overcome.
Jamie: So, again, most of this medical technology is not proprietary; the race is in the applications, and China is really moving quickly on the applications. That doesn’t mean the US isn’t doing anything. The NIH is paying a lot of attention to precision medicine, wanting to get it right, but it’s game on between, certainly, the United States and China.
Corey: I have another topic, actually, that I hope we don’t miss because we’re running out of time, and that’s the thought experiment you describe in your book of accelerating the reproductive cycle, basically shrinking the generation time potentially down to six months or less. I’d never thought about this possible application of iPSCs, as they’re called.
Steve: Can you define that for our audience?
Corey: So, induced pluripotent stem cells. These are, basically, cells that we can create from any normal cell. You essentially walk a cell back in time to an early developmental stage, where it has the capability of turning into any possible cell type in the human body. Jamie lays out a very interesting theoretical, maybe even practical, possibility. I don’t want to state your thought experiment for you, but it involves using early-stage embryos to generate sperm and egg cells, and then basically reproducing without ever getting to the adult stage. So, that sets the stage. Please describe it in more detail.
Jamie: Yeah. It’s exactly as you’ve said. Right now, using the technologies we’ve described, you have one set of parents, and they want to have a child using IVF and embryo screening, but they want a larger number of eggs to choose from. So, to make it near term, the mother has a skin sample taken. Those skin cells are induced into stem cells, then into egg precursor cells, and then into eggs.
Jamie: Now, let’s say she has 10,000 eggs, which are fertilized by the male’s sperm; the average male ejaculation has about a billion sperm cells. So now you have these 10,000 pre-implantation fertilized embryos. You grow them all for about five days, extract a few cells from each using an automated process, and sequence them all, because the cost of sequencing is trending toward negligibility.
Jamie: Now, from these 10,000, you pick one. Let’s just say you pick a male embryo. Actually, because at five days you haven’t even had full sexual differentiation, you could have any embryo, but to make it simple, we’ll call it a male embryo. Then another set of parents does the exact same thing and selects a female embryo. Again, forget the gender issue; in reality it’s a little more complicated.
Jamie: Now you have a five-day-old male embryo and a five-day-old female embryo. When you extract cells from each of them, these are embryonic stem cells, and the whole point of an embryonic stem cell is that it can become anything. So you take the boy embryo and the girl embryo, derive sperm cells from the boy and egg cells from the girl, and then make 10,000 more of these early-stage embryos, sequence them all, and select one based on whatever it is you are optimizing for. Most likely, it would be some kind of polygenic trait: the genetic component of IQ, or height, or personality style, or whatever it is.
Jamie: You could just keep doing that over and over and over, forever. Because of all these technologies, let’s say it takes six months per generation. That means in 10 years, you have 20 generations, whereas for humans it’s a little less than 30 years per generation. Using that, you could really push change across a population. As I write about in the book, knowing nothing about genetics, our ancestors took a wild chicken laying one egg a month and turned it into a domestic chicken laying one egg a day. Imagine what could be done with all of this knowledge.
Jamie: Let’s say we were optimizing for something, whatever it is. We could really push that thing in ways that could be very unfamiliar to how we think about humans in our current form.
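The generation arithmetic in Jamie’s thought experiment can be sketched in a few lines. This is a back-of-envelope illustration only; the six-month in-vitro cycle and the roughly 30-year natural human generation are the discussion’s assumed figures, not established numbers, and `generations` is a hypothetical helper.

```python
# Back-of-envelope arithmetic for the iterated embryo selection
# thought experiment: how many generations fit in a fixed horizon?
# Assumptions (from the discussion): 6-month in-vitro cycles vs.
# roughly 30-year natural human generations.

def generations(horizon_years: float, generation_time_years: float) -> int:
    """Number of complete generations that fit in the horizon."""
    return int(horizon_years // generation_time_years)

print(generations(10, 0.5))   # 20 in-vitro generations in a decade
print(generations(10, 30))    # 0 natural generations in the same decade
print(generations(100, 30))   # only ~3 natural generations per century
```

The point of the comparison is the ratio: a six-month cycle compresses roughly sixty natural generations’ worth of selection into each human generation.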
Steve: Now, for our audience, I just want to point out that that first stage, where you take a skin cell and induce it to become an egg cell, has been successfully done in mice. I think that’s a pretty well-accepted result, and there are at least a couple of labs, and I think one startup, trying to perfect the process for humans.
Corey: It’s been done for all sorts of cell types in mice: neurons, muscle cells.
Steve: Specifically for eggs, for oocytes, it’s been done, and there are people trying to work it out in primates or humans.
Jamie: Yeah. A couple of months ago, I was in the lab of Mitinori Saitou in Kyoto. I spent a full day with him and his team, and we had a great dinner really talking about this. It’s not like there’s a direct line from anything you can do in a mouse to doing it in a human, but we know that this whole iPS idea of taking cells back in time works in human cells. That’s already been proven.
Jamie: So, I think it’s a pretty good bet that this science is going to progress. The question is whether the error rate of this process will come to approximate the error rate of nature for humans.
Steve: Right. So, for at least that first step of your generational process, I would feel fairly confident, or at least not surprised, if 10 years from now it were fairly well-perfected. I’m curious what the people in Kyoto thought.
Jamie: Well, they were very conservative, because what Saitou-san said was that the only way we’re going to know this is safe is if somebody born through this process lives an entire life. We’ve seen in some of the cloned animals that they seemed pretty good early in life, and then they had problems later.
Steve: That conservatism could have been applied to test tube babies, the first IVF baby. So, yeah.
Jamie: That was my point. So, he thought three generations. He said, “Well, it’s going to take about 10 years to get to where the tech works, and then three back-to-back generations.” Then, because it’s Japan, he put those at 80 years each, which comes to about 250 years. That’s just not the way humans work. I mean, I told you, it was six years from CRISPR-Cas9 to the first human baby; nobody’s waiting. Louise Brown is 41 this year. It could be that all IVF babies drop dead at 42. We just don’t know, but nobody is waiting. We have, whatever, six, seven million kids who have now been born through IVF. No one is waiting three generations. I just don’t think that will happen.
Steve: Right. So, it sounded like, setting apart that careful three-generation checking, they think 10 years is also roughly the right timescale.
Jamie: The 10 years is for people making predictions. 10 years is a sweet spot because it’s far enough away to feel far away, but no one is going to come back to you next year and say, “Remember what you predicted?” Lots of things seem to be 10 years away.
Steve: Understood. I think we’ve focused in this discussion on about a decade as the timescale, not infinity. We’re all young enough to at least see another 10 years. Now, I would point out to these Kyoto folks that, obviously, if they had the technology working pretty well and could try it out on, say, monkeys, whose life expectancy is much shorter, then the safety issue could be largely, not completely, resolved by doing three generations of monkeys.
Jamie: I told them that. Yeah, I told them that. I think there’s also just a built-in conservatism to most scientists. If you say, “Hey, this is really exciting science. My plan is to totally recreate life on earth,” your funding dries up. So instead you say, “I’m trying to be extremely conservative. I’m trying to be responsible.”
Steve: So, I think Corey raised this as a way that human evolution could speed up tremendously. I think it’s not technologically out of reach. In a century, I would be very shocked if we didn’t have mastery of these technologies.
Corey: It’s the kind of technology that reminds me of a way in which we can bridge the gap between biology and silicon, because in some sense, this is how silicon generations happen: you try something out, you make small iterations on it, and over time, things change much faster than they do in biology. If you’re able to do something like this, and I’m not arguing that one should, you could easily see basically revising human biology very, very quickly, in a way that would not quite give you Moore’s law, but that would definitely be faster than anything we’ve ever seen.
Steve: There are old essays by Freeman Dyson pointing out that what’s likely to happen when humans finally get out into space is that we genetically modify ourselves to, for example, be able to do photosynthesis, be much more resistant to radiation, be better adapted to low gravity, et cetera. He thought that when we finally people the solar system, it will be with a different people: still humans, but engineered, too.
Jamie: Well, it must be. It must be. I mean, that’s the thing. It’s easier to talk about crops: we’re changing our climate so rapidly that we’re going to have to change crops so that people can keep farming and stay where they are. We are a species that’s optimized for this kind of life. If the environment around us changes, the old-fashioned way is, well, if you’re not optimized, you just die, and that’s why so many species die out.
Jamie: We don’t like that as humans. I mean, we have to become an interstellar species, because our planet as we know it and our sun are eventually going away, and we can’t do it in this form. That’s why, with all of this, we have to think about the connection between what’s possible and ethical now, and tomorrow, when we may be facing a different set of circumstances.
Jamie: So, everybody likes to think, “Well, the world stays exactly like it is, with this set of options,” but maybe our world could change fundamentally: because of climate change, because of some kind of pathogen that wipes people out, because some asteroid crashes here. Who knows?
Steve: … or wanting to live on Mars or Titan.
Corey: So, I’m not really convinced that we’ll be inhabiting other planets comfortably as a result of changes in our biology. I think it’s almost certain that we will develop technology that allows us to live there far sooner, and to a far greater extent, than we will develop an ability to, say, generate energy by photosynthesis.
Steve: Okay. Photosynthesis is a pretty big reach, but if you accept this hundred-year timescale for getting fast, six-month-generation evolution under control, then the Star Trek-like technologies you need to be comfortable in space without changing us, and the biological technologies, are competing on a similar timescale, right?
Corey: I think they’d be competing, but there are details of this sped-up evolution worth really thinking about, one being that evolving traits often has negative consequences that take a while to recognize. I used to study fruit flies, and one of the well-known papers involves selecting fruit flies for learning ability. You could basically raise a fruit fly’s ability to learn a task pretty substantially in a couple of generations, but these flies had serious problems: they were much more susceptible to stress, and I think they had low reproductive rates. Maybe we’ll have genetic technology to identify those problems at the embryonic stage.
Corey: So, there are all these consequences whose potential I think researchers are drawing attention to, and you may not see them unless you actually run the live experiment, where you allow the animal to develop for at least a few years, and that’s going to slow things down considerably. I think we’d be running this experiment at the embryonic stage pretty quickly, but having people walking around 10 generations down the line is another matter.
Jamie: I agree, because evolution is a balance. There are certainly lots of bugs in evolution, and sometimes we call them cancers or diseases or whatever, but it’s a balance that has developed over billions of years. We may be optimized for certain environments, and things can change quickly. So, we must have a level of humility. We must recognize that this is really serious stuff; we can’t be cavalier about it. We have to have a series of inclusive, ongoing dialogues about the ethics to make sure that we are values-driven, but this science is very, very real. It’s moving rapidly, and my view is we should shape it with our best values rather than sit back and let other people’s decisions shape our world and ourselves.
Steve: Yeah. I certainly agree that technology is moving faster than society’s rate of learning about it or rate of having a deep understanding of it. I think it’s really important what you’re doing to try to make people aware of what’s going on here.
Corey: I guess I’m a little bit of a cynic as far as the role of values in this discussion. I don’t think this stuff is going to be developed on the basis of our best values. In fact, another argument you pushed is that it’s going to be parents’ desire to see certain offspring, just the raw desire to succeed and have your kid be better than other kids. I don’t think that’s based on the best values, but I think that’s what will drive things.
Jamie: Yeah, and yet society has developed norms for what is and isn’t okay. We do have these elements of social consensus, and sometimes we can’t even see them. It’s not that I feel like norms will drive everything, but I do believe that social norms set a range of at least socially accepted behavior, and that counts for something.
Steve: Corey, I might agree with your cynical view that norms are not going to drive things; self-interest maybe will, possibly more in a dystopian than a utopian direction. On the other hand, it’s quite noble to try to be a person who talks to policymakers and leaders, like Jamie does, and tries to make them aware of what’s coming.
Jamie: Hopefully more than noble. Hopefully, it will have an impact, but I’m doing my best.
Steve: Well, Corey is saying that you’re not going to get there. I think he-
Corey: I think what he’s doing is really noble, and it’s a good analytical and intellectual engagement with a very complex issue, but I don’t know if good arguments and reason are going to win the day in this context.
Steve: Right. None of us do.
Jamie: That’s exactly right. None of us do, but we have had technologies that could have been used in all sorts of ways, for example, biological weapons or nuclear power, and norms have actually guided how they have and have not been used. I feel like this is another one of those things where norms aren’t everything, but they’re also not nothing.
Steve: Do you remember the Asilomar Conference? In the early recombinant DNA days, they had this meeting to try to set world standards for what scientists should and shouldn’t do. I felt like that wasn’t really so necessary at that time, but it is becoming more necessary now, and I think that’s what people like Jamie are trying to do.
Jamie: It’s funny. I’m just in the middle of writing a little blog post that I haven’t finished or put up yet, and basically, I talk about Asilomar. Everyone in the science community says, “Asilomar, that’s the model of what you should do.” At the early stages of recombinant DNA, the scientists and other stakeholders got together, they laid out a responsible set of guidelines, and those guidelines were followed. That’s why consuming GMO crops today is 100% safe. There’ve been more than 40 years of studies, and it’s never been shown that consuming GMO crops is any less healthy for people than otherwise.
Jamie: Asilomar, in my view, was still a total failure. It could have been worse, but because there wasn’t broad public engagement, the public felt that all these scientists and companies like Monsanto were pulling a fast one on us. That’s why now millions of people die in Africa and South Asia: they can’t use GMO crops, because if they did, they wouldn’t be able to export, mostly to Europe.
Jamie: So, even if the scientists organize themselves well, it’s not going to work unless we have an inclusive public engagement and empowerment process, so that everybody feels at least like they’re part of something, part of a decision-making process.
Corey: I think you’re entirely right about GMOs, but I’m actually not sure there’s much you can do, because a lot of the fear about them is deeply, deeply irrational. You have scientists out in front trying to explain to people the positive benefits of GMOs and the low risks, but you’re going up against fear of the unknown, and what I call the tyranny of the natural.
Steve: Yeah. Your counterfactual, that if they had run Asilomar with more public participation it would have fixed this problem, might be a little bit too optimistic. I think I agree with Corey.
Jamie: No, it’s not a 100% story, but it’s not a 0% story either. There are examples, like the early stages of immunizations, or the Jewish community with Tay–Sachs screening, where people get that there’s something happening, and it’s significant.
Jamie: So, I think there are these moments early on, like with the birth control pill. In the beginning, there was this whole negotiation with the Catholic church, because the people who developed the pill were trying to say, “Look, this is just perfectly natural.” That’s why the pill has the off days and preserves the monthly menstrual cycle.
Jamie: So, the framing of these issues at the beginning can often have, in some ways, a determinative effect on how at least the public discourse plays out over time, in my view.
Steve: I agree with you. All right. Well, we really are out of time. We’re going to have to have you back, Jamie, because this was so much fun.
Corey: Yeah. It’s really a pleasure.
Jamie: I loved it.
Steve: As my final item, I’m going to throw out three sci-fi items related to genetic engineering that I really like, one a movie, one a TV show, and one a book, and you can add whatever you like for our listeners. The movie is Gattaca; the TV show is Space Seed, the Star Trek episode where the character Khan is introduced; and the book is Dune by Frank Herbert. So, whatever you want to suggest.
Jamie: Wow! Old school. Well, I actually-
Steve: I’m old, man.
Jamie: No, I’m old too. I don’t watch a lot of television, but I was addicted to the Battlestar Galactica remake, and since your podcast is young, I’m sure most of the people listening are total nerdy people like us, so I highly, highly recommend that. For books, I’m a huge fan of Richard Powers and just everything that he writes. I just read his book Orfeo, which is great. It’s a wonderful story about a composer who develops this complete passion for biology and creates his own biology lab, and then he’s accused of being a terrorist. He’s driving across the country to go visit his daughter, reliving his whole life, but it’s this beautiful thing: the connection of the symphony of music and the symphony of biology.
Jamie: Then movies, I don’t know if there’s a genetics movie that I’ve particularly loved. I mean, I liked Gattaca, although it really pissed me off, because I felt like the Ethan Hawke character was putting everybody’s life at risk.
Jamie: The last person you want on a space program is someone who’s not genetically optimized. I mean, they should arrest him for that.
Steve: … and a pathological liar, a deceiver.
Jamie: Yes. Exactly. Exactly.
Corey: … but he’s human.
Jamie: That’s the problem. I love humans, but we’re not optimized for everything. Every individual isn’t optimized for every outcome. It’s painful for us, because we are addicted to this wonderful belief that anybody can be anything. I just think this change is coming. We are looking under the hood of what it means to be a human being. A hundred years ago, everybody was saying, “Oh, we are hormones. Hormones are everything. They define who you are.” We are much more than our genetics, but our genetics really define, in many ways, the range of possibilities of what we can be. We’re going to have to face that.
Steve: Great. Jamie, thanks very much.
Jamie: All right, guys. I really enjoyed it.
Steve: Take care.
Jamie: All right. Bye.
Corey: Steve, I should point out that you actually missed two pretty important works of Jamie’s that I was fascinated by, and that we may have a chance to talk about today: two books on the Cambodian genocide. Is that correct?
Jamie: Yeah. My first book was a history of the Cambodian genocide, and specifically of why the world failed to respond to something so terrible. The second book is connected to my belief, which I’ll explain later, in the connection between nonfiction and fiction: that we have to learn about the world, but then tell stories about the world to bring people into the conversation. So, my second book was a novel called The Depths of the Sea, which explored issues around the Cambodian genocide through the stories of different imagined people involved.
Steve: Well, I was trying to focus us more on futurism and genetic technology, and maybe its relationship to policy, but it’s amazing stuff that you’re referencing, Corey. Since we’re on this slight digression, and maybe this will get cut out, but Jamie, have you seen this documentary where I think they go back and actually interview some of the people responsible for the genocide? Do you know what I’m talking about?
Jamie: I don’t know which one. I certainly have seen a lot of things about the genocide, and there was this whole hybrid UN-Cambodian tribunal that interviewed a lot of people. So, it’s really unimaginably terrible, and it’s so recent. It’s within all of our, at least the three of our, lifetimes, and I think that these kinds of things, we really have to understand and not just sweep them under the rug.
Corey: I think genocide is really a fascinating topic for understanding political perspectives, because whether someone cares about a particular genocide often seems to depend upon their political orientation. What happened is that many people on the left did not particularly care about the Cambodian genocide and often played it down in some ways. And often you find atrocities that happen in other countries that people on the right don’t care about. So, it’s a fascinating prism for understanding what looks like an objective evil, but people bring their political perspectives to it, and atrocities are minimized or played up for those reasons.
Jamie: Well, people bring their political perspectives to life. When I was at Oxford and I was working on my dissertation, which became that first book, Noam Chomsky came, and at that time, when you’re writing a PhD, you are the world expert in your little narrow thing. So, I knew absolutely everything, every detail about the international response to the Cambodian genocide.
Jamie: So, I asked Noam Chomsky this question because he’s a real villain in this story, and he was one of the people who was denying these very credible stories that the genocide was taking place. So, he brushed me off, and then he said, “All right. Well, send me a letter about it.”
Jamie: I sent him this letter, and it was this point-by-point case, as only a PhD student can make, just saying, “Here’s what was happening. Here’s what you said.” Then I got back this very heated response from him. So, you’re absolutely right. I think that, maybe to pivot to our topic of today, in every generation there are so many huge, morally vexing issues, and the challenge for all of us as humans, whatever our political perspective, is to take a step back from ourselves and really try to say, “How do we do the right thing?” Doing the right thing is often really complicated, because if it was clear and easy, everybody would do the right thing, but it’s always complicated and difficult.
Jamie: Whether the issue is how to respond to a genocide, maybe when you don’t even have all the information, or when there are political forces that make intervening extremely difficult, or something like now, where we have the tools to recreate life on earth, and we could do it in a way that helps our planet and helps everybody, or we could do it in a way that harms us all, or wipes us or other species out. We have to find a way to really engage with these tough, difficult, complex issues, so that we can find the best way forward that optimizes the good stuff and minimizes the bad stuff.
Corey: Just one last point I should make, because Chomsky is actually an old friend of mine. I was a student of his, and I was actually thinking of him as I was making those comments. One thing I really do like about Noam is that, for all of his ideological rigidity on many points, he will engage. You write him a letter, and he writes back. He’ll write a long letter back. I mean, he’s not writing these now. He’s 90, but I’ve had friends who had debates with Chomsky that ran to tens of single-spaced pages.
Jamie: Yeah. Me, too.
Corey: That’s something that I think has completely disappeared. I don’t think anybody ever really did it to the extent he did, but these days it’s often devolved into name calling, and Chomsky didn’t do that. He’d respond with the facts from his perspective. He’d wait for you to come back with your perspective. It was a respectful argument. I wish we could really go back to those days. It was really a wonderful time that I think is lost in current political debates.
Steve: Okay. I have to tell my Chomsky story just because it turns out I guess all of us know Chomsky a little bit.
Steve: When I was a junior fellow, he had been a junior fellow as well. We could have guests for dinners and lunches. So, I invited him, but he had a very tough relationship with some of the senior fellows. He didn’t get along with Burt Dreben, maybe you know that name. So, he came to lunch because he wanted to meet the younger junior fellows, not the senior fellows.
Steve: So, he came and we hung out, and we had a great time. I really enjoyed getting to know him. He actually has a pretty decent sense of humor. I worked with a mathematician who also had an interest in Chomskyan grammar. This guy was like Chomsky’s proof checker. He attended every one of these seminars at MIT that Chomsky used to have. Whenever Chomsky got stuck, he would ask my friend, “Oh, can you fix this for me?”
Steve: So, I actually went through, I forget what it’s called, the name for his structural theory, the way that he deconstructs sentences.
Corey: The transformational grammar?
Steve: Yes. So, I went through that in great detail with my friend, and I was a skeptic. I didn’t think it actually followed rigid rules. I think Chomsky was making things up at various times, and my friend was never able to convince me that what Chomsky was doing was fully axiomatized or rigorous. So, anyway, we should discuss that on some other episode.