David Skrbina on Ted Kaczynski, Technological Slavery, and the Future of Our Species – Episode #7

David Skrbina is a philosopher at the University of Michigan.

Steve: Okay, our guest today is David Skrbina, a professor of philosophy at the University of Michigan. David has very broad interests, but today we specifically want to talk to him about the philosophy of technology. Perhaps his most well-known work is a book which was written in collaboration, in a sense, with Ted Kaczynski, the Unabomber. That book is called Technological Slavery, and it includes not only the original manifesto, which the Unabomber compelled The New York Times and The Washington Post to publish, but also correspondence over a period of years between David and Kaczynski himself. So David, I want to welcome you to our podcast.

David: Thanks, glad to be here.

Steve: I just want to say a little bit about your background, or hear a little bit about your background. So you started out in STEM — you were actually studying mathematics, through a master’s degree, is that correct?

David: Right. That was my initial degree program. It led to a master’s degree in math in the mid ’90s, that’s right.

Steve: And coincidentally, or perhaps not, that was the same graduate program where Ted Kaczynski earned his PhD, also in mathematics.

David: [affirms] Entirely coincidental by the way, so… yes.

Steve: You didn’t know him?

David: No.

Steve: And have you ever met him?

David: No, I’ve never met him, no.

Steve: Now, my understanding is that your interest in the philosophy of technology predated your becoming aware of the Unabomber.

David: Right. So this goes back to the early ’80s, when I was an undergrad at U of M studying math, science, and computer science. So I understood the issues, I understood how technology worked, I had a pretty good grasp of the basic issues; but I was also interested in the humanities and philosophy, concerned about the human implications and then eventually the environmental implications of technological society. And there were some people at the time, philosophers at Ann Arbor and some others I had been reading, who were skeptical and critical of technology in ways that were relatively new to me. But I found their arguments pretty strong, pretty compelling, and I did a little background research and eventually tied into other books like Jacques Ellul’s The Technological Society, which was published in English in 1964. So I read that one very carefully, and he had a lot of compelling arguments about how technology worked, what it meant for society, and what some of the implications were. And this was, like I say, back in the ’80s, well before the Unabomber case became public. So I had a pretty good background and understood the issues. I was pretty skeptical myself — I hadn’t really developed much of a philosophy of technology yet — but I was aware of the issues and certainly interested when the whole Unabomber case came along in the ’90s.

Steve: So were you actively following the case? So, for example, were you aware of who he was and why he was blowing people up?

David: Yes, well, I was vaguely interested in the beginning. You hear about a serial mail bomber, that’s kind of interesting. But it really got interesting when I found out there was an ideology behind this person — that he actually had a kind of mission in mind, and an anti-technology stance he was promoting. So then I was really interested, and I was following the case pretty closely. This was in the early ’90s. And it was funny how the news media would only dribble out little tidbits of information about the manifesto — you’d get just a sentence or two, or maybe a paragraph at most — and it was really interesting. So I was really intrigued to follow how this thing was going to develop.

Corey: During the course of the bombings, was it clear what his ideology was? Was he making pronouncements? How would people know what his motivations were, before the manifesto was ultimately published?

David: Yeah, it was only at the very end. It was kind of a mystery what the motives were, I think, until the very end, when he started communicating through letters to authorities — government or media authorities, I don’t know everyone he wrote to — but for quite a long time I think it was pretty unclear what the motives were, and only in the last year or two did it come out that he actually had a structure and a plan behind his actions. So that was a relatively late development, yeah.

Steve: So you first got in touch with him after he had been arrested — you wrote to him, I believe, when he was already in the supermax prison?

David: Yeah, right, so that was several years later. The manifesto was published in ’96 — sorry, late ’95 — he was arrested in ’96, there was about a year-long trial process, and ultimately he ended up in the supermax prison in Colorado. I was out of the country at that time on a work-study program, came back, completed my PhD in philosophy in 2001, and started teaching at U of M Dearborn in 2003, so several years went by. And one of my first projects when I was teaching was to teach philosophy of technology. It was an area of interest of mine, and it turned out that there was no such course at U of M Dearborn — in fact, in the entire U of M system they had never taught a course called philosophy of technology. And I thought, well, this is strange: we’re in a technological society, there are many philosophical issues, and it seems like we need such a course. And they said, well, if you want to do that, then you have to make one. So I said okay, I’ll create a new course — which I did. I created a new course called philosophy of technology, and I put together a course pack — ultimately it became a textbook of reading material — for the students. It was basically a set of historical critiques running from the Greeks through recent times — and of course, among recent critics, you have to include people like Kaczynski and the manifesto. So I had the full text of the manifesto, but at this point it was six or seven years past the time of the arrest, and Kaczynski had really dropped out of sight for those six or seven years. You heard nothing about him: there was no news coverage, I wasn’t really sure where he was, I didn’t even know if he was alive — it was really a black hole. So I knew if I wanted the latest information I would have to contact him directly. So I just wrote a letter — this was late 2003 — just a letter out of the blue. It was like, you know, hello Dr. Kaczynski, I have a few questions for you if you’re out there. And I sent it off to a prison address with no expectation of getting a response. About two weeks later I got a one-page handwritten letter from him, and he said thanks for your letter, I’ll answer it in detail shortly. And about two weeks after that I got a roughly twenty-page handwritten letter from him addressing all my questions. So that was the initiation. The process of course led to more questions and discussions — what material have you been writing, what are your recent thoughts, have you altered the ideas in the manifesto — and then a whole interesting string of events followed after that.

Steve: Now, I’ve looked at a fair bit of your correspondence, and is it fair to say that you were engaging him at the level of ideas, as one intellectual to another? There isn’t much about the crimes themselves, the bombings — maybe not even much about his psychological motivations — but really just the philosophical case for why he felt moved to do these things.

David: Right, that’s exactly right. In fact, I made that clear right at the very beginning: that I had no real interest in his background, his personal history, the trial, his motives in a sense. I think at one point I said, you know, I’m sure that’s all fascinating stuff, but that’s not my area of interest. I’m just interested in the philosophy of technology, how the system works, how it’s going to progress, and what we can do about it. And that’s exactly what he prefers to talk about. He wants to talk about technology and how we’re going to respond. So across the well over 120 letters we’ve exchanged, that’s really been the sole topic of discussion.

Corey: So I want to ask just a definitional question: how would you define the philosophy of technology?

David: Well, okay, so you can have a philosophy of any topic, right? For anything, you want to understand what it is, how it functions, and how we should respond to it in the world. So for philosophy of technology: you want to understand what this technological phenomenon is, how it functions in the world, and how we can react or respond to it. I guess that’s a very general definition, but that’s what I would say in a larger sense.

Steve: So I’d like to get into more detail on both how you interpret Kaczynski’s thesis and your own philosophy of technology and technological advancement in society. But before we do that, let’s cover one slightly different thing: is it ethically defensible to examine the ideas of evil people? In some sense, one could even say that you are the instrument that Kaczynski tried to create by killing people, right? Because his stated motive for killing people was to draw attention to his ideas. And I don’t think he could have asked for more than to have a highly intelligent, STEM-trained philosopher at a leading university writing books with him. So how do you respond to that?

David: Okay, right, so that’s the obvious question. The first thing is that these are not his ideas. These are very old ideas which go back thousands of years, and there’s a long history of critics who have been just as critical as Kaczynski. What he does is take very old ideas and put them in a very modern context. So that’s the first thing. Secondly, I think we have to separate the crimes, which I’ve never condoned, from the issue at hand, which is the technological system. So yeah, I have nothing to say about the crimes. I don’t advocate them, I don’t support them. That’s a completely separate issue. To me the entire issue is the technological system, because it presents grave threats to humanity and to the planet, and we need to talk to people who are willing to engage with that discussion in a serious way. Kaczynski is an intelligent, informed critic, and he’s willing to engage in it in the most serious manner. I don’t agree with everything that he says, but he raises some very important issues, and I think we need to debate the pros and cons of his position, understand the arguments, and decide how we collectively will respond, because it could mean life or death for the planet. So to say that, well, he’s a criminal, he’s done evil things, and therefore we shouldn’t talk about him serves no purpose at all. We do ourselves no favor by shutting him out just because he’s done bad things. We still need to talk about the issues, this happens to be one context in which to do so, and I’m willing to take that opportunity.

Steve: So, staying with this for just a bit more before we get on to the actual thesis: we recently had a horrific shooting in New Zealand. The killer there posted a fairly substantial manifesto online — not as lengthy as Kaczynski’s. I think your position would be that despite the horrific crimes this person committed, it is okay to engage with the ideas in his manifesto if you find them to be defensible.

David: Well, I don’t want to issue a blanket statement for every criminal act that occurs around the world, but in general, if there are intelligent, rational, well-defended ideas, we should at least engage with them. Now, I don’t know anything about this current manifesto. In many of these cases, as far as I know, the documents are sort of rambling and incoherent and really don’t deserve much discussion. What makes Kaczynski’s case stand out is that it was a very rational, very well argued, very lucid case. But I guess in general that would apply to any such case: if an action brings an ideological critique to light, we probably should engage with the arguments at least in a cursory way and understand what’s going on. There are things that are not acceptable to mainstream society which sometimes only come to light through these sorts of extreme actions, and I think we should at least be open to looking at these ideas. It doesn’t mean we give them any special credence, it doesn’t mean we condone the actions associated with them, but perhaps it’s at least worth a look at the arguments, to understand the basis for them, and then we go from there.

Steve: So my understanding is that the New Zealand government is now engaged in a perhaps quixotic attempt to completely suppress the manifesto of this shooter. And to take another example, in Germany for many, many years one could not even buy a copy of Mein Kampf — I think that’s no longer true. So it sounds like your position would be that you’re against absolute censorship of ideas based on the actions of the people who originate or propagate those ideas.

David: Yeah, I mean, censorship always serves the interest of those who are in power. That’s how censorship works, and so we have to oppose it on all fronts. Whether it’s politics or history or technology, censorship is always self-interested. So as academics and free thinkers, we have to be open to every idea, and it serves no purpose to block these ideas at all.

Corey: You know, again, I don’t want to focus on Kaczynski’s crimes, but I guess I’ve found his actions puzzling to this extent: he did not actually tell anyone what his motives were until the very end — very much unlike the New Zealand shooter and the perpetrators of more recent terrorist attacks, who made it pretty clear why they were doing things. I know you didn’t discuss this with him, but did you have any sense of why he would go on for years without communicating why this was happening?

David: Yeah, well, that’s a good question. I don’t know about the early years — I think the first bombing was in the late ’70s, right? So I don’t really know; I’ve never asked him. I don’t know if there was a well-thought-out plan. My guess is that very early in the process there probably was not. Maybe it was just an ill-defined way of striking back at leading figures in the technological system; maybe it was a kind of violent anarchism for him early on.

Corey: Were his victims primarily technologists?

David: They were leading figures in technology — advertising, research, universities, airlines; that’s where the Unabomber label came from, the FBI’s UNABOM case name, for “university and airline bomber.” So my guess is that over time he gradually decided the notoriety could be useful for promoting the critique of technology, rather than simply trying to bomb people, which obviously has limited effect. And he knew it was going to come to an end at some point — he would be caught, or he would die. So somewhere in that process, I don’t know exactly when, he probably decided that it was an optimal means to promote an anti-technology critique.

Steve: So I know this wasn’t your primary interest in Kaczynski, but in following this case pretty carefully, one of my primary interests was his psychology and his childhood. He was a very gifted kid: entered Harvard at sixteen, finished a degree in math, and then went on to write what is considered, I think, a pretty strong thesis on boundary functions at Michigan, which got him a job as an assistant professor at Berkeley. But he clearly had a very difficult time dealing with people growing up; I think he was alienated from a young age. And there was also a very notorious psychology experiment that he was subjected to — or volunteered for, without knowing how intense it would be — while he was at Harvard. I’m curious — again, I realize this is not your primary interest — whether you have any insight into any of these issues.

David: Yeah, I’ve never talked to him directly about this. My suspicion is that it’s less important than people would have us think, because we tend to look for a cause or a source of his extreme behavior, and it’s easy to pin it on things like childhood abuse, or some CIA experiment that messed him up psychologically in college, and to treat that as the explanation for why he did what he did. I think this is a way of taking the attention off the ideas and putting it on some aberrant event in his personal life that somehow explains everything — because the last thing that people in authority want us to think is that these are rational ideas and that a rational individual would actually do such a thing. Only somebody who was made crazy by some psychological incident would conduct such actions. So that’s my impression: that these things either did not happen or are far less important than we’ve been told.

Corey: But this is actually not a theory that comes from the authorities, right? This is a theory promulgated by his brother in his book, where he talks about, I guess, Ted being in the hospital for a certain period during his childhood and not having physical contact, and so on…

David: Sure. Well, I mean, obviously something unusual happened, because normal people don’t just go out and start sending mail bombs. So there’s some kind of unusual background to his personal story; but whether that was the cause of his actions or the cause of his ideological writings — to me, I think that’s probably not true. His brother David has been completely disavowed by Ted, and seems to have his own motives — of course, the brother was responsible for Ted being apprehended, because he turned him in to the FBI — so I don’t know the brother’s motives. I would be skeptical of his position on these things.

Steve: My interest in his psychology is a little different from what you just described. In reading his manifesto — and I remember reading it very thoroughly; I was an assistant professor at Yale when it came out, and we got The New York Times in the common room in the physics department, and I remember us reading it very carefully — what’s interesting about him, as you might expect of a brilliant mathematician, is that his arguments are logically consistent. What you might question are his priors — the assumptions that feed into the analysis he then makes. And it seems to me very clear that the assumptions he makes are influenced by his alienation from society and his experiences growing up — and of course, what else would you expect? If you’re trying to evaluate a system for society and whether it’s good or bad for people, the main thing you might draw on is your own life experience. So don’t you feel that perhaps he’s especially pessimistic in his basic assumptions about technology because he himself had a very difficult psychological experience?

David: Yeah, that’s an interesting question. I don’t know how to directly answer that. The fact is that he gives us very rational, well-justified arguments which, I think, do not relate directly to events in his personal history. I mean, in the manifesto he’s critical of leftists, and he talks about the power process and why people refuse to engage in technological criticism, and you can see some of that may be coming from a personal life story, but much of that is really incidental to the main thrust. The main thrust is the analysis of the technological system itself — how it functions, where it’s heading, and how we can respond. So I would say portions of the manifesto maybe are, as you say, rooted in his life history, but the intellectual critique, which is the most interesting and most important part — I think that’s a fairly objective, standalone argument. And again, he’s not the first to make that argument, and obviously he won’t be the last.

Corey: So David, could you lay out what you see as the core arguments, in some detail? I mean, how many should we cover — give us the top three arguments.

David: Yeah, well, there’s a central argument. He basically says technological society is profoundly unnatural, in the sense that it forces us to live in ways that are completely against our evolutionary history. And this is pretty much beyond dispute, right? Humans evolved as hunter-gatherers in small groups, living in vast wilderness, eating natural foods — obviously vastly different from how we live today. So our living and environmental conditions are hugely different from our evolved conditions. This puts a lot of stress on humanity, because you cannot change your genetic nature over even a couple of decades — it takes centuries at least. So we have this internal stress put on us by a technological system, which we cannot resolve, because genetically we’re still basically hunter-gatherers, and yet our lifestyle and our society form a highly complex, large-scale industrial system. So we try to address the problems, we try to fix the problems, but we cannot fix the problems. The system cannot be reformed, because the system itself is the problem. So we seek remedies; we try, in a sense, to adapt people to the system rather than addressing the system itself. We create entertainment technologies, and diversions, and drugs and medicines that allow people to function in an entirely unnatural and unhealthy environment, which is modern technological society. And in the meantime, technology just continues its own accelerating development, opposed to the interests of humans and, from an environmental standpoint, opposed to the interests of the planet, leading potentially to a catastrophic state where it could literally destroy humanity, destroy civilization, or even destroy the planet. So the argument is that it’s a semi-autonomous system which is out of our control, and the system cannot be reformed. The only alternative at this point is to end the system. This is his conclusion. He concludes with the revolutionary thesis that we cannot modify or adapt the system; therefore it has to end, and the sooner it ends, the better. The longer we wait, the more catastrophic the outcome will be.

Steve: So, can I separate that into two different sub-theses? One is that, because we evolved for most of our recent genetic history as hunter-gatherers, we’re ill-suited to a dense, urban lifestyle based on agriculture. That’s one thesis. The second is, I think, more the existential-risk thesis: that as technology gets more powerful and out of our control, it could lead to something that threatens the existence of humans, or even of life on earth. Those are two logically separate issues. You could have a situation where we actually are well adapted to the technological world we live in, but we still might blow ourselves up; and conversely, you could have one where we have developed agriculture and we’re not in danger of killing ourselves off with really powerful technologies, but we’re maladapted to that agricultural world because we’re still basically hunter-gatherers. So those are two separate things. Do you want to comment on which of those you find stronger?

David: Yeah, I see what you mean. I think it requires a more unified philosophical analysis of technology to link those two together. This is one of the things I’ve tried to do in my own book, The Metaphysics of Technology, and maybe we can talk about that a little later. I think those are not disconnected. How we live and how we function in the world is a function of our technology. Even as hunter-gatherers, of course, we used technology: we had stone tools, and weapons, and clothing, and basic shelter. So we used technology, but it was very simple, low-intensity, non-toxic technology. We’ve always manipulated things in our environment for our advantage. But in progressive steps, and particularly since the Industrial Revolution, when we accessed energy sources that were not available to us as hunter-gatherers, the system really accelerated and, in a sense, adopted a life of its own. It starts to take on this autonomous, self-evolving character which really threatens to run away from us. So there are a lot of interesting philosophical issues related to how technology evolves. There’s this question of technological determinism, which says the technological system largely develops under its own rules and its own principles, which we basically do not control. And as the system gets stronger, it becomes more and more out of our control — which in fact is what we see when we look around the world today: a system that’s escaping our control. The problems are growing, the risks are growing. So what’s required to address both of those issues is a kind of unified metaphysical analysis of technology, which, as I said, I’ve tried to provide at least an outline of. And it’s not an encouraging picture when you see how things work over time, and you see the risks posed, and you see how humans are unable to cope — because our minds and our psyches are geared toward really very simple environmental and technological conditions. I think we really think in terms of hunter-gatherer-type technologies and small-scale groups, and we have a very hard time grasping large-scale issues and long-term trends. We can talk about them abstractly, intellectually, and a few of us can deal with them, but society as a whole has a very hard time grappling with these large-scale issues, because people think in very short-term — almost immediate, almost tribal — contexts. And there’s a reason for that: it’s our genetic history. So the system poses problems which we are uniquely incapable of addressing on a collective scale, and that’s an extremely dangerous situation that we find ourselves in.

Corey: So David, your comments have actually brought three things to mind. First, I think there’s a third critique in Kaczynski, in addition to our maladaptation to technology and the potential of technology to end our existence: the idea of technology as a means of control by authorities, and that’s a kind of consistent theme. He suggested technology may have a life of its own, but it’s also being guided by authorities in an attempt to control the masses. This is a tenet that goes back a long way but became really crystallized in Orwell’s 1984. I’d like to hear your reaction to that. The second is, I’m a little interested in the critique of leftism. I grew up in a very leftist community in western Massachusetts. This idea of getting back to nature and getting rid of technology might have been the guiding ethos of western Massachusetts — a highly educated place, very, very liberal, but resolutely anti-technology. So I’m curious about Kaczynski’s resistance to it. I know he was at Harvard, which is at the other side of the state, but it wasn’t that far, actually, to see people frolicking on the hillsides. And the last one is, it sounds like his ideas are very similar to what you hear from evolutionary psychologists, and I wonder if you ever had a discussion with him about that.

David: Yeah, okay, so there are a few different issues. The whole question of technological determinism, to address the first one, is complicated. The system seems to be in a semi-autonomous state right now — at least that’s what I’ve argued: in a sense it develops according to its own rules, principles, and imperatives, but of course we also have a controlling role in how technology evolves. Obviously it’s useful — it’s a system for manipulating energy and information flows — and so authorities have an incentive to use it to their advantage. So we see authoritarian governments and militaries developing advanced technology for their benefit. And…

Corey: But he also talks about the role of technology in, like, education — educational technology being used by leaders to sculpt the masses.

David: Well, right, sure — it’s a means of control and manipulation, which I think most people would be troubled by, even setting aside the larger issues of technology. People generally don’t want to be controlled and manipulated. You don’t want to be surveilled by your government, you don’t want your personal data used by the military against you, so I think even on a superficial level people are opposed to these uses of technology. And this is only the most superficial angle of the critique of technology — how it’s being misused by authorities in power. Unfortunately, these very authorities are the ones who are driving the forward progress of technology. It’s the militaries and the governments that are generally promoting further advancements in technology, and they claim various benign reasons — to serve their customers, or to serve the public, or to prevent crime, or to make the nation safe — but ironically they’re doing exactly the opposite. When we look at technology in a larger sense, we see that many times we get the opposite of what we intend. Technological products produce anti-results, results opposite to what we hope for: entertainment becomes enslavement, safety becomes danger, time-saving becomes time lost. It’s a very interesting phenomenon, how with technology we expect one outcome and tend to get the opposite — at least that’s what I’ve argued. So you think you’re getting security and improvement and a better quality of life, and in the end it costs you: you become less secure, more stressed, more frazzled. So we see these counter-effects going on in technology, and of course leaders don’t understand this, or they don’t care. They’re just looking for their short-term benefit, and so they always press forward — a relentless driving or ratcheting forward of the technological system, primarily by governments and militaries. On the leftist angle: sure, there are leftists who are critics of technology. Kaczynski’s main concern, I think, is that he wants to promote an anti-technological movement, and there’s a history in social movements of leftists getting involved and corrupting the original mission of the movement. They tend to be focused — at least traditional leftists tend to be focused — on humanitarian issues, human social issues. And the anti-technological movement is, in a sense, let’s say, antisocial: it’s going to cause difficulties for humanity. Kaczynski, I think, is worried that an anti-tech movement that allows leftists a role will be corrupted and degraded, as they have corrupted the environmental movement and some others in history, and will then become ineffective. So his critique of leftism is really more of a pragmatic concern: the people leading the anti-tech movement should be cautious not to be diverted into social welfare concerns, which are the typical purview of leftists. That seems to be his driving motive.

Steve: Can I drill down on that a little? It sounds like you’re saying that because of their focus on human welfare and care, leftists might object to a strategy or course of events in which human welfare is decreased in the short run — because we don’t have as many calories and nice machines to keep us happy — but which in the long run is maybe better, according to Kaczynski. So operationally he doesn’t want to get involved with leftists because he thinks they won’t stay the course. Is that fair?

David: Yeah, I think that’s basically right. I mean, he’s arguing for a revolutionary thesis, right? He says you have to undermine the system and bring it to an end — and of course, if you bring the system to an end in a rapid fashion, it will be relatively catastrophic for humanity. A lot of people will die, unfortunately. So Kaczynski’s view, I think, is that when push comes to shove and it’s time to actually carry out revolutionary action, leftists will not do it, because they will see the harm to humanity in the short term and will not carry the action forward. I think there’s a more benign revolutionary thesis, which you can carry out gradually over time and which would not have those consequences, and I’ve argued for that in my book. So that’s another angle on that approach.

Steve: So aside from this sort of operational analysis of leftists in his movement — I seem to recall from the manifesto, or maybe from other correspondence you had with him, that he’s very negative in general about leftists as a psychological and personality type. I don’t know if you recall this, but do you have any comments on that? It seems somewhat independent of the aspect we’ve just been discussing.

David: Yeah, well, right, I don’t know — maybe that’s his personal inclination, maybe that’s again his life experience, that he had negative interactions with people who were leftists. I think he probably did — he was at Berkeley in what, the late ’60s, [laughter] so you can imagine he ran into his share of leftists there. So I suppose maybe there’s a kind of personal animus. But that’s largely incidental to the larger issue, so I don’t concern myself too much with that particular question.

Corey: Did you find he has a sympathy for the right that matches his antipathy to the left?

David: He tends not to speak in terms of right and left. To me — and I guess I would tend to agree — that’s a kind of constructed dichotomy which is not really relevant to the question of technology. Technology issues transcend traditional political ideologies. It’s not right or left, it’s not conservative or liberal, Republican or Democrat. In a sense, any far-thinking, far-sighted individual, no matter your position, should be critical and concerned and skeptical toward technology, because the potential outcome is so catastrophic. So I think that’s his general take, and I don’t think he would be sympathetic to either side. He probably finds as much reason to be frustrated with right-wingers as with leftists. I guess he’s concerned about who’s really going to have the courage to carry through on a revolutionary thesis, and his view is probably that conservatives would be more likely to do this. Of course, to be conservative is to conserve traditional modes of existence and traditional lifestyles, so I suppose the ultimate conservative is somebody who would go back to an original human lifestyle — maybe going back as far as a hunter-gatherer lifestyle. That really is a conservative position in an extreme sense. So he’s probably sympathetic to that, but I’ve not seen much in the way of political inclinations on either side from him.

Steve: Can I ask for a kind of utilitarian analysis of your position and Kaczynski’s? Obviously, in the long run, if you think there’s existential risk for the planet or for humans as a whole, that’s a kind of negative-infinity utility that you want to eliminate. But I’m more interested in the short term. Imagine that we somehow managed to roll back the clock, maybe through a revolutionary movement or something, and we give up most of our advanced technologies. I think you’d have to admit that lower calorie consumption, infant mortality — all kinds of “bad” things that we got rid of in the past — would suddenly revisit us. Is the claim that overall our utility is still higher, because we’re better psychologically adapted — that we’ll be happier even though my kid died at age five from whooping cough? What is the actual claim about the short- to mid-term consequences?

David: Yeah, well, I think that’s an open question. Kaczynski doesn’t address that much, as far as I’m aware. I’ve tried to tackle it in my own book and my own writings. I think, given a gradual retrenchment of technology — and I’ve argued for a period of about a hundred years: you take a long-term view and gradually roll back technology over a century — it would be relatively benign, relatively acceptable, relatively humane, if you could get to a simpler state of technology. I’ve argued for something like the early Renaissance period — around the year 1200 is what I’ve defended in my arguments. So it’s a much simpler technological existence. Obviously you cannot address certain issues that we can today with advanced technology. The counter-argument is that many of the problems we talk about — let’s say antibiotic-resistant diseases, or cancer, or obesity, or psychological disorders — have been produced precisely by the technological system, so simply simplifying the technological system will resolve many of these problems. Now, I suppose you could say yes, it will introduce others, or restore others that we’ve had in the past, and it remains to be seen whether it’s a net gain or a net loss. But certainly human psychology would be better off under a simpler system, because that’s how we’ve evolved. Certainly nature would be better off. The burden of proof is on those who would proceed in a reckless way with an advanced technological system — a system which is causing stress today, creating diseases we cannot solve, introducing new diseases we’ve never seen before, and putting the entire planet at risk. So that’s the burden of proof. It’s not on the retrenchment case: that’s the obvious case, which has all the evidence in its favor. It’s the pro-tech position that has to be defended.

Corey: So I’m curious what your attitude is toward positions like Steven Pinker’s. He argues that over time we’ve had fewer and fewer deaths, people are living longer and healthier lives, and there’s been less violence as society has advanced. If you roll back the clock — going back about 800 years is what you’re suggesting — you would be in a pretty violent era, where we did not have vaccines and we had high levels of infant mortality. So what’s your response to that kind of argument, that progress has generally been good for humanity?

David: Well, you have to ask: in what sense has it been good for humanity? We have far more people on the planet now than we did. The analysis shows that in the year 1200 we had about 400 million people on the planet, and today we’re pushing about 8 billion, so the total amount of death and the total amount of suffering has undoubtedly increased, just because the number of people is roughly twenty times what it used to be. The modes of killing have undoubtedly multiplied, because now we have high-tech modes of killing people. In the past it was hand-to-hand combat, right? Now people die in car crashes and plane crashes and various horrendous ways which weren’t even conceivable in the past. So it’s a very hard argument to make that we’re somehow better off just because there are more people, or because you can live a little longer thanks to certain life-extending technologies. It’s a very difficult argument to make, and people like Pinker are basically apologists for the technological system. They have an incentive to promote technology, they personally benefit from it, and so they construct arguments in its favor. But objectively speaking, it’s a very difficult argument to make, I think.

Steve: So I think Pinker would make the claim that even at a per capita level — comparing the probability that you’re killed in a modern technological way, by an airplane crash or a nuclear reactor meltdown or something — I think he would claim, and I think the statistics would bear him out, that those risks are negligible compared to the chance, a thousand years ago, that your neighbor would just kill you. And so…

David: Even proportionally you have a higher risk then.

Steve: Yeah, I think that’s his claim. He’s looked at rates of violent death over time and claims that we’ve had a huge material decrease in those rates, and that the decrease isn’t offset by the probability that you’re going to be killed by, you know, a missile hitting your house.

David: Yeah, well, of course — but again, we paid a huge price. It’s like saying that if you put everybody in a perfect prison where nobody ever got hurt, you’d have zero deaths, okay? But you’re enslaved. So the price we’ve paid, if in fact we have a lower death rate, is that we’ve psychologically enslaved ourselves. And what’s the price of that? Many people would say it’s not worth it at all. And on top of that you throw in the catastrophic risk, the existential risk to the planet and humanity, and then you have to say it’s been a losing deal — a complete loss, no matter what the short-term statistics may be.

Steve: Right, so I think that’s what I was trying to get at. First of all, on your side, you can always point to existential risk growing due to more powerful technologies — and I think not that many people would disagree with you on that; nuclear weapons are an old example that we’ve had for a long time now — that risk is growing. But I think in the short to medium term it’s mainly a question of trading off what appears to be better quality of life, length of life, all of those things, against the psychological mismatch between what human brains evolved to want, or to thrive in, and technological society. Is that a fair summary?

David: So what would be the claim, though? That somehow we’re psychologically better off?

Steve: No, less. I think your claim would be: we might have better things — we have clean water, and I can jet to Aruba for the holidays — but psychologically, because we’ve been enslaved by this system, a system we’re really not well adapted to, that cost is actually greater than whatever benefits we get from desalination plants.

David: Right, yeah, that’s right. It’s an undignified, dehumanized existence. I mean, this argument has been going on for a couple hundred years at least. Just look at the statistics: depression is at almost epidemic proportions — something like ten percent of the American public is on prescription antidepressants. Depression was unknown in the ancient world. They didn’t even have a word for depression in ancient Greece; it didn’t exist. And now ten percent of us have to take prescription drugs because we’re so depressed.

Corey: Hold on — just because they didn’t have a word for it doesn’t mean it didn’t exist.

David: Well, even analyzing the literature of the time, the general consensus is that there was nothing like it — I mean, there was dissatisfaction, and people were angry and whatever, but the concept of psychological depression seems not to have existed. It’s hard to prove either way, but…

Corey: Yeah, I’m not sure about that.

David: …but just consider other psychological disorders: it’s not just depression, it’s schizophrenia, and bipolar disorder, and autism — all these things seem to have accelerated radically in the past few decades. Again, these are conditions which seem to have been all but unknown in the ancient world, in a pre-technological age. And there’s every reason to believe that they are produced by technological society, and that as technological society increases in its scope and its power, these things will only get worse. There’s no way to avoid that; it’s an unavoidable outcome. We can expect increases in depression, psychological disorders, mental illness — that’s inevitable in the future.

Corey: You know, your thesis is quite interesting. Let me give you a kind of hypothetical; I’d like to hear how you react to it. I’m from New York, most recently, and what’s interesting is, if you look at happiness rankings, New York tends to rate very low when you ask people about life in New York City — as far as the states go, New York is not very high — but people don’t really want to leave, generally. There are other states where people are known to be happier, but New Yorkers don’t want to move there. And I think if you asked people whether they would want to live a thousand years ago, most would say no, but if you asked them whether they would want to live in the future, many would say yes. Are you just going to say, well, you know better than those people do? Or that they’re confused — or that people are like New Yorkers, maximizing something other than happiness? How do you react to the fact that people don’t seem to want to go back a thousand years?

David: Yeah, well, right, that’s a good question that’s been addressed, again, over the decades by technology critics: people generally, if they could choose, would choose their current level of existence, or even a more technologically advanced mode of existence, and they tend not to see these problems. I mean, that’s a larger issue about social change and social welfare. The masses tend not to have a grasp on these larger-scale issues, and every government deals with this problem, where you have arguments — intellectual or abstract academic arguments — about social welfare which the masses do not understand or do not buy into. So I think it’s true, in a sense, that people tend not to understand the issues of technology. They tend to look only at the benefits; they don’t see the negative consequences, and they don’t connect the negative consequences to their technological lifestyles — for understandable reasons. It’s always been a difficult argument to make for social change, because you don’t just appeal to the masses — you don’t go to millions of people and say, how should we be changing our lifestyles, people? They’re not going to give you a very sophisticated answer, because they don’t really understand the issues at play. I hesitate to say that we know better than the masses, but there are larger-scale issues which are simply not addressable via public opinion polls. This is one of those issues where you don’t survey the masses and ask what we should do, because it’s not the kind of thing that’s amenable to that analysis. You can try to promote the viewpoint, but you don’t take direction from what people think they want in the near term.

Corey: Are you familiar with Sebastian Junger’s book Tribe?

David: No.

Corey: It’s very interesting. He discusses conditions during the Indian Wars — the Native American wars of the early 19th century — and he observes, and I guess this was something noticed by Ben Franklin and others even earlier, that when white children were kidnapped by Native Americans, unless the settlers got those kids back within a year, the kids didn’t want to come back. And people were astounded by the fact that it never seemed to go in the other direction: Native Americans taken by whites always seemed to want to go back. So he’s actually arguing for a position quite similar to yours. One of the conclusions was that the reason people liked native societies so much is that they were incredibly egalitarian, which white society was not, and there was a sense of purpose in life, because groups were small and you felt like you were really contributing. But you only found this out by kidnapping people, effectively. [mild laughter]

Steve: Well, I think there’s even the claim that it’s not just children — there were adult European males who would actually “go native” and prefer to live with the natives rather than be reabsorbed into European culture. But I think it’s also well known that early agricultural society was pretty crappy, so when people switched from being hunter-gatherers to farming, they got shorter, their teeth got worse — all kinds of bad things happened to them.

Corey: So there’s like local minima, you’re saying.

Steve: Yeah — agriculture allowed people to have higher population density and create societies that could conquer hunter-gatherers, but on an individual basis well-being wasn’t particularly good. And I think that’s well established now.

David: Well, that’s an argument to go to a pre-agricultural way of existence.

Steve: Yes, I think many people would even claim that agriculture itself is the original sin. If you ask what the main technological innovation was that got us here and started the bad stuff happening, perhaps it wasn’t the bow and arrow, but the growing of crops.

David: Yeah, there’s an argument for that. And, you know, I’ve argued for going back to, say, early Renaissance technologies — but in fact that may not be sufficient, that may not be the ideal state, and maybe we really are stuck with our two million years of evolution in a pre-agricultural state, and that’s the best we can do and the best for the planet. That may well be the case.

Steve: So, you know, people are diverse, they’re different from each other. Do you acknowledge that there might be some fraction of the population that actually is well adapted to modernity and technological society?

David: Well adapted in the sense that they can live without exhibiting extreme signs of stress, right? So some of us can carry on our day-to-day existence without being on prescription drugs or clinically depressed — though probably a fair number of us are. I guess you could count that as being adapted, in the sense that we’re able to function in this society. But it’s difficult to say whether it’s really satisfying. You still have a sense of the pointlessness of your existence, and you’re still in a rat race, and you’re still feeling stressed. I think a lot of people have these deep, subconscious questions: what kind of life am I leading, why do I never have enough time, why can’t I really do what I want, why am I not really feeling happy — nagging concerns about their own well-being and the well-being of their family or their children. So even the adapted people, I think, suffer negative consequences which are hard to articulate, but which are probably there. In a sense, that’s probably why a lot of people do resonate with anti-technology arguments: they understand there’s something about this modern society which is deeply unsatisfying at best, and deeply threatening at worst. And I think that’s starting to resonate with more people.

Corey: But isn’t there a stage… I mean, I guess the question is where you cut it off, right? If you go back far enough, there are no books, and there are a whole lot of people who like to read. I don’t think people see reading as particularly technological, but a lot would depend on the details. You talk about 1200, right? That’s pre-Gutenberg, right? So I mean…

David: It’s not pre-books.

Corey: It’s not pre-books.

David: It’s pre-printing.

Corey: It’s pre-printing, yeah.

Steve: Maybe not in China. [laughs]

Corey: Yeah, perhaps… but I’m not sure exactly how widespread reading was. But I mean, there are various things people would be giving up, right? So I guess I wonder: on what rational basis would you say, let’s stop here rather than going all the way back to hunter-gatherer society? Many things people traditionally associate with the life of the mind — you’re a philosophy professor, clearly you enjoy reading and thinking, right? — are probably not in the cards if you’re spending a lot of time out there gathering your food.

David: Well… but again, I’m not arguing for hunter-gatherer here. I’m just arguing for a simpler technological existence, and I’ve argued that even Renaissance-level technology is far more than you actually need to have an intellectual life. All you have to do is look at ancient Athens. Look at what was accomplished in Greece around 500 BC with very simple technologies: they had simple tools, they had four or five metals, and agriculture — and that was it. And they had an extremely high level of intellectual discourse, high culture; they had books, they were able to write. There’s really no question that you can achieve everything you would reasonably need in a culture. If they could do it in Greece in 500 BC, we can do it in the modern era. We have the whole of history to learn from.

Steve: As much as I admire ancient Greece, I think one has to mention that it was an aristocratic society built on slavery, and so the fact that a few elite people could engage in philosophy and the like was to some extent based on that social organization.

David: There was slavery. I would disagree that it was built on slavery. That was an aspect of that society — of course, it was an aspect of many societies, so we would have to condemn many societies if we talk about slavery. Many people today are wage slaves, so there’s a much larger issue there, and we should be very careful about condemning some of these peak intellectual societies of the past for certain aspects we define as negative today. So…

Steve: Yeah, I meant…

David: …there’s nothing intrinsic to ancient Greek culture that necessitates slavery. In a sense, that was an incidental product of that time.

Steve: I meant that the ability of some small fraction of the people to really develop themselves and devote themselves to ideas may have been a consequence of the fact that you had a very unequal distribution of resources in that society.

David: Right, well… To develop an intellectual class does take a certain number of people with surplus time, and wealth, and the ability to do these things. But that’s a pretty minimal standard, right? I mean, look at Socrates, who was a poor stonemason, had no wealth, was not an elite in any sense, and yet was the peak of this attainment. So I would argue…

Corey: But the guy’s a super genius, right?

David: Well, okay, but we have an example from history, and we don’t all have to be super geniuses. It shows that you can engage in intellectual discourse with virtually nothing — it takes nothing to engage in intellectual discussion. Socrates could do it, and any of us can do it; that’s an individual inclination. It doesn’t require a wealthy elite to be able to conduct this kind of lifestyle.

Steve: So I think we’ve done a decent job of fleshing out your views, and maybe to some extent Kaczynski’s views. Could we switch a little and talk about slightly more practical matters, like how this sort of thing could be brought into being? One of the issues I wanted to raise is that, in a situation where you have competition between different societies, if one group decides to give up technology, I would just say they’re probably done for. And in fact this is actually being discussed in geostrategy circles, because if you were to rank the large power blocs by their willingness to develop new technology, Europe is last — they’re more cautious and less willing to rush ahead with new technologies — the US is probably in the middle, and China is probably the most aggressive developer of new technologies right now, with the least concern for privacy and things like that. In that situation, the bloc that gives up advanced weaponry and advanced technology might just cease to exist as an independent entity.

David: Well right, so that’s the argument for why you need to maintain defensive or military technologies: your own defense. So you have to build a global case, first of all, for the need to do this. We have to confront the entire planet with the threats and the risks that the system is posing, and we need something like a rough global consensus that we need to do this, that we need to start on a time scale appropriate for each nation and culture, and that the more advanced nations cannot take advantage of the less advanced ones. So we need some kind of UN agreement, maybe, a global consensus, so that a country that rolls back its technology isn’t simply rolled over by, say, the Chinese because they refuse to roll theirs back. So I think it will take some kind of global coordination for that to happen, and it’s probably too early for that, because the arguments are still very abstract. What it will probably take is a minor catastrophe to make people suddenly aware of these risks and willing to take action. And those of us who are critical are hoping it won’t be too horrible a catastrophe, that we’ll be able to say, wow, that was a close call, and we need to make sure this never happens again. It’s gonna take a profound event to snap people’s consciousness around and make them realize that collectively, as humanity, we need to tackle these problems. And if the Chinese are in the lead and they unleash some horrendous technological disaster that wreaks havoc in Chinese society, you can only hope that it’s not too bad, that they learn the lesson, and that the rest of the planet doesn’t have to pay for the same lesson because some other country decided to advance some highly dangerous technology. But we cannot be sure. The problem is that these advanced technologies tend to be global; they extend beyond borders. It’s not just the meltdown of a nuclear reactor somewhere, which poisons the people in the immediate area; it’s potentially a global catastrophe. So yeah, we need to understand these things collectively, work collectively, and make sure that the people who are willing to retrench are not exploited by the others.

Corey: But David, I want to push a little bit on this picture of the hope for some kind of collective agreement. As you know, one of the problems with UN pronouncements in the past is that there simply isn’t a way of enforcing them, right? The UN makes many claims, and unless there’s some sort of military force to ensure they’re kept, people just do what they want — that’s why a lot of these pronouncements and peacekeeping decisions often don’t go through. I think Steve’s picture is that if Europe keeps going along its current path — let alone if it steps back — China will just control more and more of the world’s GDP, more and more of the world’s resources, and it will slowly roll over Europe. And anybody who takes the approach you’re advocating is essentially undertaking unilateral economic disarmament, and the people who don’t will just control everything.

Steve: Yeah, right.

David: Well right, so you have to take protectionist measures to not allow yourself to be exploited by the technologically advanced societies. I mean, I…

Corey: But aren’t those going to be intrinsically technological? How can you resist?

David: Yeah, but you can pass laws or short-term protective measures to defend your economy or your society from some of these effects. So that can happen at a national level.

Corey: But what would that be, right, what…

David: Well, it depends. If you’re talking about economic invasion, then there are various economic tools, right, to protect your local economy from whatever it might be: advanced products, or low-cost products, or whatever might come out of a Chinese technological system, for example.

Steve: If I could take a specific example: nuclear arms control. That’s a situation where there is a technology everybody agrees is super dangerous, and there is a sort of collective will to try to stop additional countries from becoming nuclear powers, but the only way we can enforce that is by having a huge nuclear arsenal ourselves. So imagine, in this future world where everyone has been convinced by your and Kaczynski’s arguments, that we’re starting to ratchet back technology. We’ve even given up our satellites, so we can’t see what they’re doing in North Korea. But some very determined, relatively small group of people in North Korea decides, well, we still remember how to make missiles and bombs, and pretty soon they’re ready to take over the whole world.

David: Right. Well, it can be a step-by-step process. That’s how it works with nuclear disarmament: slowly, gradually, and through verification processes, you both step back, step by step, from the confrontational stance. In principle you could do the same thing with advanced technology: you could collectively agree we’re going to retrench, we’re going to withdraw these things, cease further development, verify each other’s progress, and have some ability to sanction or penalize those who don’t go along. In principle it’s not beyond our means to work out a system like that.

Steve: I think Corey was saying that, in the details, the tools you may need to enforce the agreement might be super high-tech, like your own nuclear missiles, your own satellites.

Corey: It’s not just military, right, it’s fundamentally economic. You start pulling technologies and the economy is gonna shrink, people become poorer, and that’s gonna leave you open to, essentially, being overrun by more powerful countries. This is the argument people made about US culture and technology for a long time: it was just so powerful that even if the US didn’t invade you militarily, it invaded you culturally and economically and controlled you. It seems like that’s what would happen with, say, China in the place of the US, or any other power that doesn’t actually technologically disarm.

David: Right. Well again, there’s a short-term process that would have to be mapped out: how you would protect yourself against that kind of predatory, short-term advantage that might accrue to somebody who was not retrenching while you were. Obviously, a lot of specific details would have to be worked out about how that process would protect people and encourage them to speed up the retrenchment of their technology rather than drag their feet. But that always has to be balanced against the negatives. If China or South Korea or whoever keeps advancing their technological systems, they will be faced with some very serious problems in the near future, and they’re going to have to deal with those problems as well. It’s not going to be a free ride to economic dominance of the world. They will face very difficult problems that will come from their own advanced technologies — whether it’s psychological stress on their populations, or local catastrophes, or whatever, there’s a variety of problems that could be unleashed on those nations that they will have to deal with. And it’s far from clear they’re gonna have a free road to the future if they plow ahead while everybody else is dialing back. So there are a lot of issues to take into account if we look ahead to how we would map out this future.

Steve: So to me it sounds like a plausible way this could work out for your goals is some kind of catastrophe: antimicrobial resistance goes completely nuts, or some bad guy makes a virus that kills a lot of people, or maybe global warming accelerates, and then society wakes up and realizes we’ve got to put some controls on technological progress. Is there any other way you could see it working out for you? Suppose there’s no huge technology-triggered disaster in the next hundred years. What could bring your utopia into existence?

David: I don’t know if I’d call it a utopia; I’d call it survival — if you want to call that utopia, I don’t know. But yeah, like I say, when you’re dealing with large-scale societies, it tends to take dramatic events to shake them into action. You’d like to think there was a slow, rational process by which we could work our way out of this mess without a catastrophe, but that seems unlikely. Kaczynski himself has spent a lot of time arguing that you have very little hope of slowly and rationally guiding society: things tend to happen out of your control, or against your wishes, in a random or semi-autonomous way. So unfortunately, when you look at the practical realities, it probably will take a near catastrophe, and one that we can identify as a technological catastrophe rather than, say, a terrorist action or a medical situation. It has to be pinned on the technology, because that’s what these are: these looming catastrophes are all technologically based, no matter who carries them out. Whether it’s an enemy or the germs or whatever, they’re the result of technological systems that have run amok. So the problem for intellectuals is to frame the problem as what it is, a technological problem with technology as the root cause, because if we hope to get a handle on these problems we need to address them at the level of the technological structure, the system itself. So it will probably take a near-term catastrophe where, I don’t know, hundreds of thousands of people may have to die, or millions of people…

Steve: Millions.

David: …probably have to die, I mean…

Corey: A billion.

David: Yeah, exactly right — I mean, it’s unbelievable: we talk about how horrible a shooting is, that they shot 40 people, and that’s nothing. It’s gonna take 40,000 or 400,000 or 4 million people being killed, and then people will wake up to what the situation really is. So yeah, I’m not terribly optimistic. It’s probably gonna take some relatively high-magnitude catastrophe — you hope it’s not too catastrophic and the price is not too high to pay — and maybe that will bring people around and they’ll start to take action.

Corey: So I just want to ask, is the difference between you and Kaczynski on this point that Kaczynski thinks one should actually carry out a revolutionary act to cause this many deaths in order to wake up the planet, and you simply think that this will occur somehow during the normal course of events? Is that the difference between a true revolutionary and sort of an academic observer like yourself?

David: Well right, I mean, Kaczynski seems to argue for a generic revolutionary stance, and he doesn’t give a lot of details. He says it could happen rapidly or slowly; it could involve violent consequences or it might not. He’s relatively non-committal on that, I think…

Corey: He’s advocating violence to bring this about, right? In fact, he’s committing violence to bring this about, so…

David: No no, he never advocates violence, ever. He personally took violent action, but none of his writings have ever advocated violent action.

Corey: Okay, so I guess I assumed that there’s some connection between his violent actions and his manifesto. So if we’re totally separating that out, then it’s sort of a strange position to take, right?

David: Right. I think we need to take him at his word that those are two completely separate things. Revolutionary action is not intrinsically violent action, and it’s up to those of us who are the more rational members of society to map out a non-violent revolutionary approach. That’s what I’ve tried to do in my books and writings: understand the issues at a deep level, spell out the philosophical implications so people understand what’s going on, and then you can do this relatively gradually and benignly, in the least harmful way possible. You can do it slowly, as I’ve proposed, over, say, a century, and that’s probably the best outcome we could hope for.

Steve: If you have high conviction in your beliefs about the problems with technology, is it not morally defensible to perhaps cause some collateral civilian deaths in order to bring about a better outcome for the human species?

David: So you take a strictly utilitarian view, right? That’s an old argument that goes back hundreds of years: if I had to kill 10 people to save 100, there’s probably an argument that says I should do that, and it’s no different here. Or, I suppose, if you did the analysis, and push came to shove, and you said look, we had to have 100,000 people die so that we could save 8 billion, then there’s an argument there. I haven’t made that argument, and Kaczynski has not, but there is a utilitarian case for it.

Steve: So if the Nazis had won World War II and were occupying North America, perhaps many people would say it would be defensible for us to conduct some terrorist kinds of actions to weaken that Nazi government in North America. If you accept the arguments of Kaczynski’s thesis, can you still criticize what he did on moral grounds?

David: Well, his was a unique case. You’re talking about sending the mail bombs, for example. That was unique to his specific background, and he was doing it explicitly to attain the notoriety needed to get the manifesto published in a venue that would get widespread attention. That’s a unique situation that will probably never be repeated, so I think it’s a completely separate discussion. He’s never advocated that for anyone else, and I’ve never advocated it, so… yeah.

Steve: Yeah, I’m not saying you have advocated it — I’m trying to draw a parallel. So: you’re a partisan trying to free the United States from Nazi rule; many people would say it’s justifiable for you to conduct some violent actions, which may have some collateral effects. If you accept that, and you also accept Kaczynski’s thesis that technology is leading us to real existential risk for humanity, wouldn’t you say it’s actually ethically or morally justified for him to have killed some people?

David: Well, it’s an open question, and I guess we don’t know yet. In principle you’re right: if you could save the planet and the bulk of humanity and you had to kill some people along the way, then on a long enough timescale you would look at that and say yes, that was an ethical trade-off. That’s a standard utilitarian argument.

Steve: Yeah, the place where I differ with you is just that my level of conviction in his thesis is not nearly as high as I think yours is. But if I were to raise my conviction to that level, then I would say yes, absolutely, it’s in the interests of the human species that we knock off Bill Gates and all these other people involved in AI research. So if I really believed that, I think I would be led to that conclusion.

David: So conceivably, if you thought the problem was that imminent and there was no other solution, then yes, conceivably you could construct an ethical argument that says you should take those kinds of actions. I don’t know anyone who has posed or defended that view. I think most reasonable thinkers would say it has not come to that point; we still have time to act. The situation is maybe not quite that grave, and it doesn’t yet justify that kind of extreme action. So I don’t know that anyone would defend that view — although in principle it could be true, and in the future, as we come closer to the brink of these catastrophic outcomes, that may strengthen the argument for taking dramatic action, as you say.

Corey: I just have one question on a slightly different topic: the end state that you envision. Here’s the problem that may occur to many people: if you roll back the clock, all that knowledge is still there, and in effect people are gonna know all this technology and could recreate it, probably in not too much time — although it’s a really interesting thought experiment to figure out how long it would take to actually build a computer starting from just a bunch of people running around the forest, right?

David: [laughs] Right.

Corey: But the fact is, it’s all in people’s heads, and you’re still asking people to just not develop it any further. Now, that strikes me as an intrinsically unstable situation, right, because some people could insist they want to keep developing technology without stopping…

David: Actually, that would not happen. The reason we have modern technological society is easy access to fossil fuels, and the easily accessible deposits are gone. So even if we had the knowledge, we would have no energetic means, no fuel, to reconstruct an advanced technological society. If the system gets to the point where we are unable to extract those fuels or comparable fuel sources, we will not be able to reconstruct what we have, no matter what we know.

Steve: But I don’t think you can assume that in this future we won’t have access to some concentrated energy sources.

David: But they’re not gonna be fossil fuels, and what are we talking about, nuclear power? That would not exist in a de-technified future, right? So…

Steve: No, what I meant is: imagine this program succeeds in the next hundred years or so, and we haven’t exhausted our supplies of coal and natural gas… There are plenty of other concentrated energy sources that will likely still be around, if…

David: Well, but not at mass scale. All the surface deposits of coal and oil are gone, right? Even today, it’s requiring more and more energy to access the fossil fuels we’re using. There’s a concept in sustainability called energy return on energy invested, and we’re actually approaching the point where we expend as much energy extracting fossil fuels as we get back from them. When we cross that threshold, it will be a negative return on investment, and that’s even without any change in technological society. So it’s very clear that in the near future, in the next few decades, there will be a negative return on fossil fuels, and then we will stop; and then we will either have to substitute solar power or nuclear power, or we will have to retrench to a pre-technological state of existence.
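
A minimal sketch of the energy-return arithmetic David is describing, with purely illustrative figures (every number below is an assumption for illustration, not data from the conversation):

```python
# Energy return on energy invested (EROEI): the ratio of usable energy
# delivered to the energy spent extracting it. All figures below are
# illustrative assumptions, not numbers from the discussion.

def eroei(energy_delivered: float, energy_invested: float) -> float:
    """Return the EROEI ratio; values below 1 mean a net energy loss."""
    return energy_delivered / energy_invested

# Easy surface deposits: a large return per unit of energy invested (assumed).
print(eroei(100.0, 1.0))  # 100.0 -- highly favorable

# Harder-to-reach fuels: the return shrinks (assumed).
print(eroei(10.0, 1.0))   # 10.0 -- still positive, but declining

# Past the threshold David mentions, extraction costs more energy than it yields.
print(eroei(0.9, 1.0))    # 0.9 -- below 1: a negative net return
```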

Steve: I think premising the stability of this future utopia on the exhaustion of concentrated energy sources is a pretty strong assumption. After all, eventually we’ll have trees again, and you can chop down trees and build blast furnaces; there was quite a lot of iron and steel being produced by the Chinese, for example, quite a long time ago. I think what you’re going to have to have is a very strongly enforced societal prohibition against technological innovation. Otherwise you will always have isolated groups trying to do something, and you’re gonna have to face up to the fact that you have to actually stop them, perhaps through violent means.

David: Conceivably, right? So if you’re talking about the long term, if we get through a hundred-year process and we’ve retrenched, and then there’s a push to slowly build things back up, even under the most optimistic scenario it would take centuries to do so, right? And I guess the argument is: well, the people at that time will have to handle that situation. They will know the history, and they will have to decide what to do then. We can’t worry about the people 500 years from now. We need to live through the next 50 years, and that’s what we need to worry about.

Steve: I can’t help but ask you if you’ve read — I’m gonna mention a couple of science fiction novels, and I’m curious whether you’ve read them, so — Dune?

David: I’m familiar with it. I don’t read science fiction. I’ve seen the movie, I’m vaguely familiar with the story.

Steve: Let me reverse course, then, and ask you if you’re familiar with a guy called Samuel Butler?

David: Sure.

Steve: Okay, so… a very early anti-technology thinker — I think his article was called “Darwin Among the Machines” — and he argued, maybe 150 years ago, that we were becoming servants to machines, which were becoming increasingly complex and would eventually threaten our civilization. So I think I’m in agreement with you that these ideas are quite old and not unique to Ted Kaczynski. I would encourage you to read some of the science fiction that entertains notions of what it looks like after you’ve imposed this non-technological utopia, and what steps have to be taken to prevent the resurgence of technology in the far future.

David: Yeah, that’s an interesting question, for sure.

Steve: Right.

Corey: I think we’re about out of time.

Steve: Yes, we’re nearly out of time. I want to thank you for coming; this has been a very engaging discussion. In the links we post with this podcast, we’ll include links to your book and other resources, so people can learn more about your views. I can’t help but mention that this podcast will go out over an incredibly technologically advanced network, with gigantic infernal Google servers in the background powering the distribution of information and bits. But David, thanks very much for visiting us.

David: Very good, thank you.

Steve: We appreciate it.

Corey: It’s been a pleasure.

David: Thank you.

Creators and Guests

Stephen Hsu
Host
Steve Hsu is Professor of Theoretical Physics and of Computational Mathematics, Science, and Engineering at Michigan State University.