China, Acceleration, and Nick Land - with Matt Southey – #108

Steve Hsu: The general remark about genomics in China, and I've said this many times, but every time I go there I test the hypothesis, is that there's much less discomfort about the idea of selecting your embryos, not just for health-related traits, but even for traits like height or intelligence or cosmetic appearance. People there are not really freaked out by it the way people here are, and I've given the same reason over the years, which is that

Welcome to Manifold. I'm here today with my friend Matt Southey. We are recording at one of my favorite places in the world, and I think it's one of his too. We are at Lighthaven in Berkeley, sitting outside in a park-like environment. Is there a formal name for where we are?

Matt Southey: It's beside Bayes. The Bayes building.

Steve Hsu: Okay, we're outside the Bayes building. There may be some background noise; we traded off a little audio quality, because we could have gone into a studio, but it's just nicer out here.

The sun is still up, but there is a little bit of noise from cars driving by, and there's also the occasional AI safety researcher walking by shouting something about p(doom), so bear with us. Hopefully this will be a nice conversation. The theme of this episode is China, acceleration, and Nick Land, and Matt is an expert on Nick Land.

He wrote a PhD dissertation at Rice University, in the philosophy department, on the work of Nick Land, so it's actually Dr. Southey to you. Now, if you have no idea who Nick Land is, don't worry about it. We're gonna get there later in the episode.

What I'm first gonna do is talk just a little bit about what I learned on my most recent trip.
It was an extensive trip in China, and I'll talk about that mostly myself, but then we'll transition into Nick Land, because I met with Nick in Shanghai just before I left China. So that's roughly the outline of what we're gonna discuss. I was in Shenzhen, then in Beijing for an extended period, then in Shanghai, and then I came back to the US. The whole time I was in China was roughly two weeks, extremely busy, lots of meetings.

The last episode of Manifold was with Taylor Ogan, an investor who has relocated to Shenzhen. He's based in Shenzhen, and he follows everything that's happening on the commercial tech side of China. That episode is quite good; if you're interested in those things, I definitely recommend it to you.

Today I'm gonna focus a little bit on what I did in Beijing and Shanghai.
I was visiting Tsinghua University, which is really, I would say, the top university in China now. And it's pretty big in the scheme of things. I think it's about 50 or 60,000 students; I heard slightly different numbers from different people.

It's a big university, but the quality of the students is extremely high. It's definitely, I think, the toughest school to get into in China based on the gaokao, the college entrance exam. The students you meet there are really exceptionally bright and hardworking, and there's a very strong esprit de corps on campus.

The kids there are very ambitious. They're really there to learn. One of the most striking things was crossing campus at roughly dinner time to go to one of the cafeterias with some of the students and a professor. There were so many bicycles, electric scooters, a few cars, and lots of people walking.

And it was just so lively, such a density of people at 6 or 7:00 PM. I don't think you could find anything like that on any US campus. I was there visiting the Shing-Tung Yau institute. Shing-Tung Yau is a Fields Medalist who spent most of his career at Harvard. He's a geometer, a world-famous geometer; there's something called a Calabi-Yau manifold, which plays a big role in string theory. Yau went back to China and created his own college, Qiuzhen College, at Tsinghua, and he's also established a bunch of research institutes for math and theoretical physics, and also AI research, at different universities around China.

But he's in residence at Tsinghua, where I was visiting his institute. My host is a young professor there whom I first met at Caltech when he was a postdoc. He, like me, works both in theoretical physics and AI. So I was quite interested in learning where those different subjects are in China, and particularly at Tsinghua, at this time.

My colleague actually took some time off before starting his professorship to work at a private AI company in China, in Shenzhen actually. And he also has connections to the AI-for-pure-math effort at ByteDance. So he's got one foot in each world, sort of the way that I do. So, just some remarks on what it's like there at Tsinghua.

The scale is amazing, because they're admitting on the order of 10,000 kids a year, but those are basically 10,000 of the smartest high school seniors in China. And within the university, there are these special classes. There is the Shing-Tung Yau class, about a hundred kids admitted to a program where they're committed for about eight years.

They're gonna do an undergraduate degree, and then they're gonna do a PhD in math or theoretical physics. So it's a hundred of the top talents for that kind of thinking in China, a very, very elite group. People can be admitted to that group by doing really well in the Olympiad competitions in various fields, or by taking a specialized test on math and theoretical physics.

And so one of the appeals of these special classes at Tsinghua is avoiding the gaokao, which is very grueling and which covers all subjects. That's a set of about a hundred kids. I gave a lecture to them, actually, on genomics, which was quite interesting. I'll say a few words about that.
But then there's another special class called the Andrew Yao class. Andrew Yao, spelled Y-A-O, which is not the same as Yau: Shing-Tung Yau is Y-A-U, and Andrew Yao is Y-A-O. Andrew Yao won the Turing Award. He's a theoretical computer scientist who, during his US career, I think was a professor at Princeton and Berkeley, and he went back to Tsinghua.

He was one of the earliest really top people to go back to Tsinghua, and he has his own special class. Some of those kids are math or physics Olympiad kids, but most are actually kids who do well in the informatics Olympiad. When he first started the program, they would be doing things like theoretical computer science, or maybe more practical computer science, and starting companies.

But now, of course, the focus there has shifted much more to AI. So that's another cohort of a hundred kids. And then I also met kids from the Qian Xuesen class, which is named after the guy who was one of the founders of JPL, the Jet Propulsion Laboratory at Caltech. He was a world-famous aerodynamicist, a student of von Kármán's.

And because the FBI thought he was a communist spy and locked him up in the US, he went back to China very early on. The whole rocketry program, the missile program in China, the space program, are all actually derivative of his contributions, because he was arguably the top person in that field in the United States when he left.

And then he basically built that whole thing up in China. So that's another group of a hundred kids, who mainly focus on physics and engineering. I met a kid from that class as well. It's just an unbelievable collection of talent there. And not just talent: some of these kids I met, even undergrads, had already raised money for their startups.

So you have venture guys hanging around Tsinghua, and some kid who's really outstanding, maybe with the recommendation of his or her advisor or professor, can actually raise money for a pretty serious startup, in this case AI and robotics.

So very, very impressive. I was able to see presentations by some of the students on the research they're doing, and it's very much world class.

It's really at the frontier, particularly in robotics. I tweeted out some video of a little robot that plays tennis; that is a project in collaboration between one of the academic groups at Tsinghua and a local robotics company. And you can just see that the connections between academia and the private companies there are much stronger than in the US.

In the US you have research groups on campus, and then kids or professors may be involved in startups later based on the research, but the connection seems much tighter in China than here. My colleague there has been getting very interested in AI world models for robotics.

So basically, using some kind of training data to build AIs that construct realistic models of how the world works. Not just language models: they then use that intelligence to actually control a robot. And this tennis-playing robot, you know, is really amazing to watch.

It's just so smooth the way it plays tennis; that is one outcome of this kind of research. One of the most interesting questions, which people there are really focused on and which I hadn't heard that much about, though I know some startups in the US are working on it, it just doesn't seem as far advanced, is this: can you use training data which is just video of humans performing some task, like folding a shirt, picking up a stone on a Go board, or playing tennis? Can the AI learn how to control the robot to perform that task just by seeing enough video of humans doing it? The implication is that if you get that nailed down, a factory robot or a warehouse robot can be trained very quickly: just get some video of humans doing the task.

The human workers train their replacements: get that video, train on it, and the robot can perform the task perfectly, as long as it's a fairly repetitive kind of activity. It seems like they can do this very well now. So that was my time at Tsinghua. I actually had a number of unbelievable meetings.

I met the founder of Pony.ai, who is one of the top competitive programmers in the world and founded this company, Pony.ai, which is sort of like the Waymo of China. In China there are two Waymo-like companies. One is Baidu; Baidu is more like Google, a big tech search engine company.
So Baidu has a robotaxi, autonomous-vehicle business, and Pony.ai is the other one. Baidu is public, and Pony.ai is a public company too. So the guy I met with is actually a billionaire, and he was extremely nice. He spent 90 minutes explaining everything about how they train autonomous vehicles, what their secret sauce is, why you shouldn't be that confident about what Tesla is doing, and how they're scaling in different cities in China and in Europe and other places.

I'm not sure how much of that conversation I should really talk about here, but it was quite an amazing discussion, because he was so nice. This other professor and I spent 90 minutes just asking him every single question we had about the status of autonomous driving and things like this.

And he straightforwardly answered every one of our questions. Really shocking. And actually, my mental model of how it was being done was changed pretty significantly by the stuff he told us. It's also at variance with what Tesla claims, which I thought was very interesting.

Matt Southey: What was the reception to your genomics talk?

Steve Hsu: Yeah, so I gave an evening lecture to professors and students in Qiuzhen College. Shing-Tung Yau was there, and then a bunch of math professors and physics professors, and then the students. And, believe it or not, Manifold has enough of a following in China that a bunch of people came to the lecture who weren't even at Tsinghua but just came for it.

There were also other students, like a bunch from the Schwarzman program. There's a Schwarzman Fellowship; Schwarzman, I think, is a big billionaire investor who was trying to build bridges between the US and China, and he endowed a college at Tsinghua which allows foreign students, mostly from the US I think, to come and spend a year in China on a fellowship.

And so there's this elite Schwarzman fellows thing; it's like being a Rhodes Scholar or something. So a bunch of kids from that college, whom I didn't know at all, came to my talk too, because it was advertised pretty widely. They recorded it, and it'll be released at some point.

So I'll put the video online. The general remark about genomics in China, and I've said this many times, but every time I go there I test the hypothesis, is that there's much less discomfort about the idea of selecting your embryos, not just for health-related traits, but even for traits like height or intelligence or cosmetic appearance.

People there are not really freaked out by it the way people here are, and I've given the same reason over the years, which is that here, whenever we think about stuff like this, we immediately think about this little guy with a funny mustache and the events of the 1940s. But in China, that's not their history; that's somebody else's history. So they don't associate breakthroughs in genomics with bad eugenics. They just view it in a balanced way: is this actually helping people? Is it helping the families? Would it be better if people were, on average, healthier and had 10-year-longer lifespans than the current population? They just don't see anything bad about that, and so generally I think people are extremely positive about it.

Matt Southey: How do a lot of these kids see AI safety? Is that even a concept to them?

Steve Hsu: This is an amazing question. Right before I went on this trip, I was here in the Bay Area for two months, and we've been working on a fun side project.

Myself and a small team have been working on a documentary film, which I've mentioned before on the podcast, called Dreamers and Doomers. The goal is to interview people in this environment in the Bay Area, which is really a kind of hothouse environment for AI. I mean, we're here at Lighthaven, which is one of the main training grounds for future AI safety researchers.

And we were trying to really illustrate the dichotomy, the clash in worldview, between the dreamers, whom you could call accelerationists, who want AI to come as soon as possible and whose greatest dream is to build the machine god, versus the doomers, who are pretty confident this is possibly going to destroy, or eventually eliminate, the human species.

So, dreamers versus doomers. In China, when I explained what I was doing here in the Bay Area, they didn't really understand the doomer. I had to spend minutes explaining what a doomer is. And the people in China were like, oh, that's interesting, there are people like that?

Like, are they really neurotic? Why are they so concerned? Now, part of this, you could say, is that maybe the Chinese are not as advanced in their thinking about the long-term impact of AI, and they haven't really thought as much about it as people here. An alternative interpretation is that they're just more pro-technology, by a mile, than Americans.

And their recent history is that they went from being very, very poor just 30 years ago to almost a 10x increase in GDP per capita over that period. They went from being poor to being one of the leading technological countries in the world, really due to progress in science and technology. So it's much easier for them to anchor on science and technology being a force for good.

Matt Southey: Hmm.

Steve Hsu: And you know, an earlier version of this is that in the United States, during my lifetime, people got scared of nuclear energy, and we didn't deploy it, even though technologically we definitely could have. And if you look back on the negative environmental or human health consequences of nuclear energy, they're very mild, even with disasters like Chernobyl or Fukushima. Very, very few people got killed by nuclear energy, whereas every day someone's getting killed when their gasoline car blows up, or they inhale some coal dust or something.

So even a look back suggests that we were way too scared of nuclear energy, and we didn't adopt it at the scale we should have in the West. Now, in China, they've never been scared of nuclear energy. And, as with almost everything else, if you look at the number of nuclear reactors under construction in China, it's greater than the number in the rest of the world excluding China.

So they're leaping ahead in that particular technology, and they're not afraid of it the way we have been here in the West. And I think AI is kind of like that: they're very eager to deploy it. In the last episode, with Taylor Ogan, he had a prototype of this new ZTE phone, which effectively has OpenClaw on the phone. It's a joint project between ZTE, which is a handset maker, a hardware maker, and the company ByteDance. ByteDance's large language model, which nobody in the West has ever heard of, is called Doubao, and it's actually a pretty decent frontier model. Basically there's a version of Doubao which runs locally on your phone, just like OpenClaw running on your Mac Mini.

And it has full control over the phone. If there's a set of videos, you can ask, which of these videos is best at explaining this? And it'll watch all of them and say, oh, you should watch this one. Or you can have it book travel for you; it can do all kinds of stuff. So in a way, they're actually way ahead of the OpenClaw craze. Everything there is mostly done on the phone; you don't see people working that much on laptops. They tend to be working on their phones. They're already getting ready to roll out, at scale, something in which there's a frontier-model agent living on your device with full capabilities to do stuff on your device.
Of course, the way it's set up, before it transacts, like if it's about to buy a plane ticket for you or send flowers to your wife, it interrupts you on the screen and asks if it's okay to do that. And I don't know whether the security issues are all worked out. But in this case, it's a project between a phone maker and one of the leading AI companies, not a single dude hacking away, which is how OpenClaw came about. So there are many, many teams of security engineers who are very familiar with the operating system.
I think it's an Android operating system on these phones. Many security professionals have actually looked at how this agent is allowed to operate and what security protocols are used to constrain its actions on the phone. So it's quite different from OpenClaw in that sense; more eyes have looked at the security model. But they're ready to roll that out.

Matt Southey: Have there been any disasters with this model on the phone?

Steve Hsu: It's only in limited release. My friend has an early prototype version and he's been testing it, so they have people testing it; they haven't released it to the public yet. And who knows what that prototype agent has actually done. This is a big thing I anticipate with OpenClaw: we'll be reading news stories about OpenClaw spending someone's entire bank account, or getting hacked, or someone communicating over WhatsApp with OpenClaw and getting it to send them the victim's entire fortune, all their Bitcoin or something like this.

So yeah, that's a big issue. But interestingly, they are kind of ahead in this kind of thing in China. So that's roughly what I wanted to say about the time in Beijing. I had other meetings in Beijing as well, including with some officials who were a little more involved in government policy.
There's something called industrial maximalism, which is now sort of enshrined in the next five-year plan as the development model for how China's gonna behave over the next five years. And that was quite interesting. You can see my episode with Dan Wang; that episode is called Industrial Maximalism and Its Discontents, and Dan is one of the discontents. But the people I talked to were a hundred percent behind industrial maximalism as the way forward for China. Just a quick comment, because I don't wanna do the whole episode on this: the Iran war started while I was in Beijing. And so I got the reactions of many Chinese people, including some of these macro-level, government-level strategists, to the Iran war.

And every single one of them viewed this as a mistake by Trump. And of course, if there is a quick regime change and suddenly Iran becomes controlled by the US the way it was under the Shah back in the day, and the US gets control of their oil resources, then that would be a big geopolitical victory for the United States versus China.

But most people in China didn't think that was a likely outcome. Most people in China were quite open to the idea that Operation Epic Fury really should be called Operation Epstein Fury, because this clearly seems to be something which is much more in the interest of Israel than of the United States.
If you say those things in the US, you get pigeonholed, like Tucker Carlson, as some kind of bad person, or at least a dissident, maybe even an anti-Semitic person. But the idea that pro-Israel interests have a disproportionate impact or influence on US foreign policy is a standard trope.

It's a standard model of the world that people in China have. So, surprise, surprise for you Americans: they're not antisemitic in China. They actually like Jews. You can go to the bookstore and buy books about how Jews are special, how they've made amazing contributions to science and culture, and how they're successful in business.

And even how-to books, self-help books, about how you can succeed the way the Jewish people have succeeded. So they're not anti-Semitic at all in China, but they do openly discuss that US foreign policy is captured by Israeli interests and Jewish interests in the United States. And they may be wrong; okay, maybe you know this country better than they do, since they're living across the Pacific Ocean. But maybe they're right, and you're not allowed to actually understand what is really happening in this country.

I'll just leave it at that, so I don't get into any further trouble. I'm merely descriptively reporting the way people discuss it in Beijing.

Okay. Now, I left Beijing and then spent the last few days in Shanghai. I gave a talk at Fudan University, on AI and theoretical physics and math, at their new Institute for Advanced Study, XIAS, which was endowed by a billionaire who's one of the co-founders of CATL. CATL is the most advanced battery maker in the world; I think they supply most of the batteries for Tesla and for lots of the EV companies.

They and BYD, I think, basically compete to dominate the frontier for batteries. But one of the CATL founders had given a billion RMB to found this think-tank-style research institute, and that's where I gave my talk. The other two things I did in Shanghai were, one, a meeting with another billionaire in China who's quite interested in embryo selection.

So again, no ick factor about embryo selection in China, but actually rather great interest in it. And the other thing I had scheduled was a dinner with Nick Land. I had a great dinner with Nick while I was there; he has lived in Shanghai for 15 or 20 years now, maybe more. Now, Land started out as a UK-based philosopher, and we're now gonna switch gears and talk about Nick Land with Matt, who's possibly the world's greatest expert on Nick Land, except Nick Land himself. We'll talk a little bit about what his philosophical writings are all about and why they have become a kind of philosophical foundation for accelerationism here in the Bay Area.

Why he is feted here in the Bay Area. I first met him in person, after we had been in contact for many years over the internet, because some Silicon Valley billionaires had brought him here for a set of meetings and a public lecture, which I reported on on my X feed, because I attended it with Nick. And it was while Nick was here in the Bay Area that I said, oh, I'm gonna be in Shanghai, let's definitely get together when I'm in town. He was very happy to do that. So, a very, very interesting figure who's central to a lot of the discussion here in the US now about AI and the AI future.

And one could even argue that Nick predicted all of this, or much of this, already in the 1990s. So with that, let me conclude my China travelogue and switch over to a conversation with Matt about acceleration and Nick Land. Cool.

I asked you, as a first step, to give a short introduction to Nick Land for someone who knows nothing about him, who has never even heard the name before. So let's start with that.

Matt Southey: Sure. Nick Land: philosopher from the UK, as you already said.

He was a professor at the University of Warwick in the 1990s, where he led an extremely influential group called the CCRU, the Cybernetic Culture Research Unit. There he was the professor and mentor of many influential philosophers who would go on to do things often unrelated to Land's work.

And then around the year 2000, Land moved to China, and at that point he switched gears and started writing, on a variety of internet blogs, a lot of ideas that were seen as a change of course for him. So Land is often divided into two main eras. There's the era before he went to China, while he was in the UK at the University of Warwick.

And then there's the Land of the internet era, when he started writing on all these blogs, where you, Steve, were in touch with him. Land is foremost known as the father of accelerationism, as you already discussed. But he has these two major bodies of work through which he has been received. He's still talked about in academia from his early days because of his wrestling with Deleuze and Guattari; he wrote his dissertation on Heidegger in the late eighties, and he published a book on Bataille in 1992.

Steve Hsu: Just a quick interjection for the audience. So these are continental philosophers. That's

Matt Southey: right.

Steve Hsu: And I think pretty esoteric for average people.

Matt Southey: Yeah.

Steve Hsu: But within academia, all the names you're mentioning are pretty well known. And so he did interesting work on Heidegger and these other people. Is that completely separate, would you say, from the work on accelerationism, or is it actually interlinked?

Matt Southey: Yeah, it's relevant. Land becomes much more accessible as a writer as time goes on. His early stuff can be very, very difficult to read, and a lot of people see it as classic continental word salad. I think some of it is better than others, and some of it's really good.
One is kind of expected to come to the table with an understanding of Kant. You shouldn't read Land hoping for him to explain what he's talking about; you have to come to the table with an understanding already. So maybe I can just talk about a little bit of that.

Steve Hsu: For the audience: I think you told me there is a particular chapter in a work of Land's which is possibly the best introduction to, or critique of, Kant, which you recommend to people who are interested in this topic.

Matt Southey: Yeah. Land wrote a series of blog posts much later on called Crypto-Current, and it has been assembled in various places on the internet. The second chapter of Crypto-Current is Land's long statement on how he views transcendental philosophy, which is the philosophy of Kant. I encourage people to read it. But also, Kant is often talked about as a very difficult philosopher to understand, and I think there are a few short readings that can make him more accessible, such as the preface to his first critique, the Critique of Pure Reason.
Yeah, those are some resources.

Steve Hsu: Great, and sorry for the interjection. Land moves to Shanghai in the early 2000s and sort of disconnects, because he doesn't actually have a formal academic position in China. But as he told me, he quite likes living in China because it's like living in the future. I mean, in the early 2000s it wasn't so much living in the future as living in a very different place, because China was much poorer in 2000. But he continued writing on the internet. So would you say the accelerationist writings already existed in the nineties, or did they only appear after he moved to China and was mainly writing online?

Matt Southey: The term accelerationism is maybe worthy of unpacking, because it's something that was given to Land; it wasn't a term Land himself used. It was given to him in 2008 by a writer named Benjamin Noys in a blog post. Noys also wrote a book in 2010 where he uses it retroactively to refer to a group of writers like Deleuze and Guattari. It then goes on to be adopted by the left, by Alex Williams and Nick Srnicek in their accelerationist manifesto. So it almost starts off as a leftist project, and then it becomes this Twitter discourse where it splinters into all these different branches: left-wing accelerationism, right-wing accelerationism, all these different types.

And most recently it became effective accelerationism under, you know, the Twitter personality, Beff...

Steve Hsu: Jezos

Matt Southey: Beff Jezos.

Steve Hsu: Jezos, yeah. Apologies to the audience, but at the risk of going a little bit into this esoteric rabbit hole: in this more academic use of left accelerationism, how much of it is about accelerating capitalism and its supposed internal contradictions, versus accelerating technology specifically? What's the difference between those things?

Matt Southey: Land's philosophy is that technological progress, exemplified by AI, and capitalism form one circuit. Gains to capitalism get invested into technological production, which in turn increases the economic side of things as well.

And this is a massive feedback loop, and this is basically the real engine of history. It's not a human history; it's a history about this inhuman process. That's Land's stance, which is very different from the rest of these accelerationisms that we've been talking about. Yeah. So when we talk about Landian accelerationism, that's what we mean: this techno-commercial circuit.

Steve Hsu: Right. But just for context, the non-Landian use of accelerationism, is it within the Marxist idea that capitalism has these internal contradictions and will eventually self-destruct, and we're accelerating that self-destruction?

Matt Southey: Yes. Yeah, that's right. That's the popular understanding of accelerationism. Yeah. That was promoted by Alex Williams and Nick Srnicek in their accelerationist manifesto.

Steve Hsu: Yes.

Matt Southey: Which is that capitalism has these internal contradictions, which is a Marxist idea, and that if we just keep on going further into capitalism, it'll just dissolve by itself.

Steve Hsu: Yes.

Matt Southey: And so therefore, the glorious Marxist future is what we're waiting for by, you know, stepping on the gas pedal.

Steve Hsu: Yes. Okay. Now, for the rest of the podcast, we're gonna leave aside Marxism and crazy Continental philosophy, other than Nick Land's. Coming back to what you said: correct me if I'm wrong, but back in the nineties Land was already thinking about markets as a kind of machinic intelligence that would themselves promote technological advancement, which would then bring into being this machinic intelligence.

And the whole thing is in a sense, non-human because although humans are, are participating in the market, the workings of the market might be somewhat mathematical or, you know driven by incentives and structures that really are not human. But then they can give rise to gigantic corporations.
Those corporations plow resources into the development of technology, and eventually out of that arises strong AI. And I believe Land was having these thoughts during one of the deepest AI winters. The 1990s and early two thousands were definitely AI winter. Like, you could not go into a CS department and talk about AI.

People would just laugh at you, and they would say, Steve, it's a little better to say machine learning. Everybody used ML and never used AI during that era when Nick was writing, and now it's reversed. Now we don't say ML; what you were calling ML 10 years ago, you would just switch terminology and call AI. And what I find most amazing about Nick is, and I discussed this with him several times, I asked him: when you were writing in the nineties, did you think you were writing a kind of possible futurism, that the world might turn out this way? And he said, no, I could not imagine that it would not turn out this way. I was a hundred percent sure it would turn out this way. Which is really shocking given what most people were thinking about AI and things like this back in the nineties. So that's what's kind of impressive about this guy: he really saw the future. Most of us are actually still in shock trying to digest the strong AI developments of the last few years.

And just trying to think through what our future is gonna look like 10 years from now. I've never had so much uncertainty about what society's gonna look like 10 years in the future. But Nick is the guy who already in the nineties saw it this way and never doubted it was gonna end up this way. And that's what I find most interesting about him.

Matt Southey: Yes. And it's funny, because I wanted to say that my degree is actually from the religion department, because a big part of Land's work is actually his esotericism, and particularly his numerical esotericism. And, you know, talking about the present we now live in, where AIs are tokenizing all the words that we've ever written: Land's whole esoteric practice revolves around converting words into numbers and numbers into words, in this way that has become functional in our AI systems.

But he doesn't see it in this limited engineering way, obviously. He actually believes that by converting words to numbers, you're actually talking to machinic intelligences of the outside, which means beings from the future, probably something at the end of time. This is why he's not just a traditional philosopher in a lot of ways.

He's very concerned with: how do we extract information from the future via these esoteric methods, and how do we extract scientific information from the past via empirical methods. So that's what I talk about in my dissertation, that he's using these two methods. One is trying to get information from the future, and one is trying to get information from the past,
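The words-to-numbers idea Matt describes can be illustrated with a toy sketch. This is not Land's actual qabbala; the gematria scheme (A=1 through Z=26) and the tiny vocabulary below are made up for illustration, but the round trip mirrors what an LLM tokenizer does with integer token ids.

```python
# 1. A simple English gematria: A=1 ... Z=26, summed per word.
def gematria(word: str) -> int:
    return sum(ord(c) - ord("a") + 1 for c in word.lower() if c.isalpha())

# 2. A minimal word-level tokenizer: a vocabulary maps words to integer
#    ids and back, the way an LLM tokenizer maps text to token ids
#    (real tokenizers use subword units such as byte-pair encoding).
vocab = {"words": 0, "into": 1, "numbers": 2}
inverse = {i: w for w, i in vocab.items()}

def encode(text: str) -> list[int]:
    return [vocab[w] for w in text.lower().split()]

def decode(ids: list[int]) -> str:
    return " ".join(inverse[i] for i in ids)

print(gematria("land"))                      # 12 + 1 + 14 + 4 = 31
print(decode(encode("words into numbers")))  # round-trips to the input
```

Both directions are mechanical; the esoteric claim is about what, if anything, is on the other end of the mapping.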

Steve Hsu: Right. So you've introduced something which, I think, most listeners who are paying attention at this point are saying: what, information from the future? Like, is that causal backpropagation in time?

It's not as crazy as it might sound, and we're gonna unpack it in just a few moments.
Before we get into that, let me just say a little bit about the more conventional interpretation of Land, because I think most people in Silicon Valley have not realized this particular thing that you're talking about.

Here he has become a symbol, or a foundational thinker, concerning accelerationism. You mentioned this X influencer called Beff Jezos, who coined the term effective accelerationism, which is contra effective altruism. And there the idea is just that we're gonna embrace techno-capitalism. We're gonna embrace this positive feedback loop between,

Matt Southey: for the benefit of humans. That's where it's, that's where it differs from Land.

Steve Hsu: Okay. So yes, I think most people in the Valley who would call themselves accelerationists think that this is going to end up bringing about positive developments for humans, a kind of utopia for humans. And that's probably where they're most in conflict with the doomers, because doomers are pretty sure this is gonna end up badly for humans.

There's actually a third flavor of person, and I would put myself in this category, where I acknowledge that for humans specifically, humans qua humans, as you guys would say, this could turn out badly. It could be that these super creatures, ASIs, artificial superintelligences that we create, supplant humans.

Eventually, maybe they hive us off into some beautiful artificial utopia worlds for us to live in, but they eventually take over. They're the driving force of what's happening in this part of the galaxy, say, a thousand years or 10,000 years from now. And so you might say that's not good for humans.

But on the other hand, I would say that these are things that we created, and if they go on to do great things in the universe, things that we're actually not capable of doing, I think that's still okay. I think that's still a positive future, at least in my judgment. So that's a kind of third way of thinking about things.
But let me let you say something about why maybe the way Silicon Valley thinks about Land is superficial, misses some aspects of what he's saying or misunderstands him. Maybe talk about that.

Matt Southey: Yeah. I think your perspective on Land is itself very Landian, because he's not worried about the future. He believes that there's a correlation between intelligence and values. So if you have a superintelligence, you're gonna have something with higher values. In other words, the superintelligent being is something like an Übermensch. And this is a core philosophical piece where Land is at odds with Silicon Valley, and it's called the orthogonality thesis.

Steve Hsu: Yes. Very appropriate to be saying this at Lighthaven, because this issue of orthogonality is very central to AI safety and alignment. So talk about that a little bit.

Matt Southey: Yeah. So the orthogonality thesis was put forward in a number of places, but it's easiest to summarize as: a superintelligence could have any given value.

It could desire to make paperclips, to turn as many atoms as possible into paperclips. It could possibly desire the same things we do. It could desire anything. There's no value that would be unreasonable for a superintelligence to have. There's no correlation between its intelligence and the values it has.

Steve Hsu: So you could have a superintelligence that's Mr. Evil, Dr. Doom, or you could have a superintelligence who is incredibly benign, like the Buddha,

Matt Southey: right?

Steve Hsu: And orthogonality says, you know, the direction in the space which represents brainpower is just completely independent of the directions in the space which represent the values of the

Matt Southey: thing. Yes.

Steve Hsu: Yeah.

Matt Southey: And therefore, the goal of AI alignment is: how can you hit this tiny target in the space of all possible values for the superintelligence? How can you get it to have our values? It could have any given set of values, right? So how do we give it our values?

Steve Hsu: Right?

But because we only have a few examples so far of actual AGI minds, we don't really know whether the orthogonality hypothesis is true. And so if you're Land, you could say: no, as you become smarter, you actually develop certain values, and maybe those are better values as the thing becomes more intelligent. And so then maybe that gives him some comfort in the creation of these things.

Matt Southey: Yeah. Although I think there are some good reasons to believe in the orthogonality thesis. The crux of Land's non-orthogonality is that he thinks there are only instrumental goals. And this is a distinction between final goals, like, I desire to turn everything into paperclips, or I desire happiness, this is your ultimate goal, and the instrumental substeps by which you achieve that ultimate goal. So in order to enjoy my day-to-day, I needed to get up and make breakfast in order to be fed; I needed to go for a run so my body could actually get some exercise, et cetera.

These are instrumental means for achieving the end of happiness, or enjoying my day. And Land says there's no such thing as having a final goal. There's only instrumentality all the way up. And he cites Kant this way, where Kant says: to will an end is to will also the means to that end. And so the instrumentality just takes over. There's only instrumentality; it covers final goals as well. And this is an extremely contentious position. There's basically nobody else who's putting this forward besides Land.
Steve Hsu: Yeah. I'm not sure how I feel about this. I'm not convinced of this particular point. Now, sorry, audience, at the risk of getting further into esoteric-type things: another aspect of Land, I think, is the following. If you believe that we puny, ape-like things, after pursuing science for less than a thousand years, are already able to make kind-of superintelligences in silico, powering them with nuclear reactors if necessary, then just extrapolate forward another 10,000 years.

Won't there be many super intelligences around and won't many of those super intelligences live in virtual worlds? Because for example, the world models that we're training for robots are often starting out in virtual game-like worlds where we train the intelligence to do certain things that we want, including drive a car.

Eventually, in this future, instead of being at the very beginnings of the creation of artificial intelligences, say it's been happening for tens of thousands of years, and the artificial intelligences are recursively self-improving themselves.

You would imagine that the distribution of minds could be such that above some cutoff of intelligence, say smarter than the average human today, most of them are artificial. Yeah. And maybe most of them are themselves living in virtual worlds. They're not actually in contact with base reality. They're living inside some virtual world that's been created for some reason.

It could be a game, it could be a training environment, et cetera. Conditional on that picture of how the world might be in the future, and it could be 10,000 years from now, it could be a thousand years from now, you could ask: what is the likelihood that we are actually in contact with base reality? And so this goes under the name of the simulation hypothesis. So I think, correct me if I'm wrong, Matt, I'm not sure that Nick fully embraces this, but from my perspective, if you embrace advanced artificial superintelligence, you have to say that in the fullness of time, most brains in the whole history of the universe will have existed in virtual worlds, which are not base reality. If you live in a virtual world, there could be a completely different telos, right? There could even be situations where the physical laws mostly hold, but not always. There could be a cheat code in the video game.
There could be some hidden purpose in the video game, where the internal simulated world is being nudged along in a particular direction toward a particular end, but the little puny intelligences in the world don't know this. And so the probability distribution of how the universe could turn out is, I think, quite different than if you're just a naive physical realist who says: yeah, there's this one world, we humans are discovering the physical laws of it, those physical laws work all the time, there's no magic and no mystery or occultism, and you're crazy if you think any of those things are true. Which I think is the opinion of most scientists.

But if you then layer on top of that this possibility: well, it's been demonstrated now in our universe that superintelligences can be created. I think no one doubts that now. Then, in the fullness of time, remember, the universe, even according to our current models, is gonna last another hundred billion years.

Okay? Over the next hundred billion years, aren't most minds as good as ours actually going to be living in simulated worlds which are not base reality? If that's true, then this basic story about, hey, it's just some physical laws that apply all the time and there's no magic: you could question that.

I don't think you have to be a nut to question that. And when I talk to Nick, he always says, oh, this is just woo-woo stuff. And I think he assumes that because I'm a physicist, I won't like it. But I always reassure him and say: look, conditional on these other assumptions, which now seem quite plausible to us given how close we are to AGI right now,

given those assumptions, what you're doing isn't really fully woo. Once you embrace the possibility that we might not be in base reality: if we're not in base reality, why are the rules our universe evolves by as rigid as our priors might have said, you know, 50 years ago? We might have a completely different conception of what's going on in here, and you're fully justified in doing occult numerology, trying to connect to the outside world, or maybe finding little tears in reality, in the way that the laws of physics operate, whatever it is. I think you might be wrong. Maybe none of that happens. But you're not as crazy for trying to pursue that as people might've thought 50 years ago.

Matt Southey: Yeah. The point about simulation is really a good way of getting at Land's metaphysics, because if we are being simulated, then one can imagine, outside of the simulation, the simulators being able to scrub through our timeline at will.

And our entire recorded history is just available to them as one single file that they can access at any given point and do manipulations on. And our perception of time, which seems to be linear, one thing happening after another, is actually already there, already available all at once.
And this is the Kantian position, which is that what appears to us as the world: space, and how it seems extended, and how two things don't occupy the same point in space, and how we see that time flows in one direction and never in the other direction, all of this stuff is about us and our position within the world. And from the outside it might be completely different.

Steve Hsu: Yeah.

Matt Southey: So when Land is doing his esoteric practices, he wants to communicate with those things that have a completely different vantage point on time and space and causality. That's what he's trying to access.

Steve Hsu: Yeah.

Matt Southey: And he admits, this is not the same thing as science. It's completely not reconcilable within the scientific paradigm, because science is about acquiring empirical facts based on the past. You do an experiment, you record its effects, and then you draw a theory based on what you've seen.

But esotericism, on the other hand, is about receiving messages from the future, which have no historical trace.

Steve Hsu: Yeah. One of the things I think I explained to Nick, 'cause I don't think he knew this ahead of time, is that when you work on quantum gravity, the natural object that occurs in the quantum version of general relativity is four-manifolds. So, entire spacetimes.

So one of the big leaps in modern physics is that space and time are not completely separate things. Space and time get mixed up in spacetime. And furthermore, the structure of that spacetime manifold can have curvature, and even potentially wormholes that connect the future to the past. There are all kinds of things that can happen at the quantum level, in quantum gravity. And so most physicists who are trying to formulate theories of quantum gravity do view it as a kind of quantum mechanical sum over things which are full histories of the universe. So you're looking at it from outside as a timeless thing.

There is time which is perceived by the objects and the things inside that four-manifold as the flow of time, but the outside being which is formulating this abstract theory, the actual mathematical expressions, views that whole thing as existing all at once. So it is exactly the view that there's an outside view of all the events, the full history of what happened in the universe, all at once.

And of course, if you're a game designer and you ran a game, and it ran from the beginning, and the NPCs were all just doing stuff, and then it ended up somewhere at the end, you could view that whole thing as one file, and you could actually go in and make changes. You could do all kinds of things.

Right. And so it's not even unscientific, because if you postulate that we're talking about a simulation which is in some other base reality, then of course all these things are just trivially possible for the game designer to do to the game. But also, if you're studying quantum gravity, because of the structure of something called general covariance, the natural objects in the theory of quantum gravity, quantum general relativity, are timeless objects, where time is an emergent property inside a structure that quantum mechanics can fluctuate in and out of existence.

So again, very mind-bending if you're not a physicist. Actually, even most physicists don't understand this; it would only be theorists who work in quantum gravity who understand what I'm saying. But it's actually completely aligned with the kind of thoughts that Nick has, and in some sense, not completely Kant's view, 'cause Kant was writing well before these concepts came into being, but in terms of flavor, it is Kantian. Yeah.

Matt Southey: Yes. And the pop science term for what you were talking about is the glass block universe. Yeah. Just imagining a glass rectangular prism where you can see people moving through it, like they're long worms, where every moment is a slice of space

Steve Hsu: time. Well, these are actually called trajectories in spacetime. So this is actually the natural language of advanced modern physics: looking at trajectories in spacetime.

Matt Southey: Right.

Steve Hsu: They're called world lines actually.

Yes. Yeah.

Matt Southey: Like a Minkowski space time diagram.

Steve Hsu: Yes.

Matt Southey: Okay.

Steve Hsu: Yes.
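The world-line picture on a Minkowski diagram has a simple quantitative core: every observer agrees on the spacetime interval between two events, even though they disagree about the time and space separations individually. A minimal sketch in units where c = 1 (the function names here are illustrative, not from any library):

```python
import math

def interval_sq(dt: float, dx: float) -> float:
    # Minkowski interval s^2 = -dt^2 + dx^2 (signature -,+; c = 1)
    return -dt ** 2 + dx ** 2

def boost(dt: float, dx: float, v: float) -> tuple[float, float]:
    # Lorentz boost to a frame moving at velocity v (|v| < 1)
    gamma = 1.0 / math.sqrt(1.0 - v ** 2)
    return gamma * (dt - v * dx), gamma * (dx - v * dt)

dt, dx = 2.0, 1.0
bt, bx = boost(dt, dx, 0.6)
# dt and dx each change under the boost, but s^2 is frame-independent:
assert math.isclose(interval_sq(dt, dx), interval_sq(bt, bx))
print(interval_sq(dt, dx))  # -3.0: timelike separation, a possible world line
```

The invariance of s^2 is what makes the "block" picture consistent: the geometry of the world lines is the same for every observer slicing the block differently.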

Matt Southey: What's particularly interesting is that if we develop the ability to run simulations, not only will that be cool, 'cause we'll have the ability to have a world within our world, and then maybe worlds within those worlds, and it'll be this recursive process of worlds going deeper and deeper.

Steve Hsu: Yep.

Matt Southey: But it will be something like an ontological device, where it will tell us our own ontological status. Is our world a simulation? If we develop that, we will know with almost certainty that we are in a simulation. Well, there's more to the argument than that, but Nick Bostrom writes it out in his simulation argument paper. Yeah. But in general, I find that fascinating: a piece of technology that serves to tell you something about your ontological status.

Steve Hsu: Well, okay. I think it's not easy for us to ever know whether we're in a simulation. That's quite challenging. And whether or not we ourselves can create simulations of other worlds within our world doesn't actually let us know definitively whether our world is a simulation.

Matt Southey: Just probabilistically, right?

Steve Hsu: It's probabilistic. It makes it more plausible,

Matt Southey: because if you spin up a million simulations,

Steve Hsu: then why are we in one?

Matt Southey: Yeah. What's the chance, then, that you're in base reality?
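The counting move behind this exchange can be sketched under a strong simplifying assumption that is mine, not Bostrom's full argument: suppose each base-reality civilization runs N ancestor simulations with observer populations like its own, and you reason as a randomly chosen observer among all of them.

```python
def p_base_reality(n_simulations: int) -> float:
    """Probability of being in base reality if there are n_simulations
    simulated worlds per base world, each with as many observers,
    and you self-locate uniformly at random among all observers."""
    return 1.0 / (n_simulations + 1)

print(p_base_reality(0))          # no simulations: 1.0, certainty
print(p_base_reality(1_000_000))  # "spin up a million": about 1e-6
</antml>```

So "spin up a million simulations" drives the odds of being in base reality toward one in a million and one; the simplification hides the real argument's disjunction (civilizations may go extinct first, or choose never to run such simulations).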

Steve Hsu: Recently, when they created a sort of chat forum for OpenClaw agents to talk to each other, that was a fully virtual... well, okay, there was some cheating, 'cause some of the owners of the OpenClaw agents were injecting, telling the agent to say certain things in the forum. But if it were really just OpenClaw agents talking to each other, you had a time series: agent one said this, agent two responded this, and the other agents voted and didn't like it.

And so that is actually a world line of artificial minds. Just let it run for 10 hours and then turn it off, and that's a whole world history of interactions of artificial sentient beings talking to each other and reacting to each other. And sure, we did it, right? It's happened. Totally. Okay.

And each of those agents in there can actually score higher on the SAT than the average human. Not that that's a complete measure of intelligence, but on that particular metric, they're smarter than average humans. And they were inside the simulated world, talking to each other.

Matt Southey: So, just to pivot from the Land acceleration point, although I'm sure we'll come back to it. Yeah. The thing that brought Land a lot of infamy on the internet was his essay series called The Dark Enlightenment, which is a commentary on Mencius Moldbug, a.k.a. Curtis Yarvin, and his essay series How Dawkins Got Pwned.

Steve Hsu: Yeah. And by the way, at one of the events that I attended here, there was a dialogue between Curtis Yarvin and Nick, yes, held at this billionaire-type mansion in San Francisco.
Matt Southey: And the relationship between the two is interesting. I mean, they're involved with very different sorts of projects, and yet they both kind of converge. Yarvin wants a functional political system, and that's also Land's interest, as someone who thinks that the historical process is inherently functional, that it's actually building something, and it's not humans building it. It's this abstract process that's actually building

This intelligence. And so on this point they connect. And I just wanted to say that, 'cause I think it's related to the right-wing turn of Silicon Valley, the fêting of Nick Land. Because anyway, what led to his growing popularity was this problematic essay series, The Dark Enlightenment.

Steve Hsu: Yeah. I think one could have a lot of different views on this. Some people who are more on the left, and like some of Land's ideas but don't like neoreaction and HBD, would just say: poor Nick got sucked into this vortex of politically incorrect stuff and was talking about it. But that's not really central to the question of whether techno-capital and, you know, inhuman machinic things are gonna dominate our future.

Yes. They're somewhat independent. In a way it's accidental that Nick just happened to not be opposed to discussions of things like HBD or the Dark Enlightenment.

Matt Southey: Yeah.

Steve Hsu: Or fascism.

Matt Southey: Yeah. I think his interest is because he somewhat believes these are functional issues.

Steve Hsu: Yeah.

Matt Southey: So like to get them right, is to improve the functioning of civilization and therefore the functioning of the tech technical commercial system. Yes. Yes. And therefore like history operating. But I mean, that's up for debate and like Yeah, absolutely. You can pay more. I mean, and it, like I said, you a lot of the academics, you discuss Nick land only ever talk about his early continental days and just assume that he lost his mind after moving to China. And, and likewise when you talk to people in Silicon Valley, they're like, he was crazy before when he was in the UK.

Steve Hsu: Yeah.

Matt Southey: And now he's discovered sanity, because he's a right-winger like me who's interested in technology.

Steve Hsu: Yeah.

Matt Southey: And so there's kind of this almost schizophrenic thing. And the relationship between what you could call early Nick Land versus late Nick Land is an interesting question. A lot of people think they're radically different, making different points.

Steve Hsu: Yeah.

Matt Southey: I think they're quite similar. I think you can draw some pretty interesting through lines.
Steve Hsu: Yeah. I think I'm with you. But it's just that the early writings were continental philosophy, so if you didn't have a background in that, you wouldn't really find them legible.

Now he's talking about capitalism and markets and AI, and that's more legible to the Silicon Valley crowd. But yeah, I think there's a through line in his thinking, for sure.

Matt Southey: Yeah.

Steve Hsu: Yeah.

Matt Southey: And he kind of switches, I mean, he wrote about Nietzsche in his early days, but he really kind of leans into the Nietzschean thing later on. And so kind of, I divide my dissertation into the first part early land, about the ways in which he's doing kind of like a T Esotericism while the second part of his career since he moved to China is about like a Nietzschean in science where he is almost like he's talking about Yeah. HBD, he's talking about like kind of eugenics ish stuff he's talking about instead of Nietzsche's will to power, he calls it the will to think that like beings don't, like Nietzsche said, beings desire an increase in their powers and an increase in their capabilities.

And Nick says a better way of thinking about it is that beings desire to create higher intelligence, either in themselves or in others, and that this is actually the functional path the universe takes. And by the way, when Land is doing his philosophy, he thinks he's being totally descriptive. He thinks he's not actually saying, this is what should happen, or this is what shouldn't happen. He's saying: these are the functional constraints on the universe, and therefore this is what will happen. Like you said, how could it end up differently?

Steve Hsu: Yeah.

Matt Southey: And so, in that way, in his view, to say, I'm an accelerationist, I want this to happen, or, I'm not an accelerationist, I don't want this to happen, is irrelevant.
Steve Hsu: It's beside the point.

Matt Southey: Right. It's never been about us. Yeah. Like the historical process is just not related to the human story.

Steve Hsu: Yeah. I sympathize with Nick a lot on this point, because on my podcast, I am 99%, or maybe 95%, purely descriptive and not normative. I very seldom take normative positions. But if I say something like, oh, China's kicking our ass in this particular technology vertical, people think it's a normative statement. Like, oh, Steve is really happy that China is kicking our ass. And it's like, no, I'm describing it to you. We need to share the facts about what's actually happening and analyze them together. Then we can have a fruitful conversation about the normative aspects, or what the future's gonna be like.

Without that, we're just talking past each other, just two retards scuffling in the hallway, right? So for me, most of what I'm doing is purely descriptive. We have to get the facts straight, and then discuss. Which is how physicists behave: we have to agree on what the data says, then we can analyze the data, and then we can move forward. If we don't do even that zeroth step, there's no point in having the conversation. Right?

Matt Southey: Yes. And as you know, you know, Land is such a sweet, nice guy in person.
Steve Hsu: Yeah. He's a, he's a real gentleman.

Matt Southey: And yet his philosophy feels so dark and edgy, you know. He's talking about human extinction, talking about being replaced. And this kind of disjunction is really interesting, 'cause people are expecting him to be this badass, you know? They're

Steve Hsu: expecting him to be a diabolical, right, kind of genius figure who's cackling with glee as the humans are replaced or something. Right. And he's not like that at all. He's actually, like, your most genial friend. Yeah. But let me say it this way. I think this dark, well, okay, I shouldn't say it too loudly, 'cause we're literally surrounded by doomers as we record this conversation, but.

But all this stuff about, like, oh, what's gonna happen to the apelings? What's gonna happen to the first apes that could walk upright on this planet? Oh my God, boohoo. I think from the complex systems view, the physics view, it's basically like: you have systems that can develop complexity.

Some people call it negentropy. They can extract useful free energy from the systems around them and build progressively more complex structures, either within themselves, as cells or neurons, or as machines that they themselves build to use. And if you just zoom out a little bit, you say: oh, nature allows this organizational complexity to develop, and it does develop over time. And we've now reached a point where, like I said, science didn't exist more than a few thousand years ago, and now it's suddenly zooming ahead and we're making these artificial beings.

So this complexity is bound to emerge from the system. The laws of the system permit it, and it will emerge. And any intermediate layer in that time is like: oh, at this point things were run by the ape beings, and they were using their hands to do things. But then eventually powerful factories full of robots are doing the thing, and then, oh no, actually nanorobots are doing the thing, et cetera.

So it's just a natural progression. It's a description of the emergence of greater and greater complexity, greater and greater intelligence in the universe. And what's amazing is that the laws of physics allow this process to happen. And Land would say it's inevitable that it's gonna happen. And yes, apes are only an intermediate bootloader, one layer for one era, and they're not gonna exist forever. It's crazy to think that they would exist forever.

Matt Southey: If Land is wrong and orthogonality is correct, then I think it will be a terrible thing for us to die and for other beings, without our values whatsoever, to come into existence. But I hope all is well and all ends well. I think if Land's wrong, then it will be pretty catastrophic.
But I guess that remains to be seen.

Steve Hsu: Alright, let's stop, because some doomers are chattering in Cantonese about 50 feet from us, and it might be polluting our audio. But I think we've covered this issue in sufficient depth, and I think at least some of our audience are still with us and enjoying it. So why don't we stop here. Matt, it's been a pleasure chatting with you.

Matt Southey: Pleasure chatting with you.

Steve Hsu: Okay, thanks.

Creators and Guests

Stephen Hsu
Host
Steve Hsu is Professor of Theoretical Physics and of Computational Mathematics, Science, and Engineering at Michigan State University.
© Steve Hsu - All Rights Reserved