Dreamers and Doomers: Jeremy Nixon at AGI House – #105

Steve Hsu: My impression was they were always gonna go for AGI, but they felt like, we are nice, and if we control the AGI or we produce the AGI, it's not as threatening to humanity as if Demis does it.

Jeremy Nixon: Yeah. Yeah. And to be clear, I think that entire analogy is broken, and I'm happy to decompose that if you're interested.

Steve Hsu: Go. Yeah, go for it.

Jeremy Nixon: Well, where to begin? I think the place to begin is the concept of AGI. So I think that Greg and Sam and Ilya deserve a lot of credit for branding themselves with what was a crank-ideology kind of idea.

And the folk I worked with at Google Brain were judgmental of OpenAI for being dreamers, right? For believing that the goals that were set out in the sixties for AI were all gonna be achieved by them. Mm-hmm. And in many ways they were totally right to be dreamers, and we live in this world where it was all correct.

And the reason that I named the house AGI House is that we got AGI. So what do I mean by that? 'Cause that feels strange and controversial. Well, actually there's a prior paradigm, which is narrow intelligence. It used to be the case that you would train a different machine learning model per task. So machine learning, I know, is a term that's no longer popular, but convolutional neural networks, you know, even in the era of ImageNet, had to be trained per task.

And you would have to collect new data in order to have your model do something new that wasn't already pre-trained into the model. And so for me, the advent of the foundation model is about generality.

Steve Hsu: Yes.

Jeremy Nixon: And to be clear for your audience, in artificial general intelligence, the G is general.

So it's about whether you have an intelligence that can more or less do anything as opposed to being confined to its specific task.

Steve Hsu: Welcome to Manifold. We're here at AGI House Twin Peaks, San Francisco. Our guest today is Jeremy Nixon. Jeremy, welcome to the show.

Jeremy Nixon: Steve, an absolute joy to be with you.

Steve Hsu: An honor to have you. I am off camera because you are so cool in your equation-covered jacket, and the view of the bay is so awesome in the background, that I don't want to take up any space on the video.

Jeremy Nixon: Good to know.

Steve Hsu: So, Jeremy, super happy to have you on the show. This interview is for Manifold, of course, but we are also shooting a documentary film project. The tentative title is Dreamers and Doomers, and on this side of the bay, at places like AGI House, we want to talk to the biggest dreamers, who see a positive future in AI and are working to make that future. And you are the first interview in that series.

Jeremy Nixon: I'm honored, and yeah, to be clear, lots of love for both the doomer and the dreamer perspective. As I told you, I believe that in many ways the doomers are the true dreamers. And it's almost in my disappointment with the technology that I can be interviewed as a dreamer.

I have some sense that there's incredible potential. So we are here at the AI physics hackathon. You've just walked us through your GPT-5-generated results around the Schrödinger equation. And it's obviously exciting to experience the potential of a full comprehension of our reality through an alien intelligence that we have invented. And we get to be the bridge generation to that new form of understanding itself, right? So it feels like we've invented something that might be as foundational as language, right? Where you're using this extension of yourself, of your tools, to create new forms of understanding that didn't exist before. The embeddings in GPT-5, its representation of the knowledge, are plausibly deeper than the textual representation of the mathematics, on this question of whether the equations are linear. And so, yeah, I guess, why not be excited about the future of physics? Where hopefully we end up with new inventions, we end up creating things that were inconceivable to ourselves. Like, our entire frame for what's possible is a function of our understanding of physics.

Like, how recently did we not know about quantum mechanics, and eventually quantum computing? What awaits a deeper understanding of the fundamentals of our reality? There's just all these questions that we can ask of systems that may go beyond us, that I'm excited to bring into existence today.
So hopefully at the end of the night, you know, at AGI House, we try to build with the creators. So here, you know, you are the creator of this new way of sort of doing discovery, this feedback loop with your agent, having a proposal and a critique as sort of the structure of the agent. Hopefully the people here can come up with new discoveries in physics today, and we can be a part of this historical moment.
Steve Hsu: Great. So, you know, I am incredibly envious of younger people your age because you're really, you're at the height of your powers and you're experiencing this AI takeoff moment.

And I'm just happy that, now that I'm an old guy, I at least get to see this before I pass from the scene. But I think you guys are in the best position of any generation ever. Maybe not the average person, who could experience some dislocation from AI taking their jobs, but someone like you, who's a brilliant thinker and also an entrepreneur. I just feel so much envy for people like you who are making the most of this moment.

Let me tell the audience a little bit about you, and you can, you can just elaborate a little bit. You grew up in Michigan. Your father was a professor of Victorian literature. You went to Harvard and you studied applied math and many other things at Harvard. And last night you told me, Hey man, I was smart enough to realize I didn't have to go to academia. I could just go to Google Brain. Be paid way beyond what most professors are paid right out of college and work with some of the greatest researchers on the planet. And that's what you did.

Jeremy Nixon: Mm-hmm.

Steve Hsu: So, now how many years ago was that? When did you graduate from college from Harvard?
Jeremy Nixon: Yeah, so I'm Harvard class of 2015, and yeah, it's very true. I see Google Brain as an incredibly lucrative, but simultaneously intellectually deep and rewarding, collaboration, and was able to, you know, publish with the inventor of the diffusion model. To be clear, at the time we had no idea what it would mean to have this sort of paper-based, non-programmatic path.

So, you know, there's a lot of physics influence on... yeah, I was able to publish on meta-optimization with one of the creators of Thinking Machines. So all these folk are now doing phenomenal things. But the idea behind meta-optimization, that a neural network can itself be the optimizer, is actually this incredibly deep theme, and it was a joy to be at Brain.

I didn't end up doing a PhD, but in practice have the publication record of someone who had done a PhD. And yeah, I certainly loved those years of intellectual depth. Like, I invented, you know, a number of uncertainty metrics, which made it to, among other things, the teams at Waymo, with calibrators that are required to detect whether or not someone's present.

There's a sense that you have all this potential to not only do research, but have that research enter reality in a really deep way very quickly. And I have just a lot of love for the research lab as a concept. So yeah, I'm here next to one of my bookshelves. So we have Richard Rhodes's The Making of the Atomic Bomb.

It discusses, yeah, certainly the Manhattan Project, which is an archetypal massive-research-lab-style experiment. Certainly Project Apollo, the Apollo program. So, you know, a lot of these projects attempt to put all the greatest researchers, the greatest minds, into the exact same organization in this very intensive sprint to some glorious research outcome. And why would I be in a sort of backwater PhD program when, yeah, I could be at the center of the action?

Steve Hsu: I think you're totally right. I think you had a lot of life insight in choosing what you did in the 10 years since you left college. And now after some years working at Google, you've kind of gone off on your own.

And I almost introduced you at the beginning of the show as the impresario of AI because you, you sort of pioneered the group house idea here in the Bay Area, right? You had a group house down in the South Bay. We are currently here in San Francisco, on the, you know, with a beautiful view of the entire city.
And you filled this house with other entrepreneurial agentic, super smart people. And when I talk to people in the Bay Area even people who are older, like say up to 40 years old, they say this is their preferred way of living. They'd rather live in a communal environment, you know, a very nice house, big house, a luxurious house actually.

But they are constantly meeting new people through the network, through the other people in the house. There's always some exciting thing going on. Right now you're running this hackathon. Just talk about the scene. Like what, what, what is life like for a 30 ish person in the Bay Area? Who is as talented as you?
Jeremy Nixon: Yeah, I guess, I mean, there's a question: why live alone in a one-bedroom apartment by yourself when you could grab seven of your most brilliant friends, people who, you know, you have a conversation with them and you come up with dozens of worthy ideas, and combined, at the intersection of all of your friends, there are just a tremendous number of brilliant people.

And so you can co-create at all sorts of events. So one of my roommates introduced me to you. She said, oh, you know, I was just talking to Steve Hsu, and I got so excited, because I love, you know, certainly your work on genomic intelligence, and certainly the stuff in Manifold and your blogs. Like, oh wow, that'd be incredible.

Just a very impromptu kind of introduction that creates, you know, a richer social life. And I think in a lot of cases people live in a way that's within their means, and they degrade their lifestyle in order to do it. But the nice thing about co-living is you can rent a huge mansion, but actually pay a totally reasonable amount, because you and your friends, you know, split the cost of the space.

And in the mansion you can bring 300 friends for a massive party, or 200 friends for a big hackathon, or, we have every week a reading event. So we have a reading society, and people, you know, explore everything from philosophy to sort of major textbooks in a single sitting. And that kind of thing is hard to do if you don't kind of own the space.

So as soon as you have the space, you can be creative and do anything. And so there's something about, yeah, I guess, the style of creating society and communities that I love. But I also think at the core of it is a deeper intention. So Silicon Valley has a very special culture. You know, we have this sort of hero worship of the founder archetype, and we have a novel currency.

We have sort of cryptocurrency. We have novel religious movements, the apocalypse cults, you know, that the Bay Area is infamous for having spawned OpenAI and Anthropic. But simultaneously, you know, there's techno-utopianism, in the sense that there's infinite potential.

So I guess regardless of where you stand on these kind of intense beliefs about the future, you are part of a complex ecosystem that's interactive and that has a lot of very unusual beliefs relative to the general population. Like, actually, you know, the EAs and the e/accs, they have much more in common with each other than with, you know, a standard person working a standard job.

Steve Hsu: So let's break it down. So, my listeners, you know, I've got plenty of normie listeners who are just, you know, a professor of X at some perfectly respectable Big Ten university, and they have no idea what e/acc is, or EA, or how OpenAI and Anthropic, as you just said, actually originated in these kind of apocalyptic cult beliefs.

So let's break it down a little bit. So tell us the story of how the very existence of OpenAI and Anthropic, two companies which are changing the world, which have already in their brief existence changed the way people live and interact with technology. How did they originate from a Bay Area apocalyptic death cult?

Jeremy Nixon: So, I mean, partially I feel like I'm in the right position to answer this, 'cause I recently started a new company called Infinity, which is all about automating research, the way that AI systems today can do code generation and automatically optimize that code to do discovery, as a way to evaluate and discover.

And so it's in the same tradition as a lot of these big research labs, which start with this assumption that you'll have recursively self-improving AI systems. So there's a very long history, but basically, everything from the Singularity Institute to its renaming to MIRI, to Elon's fear of Demis.

Steve Hsu: Okay, let me break in. You know, people are coming to all this cold, and I wanna help break it down for the audience.

Jeremy Nixon: There's a lot of details, so lemme keep it simple. So, yeah, why does OpenAI exist?

Steve Hsu: Yeah.

Jeremy Nixon: Well, there's a version of the story, which is: Elon became aware of Demis Hassabis, who created DeepMind. DeepMind had produced this phenomenal Atari system based on deep reinforcement learning. And to be clear on dates here, this is like 2011 through 2015, and actually Peter Thiel, interestingly, met Demis at the Singularity Summit back in 2010. So why was DeepMind funded? Well, actually, the Bay Area has the singularity concept, and Michael Vassar was running this conference. Demis gave a talk on computational neuroscience.

Peter invested in DeepMind. At the time Demis was working in computational neuroscience, in the legacy of Marr, and he was at the Gatsby unit, so he had his own history. But Elon met Demis Hassabis, this founder of DeepMind, and was horrified that they were going to, in the strong language of the initial DeepMind proposal, solve intelligence and use it to solve everything else.

And there was this sort of threat in the vibe of Demis Hassabis. So this mind, this person who's smarter than anyone, he's won the Mind Sports Olympiad five times, but he's also created these very edgy video games that have religious themes. You know, you'll play the position of God in his games. And so Elon believed that Demis was a danger to the future of humanity.

And to be clear, Elon loves this sort of savior-complex psychology. And I'd say a lot of that category of God complex characterizes the apocalyptic crowd. Generally speaking, it's common. I don't want to pretend that climate change isn't, you know, in this category of belief; it is. Everyone's interested in saving the world. And how do you be a good person? Will you play a part in the story of saving the world?

Steve Hsu: so can I, can I add something?

Jeremy Nixon: Yeah, go ahead.

Steve Hsu: So, I have it on very good authority. I don't know that Elon still believes this now, or exactly how much he believes it at this moment, but at the time he was very fascinated by the idea that we live in a simulation.

There are only a few player characters. Everybody else, like me, I'm sure you're a player character, but people like me are just NPCs. But definitely Elon thinks of himself as a player character, and Demis is also a player character. And he was afraid that the point of the game was to get to AGI first. And the reason he had to help create OpenAI, I'm jumping a little ahead of the story here. Yeah.

Jeremy Nixon: Yeah.

Steve Hsu: The reason that he had to help create OpenAI was because otherwise Demis and Google were gonna win.

Jeremy Nixon: Mm-hmm.

Steve Hsu: The race to AI. And this is after Google acquired DeepMind, which was Demis's company.

Jeremy Nixon: Yeah, yeah, exactly. Yeah. Elon was bidding against Larry Page, and he lost the deal in that acquisition, but yes. I think Elon's obsession with this simulation concept, it makes sense if you've lived the life of Elon. Like, think for a moment.

Steve Hsu: Yeah.

Jeremy Nixon: Like, you're the richest man in the world, you created SpaceX and

Steve Hsu: Right.

Jeremy Nixon: walked on water, so to speak. And this was 2014, so he wasn't an ambiguous character, right, in the way that he is now. And so, yeah, it would make sense to model the world that way. Yeah. So there's something to be said, I think, for the Samo Burja live-players concept, that there are only a few true agents. But I do see you as a live player.

Steve Hsu: Okay, thank you.

Jeremy Nixon: I do think you created a company in, you know, this sort of general intelligence line that would not otherwise have existed. I think it's counterfactual. And in general, yeah, I think there's a lot to be said for your work. So yeah, Elon and Sam Altman and Ilya Sutskever collaborated to create OpenAI in the face of the threat from DeepMind.

And if you look at their internal comms, it validates the story. They're now, you know, in a legal sort of war with one another, so we can see these comms around how, you know, we can't allow Demis to control AGI and superintelligence. And so they create this new entity. It was for me a shock at the time, 'cause I thought that their marketing made no sense.

They called it OpenAI. They planned to open source everything, but they simultaneously claimed that it would be this recursively self-improving artificial intelligence that was capable of, you know, achieving more or less anything. And so trying to reconcile that as a safety-minded person is really hard. OpenAI made no sense at the opening.

Now, they modified their ideas quite quickly, which it seems was actually kind of a debate between Elon and Sam about what it should be. But at the opening it made no sense from a safety perspective, and the justification for it was, oh, Demis's approach is gonna end humanity.

And to be clear, this is like in the wake of Asilomar; there were these sort of series of conferences that Elon attended where folk like Max Tegmark had raised a bunch of these fears about the specter of apocalyptic superintelligence and wanted to address those problems with new organizations.

And yeah, I mean, it all is very ironic in the end. So, you know, both OpenAI first and then Anthropic turned to acceleration as their primary practical ethos, while ostensibly having these founding roots in the idea that what they're doing is an apocalyptic process.

Steve Hsu: And so my impression was they were always gonna go for AGI, but they felt like, we are nice, and if we control the AGI or we produce the AGI, it's not as threatening to humanity as if Demis does it.
Jeremy Nixon: Yeah. Yeah. And to be clear, I think that entire analogy is broken, and I'm happy to decompose that if you're interested.

Steve Hsu: Go. Yeah, go for it.

Jeremy Nixon: Well, where to begin? I think the place to begin is the concept of AGI. So I think that Greg and Sam and Ilya deserve a lot of credit for branding themselves with what was a crank-ideology kind of idea.

And the folk I worked with at Google Brain were judgmental of OpenAI for being dreamers, right? For believing that the goals that were set out in the sixties for AI were all gonna be achieved by them. Mm-hmm. And in many ways they were totally right to be dreamers, and we live in this world where it was all correct.

And the reason that I named the house AGI House is that we got AGI. So what do I mean by that? 'Cause that feels strange and controversial. Well, actually there's a prior paradigm, which is narrow intelligence. It used to be the case that you would train a different machine learning model per task. So machine learning, I know, is a term that's no longer popular, but convolutional neural networks, you know, even in the era of ImageNet, had to be trained per task.

And you would have to collect new data in order to have your model do something new that wasn't already pre-trained into the model. And so for me, the advent of the foundation model is about generality.

Steve Hsu: Yes.

Jeremy Nixon: And to be clear for your audience, in artificial general intelligence, the G is general.

So it's about whether you have an intelligence that can more or less do anything as opposed to being confined to its specific task.

Steve Hsu: Yeah. I think your, your, your definition of AGI is perfectly reasonable. General versus narrow.
Jeremy Nixon: Right.

Steve Hsu: And we have generalized, so by your definition, we have achieved AGI through these foundation models.

Jeremy Nixon: Yeah, yeah. The models are general; that's why we call 'em foundation models. So the foundation model term was created to make a distinction between that category of model and the category of models that came before it. And so at a minimum, I think it's easy to claim there was a huge jump in generality when we got to the foundation model, to this sort of, you know, model where you basically prefix the task.

You run an LLM on it, it has tremendous amounts of training data, it sees more or less all text on the internet. It is capable of solving arbitrary problems that you give it in language processing. This paradigm, in my opinion, at the time was the AGI paradigm. And so I was surprised that the foundation model concept gave these people an out to say we don't have AGI.

And then there's certainly human-level AI and superintelligence, which, you know, jagged superintelligence is the correction. But obviously you just want the pure thing, not a correction to a bad term. All these ideas have, you know, been given as examples of how we could conceptualize things. I have my issues with them too, but I think that the AGI concept for me was fulfilled by Claude.

Steve Hsu: I think that's totally fair. For a reasonable definition of AGI, I think you're totally correct, right? I think what these guys are worried about, across the bay over in Berkeley,

Jeremy Nixon: Yeah.

Steve Hsu: Is recursively self-improving AI that gets out of control and maybe surpasses human intelligence by vast amounts, has a sense of agency itself, and wants to do things in the universe which aren't necessarily aligned with what we want.

So I think they, that's what they would call the threat. And some people, people differ on how many years away that threat is. Maybe it's never, maybe it's five years, maybe it's one year.

Jeremy Nixon: Hmm. Yeah. I do think that's a, a succinct representation of their present representation of things. But yeah, I spent a lot of time in and around those ideas.

I was reading LessWrong in 2011. I, you know, I read all the Sequences. I certainly read Nate's coming-of-age stuff, the Dark Arts of Rationality, and I do think that in many ways that community has fallen victim to a lot of the cognitive biases that they are typically obsessed with. And

Steve Hsu: Just to clarify that for the listener.

Jeremy Nixon: Yeah, sure. Yeah.

Steve Hsu: So the culture within the so-called rationalist community, which is centered around some core texts like Eliezer Yudkowsky's, this is the Harry Potter sequence, right? Where it's set
Jeremy Nixon: HPMOR, Harry Potter and the Methods of Rationality.

Steve Hsu: Of Rationality, right. So it's set in a kind of Harry Potter fan fiction universe.
But the protagonist is learning about how to use Bayes' rule and reason rationally and overcome biases.

Jeremy Nixon: Exactly.

Steve Hsu: Which I think is all like, for me, super admirable even. But

Jeremy Nixon: yes, though the characters are incredible.

Steve Hsu: Yeah.

Jeremy Nixon: And it's just so beautiful to watch Harry use enhanced intelligence to make logic out of magic.

Steve Hsu: Yeah. And I would say even some top-level scientists are not really that rational. Like, they maybe can apply the scientific method in their narrow area, but when they reason about politics or something like that, they...

Jeremy Nixon: I see AI as a political thing, actually, like a tribal thing. It's strangely activating the same subset of people's tribal identities that typically would be associated with a political position.

So, yeah, when Ng or Geoff Hinton talks to you about the future of AI, it's typically from a political perspective, yes, a social one, yes. It's like, well, we need to make sure that the people, you know, it's communal, the people are given what they're due. It's anti-corporate in a lot of cases. You know, there's this sort of political question around whether companies should or shouldn't control things that is really important to folk.

And I would say, yeah, even, you know, the government has this China-centric race conception of AGI. It's

Steve Hsu: like,

Jeremy Nixon: who, which, which geopolitical entity will have control or predominance? And there are contexts where that matters. Like when you're at war,

Steve Hsu: right?

Jeremy Nixon: Who dies or not is a function of that kind of ideological position.

But it's unclear to me that any of these ideological positions, which are tribal in nature, are grounded in the facts of the situation. Yeah. So I kind of take umbrage with the underlying thought process that starts with, you know, having an identity as someone who's trying to save the world, or as someone who, I think the right word is, believes in a utilitarian ethos around effective altruism. And there's the fear of identity: you know, no one will explicitly identify as an EA to you. It's always an adjacency or something. Like, there is a little bit of savvy about how dangerous it can be to think with identity.

But practically speaking, the tribes that have built up around these things are, are like deeply inclusive or exclusive as a function of your beliefs. And so I think there's a lot to be said for how those dynamics play out in Berkeley.

Steve Hsu: Yes. So the rationalist movement is a group of people. It's remarkable because it's self-organized and the main leaders of this community, they don't have necessarily you know, fancy college degrees or resumes. They wrote mainly on the internet. Lots of people, brilliant people like yourself read these things when they were growing up.

Jeremy Nixon: Right.

Steve Hsu: Right. So the website LessWrong, the Sequences, they influenced, in a way, a whole generation of smart kids all around the world. I've been to meetings at Lighthaven where some kid is flying in from Portugal or Japan and saying, I read all this stuff and I'm here to finally meet the people. And they're in ecstasy, having reached, you know, the holy land. It's incredible as a sociological phenomenon. And I actually think in certain ways, if you read some very long discussion on LessWrong, they are sometimes reasoning more rationally than college professors about a particular topic.

So I admire the discourse that I sometimes see on LessWrong. Now, I think you're pointing out a very crucial thing, which is that when they come to talk about existential risks from AI, they are falling prey to the same cognitive biases that they claim to have strengthened themselves to resist.

Right. The whole ethos of the movement is to overcome bias and be rational and reason clearly. But I think your critique is that when they think about existential risk, they're getting it wrong. And it's ironic, I think you said, because they're falling prey to the very things that they had set out to not fall prey to.

Is that, is that fair?

Jeremy Nixon: Well, yeah. I mean, to be clear, they live out a heroic ideology. And so Nate has this blog post, in LessWrong style, called On Saving the World.

Steve Hsu: Yeah.

Jeremy Nixon: And it's about when he decided to become the savior of the world. And just think about how fun it is to live life that way.

Steve Hsu: Yeah.

Jeremy Nixon: I mean, what it is to be at the meaning level, yeah, of somebody whose actions determine whether or not everything exists.

Steve Hsu: Wouldn't it suck if all you were doing is, like, trying to get that promotion and, like, meet that nice girlfriend?

Jeremy Nixon: When you could be saving the whole world. That's meaningful. Exactly.

Steve Hsu: Exactly.

Jeremy Nixon: And to be clear, if it's true, there's nothing wrong with that. It's actually the most admirable thing. And I think that the heroic instinct, and in many ways it's kind of a masculine instinct, like, I would like to sacrifice myself in the ultimate way, for my tribe, for the collective, is in many ways a cornerstone of human psychology.

Steve Hsu: Now is your, is your critique though, that they're looking for an apocalyptic existential threat because they want to be that hero and therefore they have identified AI as that threat, even if it isn't justifiable to do so?

Is that, is that your critique?

Jeremy Nixon: Yes.

Steve Hsu: Okay.

Jeremy Nixon: And to be clear, they say this themselves. They say, oh, I was looking for

Steve Hsu: Yes.

Jeremy Nixon: the maximally effective action that I could take. And what could be more effective than averting an apocalypse? And they write books like The Precipice. So Toby Ord, a philosopher. And to me there's an entire generation of philosophers behind this.

Not just MacAskill, but Singer. Like, utilitarian philosophy leads you to this very interesting long-term conclusion, that if I value long-term utility, all that matters is the probability that everybody dies. And small changes to that probability are worth far more than any amount of good that I could do in the present.

Steve Hsu: Yes.

Jeremy Nixon: And there's a really beautiful elegance to that, that kind of ethics. And it's, it's sacrificial not only of yourself, but of your entire generation potentially because what value is this generation of living people in comparison to the trillions of sentient entities that will exist in the future as a function of our attempts to curtail existential risk.

Steve Hsu: Yes. And you know, you've got now this very intense kind of discussion of these utilitarian calculations, making very strong assumptions about how many people are gonna live in the future and what the probability that a certain technology is gonna wipe out humanity, et cetera.

Jeremy Nixon: Yeah, yeah, yeah. It's all Pascal's mugging.

Steve Hsu: Yeah. And

Jeremy Nixon: yeah, which requires an explanation.

Steve Hsu: Yeah. And it strikes me, you know, as a very fun game for smart people to play. But I think you're beyond that, right? So you are actually engaged in the practical creation of the future, you know, using these technologies, right?

Jeremy Nixon: Absolutely.

Steve Hsu: And so in our documentary, we're profiling what we call the dreamers, as opposed to the doomers.

Jeremy Nixon: Right?

Steve Hsu: And for you, the day-to-day part of life is encountering beautiful, deep ideas and trying to use them to make a difference in the world. Is that fair?

Jeremy Nixon: Yeah. I mean, that is definitely true of me and everyone I interact with.

Steve Hsu: Yeah,

Jeremy Nixon: so the original sort of founding thesis of Genesis was self-awareness in action, and everyone would give a speech about what was worth creating and why.

So yeah, we are like a kind of creation-centric ideology. We think creating new things, bringing things into the world, you know, having a new idea and making that new idea real, is a lifestyle that you should embrace. And so almost everyone is researching, coming up with discoveries, building companies.

So my roommates are folk like Anton Osika, who made Lovable, which is actually an act of agentic creation. If you don't know what Lovable is, it's this website generator. You describe in just a few English words what you want the website to be, and it creates the entire web app for you, login, et cetera.

It's comprehensive. And, you know, so he's blown up and raised at a billion-dollar valuation.

Steve Hsu: Yeah. And Lovable is the fastest growing soft, like it's the fastest growing software product ever in history in terms of like,

Jeremy Nixon: Yeah, it was the fastest-growing ever, I believe, as of five months ago. Some new AI product, it was the fastest ever. But yeah, basically he went from ten to a hundred million in revenue in six to seven months last year.

Steve Hsu: Yeah. Incredible.

Jeremy Nixon: And so Anton's a great example. Andrej is a great example. So he made the Tesla Autopilot system. Well, he led the team that made it and hired that team, and Elon trusted him to do that.

And he and I co-created the AGI House in Hillsborough. And I think his ethos is very creation-centric as well. So people read his educational works on his blog, which teach people how to create high-quality machine learning models. And so, yeah, I guess at the core of the thesis of Neogenesis, sort of the precursor house, was unleashing heroic creation. So as opposed to the hero living inside of the savior complex, it's a distinct God complex. It's the creator complex.

Steve Hsu: Yes.

Jeremy Nixon: And the term, you know, Genesis actually stood in part for the new creation of the universe, the idea being, with sufficient scientific progress, whether it be via Wolfram-style, you know, cellular automata, or, you know, I'm sure you've heard of Programming the Universe, sort of work in quantum computing, a complex simulation.

The idea is basically you can invent technologies that are so creative as to allow you to make entire universes. And I'm sure, like, you know, high-fidelity video game simulations will eventually give way to real physics simulators that do these kinds of complex computations. So we really believe in creation, and it's a different category of God complex. I think it's more optimistic, dreamer-like, as you say. I think the people are happier, less neurotic.

Steve Hsu: Yeah. Well, some of the selection could be, like, if you're prone to neuroticism, then you're over there.

Jeremy Nixon: Yeah. The psychological selection.

Steve Hsu: Yeah. And if, on the other hand, you're intrinsically an optimist and have a sort of positive disposition, then you're over here.

Jeremy Nixon: Yeah.

Steve Hsu: Right. So

Jeremy Nixon: I think that's plausible. And people make the decision on aesthetic grounds. Yeah, oh, these people have, like, an aesthetic of high creation; these people are trying to save the planet. I think that's the positive version of it. In practice, I don't know, folk like Jessica Taylor get paralyzed with fear that any action you take could bring about the apocalypse.

And on that basis you like, don't wanna leave your bed.

Steve Hsu: Now, people over here on this side have been accused of wanting to create the machine god. And how much of that is just a kind of fanciful caricature of what's going on? How much of it is really a deep motivating factor for people who work on foundation models and things like this?

Jeremy Nixon: I don't, I mean, I think the doomers really want to create the machine god, to, I don't know,

Steve Hsu: To make their lives meaningful, or make their resistance meaningful.

Jeremy Nixon: Yeah, I guess. And I think the religious element is really important. So maybe we should start at the beginning. Eliezer, as a writer, began as a deconversion artist. So I would say a huge fraction of his writings in 2008 and 2009 were critiques of Christianity, and the roots of the rationality movement are actually in New Atheism. So there's a deconversion experience that happened at scale to people who were on the internet and were part of the rationality community.

So that in many ways makes him this pseudo-religious leader. He has deconverted you from your religion. And who are you going to look to, to answer deeper questions about meaning and purpose, about who to be? And the rationalists and the EAs give you an answer. The answer is that you can reduce the probability of the apocalypse by addressing existential risk through AI safety research.

And that ideology is incredibly internally coherent and seductive. So I think that when you talk about creating, you know, the machine god, so to speak, a lot of the language patterns there originate in the, frankly, religious and anti-religious patterns of the rationality movement, where cult leaders from every part of the world gather in Berkeley. And I mean, if you ask them, they'll often tell you this explicitly: they're interested in the formation of a new religious movement, and in a lot of cases have been deconverted by people who are in that very community, from Christianity typically, but also from Islam and from other religious movements.

Steve Hsu: In particular though, oh, sorry to interrupt. The AI existential risk community, is their religion basically the Butlerian Jihad from Dune, that thou shalt not create a mind in a machine in the shape of the human mind? Or is there more to their religion than that?

Jeremy Nixon: I, I think it's, it's two sided.

So, I mean, Eliezer was the original accelerationist. I don't know if you've read his work from '98?

Steve Hsu: I haven't, no. Tell me, what was he writing about back then?

Jeremy Nixon: He was a transhumanist. So, yes, remember, H+, and the email lists where Bostrom and Eliezer would trade ideas on how to, you know, achieve every conceivable technological breakthrough.

Steve Hsu: Yes.

Jeremy Nixon: And he had this sort of turning against AI in the face of some theoretical ideas that recurs the self-improvement would be, would create an apocalypse, but actually. Was totally obsessed with every transhumanist ideal bio transhumanism.

Steve Hsu: Yes.

Jeremy Nixon: So the idea of the ideal of immortality, I think Bostrom kind of came back around to it in many ways.

He's now writing about Deep Utopia. He expects on some subconscious level that, you know, AI is not the risk we thought it was, and is much more likely to create a utopia than an apocalypse. And I think that's accurate. Now, yeah, Eliezer had this sort of turning, but the original name of MIRI was the Singularity Institute.

Yes. They were trying to do it, right? Trying to achieve the singularity. Yes.

Steve Hsu: Right. So

Jeremy Nixon: So who's accusing who of trying to build the machine god? No, he's the original pursuer of the machine god, and so of course has that concept.

Steve Hsu: Great. That's fantastic. It's a fantastic elaboration of how this whole thing evolved over the last 20-plus years.

Jeremy Nixon: Yeah. And

Steve Hsu: I've never actually heard anyone give that. Maybe in the, in some communities this is, this is well understood, but I'd never heard anybody give that full story.

Jeremy Nixon: Hmm.

Steve Hsu: So, thank you very much.

Jeremy Nixon: Yeah. Yeah. I am surprised that this isn't like everybody's representation of the situation.
Steve Hsu: Yeah.

Jeremy Nixon: 'Cause Sam Altman, hilariously, I don't remember what year it was, maybe in '21 or '22, called Eliezer out for being more counterfactually relevant to the creation of...

Steve Hsu: I think it was more recently than that, actually.

Jeremy Nixon: Oh, okay. Okay. You remember this quote?

Steve Hsu: Yeah, I do remember this quote.

Jeremy Nixon: Yeah. Maybe you can tell us. It may have happened on X.

Steve Hsu: It was on X. Yeah.

Jeremy Nixon: Absolutely.

Steve Hsu: Yeah.

Jeremy Nixon: and no, it's totally true that he's far more responsible for the, the reality of super intelligence

Steve Hsu: Yeah.

Jeremy Nixon: Than Sam or Dario, or, right, these sort of characters who are just playing the role, right, in Eliezer's story.

Steve Hsu: Right, right. Let me descend to something a little more mundane. Because of the huge flow of resources into the development of AI, AI research, AI applications, 2% of our GDP right now is being spent on data center investments, buying chips, scaling AI training and inference. For someone like you who worked at Google Brain, obviously a polymath, and understands everything that's going on, knows everyone in Silicon Valley and in San Francisco, this must be an amazing moment for you. So you have investors coming to you saying, hey, Jeremy, if you can just introduce me to some... I need to get exposure to AI-native companies started by young people, and you're the guy who can do it for me. Right? Do you have people just beating down your door trying to get some of your time, some of your expertise?

Jeremy Nixon: Yeah. So it is the case that, well, I've had 70 roommates at my house.

Steve Hsu: Seven zero.

Jeremy Nixon: Seven zero. Yeah. Yeah. 16 of those roommates have started unicorn or centimillion-dollar companies.

Steve Hsu: can I move in tomorrow? Because those are pretty good odds, right?

Jeremy Nixon: Yeah,

Steve Hsu: yeah, yeah. So no,

Jeremy Nixon: Like, so my roommate's just now raising at a half-billion-dollar valuation for his company that he started, I wanna say, 13 months ago.

And, yeah, I guess I told you about Anton, but certainly. My other roommate, I shouldn't disclose how much, but he and Sam Altman started, this new Bell Labs,

Steve Hsu: right.

Jeremy Nixon: And it's doing incredible sort of cultural work in allowing scientists, physicists actually in a lot of cases Yeah. To do their sort of wildest dreams style of research.

Steve Hsu: Right.

Jeremy Nixon: and so yeah, people at the house are, are up to all sorts of incredible things and people love investing in it.

Steve Hsu: So tell me, where will Jeremy Nixon be in five years?

Jeremy Nixon: I mean, yeah, I am a dreamer, as you say. So my ideal scenario would be that my company, Infinity, had become the process by which the vast majority of scientific and technological progress had been made.

To be clear, we are obsessed with creating research agents. The core idea is that most of the kind of epistemological structures that support science, whether it be, you know, in the context of medicine, the trial, or in the context of computer science and ML, the benchmark, a lot of these epistemological methods can both be improved on by automated systems and be used as an automated feedback loop.
So we're all familiar with the sort of discoveries in mathematics that have been made with AI systems. There you have a really simple epistemology: the proof is correct or incorrect.

Steve Hsu: Mm-hmm.

Jeremy Nixon: But in all these contexts where you have a more ambiguous epistemology, where it's uncertain, you need to create metrics.

And as soon as you have a measure, your AI system can basically be trained against the measure. And so the process of automatically optimizing against the metric is how Infinity optimizes inference. At present, inference is how all AI is executed: it's a forward pass of a large language model or diffusion model that produces a textual result or an image, and we optimize that inference.

So my intention with Infinity is to build a general research agent that doesn't just solve this problem, but can really solve problems across the engineering disciplines, mechanical, electrical engineering, where you have simulations that allow you to measure the capabilities of physical objects. And as soon as you get a measure of whether your physical object achieves some goal, maybe it's a plane and it has some aerodynamics and you're optimizing its speed, you can have a coding agent write the relevant CAD code that configures the simulation and, you know, optimizes against the real world.

Similar story in bio. So if you have a tumor, that cancer tumor can be sequenced, and an antibody that has a binding affinity to the tumor has a metric that can be optimized. And you have some constraints around toxicity. But basically these metric-optimization problems allow you to create personalized cancer treatments, where you create antibodies per person as a function of an AI discovery process that reliably finds, you know, antibodies that bind really well to the tumor and not to any other part of the body, which would create toxicity.
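To make the loop Jeremy describes concrete, here is a minimal sketch of the propose-evaluate-select pattern, under stated assumptions: the functions propose_candidate and evaluate_metric are hypothetical placeholders standing in for, say, a generative model that emits CAD code or an antibody sequence, and a simulator or assay that scores it. This is not Infinity's actual API, just an illustration of optimizing against a metric.

```python
# Minimal sketch of a metric-driven discovery loop (illustrative only).
import random

def propose_candidate(best_so_far):
    # Placeholder: in practice a generative model proposes a new design,
    # optionally conditioned on the current best candidate.
    return best_so_far + random.gauss(0.0, 1.0)

def evaluate_metric(candidate):
    # Placeholder: in practice a simulator or assay scores the candidate
    # (aerodynamic drag, binding affinity, benchmark accuracy, ...).
    return -(candidate - 3.0) ** 2  # toy objective with its optimum at 3.0

best = 0.0
best_score = evaluate_metric(best)
for _ in range(1000):
    candidate = propose_candidate(best)
    score = evaluate_metric(candidate)
    if score > best_score:  # keep only improvements
        best, best_score = candidate, score

print(f"best candidate {best:.3f} with score {best_score:.3f}")
```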

Steve Hsu: Will, will your agents be based on someone else's foundation model and you're fine tuning the agent or building scaffolding around it? Or are you, will you someday be training your own foundation models?

Jeremy Nixon: So we have an agent that discovers new foundation-model training processes. So one of its discoveries, which we are about to publish, is around stacking: it has discovered a new way to ensemble models together, so that when they make predictions,

those predictions are trained on by yet another model that refines the decision of which model to pay attention to. And that basically allows the ensemble to more effectively represent the diversity of perspectives on the problem. And in many ways, actually, I could see it like multiple proposals. So you have this setup, this sort of critique model, where there's a proposal and a critique.
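For readers unfamiliar with stacking, here is a minimal sketch of the classical version of the technique the conversation gestures at, using scikit-learn: several base models make predictions, and a second-level model is trained on those predictions to decide how much to trust each base model. This is illustrative only, and is not the specific variant Infinity's agent discovered.

```python
# Minimal sketch of classical stacked ensembling (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two base models with different inductive biases.
base_models = [
    ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]

# The final_estimator is trained on the base models' out-of-fold
# predictions, learning which model to weight on which inputs.
stack = StackingClassifier(estimators=base_models,
                           final_estimator=LogisticRegression())
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))
```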

Steve Hsu: Yeah.

Jeremy Nixon: And internalizing these patterns in ensembles is actually a really scalable technique. And so what's interesting about the discovery is it didn't come out of my intuition. An AI system that has an ontology of the space of possible proposals that it could make discovered this research proposal, coded it up, evaluated it, and showed that it was outperforming against tens of thousands of other research ideas.
And so the insane scale of these AI scientists is part of what allows them to, in my mind, in future be a huge fraction of research progress. So a previous launch of mine, so I've had a bunch of product launches, one of them is a research paper generator. So we created what we called ArxivGen, which was like the arXiv, but every paper was created by a data analysis agent. So we would get a real data set, it would use code generation to perform data analysis, and go until it discovered something of merit. And then it would publish the result, and it would publish it with code, so it's kind of replicable. And it's this sort of system for creating a version of the arXiv which is totally generated by AI minds.

And so there's the sense that there'll be two arXivs. There's the human minds working, you know, with their meat, and there's the data center of AI scientists, which are generating a much larger number of papers and maybe more minor insights. But the core thing with this AI scientist paradigm is everything is totally replicable.

There's no nonsense around sort of the political scheming that goes into research and academia. It's a lot of genuine...

Steve Hsu: The vision for Infinity, your company Infinity that you're describing, where, if I had to put on my Eliezer hat, I would say it's getting dangerously close to things which can improve AI and improve themselves. Right? So,

Jeremy Nixon: Well, that's one frame. But another frame is, like, you know, you talked about your own mortality at the beginning of this conversation. It seems to me that we are in far more danger of allowing this generation of people to go without antibodies that cure their cancers, because we are neurotically obsessed with a paradigm that... actually, I didn't go into detail about this, but basically good old-fashioned AI is the context

where Eliezer developed a bunch of these ideas, and his original thesis was about the fragility of AI. It'd be like a genie; the fragility of its interpretation of your request would mean that

Steve Hsu: yes,

Jeremy Nixon: if you asked it to pick a strawberry off a plate,

Steve Hsu: yes,

Jeremy Nixon: it would destroy the world, because it wouldn't understand that, you know, it didn't have to fulfill your request to the umpteenth degree.

And it would verify to some extreme degree that it had done that. Another example he would give would be the paperclip factory. So, to be clear, you can run a paperclip factory using these models, and it's not gonna destroy the universe. Mm-hmm. But the idea at the time was you'd ask an AI to make paperclips for you, and in its overzealousness it would turn the entire universe into paperclips. And Eliezer took these ideas very seriously. I know it sounds a little bit ridiculous or absurd.

Steve Hsu: No, I think plenty of people on that side of things still take these ideas seriously.

Jeremy Nixon: Yeah,

Steve Hsu: yeah, yeah, yeah.

Jeremy Nixon: And so to be clear, I think the grounding intuition for these ideas is good old-fashioned AI. So, you know, a researcher in that tradition would have intuitively thought that that kind of problem would occur. The frame is called edge instantiation in optimization. It's about the way optimal solutions are typically at the edge of solution spaces, and they just assumed the kind of fragility that the old versions of AI had.

Yes. Where you do search and you find extreme solutions, and the kind of problems that computers generally have. So you write code that's off by a single character, and it doesn't compile; it fails entirely. These kinds of problems would afflict general intelligence. But they didn't anticipate that machine learning would be the form factor of general intelligence. And so back in 2017, I talked to Eliezer about gradient descent, and it was clear to me that he had no idea how gradient-based optimization actually worked, and that he had not spent time in deep learning research, despite it, in my opinion, having been six years since the ImageNet moment.

Mm-hmm. You know, three years since DeepMind had produced their deep reinforcement learning algorithm. In my mind, his intuitions needed to be recalibrated for a new machine learning paradigm, and they tried and failed to do it. So MIRI in 2017 had this project with, you know, Jessica Taylor and Andrew, a few others, who tried to sort of do machine learning research around agent foundations.
But Eliezer's reaction to the failure of the sort of game-theoretic agent foundations research agenda that they had at MIRI was to declare, and this was an ego issue, that since he's the smartest person and was unable to solve AI safety, no one else

Steve Hsu: is gonna solve it.

Jeremy Nixon: Yeah, via game theory. So they had a number of novel decision theories, so they had what they called causal decision theory, and the problems in decision theory that these failed to address, they believed, would be at the root of the apocalypse. And I tried to convince them that some of these things are irrelevant to machine learning and irrelevant to deep learning. And they concluded that, well, since it's not possible for us to solve AI safety, there will not be any group that could ever solve AI safety, and they kind of went down this recursive dark hole, in my opinion, of like everything is doomed. He published this work in, like, I don't remember what year.

Steve Hsu: I remember. Yes, I remember this. It was just like, yes.

Jeremy Nixon: Very dark stuff.

Steve Hsu: Yes.

Jeremy Nixon: And I think kind of out of touch. A lot of people who are in machine learning are just like, yeah, I don't know, it's hard for me to justify it. I actually think it's a little tragic, and if he was sort of more, I guess, education-wise agentic, and had gotten up to speed on machine learning and RL, and really architected his ontology of x-risk around machine learning, he would've come to cleaner conclusions.
Steve Hsu: Would you say it's fair to say that the subsequent generations, so the kids that are just finishing the MATS program, various AI safety education programs at Lighthaven,

Jeremy Nixon: Yeah.

Steve Hsu: Know more about how current foundation models function, more about gradient descent, and they tend to have a more moderate, but still emphasizing AI safety and risk, perspective than the OGs, the original guys like, yeah,

Eliezer. I find it hard to get from them a sharp statement saying it's all doom and gloom. They're a little more measured than that. They're just saying, hey, we're on the lookout for the safety risks that are gonna emerge from this technology. And they won't be as categorical, I think, as Eliezer, but they just want us to focus more on safety. Do you think that's fair?

Jeremy Nixon: Yeah. Yeah. I mean, I think it is purely a perceptual bias now, which is actually a pretty weak

Steve Hsu: Yes.

Jeremy Nixon: version of the doom, right? You can look at any technology with, like, maximally jaded eyes and see the ways in which it's dangerous. And that is, in my opinion, much more anodyne than Eliezer's apocalyptic perspective.

Steve Hsu: Right.

Jeremy Nixon: And yeah, I think it is wholesome

Steve Hsu: better for their employment too.

Jeremy Nixon: True. Yeah. Yeah. Yeah. I mean, I don't, I don't know. I don't really buy that people are that financially motivated in that community?

Steve Hsu: They're not, but

Jeremy Nixon: I really think they're kind of genuine and like the indoctrination works that is real. And so people genuinely believe

Steve Hsu: No, I agree.

Jeremy Nixon: It's not like a

Steve Hsu: but the side effect. But the side effect is that they're tolerable colleagues at the big labs because instead of saying like,

Jeremy Nixon: it's not that they're tolerable, the founders of those labs are all believers.

Steve Hsu: Sure.

Jeremy Nixon: Not just that, exactly.

Steve Hsu: Yeah. Yeah.

Jeremy Nixon: But,

Steve Hsu: But young people can easily get jobs in AI safety. They couldn't if they were really violently against everything, if they were, stop AI, you're killing us all.

Jeremy Nixon: Yeah.

Steve Hsu: If that was their attitude, they could not get jobs at these labs. Whereas if they're just sort of like, I don't know, I'm helping you with interpretability.

Jeremy Nixon: So there's a point at which Sam turned against this stuff, and, to be clear, I don't think it was unjustified.

Steve Hsu: Yeah.

Jeremy Nixon: I think a lot of folk at Anthropic are really genuine believers.

Steve Hsu: Oh, I agree. I'm not saying...

Jeremy Nixon: And they very much accept this kind of person, and, like, look for this person actually.

Steve Hsu: Oh, I see. So they're even looking for the person with a kind of apocalyptic interpretation.

Jeremy Nixon: Yeah. I think with the serious focus at Anthropic around existential risk, yeah, they're open-minded about apocalypse risk and are interested in working with people who are interested in apocalypse risk. Yeah, I think that's okay. It's still very much their vibe. I do think they get a lot of flak from the rationality community for not being that way enough. And, like, people from Open Phil leave and, like,

Steve Hsu: right.

Jeremy Nixon: You know, full cri critique the turn. It feels like a betrayal every time. And actually there's this entire dynamic in the rationale community of this sort of purity contest of like, who is more purely like opposed to AI progress, right? And like, more purely afraid of AI, more calling for, stop calling for pause, calling for the data centers to be bombed, calling for like the end of progress.
I just think a lot of those factions are like, held in some substantial regard and the parts of the am the parts of the philanthropic world that are kind of more apocalyptic. So it's not obvious that it's like a financial hit, but I just also, I mean, I think I just wanna reiterate it's not financial for those of these people.

Steve Hsu: Yeah, I didn't mean to imply that. I just meant that if you were too hardcore, your movement would possibly be limited to being completely outside of the labs, as opposed to having some presence inside the labs by being a little more reasonable.

Jeremy Nixon: Right, yeah. Typically, I guess.
I'm also pretty frustrated by the lack of action orientation in the previous generation of Berkeley rationalist group-house worlds. They write blog posts, but they typically don't start companies. When I was at Harvard, I saw the 2014-style Tesla, SpaceX version of world-changing, where if you want to pull off Apollo-scale projects, the new model is you start a company like Google or SpaceX, which has some engine of growth, and it transforms our capacity to go to a new planet or to do something incredible.

And so I really wanted to live in a more agentic, productive world than was on offer in Berkeley with the rationalists back in 2016, 2017. So a lot of the ethos of AGI House and Neogenesis and Genesis is actually group-house culture, but with productive action as the output of intellectual work. It's about self-awareness, yes, but expressed in action.

So you need to be in the arena. I don't have too many roommates who are not in the arena.

Steve Hsu: Yeah.

Jeremy Nixon: You know what I mean by the arena?

Steve Hsu: Yeah, the arena. Teddy Roosevelt had a famous quote: you can be a critic sniping from the sidelines, but it's the one who gets in the arena, with blood and sweat on his face...

Jeremy Nixon: Yeah, blood.

Steve Hsu: You're the one who really makes it happen. And I totally agree with that.

Jeremy Nixon: Yeah. I guess there's an appreciation for real-world action that we have, and the vibe in the doomer community often is: let's write another blog post about our neuroses.

Steve Hsu: Yeah.

Steve Hsu: I think what's unique about you and the world you're creating here is the commitment to deep intellectualism, which isn't necessarily there in a lot of venture funds or SaaS B2B. It's...

Jeremy Nixon: Contrarian.

Steve Hsu: Right, right. So you guys are both action-oriented and have deep intellectual commitments, which I think is genuinely rare in the world.

Jeremy Nixon: Yeah. It's surprising to me that that's the case, and perhaps the five-year vision includes this culture being spread. So one interest of mine is an AGI House global, where...

Steve Hsu: Yeah.

Jeremy Nixon: You know, whether you're in New York or Cambridge, Mass, Harvard, MIT, or London, whether you're in India or Germany, wherever you are, there's a subculture where people have the Silicon Valley ethos.
Steve Hsu: Yes.
Jeremy Nixon: Which believes in gold-rush culture. So I see a lot of our culture as basically being obsessed with the idea that a heroic individual can go from rags to riches.

Steve Hsu: Yep.

Jeremy Nixon: Can go from zero to hero. And that, you know, you, you listening to this right now, if you seize upon your creative intuitions and believe in yourself to an unearthly degree. And thrust into reality something which otherwise would not exist. you can get the kind of momentum and the feedback loops behind you and the people in a superstition that allows you to create something incredible. So if, so, if so many people in Silicon Valley have seen their roommate go from unable to pay rent to

Steve Hsu: Eating ramen every meal.

Jeremy Nixon: Yeah, yeah. From eating ramen to wealthy beyond belief. And they have seen them go from nobody knowing who they are to being maybe the most impactful person they've ever met, right? And so there's the belief in life changing, and that that is the way life should be.

Like, what are you doing taking the risk of living a normal life, when you could be, every single year, risking it all on your next great creative idea? And if you are ever right, the world is forever changed. That ethos ideally would be a worldwide ethos. I think that culture is importantly distinct from the "West," quote unquote, and so I think Europe often doesn't have this kind of ethos. It's something specific to...

Steve Hsu: Yeah, but they do have Beijing and Shenzhen now.

Jeremy Nixon: Well, that's interesting. Yeah. Yeah. I think they have an optimism and a kind of technical depth that is rare in the West. I do think of that as encouraging.

I don't know, maybe controversial, but my take on China is it gives me a lot of hope for if and when we culturally drop the ball. So I don't know if you know, but Richard Noah and I wrote this book-length text on the stagnation hypothesis. Peter Thiel has this idea around the stagnation hypothesis.

So it's called "Thiel on Progress and Stagnation." In the world of atoms, we have declined. There aren't planes faster than the Blackbird, which we created in the seventies. We haven't gone back to the moon since the sixties. Clearly, in the world of physical things, we are a decadent civilization. And so part of my hope is that China, in the face of the decline of the West...

Steve Hsu: will pick up the torch.

Jeremy Nixon: Yeah, yeah. It can continue to make progress, and at a minimum be a foil such that we have to get our act together.

Steve Hsu: I think it's definitely happening.

Jeremy Nixon: It's definitely happening, yeah. And so you go to Shenzhen and they order food and the delivery drone shows up in five minutes with the food.
Steve Hsu: Yeah.

Jeremy Nixon: And you're like, whoa, I thought I lived in a first world country. But actually you live in a backwater that is incapable of manufacturing anything of substance where every great American entrepreneur flies to Shenzhen.

You know, if it's Apple, you have Foxconn. Wherever you are in manufacturing, you end up using Asian countries in order to get anything done. And there's some hope that robotics may make it functional to build things in the United States again. But...

Steve Hsu: But those robots are gonna end up being built in China, at least in the short run.

Jeremy Nixon: Yeah, yeah, exactly. And if you don't build them in China...

Steve Hsu: Yeah.

Jeremy Nixon: You're gonna be outcompeted by the country, or the company, that does.

Steve Hsu: Yeah.

Jeremy Nixon: And so I think one thing that the EAs miss about e/acc is that it's not just a reaction to EA. In a lot of cases it's not even a reaction to EA; it's a reaction to the stagnation of American technological and manufacturing progress, and to the ideological forces that led to that degradation. So, you know, in 2004 we were all told runaway recursive climate change is going to end humanity. And on that basis there's a Luddite culture, neo-Luddite I'll call it, typically: an apparatus for shutting down technological progress in the face of environmental apocalyptic fear. And you can 100% repurpose that apparatus to shut down AI progress with apocalyptic AI fear. And it's very easy for me to imagine the next 20 years being dominated by a collective sociology of AI doom, in the same way that we've been sort of...

Steve Hsu: Yes.

Jeremy Nixon: Colonized by the psychology of climate doom for the last 20 years. And it's really important, incredibly important. So Elon's Tesla pitch, which was fundamentally ideological, was: we are leading this transition to sustainable energy, which makes them a good person.

Steve Hsu: Mm-hmm.
Jeremy Nixon: So, in order to be a good person in that world, you need to be solving the apocalyptic problem that everyone knows exists. And even today, if you ask a college kid how to be a good person, it's, well, you go fight climate change. And it doesn't really matter, well, maybe actually it does matter, that that apocalypse isn't coming.

Because if it were a real thing, then somehow it wouldn't be ideologically satisfying, and you wouldn't be able to show that you were worthy for going after it. You'd just be solving a practical problem, and all the practical people would be solving it in a practical way. So it wouldn't have as much of an ideological element to it.

It would feel much more like making energy cheaper or something, which is a real practical problem. And it's boring if you do that. You don't get any good-person points. You know what I mean?

Steve Hsu: Yeah. You don't get the credit.

Jeremy Nixon: Yeah. You don't get it. You don't get moral credit for that.

Yeah. So I think a lot of the moral credit comes out of it having this very ambiguous status: we don't know if this is real, there are some true believers, there are a lot of skeptics. And so anyone who believes is now important and special and is the difference maker.

Steve Hsu: Right.

Jeremy Nixon: If everyone agrees, then you just get good policy, right? Absolutely, immediately. But no...

Steve Hsu: You're not special.

Jeremy Nixon: Nobody gets to be special. Yeah. There's no space for specialness.

Steve Hsu: Yeah. Well, Jeremy, I want to thank you for taking this time to chat with us. It's been fascinating, and I look forward to watching your trajectory over the next five or ten years.

Thanks very much.

Jeremy Nixon: Thank you, Steve. It's been a joy.

Steve Hsu: More of the reasons why you're less worried about existential risk, and what you think the doomers or the more rationality-minded people might be getting right versus what they're getting wrong, and why you would disagree, I guess.

Jeremy Nixon: Hmm. I mean, I just wouldn't frame it quite in those terms.

John: Okay.

Jeremy Nixon: Yeah. It's sort of a social disagreement. So, I don't know if you have full context, but I worked on uncertainty estimation for out-of-distribution control, primarily as a way to address future risks from recursively self-improving intelligence.

So in 2019 I published a paper called Measuring Calibration in Deep Learning with the Reliable Deep Learning group. And one of my central motivations in working on uncertainty estimation was to address the problem of out-of-distribution control. The question is: in a new context, does a machine learning system behave in unpredictable ways?

And it addresses this category of accident risk, where your AI system doesn't behave in a way that you understand from training. So the core idea with uncertainty estimation is also very safety-centric. It's about calibration and self-awareness on some level: is your model aware of when it's right and when it's wrong, and can the calibration of its uncertainty...

That is, can its awareness of when it's likely to be making a mistake allow you to avoid accident risk, where it accidentally does something untoward? Right? And for me, this was the closest I could get to an intersection of the picks-and-shovels, actually-on-the-frontier-of-research, making-progress-on-the-real-problem kind of context, while addressing this very abstract, high-level question of whether superintelligence in a runaway context will be dangerous.
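To make "calibration" concrete: the standard measurement in that line of work is Expected Calibration Error, which bins a model's predictions by confidence and compares each bin's stated confidence to its empirical accuracy. The sketch below is a minimal illustration of that idea in Python, not code from the paper; the bin count and the toy numbers are assumptions.

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # Bin predictions by confidence, then compare each bin's average stated
    # confidence with its empirical accuracy; the gap, weighted by bin size,
    # is a standard estimate of Expected Calibration Error (ECE).
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)  # right-inclusive bins
        if mask.any():
            avg_confidence = confidences[mask].mean()  # what the model claims
            accuracy = correct[mask].mean()            # what actually happened
            ece += mask.mean() * abs(avg_confidence - accuracy)
    return ece

# Toy example: an overconfident classifier. A perfectly calibrated model
# would be right about 90% of the time when it says it is 90% confident.
conf = [0.95, 0.9, 0.9, 0.8, 0.7, 0.6]
hit  = [1, 0, 1, 0, 1, 0]
print(expected_calibration_error(conf, hit))

A "recalibrator," in the sense used below, is any post-hoc map, temperature scaling for example, applied to the model's confidences to shrink that gap.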

And it was very sobering to be inventing these new uncertainty metrics and the recalibrators that let you create more calibrated classifiers, and on some level to recognize just how much the systems are actually not capable of going as far as the doomers believe they will go, in as fast a period of time as they expect.

So concretely, it seemed clear to me that the systems were going to be bottlenecked at levels of performance that were basically akin to speed superintelligence, if you've read Bostrom's Superintelligence, and that quality superintelligence was going to be limited in its capacities. And so I am pretty afraid of existential risk in the practical, real-world sense.

Everyone's going to die of heart disease and cancer and Parkinson's and standard diseases, since I think it's just much more likely that Steve dies of aging than that the systems end up being the category of quality superintelligence that is world-ending. And I think you can see a lot of that in the trajectory of LLMs, the way in which for a little while they do well, and then they reach a kind of ceiling of capabilities, and it's clear what will happen.

You know, codegen starts to be functional, but it's not the categorical thing that Eliezer and Robin debated; Robin Hanson was correct in many ways. And I guess that makes it more risky for you and me, because we're probably just going to die of standard causes, and that's where the vast proportion of my risk for the lives of most people who are alive is.

I think a lot of people were afraid of the release of GPT-2, and that made me sense how miscalibrated a lot of the diverse perspectives can be. So if you want to look at disagreement: I think that the release of GPT-2 was acceptable and safe. I think that the SB 1047 threshold at 10^26 operations was unreasonable, and that models at that scale are actually quite safe. And to be clear, there likely always will be ways that literally every working process can be enhanced by these models, in the same way that Google Search is an enhancement. So, for example, if you're trying to do something like a terrorist attack, you can use the models to help you.
But it's nothing like the kind of recursively self-improving accident risk that is the actual grounding of the ideology for the Berkeley apocalypse-risk scene. And so I think it's good that folk like Dan Hendrycks have pivoted to cybersecurity and bioterror, these very specific, almost actor-centric applications: you're trying to get Claude Code to make the bioweapon or whatever it is, does it work or not? That is a very concrete question that reasonable people can ask, that you can get really concrete answers to and traction on, and that you can develop benchmarks and baselines around. So I think that's incredibly grounded, but it's also just different from the flashy, recursively self-improving, foom-accident-based existential risk, which is actually the foundation of the entire field. And I think if the grounded version were all you had from the beginning, none of it would exist.

John: I'm referencing something earlier that you said, but do you think it's that we won't get there, or that we'll get there but we'll have enough time? You know, time is a big thing in the rationalist community: you don't have enough time to solve alignment and figure it out, and...

Jeremy Nixon: Oh, okay. I mean, yeah, I guess I have some conceptual criticisms.
So, alignment as a concept should not be adapted to this new context. That word had a meaning, I don't know if you remember what it was. Alignment used to mean that you would have an AI system acting in total independence or isolation from human feedback, and the problem was how you keep it continually aligned with human interests as it recursively improves, as it transforms itself in completely unpredictable ways. That was the alignment problem.

In many ways it's been refigured as RLHF-ing the model. I do think that in the face of research projects that do try to do recursive self-improvement, that might be vaguely relevant. But actually the core version of the alignment problem as conceived has just been kind of lost for, I want to say, five to six years, maybe a long time.

It's been lost for a while, and then people reworked the concept for a bunch of other purposes, some of them relating to social justice, some relating to other cultural movements. And I think that alignment-as-censorship is just a totally different conversation entirely.

Yeah. So I do think there's some continual need, in the face of concepts that turned out not to be relevant, to make them relevant by redefining them. And this happened in many ways with the way people use alignment. I think this also happened with the way people use the term AGI. So it's not just conceptually relevant that I frame foundation models as being AGI. Because if you think that AGI has existed for four years, and I should say it's the beginning of 2026, and honestly my use of Claude in the Slack channel at Anthropic in June 2022 was my "okay, we have AGI" moment.

Yeah, I'd say maybe three and a half years. We've had AGI for three and a half years, and that's why everyone's in a tizzy, in my opinion. But if you think AGI already happened, then it totally changes your catastrophizing frame. It's like, oh, well, we got AGI and everyone didn't die.

And it's clear that it's getting better: each year we get more interesting applications, more jobs get automated, more research problems get worked on. But it's a very different dynamic and trajectory than was anticipated, and I think there's a lot of conceptual importance there. So one version that I believe is that when people say AGI, in a lot of the apocalyptic contexts, what they mean is the thing that kills everybody.

And until everyone is dead, it's not AGI. And so that definition will continue to shift and continue to refer to more and more advanced systems until the system is capable of genocide, et cetera. There are a lot of these shifts. So Ilya has SSI, right? Safe Superintelligence. Ilya was part of the original AGI branding.

SSI actually implies that we have AGI on some level; now what matters is this new concept of superintelligence. And Andrej and some number of others have pointed out the jaggedness reality: the AI is already superintelligent in some ways. It's a superintelligent blender. You can call it in parallel; you can write 500 books in five minutes. So it actually already is speed superintelligence, but in so many ways it's deficient. It can't do what a theoretical physicist can do, which is part of the point of this event. There are so many dumb mistakes that it continually makes, ways in which it's just incapable of doing basic things.

So there's this jaggedness, and I think that is also an attempt to correct a concept that was broken, but it doesn't go all the way. Ideally, your conceptual scheme is just cleanly aligned with the underlying reality, as opposed to bringing in a concept like superintelligence.

Bostrom wrote his book Superintelligence in 2014, prior to the machine learning revolution. So you're borrowing a lot of these old concepts that no longer represent reality accurately. You're correcting, but correcting in a partial way. And so I just think that both AGI and alignment as conceptual schemes lead to mistaken conclusions by default, and you're lost as soon as you use the language.
And so there are other concepts. Take p(doom): I think that concept is inherently Pascal's mugging. If you're familiar with Pascal's wager, it's an argument for Christianity. And interestingly, Scott Adams kind of succumbed to it, or, depending on whether you believe in Pascal's wager or not, acted as though Pascal's wager was true, by proclaiming himself to believe in Jesus Christ like a minute before dying. The wager there is: if Jesus is Lord and Christianity is true, then probabilistically, even if you think there's only a one-in-a-thousand chance that Christianity is true, you should proclaim your love for Jesus Christ at the end of your life, just so that you could go to heaven in the case that turns out to be right.

And so when I say Pascal's mugging, it's the inverse. It's saying: if there's a one-in-a-thousand probability that the totality of humanity is destroyed by ASI, then you should devote your every waking minute to infinitesimally reducing the probability that everybody dies. And one challenge comes from prospect theory.

So if you read Kahneman and Tversky in Thinking, Fast and Slow, they cover the possibility effect and the kink in loss aversion, but really this psychological effect where small probabilities can't actually be effectively represented by humans; we do not engage with them. As soon as we know something's possible, we assign it an amount of emotional space, effectively, which makes us feel worried about it.
And so we can't distinguish emotionally between, say, one in a thousand, one in a million, and one in ten trillion. We don't have any emotional ability to deal with that kind of distinction. And there's a viscerality to the totality of people dying, to apocalypse. So many great movements have been built out of the apocalypse; it's something that really captures the imagination.

And so I think that the right replacement for p(doom), for people who like that sort of probabilistic framing, is actually p(life). Instead of minimizing the probability that everyone dies, we should maximize the probability that everyone lives. And this includes existential risk, right?

So if we end up with an existential-risk event, p(life) goes down dramatically. But it also includes the upside of using AI in the context of cancer biology, where suddenly you have cures for important cancers that would otherwise never have had traction, and where you use AI to advance quality of life, improve genetics, and solve a lot of problems that are in many ways meta-problems. Because if you don't solve the problem of survival, then you no longer exist, and all of your values are no longer represented in the world, practically speaking, except through mimesis.
So I don't know, I prefer this more inclusive conception of p(life) to p(doom). I think that because it integrates x-risk, it allows you to care about it to an appropriate degree, as opposed to hinging everything on it in the context of Pascal's mugging.
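One way to read the p(doom) versus p(life) contrast is as two different objective functions over the same scenario. The toy sketch below is purely illustrative; every probability in it is a made-up placeholder chosen to show the structure of the argument, not an estimate from this conversation.

# Hypothetical placeholder numbers, chosen only to illustrate the two framings.
p_ai_catastrophe_fast = 0.001   # assumed chance of AI-caused extinction if progress continues
p_ai_catastrophe_slow = 0.0005  # assumed chance if progress is heavily restricted
p_cures_with_ai = 0.25          # assumed chance aging and major diseases get solved with AI progress
p_cures_without_ai = 0.02       # assumed chance they get solved anyway

# p(doom) framing: only the catastrophe term is visible, so restriction always looks better.
# p(life) framing: survival requires both avoiding catastrophe and solving the things
# that kill everyone by default, so the upside term enters the same comparison.
p_life_fast = (1 - p_ai_catastrophe_fast) * p_cures_with_ai
p_life_slow = (1 - p_ai_catastrophe_slow) * p_cures_without_ai

print(f"p(life) with AI progress:   {p_life_fast:.4f}")
print(f"p(life) with AI restricted: {p_life_slow:.4f}")

Under these assumed numbers the x-risk term is still in the calculation, it just no longer dominates it, which is the point being made above.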

John: I just wanna say Pascal's mugging is one of my favorite concepts from LessWrong. I think about it a lot in the context of even minor things: okay, I'm trying to lower this risk, what's the...

Jeremy Nixon: Yeah. How much, what percentage?

John: Yeah. Yeah. What percentage?

Jeremy Nixon: Yeah.

John: So I'm glad that you brought that up, and I do think there is a lot of Pascal's mugging, psychologically, in people's sense of what they should do.

Jeremy Nixon: Yeah. I mean, it's pretty intrinsic to the scenario. It's hard to avoid...

John: Yeah.

Jeremy Nixon: Doing that to people. If you say, actually, everyone's gonna die, suddenly it activates the "holy shit, this is the only thing I should care about" reaction in people, which is rational if there's some substantial probability of actual catastrophe.

Yeah, I just think that it's not sufficient to know about Pascal's mugging in theory. You actually have to live out a functional psychological process that is defended against Pascal's mugging. I also don't think Scott did the right thing at the end of his life.

I feel like he lived a life without conviction if he's gonna play these kinds of games with religion.

John: Yeah, there's something, when I read the statement, I was...

Jeremy Nixon: Say again?

John: When I read the statement, you know, his end-of-life statement, I was shocked a little bit, I guess.

Jeremy Nixon: Yeah.

John: But.

Jeremy Nixon: Yeah, I think it lacks integrity.

John: Yeah. I mean, I can get the fear.

Jeremy Nixon: Yeah.

John: But still, I think I was a little disappointed.

Jeremy Nixon: Yeah.

John: Yeah.

Jeremy Nixon: Yeah. And I guess that's a very real-world example of someone who's inside of Pascal's wager. So it's not an abstraction; it's a real thing. It happens to you, to me, to anyone who encounters these ideas.

John: What would you say your p(life) is?

Jeremy Nixon: Whoa. Yeah, I think most people are gonna die by default, partially because I basically expect there to be a climate-change-style apocalyptic fear of progress in AI that makes it really hard for us to deal with most major diseases. I think most bio progress is deeply inhibited, so ironically, by so-called bioethicists, who in many cases are, in my mind, more responsible for the deaths of massive numbers of people than anyone else. And tragically, that's kind of the default for a lot of these neo-Luddite-style movements. You become an environmentalist, you shut down nuclear power.

Congratulations, you have owned yourself. You become a bioethicist, you shut down stem cell research, you shut down all sorts of obviously growthful and important therapies, and no one can get IRB approval for basic experiments. We just are not serious about progress, and it's not an abstraction: without progress, everyone is going to die by default. My p(life), to be clear the probability that most people who are alive today will survive, is, for someone my age, something like 10 to 15%.

So I think there's a world where AI goes very well and we make a ton of progress with it, and we don't hit a lot of these cultural bottlenecks. We solve a lot of really important problems, and age reversal and organ replacement work really well, and surgery works really well, and BCIs unlock a lot of new human capabilities, and we can begin to back things up via BCI foundation models that record brain states with high enough fidelity, or we get conscious intelligence via that process, or something interesting happens along those lines in the next decade or so.

I think AI can open up a lot of interesting opportunities technologically, and I hope to personally be part of that. So I guess I feel pretty agentic about it. I think it's possible that if you decide today that you're gonna make these things happen, you have a chance of doing it, and any listener to this could also personally play a role.

So I think it's possible to move the numbers via sufficiently intelligent agentic action. But my p(life) is still pretty low. At the end of the day, most of society is just very down to die and will push back on anyone who tries to help solve the problem. I think it's a political challenge.
I think it's a serious scientific challenge. I think the education processes that produce the kind of people who can make progress on this barely exist. I think there are a lot of reasons to expect that your family's gonna die, you're gonna die. It's probably just gonna happen.

John: Just personally, I was raised Christian, you know, this whole thing, and then I became an atheist.

Jeremy Nixon: Yeah.

John: But I was always against death, and I would have these debates with people who were so pro-death, saying there were all these good things about it. I wrote articles about it, and I got into Aubrey de Grey, longevity escape velocity, and life extension.

So the sort of transhumanism-type thing about being able to survive was very appealing. And I just wanna say I have a list of people I don't respect, and bioethicists are at the top, towards the top of the list.

Jeremy Nixon: yeah.

John: I got on Steve one time because he was doing an interview or writing something.

He's like, oh, you know, that's for the bioethicists to figure out. I was like, Steve, I don't want to hear it.
Jeremy Nixon: Yeah, yeah. That sounds serious. To be clear, I think it's important not to concede the term. I don't think you can allow the ethical high ground to be held by people who are just obviously filled with ideological venom.

John: Yeah.

Jeremy Nixon: It's just, there are so many genuinely believed terrible ideas. Like, there's no utilitarian tradeoff; "do no harm" turns into "do nothing."

John: Yeah.

Jeremy Nixon: It's kind of tragic.

John: Yeah.

Jeremy Nixon: I don't fully understand how people live this way, think this way. It's just a struggle, as a rational person, to comprehend it.

But similar story for Christianity. I grew up learning these things, and as soon as I found out that people would go to heaven if they died, I was really curious: why don't we just kill everyone who currently believes in Jesus? Obviously we should be doing that, because there's some chance they'll stop believing. So let's do the... yeah.

John: You should kill everybody under the age of accountability, even, because...

Jeremy Nixon: Oh, sure, if they automatically have it.

John: If they automatically have it, yes.

Jeremy Nixon: Like, actually the greatest experience conceivable. Yeah. No, the whole thing falls apart if you try to apply basic logic. So I quickly learned to stop asking questions.
John: It's interesting, because from the doomer and p(doom) side, they're worried because of all the resources being poured into AI, into building more and more powerful models and systems. But it seems like your worldview is more positive for them, because you seem to be pessimistic about what's being built and how resources are going.
And you think it's likelier than not that people will restrict it enough to make a difference, because your p(life) is low.

Jeremy Nixon: Yeah. Yeah.

John: I'm, like, depressed at your p(life).

Jeremy Nixon: Yeah. No, I mean, I was sincere when I said the doomers are the real dreamers. They really believe that the technology will be able to do anything, will recursively self-improve, and will unlock the whole Pandora's box. And I really don't think that's gonna happen very quickly. And...

John: When do you think we would get to the sort of stereotypical, in-people's-minds AGI, or really transformative AI?

Jeremy Nixon: I don't know if these terms are helpful. I think they're really hoodwinking people. So, for example, transformative AI: terrible term.

John: Well, you said like 95% of the economy in 2022.

Jeremy Nixon: So that's an interesting definition. I think I like that definition quite a bit. I don't think it's a good definition of AGI, but I think it's a great definition of something concrete and measurable.

John: Yeah.

Jeremy Nixon: A representation of progress. And I think you can get to 95% economic automation and not be afraid of x-risk at all.

It's like, what are people in the economy doing? A lot of them are sending emails. A lot of them are recording video interviews. So if you get video gen and it creates documentaries based on people's transcripts and historical YouTube videos, you can burst your documentary into existence with a search agent.

Or it could generate people like me, and they could say things like this, and your job would be automated, part of the 95%. And I could watch the AI-generated documentary. To be clear, I could code up something like this tonight. And then you'd be part of the 95%. And so what? Who cares? There already exist way more documentaries than I can watch.

I've made a book generator. I'm the meta-author of 10,000 books, and on some level, awesome. But on some level, who cares? There are already way more books than I have time to read. Actually, all that matters is maybe personalization, but mostly quality. And so I don't think the 95% number has to be super meaningful from an x-risk perspective, but I think it's meaningful from an economic perspective. And I like how concrete it is, how specific it is.

I think it's true that a lot of people's jobs can be automated in the near future, but most people's jobs are bullshit jobs. Most people's jobs are structural; they're embedded in power structures. The reason they exist is that their manager needs to get a bigger budget next year.

And the reason their company exists is that they are part of a monopoly or a duopoly or some other oligarchic structure that is structurally making money. You understand? There are not that many real jobs at the end of the day.

John: Are you worried about the societal effects of job loss or do you think it'll be

Jeremy Nixon: Yeah, I'm definitely worried about the opposite. I'm worried we'll make insufficient progress.

John: you're really pessimistic.

Jeremy Nixon: Yeah. Well, to be clear, I'm very ambitious, so I have an intention to survive. So...

John: You can be ambitious and optimistic. But you're so tapped in, and I'm depressed. I thought you'd be like, oh, the magic's coming.

Jeremy Nixon: Yeah,

John: When do you think we would cure aging or cure cancer?

John: So you think it's gonna be...

Jeremy Nixon: Cancer is a very... I mean, in general, oncology is split up in many ways. Leukemia is very, very challenging, but there are certain tumor-centric cancers for which antibody-centric solutions are likely to work. I'm trying to come up with a definition that would get at the heart of what I think is the much more important and harder question.

John: Yeah.

Jeremy Nixon: So you want something that ideally reverses the general aging process. It's not that cancer kills people; it's that weakened immune systems over the course of a person's life allow things like cancer to kill them. But if cancer doesn't do it, then it'll be Alzheimer's five years later. You know what I mean?

John: Listen, I get the whole Aubrey de Grey argument about focusing on the wrong thing.

Jeremy Nixon: Yeah. And there are a lot of ideas he has, like the bridge, right? If we start to make a lot of progress, then people can start to live long enough for there to have been more progress, right?

John: Yes.

Jeremy Nixon: And the engineering frame, I think, is really healthy. I think it's very grounded. The idea that you are a bio-machine is the starting position for believing any of these ideas about aging, because a lot of people sacralize the physical body in a way that makes them sacralize death, and it's typically attached to their religious tradition.

And so I think there's a religious component to this. A lot of why it's possible for someone to think rationally, or with an engineering mentality, about death is that they have come unglued from the belief structures of tribal religion. And for a lot of reasons, I think it's important that there be some alternative, some science-centric religious experience that allows a person to be a human being, so to speak, but believe true things.

And there currently is just not really a way to do that. But I do think it opens people up to the possibilities that exist, and Aubrey de Grey is a great example of someone who has an open mind and is somewhat ambitious. Yeah, I do think a lot of the economy will be automatable. I don't think a lot of it will be automated.

I mean, that's kind of where I'm at, and that's a really depressing perspective. But sociologically speaking, I've interacted with government organizations that have a database, and in order to interact with their database, you call them and wait on the phone for 40 minutes, and then someone picks up and they ask you for your details and they look you up in the database and they tell you what the database says.

And then you tell them, make this update to the database, and they make the update and you hang up. And this would happen in, like, 2020, when for 20 years the internet revolution had made it possible for me to log into a website and make a change request. You know what I mean? There's an entire organization that doesn't need to exist; it's replaceable with a CRUD app. This kind of situation is just so omnipresent that I don't even think...

John: So are you a libertarian, an-cap guy, or...

Jeremy Nixon: I don't think this is a political position. This...

John: well, government inefficiency.

Jeremy Nixon: The government inefficiencies are real.

To be clear, I don't think it's just government organizations. I think it's a feature of most monopolistic or oligarchic contexts. It's an incentive-structure situation.
John: And you think the incentive structures are so strong that they will prevent progress, like automating jobs.
And that we will regulate AI to make it so that we don't hit...

Jeremy Nixon: Yeah. No, I think it's really easy to do to AI what was done to nuclear power. Really, I think that's the default scenario. And people need meaning; they've kind of been deprived of it. Social justice kind of crashed as a meaning structure, and in a lot of ways EA also kind of crashed as a meaning structure. But I feel like this still has potential for the masses, and we'll see what ends up being the next big collective meaning generator. If it turns out to be AI, that's an interesting world. It might still be environmentalism or climate change or ESG or one of these ideologies, but I think it's plausible that it's gonna be a neo-Luddite ideology as opposed to something interested in progress.

China is interesting; they actually have a very different way of operating. They have a lot of engineers in governance, and they seem slightly more resilient to, you know... I shouldn't say that. Like, yeah, communism, I don't know. It's hard, actually, to say that they're more resilient to ideological pressure, but I don't know.

We'll see. And yeah, Europe is decadent and degraded to a point that's hard to even believe. Try to go to Germany and create a power plant. They just killed their nuclear power. It seems like they've totally lost the script and are not even close to anything reasonable. It's kind of tragic. Big Coal is rubbing their hands together, so
happy. And they're burning coal now because of it. Yeah, I just think it's a little bit unbelievable. It's a little bit farcical.

John: Lei and I actually met a German woman who's very social-justice-minded and very climate-focused.
I was asking her...

Jeremy Nixon: Very common. Yeah.

John: Like, hey, what do you think about this whole nuclear power plant thing? Isn't that... And she's like, oh no, I'm glad we got rid of them.

John: You know, I thought maybe she'd be like, oh yeah, we screwed up.

Jeremy Nixon: Yeah, no, no.

John: And she was like, happy.

Jeremy Nixon: Not at all. Yeah.

John: Yeah.

Jeremy Nixon: I mean, there's very little coherence there. People are excited about being good.

John: Yeah.

Jeremy Nixon: But their definition of being good is very socially constructed. So...

John: But, you know, if you spend time at Lighthaven, or even, I think, among the general public who are cynical about AI or think it's bad...

You've got these powerful corporations with so much money. You've got Google, one of the most powerful companies, pouring all this money in, making all this progress. To them it feels unstoppable. Like, oh, the little protester, the people who want to be neo-Luddites...

Jeremy Nixon: Yeah.

John: It feels like it's inevitable, like you can't fight it.

Jeremy Nixon: What is "it"? I think it's always surprisingly non-concrete.

John: Well, it sounds like you're worried about the public or other groups stopping AI progress.

Jeremy Nixon: Yeah. I think that's the default.

John: For them, they're like, we...

Jeremy Nixon: Especially in the US and in Europe.

John: Yeah.

Jeremy Nixon: Europe's had Luddite vibes for the last 20 years.

John: Europe, I think, is...

Jeremy Nixon: They're very far gone on that front.

John: Yeah.

Jeremy Nixon: And America's that way too, except in Silicon Valley and parts of New York, some places.
John: But I guess I'm just saying it's interesting that the people who are worried about this feel the opposite, that it feels like...

John: ...that AI progress will go on without any restriction.

Jeremy Nixon: I think that's true. I think there is a lot of progress relative to the rest of the economy. But, and it's kind of depressing to say this, AI is kind of the only game in town, and if AI didn't exist, there really wouldn't be any important technological progress right now.

And maybe bio would be seeing something, but it's kind of tragic outside of AI. Crypto got wrecked; it used to be that in the Valley there were two games in town, the AI game and the crypto game, and that game's gone, and Silicon Valley's the hub of technological progress. But you may be familiar with this very strange scenario we're in, where when you say the word technology, well, technology used to apply to a ton of things, and now it's just about software, and maybe it's just about AI.

Yeah, it used to be that your couch could be technology, the chair you're sitting in could be technology. We just don't make that kind of progress anymore. Sorry, that's just true. The vast majority of forms of progress have basically been stagnant for ages, and they're not going anywhere. And I think it's interesting that AI exists. So, yeah, I don't know. I guess...

John: But many of the things that you're saying have stagnated... some of them, like obviously power or nuclear power plants, were because of restrictions and regulations.

Jeremy Nixon: Yeah.

John: But some of the others weren't.

But the way you see it is that AI will stagnate not just because of hard technical problems, but because of people trying to regulate it, like with...

Jeremy Nixon: Yeah. I mean, I do think that if it were not for some geopolitical complex around China, both the American political left and right would be anti-AI.

John: Mm-hmm.

Jeremy Nixon: And so it's kind of an artifact of competition, which is not unusual. Basically since the collapse of the Soviet Union, the USA has felt zero motivation to go to the moon or to build nuclear technologies. And I think that's just the default state of affairs unless there's some competitive instinct. It's pretty similar, incentive-structure-wise, to what I was describing with government: if you have a sort of mono-superpower and there are no competitors, there's really no reason to do anything.

John: I see. So you're just saying people suck, basically, and there aren't that many agentic, ambitious...

Jeremy Nixon: I don't see it as people suck. I think that puts the responsibility in entirely the wrong place. It's not like people's nature is up to them.

John: Like the systems, or...

Jeremy Nixon: No, it's closer to a basic recognition that it's usually situations of conflict or crisis that force progress upon the world, as opposed to it being something that happens by default.

John: But whether there was competition or not, you personally still want to solve these problems, regardless...

Jeremy Nixon: Yeah, but to be clear, I think there are problems. The thing is that some people don't. Right? If you have a Christian worldview, you think it's great that everyone's dying, because they're all going to heaven.

John: yeah.

Jeremy Nixon: So if you don't see a problem, you don't feel the need to solve the problem; there's no need for a solution. Similar story: if there's no impending apocalypse, there's no reason to try to go stop the apocalypse, which is actually a really great adventure. I don't think it's easy to be self-aware that you are framing your life narrative in adventure-like terms, because admitting that to yourself dramatically dampens the amount of motivation you feel to step into that narrative. But living a life where you are the savior of the world is actually a really fun adventure, in a way that living a life where you are... I don't know.
A lot of people love the family frame: I'm a father, these are my kids. That's actually really great and meaningful and sustainable and scalable. They don't have the need for these super intense, adventurous life narratives. And then, I guess, a lot of people occupy niches in economic areas, or they do have ambition.

And what ambition means for them is they wanna be an expert. They want to have some research paper published with their name on it that marks them as a part of mathematical history or whatever. And that's kind of cute and wholesome and much less of a totalizing life ideology than "we're saving the world from climate change or nuclear war or AI risk."

And so I feel like people get to pick from a relatively small body of meaning-generating narrative processes, and it makes a lot of sense to pick the fun, interesting one. And I don't think people frame that as being what they're doing. They just go through life and they experience ideas and they end up inside of the ideology that, quote unquote, captures them.

And I don't mean captures in an agentic sense, like the ideology has some hunter-like intention. I mean it in the memetic sense, that they end up in a memetic equilibrium. That is, they end up with the beliefs that crowd out other beliefs. And if the beliefs they have don't do that, then they continue to move through life experiencing ideas.

And it's only once they hit beliefs that feel worth running their life on, basically, or that stop other things from changing them, that they, and I don't mean the word as justifying stagnation, end up in equilibrium with their beliefs. And I guess my perspective is that I can watch myself move through psychological equilibria, and I'm interested in exploration, search, open-mindedness, in being kind of above the ideologies. And I think there's an agentic element: I'm interested in the construction of meaning processes rather than embracing memetic meaning systems. Hopefully that makes sense. It might be a bit much for folk, but that feels like the obvious way to live life to me.

John: What do you think is going on when the heads of the top AI companies, who presumably know best, or who are in there managing things, are all expressing concern? Like Steve brought up, you have Demis and Dario at Davos saying, we wish we could slow down, we have these timelines, if it wasn't for China we would slow down. You know, Sam has famously made statements, although I don't know what he thinks now. But all of them are concerned about existential risk. Do you think they're saying that strategically, or that they actually believe it but they're mistaken, or do they have a rosier view of progress, and...

Jeremy Nixon: Yeah, I mean, I think they're really embedded in social worlds with incredibly strong incentive structures, and so it's super important to simultaneously make as much progress as possible and keep absolutely everybody in a state of belief that you are the person who should have the power to do everything.
And so you want to be someone who can be vouched for. You want the person who's worried to say, "I think Demis is the right person to build the AGI." And the path to being that person, at least the easy path, goes through being able to build trust with all parties. All the people who are high-level executives at places like Google are sitting on top of a population of employees, and those employees have horrifically diverse political and emotional beliefs. So you have to simultaneously keep happy the aggressive, social-justice-minded vigilantes for whom,
whether it's race or gender, whatever their obsession is, that's what they want you to be, and you speak to that. And also the subset of your population that is just totally mission-centric, hell-bent on creating the next great thing; those are the people for whom the meaning system is about making as much progress as possible.
And you need to speak to them too. And then there's this other subpopulation that really believes in the progress. They're actually perhaps the great dreamers, but they dream so vividly and brilliantly that they believe they're gonna kill everybody, because it's just gonna be incredibly successful.
And those plausibly are the people who are most capable of unlocking the most important breakthroughs, and you need them to trust you too. So how do you sit atop an organization that has all of these competing political entities and continue to be a maximally trusted agent who can make all the product decisions and launch them successfully and have the entire Valley adore you?

So that's the world Demis and Dario and Sam live in, and Jeff Dean and all these people. And it's super impressive, honestly. I just wanna say it's hard to be in that position. So, for what it's worth, I don't mean this to be insulting either: I don't think that any public position of any of them is anything but a construct of the feedback loop they are in with an incredibly complex social environment.

That's not to say that they're incapable of thinking independently. They're more than capable of thinking independently, and they do it all the time. It's just that they are in hypersensitive political contexts, and it is totally dysfunctional for them to be substantially out of line with anyone else.

So say, for example, Demis goes renegade. Suddenly there's an important subset of DeepMind that can defect, and Sam, or Dario, can immediately use the distance between their worldviews as justification for stealing a tremendous amount of talent, right? So let me tell you, it's so sensitive that the reason both OpenAI and Anthropic exist is

the belief that the leaders of the other organizations were insufficiently cautious with respect to existential risk. So just know that every time they open their mouths, the existence of their organization is at risk. And regardless of what they believe from first principles, they have seen firsthand the consequences of saying the wrong thing in the wrong moment.
I just wanted to make you aware of the context. And obviously you end up believing the thing that it's helpful for you to believe, and doing it in a genuine way is way better than doing it in an artificial way, so you make yourself believe whatever it is. Maybe there's a lot of evidence backing it, and the collective's really smart and you're internalizing that collective, but I see them as mouthpieces of the body of memes that are stewing in their organizations and in their customers. And that is the only functional way to be in their position.

John: I can ask this off camera if you want to, but I'm just curious who you think will win if you had to bet.

Jeremy Nixon: Ah, it's just totally a broken frame.

John: Yeah.

Jeremy Nixon: It's just... win what? Sorry, but...

John: from their point of view,

Jeremy Nixon: Right. This is the AGI mistake.
No...

John: No, I know you think that, so what I'm saying is...

Jeremy Nixon: Yeah.

John: From their point of view, they are trying to race to AGI, you know?

Jeremy Nixon: Yeah, I understand. But if AGI already happened, then what's the race?

John: Sure, sure. But...

Jeremy Nixon: Because this doesn't make any sense to me. Your question makes no sense to me.

John: Okay.

Jeremy Nixon: So let me be specific. I'm not saying that I don't understand what you're asking. I'm saying that the idea that there's a race to a finite point, that one team is gonna get to that point first, and that at that moment something will happen...

John: Yeah.

Jeremy Nixon: That is at the core of the mistake that's being made industry-wide. And I think Dario is vaguely aware of that kind of conceptual confusion, and in his interviews he tries to do away with some of the conceptual stuff. And I think the race dynamics are real from a customer perspective: do I use Gemini for my next call? Do I use Claude?

But that is a race, day in and day out, for market share. That's a real race. It isn't actually a race to AGI; it's a race over who's gonna use Claude Code or Codex tomorrow and next year. And so I think the question you asked is actually pretty ridiculous, but people take it seriously to a point that's hard to believe.

John: But going back to what you said, do you think that many of the founders and leaders of these companies actually believe that? You know, because Steve has been in these conversations, talking to these guys.

Jeremy Nixon: Yeah.

John: That's what they say.

Jeremy Nixon: Yeah. I think that the implication is there's a point where one of the three,

John: yeah,

Jeremy Nixon: Or four if you count Grok, or ten if you count Kimi and Moonshot and DeepSeek and the other phenomenal Chinese models. The implication is that there's a single point in time at which they will forever be

the monopoly, and I think that's, at this point, a pretty specious, ludicrous belief. I mean, maybe that's too much. Do you understand where I'm coming

John: from? Yeah, yeah, yeah.

Jeremy Nixon: Like I think that,

John: But I'm trying to get what you're saying about these lab people, the leaders, at least some of them. Because some of them could just be, like you're saying...

Jeremy Nixon: They have to say it. And I can walk you through the dynamics by which a monopoly is kind of inconceivable. As soon as there's a severe capability dip... so say you get Claude Code, and for six months everyone's obsessed with Claude Code.

Sam's gonna announce a code red at OpenAI and is gonna hire whoever needs to be hired, raise whatever needs to be raised, cut the AI for Science program, cut whatever's required, in order to build a version of Codex that is better than Claude Code. And Demis is gonna do the same. And the talent that knows how to make Claude Code is gonna increase in value.

So economically speaking, the value of that talent to Anthropic, the specific knowledge of how to build Claude Code, is far less than the value of that knowledge to OpenAI, in a world where OpenAI is better capitalized than Anthropic but does not have access to that talent. And a similar story for Gemini.

So how are you gonna keep a monopoly on the talent that knows how to maintain the edge? Practically speaking, they haven't. Empirically speaking, these models have been neck and neck for how long, like three years? And now there's five Chinese companies, and, like, you know, MiniMax, sorry, not MiniMax, GLM-4.7 is better at agentic coding than Grok and Gemini. And you think that's just gonna disappear? Yeah, and these Chinese researchers and engineers, you know, it's not 2010, right? Like, they're legit. And so I just think it's ridiculous. I just think it's ridiculous.

John: Steve talked about how you're a very capable speaker. Hmm. And I also think, I've talked to a lot of people on both sides, and I think you're sort of, I don't know, a little different, I would

Jeremy Nixon: say I don't do the sides thing. I'm not, like, into sides, generally speaking.

It's kind of ridiculous to say it. I think that, yeah, like, Eliezer and Nate, if they believed what I believe about the probabilities of risk, would just more or less do what I'm doing, if they were really being intellectually honest.

John: Yeah.

Jeremy Nixon: I think I kind of identify as an EA, like an OG, to be clear.

John: Yeah.

Jeremy Nixon: Who thinks the most effectively altruistic thing to do is, like, save everyone's life with superintelligence.

John: Yeah.

Jeremy Nixon: Which is obvious if you think that the probability of doom is low.

John: Yeah.

Jeremy Nixon: Which I do, 'cause I'm kind of a pessimist, comparatively, on technologies like recursively self-improving capability. And yeah, so that's where I'm at. I don't think it's an unreasonable position, I guess. Yeah. I'm surprised that there are not more people where I am.

John: Yeah. Yeah.

Jeremy Nixon: And

John: but I'm just saying even people who don't take sides, I think they don't have the same

Jeremy Nixon: I was saying, kind of, like, meta-ideological perspective.

John: Yeah. Like, yeah.

Jeremy Nixon: Yeah. Fair enough.

John: How do you imagine AGI House being meaningfully different from other AI hacker houses or founder collectives that have emerged in this space?

Jeremy Nixon: Yeah, I mean, so many of these hacker houses were just, like, inspired by AGI House. I guess the obvious difference is there's a grounding belief in, yeah, I guess the power of the, like, technical founder-researcher, the archetypal community member, to, in concert with a bunch of other community members, create, again, unbelievable progress.

There's a grounding ethos. And the ethos says, like, the way to be a good person in the world, the way to, like, do right by your life, is to unleash your heroic creation, is to make something incredible. And I think that a lot of, you know, houses just are not as sort of, like, filled with this belief and this ethos and spirit. And, yeah, there have been many, like, kind of phases of AGI House, so from time to time it feels very different. Like, the AI for Science world feels different from, like, the media, audiovisual world.
But I think there's a second big difference, which is people. So the, like, origin story of AGI House is, it's, like, you know, Ben Mann, a co-founder of Anthropic, you know, one of the first authors of GPT-3 but also a builder of Claude, talking to Andrej Karpathy, who, you know, led Autopilot, hanging out with Robin Rombach, hanging out with John Schulman, like, all in the same room discussing how to build the future of AI.

And so there's something about the level, the quality of thinking that comes out of a space like this. You know, you just end up with incredible ideas a year and a half before anyone else has thought about them. And there's kind of, like, an ideological gap too. Like, we end up, like, ideologically ahead, so to speak, of the kind of rest of the valley, because we experience ideas early.

We come up with them in a lot of cases, and, like, we work through them, and then someone comes up with something, like, a year and a half later, and we've already moved on to, like, you know, things that were much more interesting to us. So, for example, vibe coding's a great example. So Andrej coined vibe coding in, like, February of last year.

And we had a web simulation event in April of the year before, which was about simulation, about web simulation specifically. And so WebSim is, like, a really concrete example. So before Lovable existed, before Bolt existed, before, like, you know, Replit, before any of these, like, vibe coding website generators existed, we were eight months before that playing around with basically WebSim and WorldSim.

So Nous had WorldSim; WebSim was this very early company. So we had this AI for Thought event back in October, and, like, the guy who made WebSim was at that event. The AI for Thought community was working on a lot of these generative web ideas. So I actually published, in, yeah, I wanna say, like, September 2023, a Chrome extension that modified any website that you were on, based on the prompt that you gave the extension.

So it, in parallel, sent all of the web content to an LLM and rewrote all of the site. So you could say, rewrite this site in the style of Donald Trump, and it'd be hilarious. Or, you know, rewrite this in the style of sci-fi, and it would sci-fi-ify it, or add paper citations to every claim in this website. So the Chrome extension was called the Generative Web.
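To make the mechanism concrete, here is a minimal Python sketch of that rewrite loop: chunk the page text, send each chunk to a language model in parallel with a style instruction, and reassemble the result. The actual extension would have been a JavaScript content script; the client, model name, prompt, and chunking below are illustrative placeholders, not the original code.

```python
# Sketch of the "rewrite any page in a given style" idea: split the page text
# into chunks, send each chunk to an LLM in parallel with a style prompt,
# then reassemble the rewritten page. Model name and prompt are placeholders.
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def rewrite_chunk(chunk: str, style: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": f"Rewrite the following web page text {style}. "
                       f"Preserve the meaning.\n\n{chunk}",
        }],
    )
    return resp.choices[0].message.content


def rewrite_page(page_text: str, style: str, chunk_size: int = 2000) -> str:
    chunks = [page_text[i:i + chunk_size]
              for i in range(0, len(page_text), chunk_size)]
    # Send all chunks in parallel, as the extension reportedly did.
    with ThreadPoolExecutor(max_workers=8) as pool:
        rewritten = list(pool.map(lambda c: rewrite_chunk(c, style), chunks))
    return "".join(rewritten)


if __name__ == "__main__":
    print(rewrite_page("The sky is blue because of Rayleigh scattering.",
                       "in the style of a pirate"))
```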
So, yeah, and then, like, I found Anton because he made GPT Engineer, which was a codebase generator that preceded Lovable. So, like, before, you know, Anton was a billionaire, he was, like, hacking on codebase generators, and in, like, yeah, I don't know, in September '23, I was hacking on codebase generators, and then he incorporated Lovable, and they launched three times, and the third launch went well, and, like, now they're worth billions. But, like, Anton was a roommate here in, like, November and December '23. So, yeah, I don't know. Like, basically we just are ahead. We are ahead by, like, eight to 12 months. Yeah, which is important.

John: My second question is, when you first imagined AGI House, how did you expect it to advance your mission around AI? And based on experience so far, where has the reality matched this vision, and where has it surprised or disappointed you?

Jeremy Nixon: Yeah, I mean, I am, yeah, I'm a dreamer, so there's a lot of things that haven't happened that I wanted to happen. But yeah, I guess the original idea was also a really creative one. It was kind of like BUD/S. So I dunno if you've ever heard of the Navy SEALs, but they have this sort of basic underwater demolition training. It's insane training, which includes a hell week.

So one of the original AGI House ideas, like, right before I renamed Neogenesis to AGI House, was, like, BUD/S for machine learning research. So the idea is you would show up for, like, a five-day sprint in which you would implement every machine learning algorithm, random forests, k-nearest neighbors, convnets, from scratch, everything from scratch, and in which you would implement every category of recommender system, neural recommenders, collaborative filtering.

You would build all of the in-production app systems. And you would do this in, like, back-to-back 16-hour days, in which it was kind of a trial by fire, but everyone had a personal tutor. And in general, like, I love intensity, I love learning, to, like, an insane degree. And so combining the sort of most intense special forces training experience with machine learning felt super fun.
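As a flavor of what implementing one of those algorithms "from scratch" looks like in that kind of sprint, here is a toy k-nearest-neighbors regressor in plain NumPy. This is an illustration only, not material from the actual program:

```python
# Minimal k-nearest-neighbors regressor, written from scratch with NumPy,
# in the spirit of the "implement every algorithm yourself" exercise.
import numpy as np


def knn_predict(X_train, y_train, X_query, k=3):
    """Predict each query point as the mean target of its k nearest neighbors."""
    preds = []
    for x in X_query:
        dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distances
        nearest = np.argsort(dists)[:k]              # indices of the k closest points
        preds.append(y_train[nearest].mean())
    return np.array(preds)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)
    X_test = np.array([[0.0], [1.5]])
    print(knn_predict(X, y, X_test, k=5))  # roughly sin(0)=0 and sin(1.5)~1.0
```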

So that was, like, an idea, and we ended up doing this sort of hackathon with the creators of a lot of these interesting algorithms. And one thing I love about the hackathon idea, and to be clear, it's, like, really popular today, I know it seems crazy to say this, but at the time there were basically no hackathons. Like, it barely existed back in 2022, and we started doing them because we wanted to have events that produced things. So it's easy to go to an event and, like, talk to people, but it's something else to, like, co-create with them and then, like, launch something and ship it, right? Make it real and have people use it. And you blow away the boundaries where people are, like, in their own companies, and now they're all collaborating with each other. So a lot of those, like, original hyper-intense ideas turned into, well, what if we just do single-day, like, events where you go zero to 100 on a project and you ship it, and it, like, has the potential to be life-changing, and, like, life-changing in that if it's a product, that could be a company, and if it catches fire, it could be really impactful, or if it's a research project, it could end up being a research breakthrough, which is really cool.

On disappointments: so I wanted to create a global entity, and that actually has been a lot more time-consuming than I hoped. So we got into the Cambridge house, we got into a Seoul house, and the cultures in Korea and other places were just actually, like, quite complicated to make work with co-living. So there's a bunch of interesting learnings there. Yeah, I would say, yeah, I have a lot of things that I wish I had already done. There are a lot of opportunities missed, for sure. But beyond that, it's been a pretty, like, wildly successful journey.

John: How would you describe yourself, the traits, in terms of your personality, or even physiological traits, or the level of experience in different aspects, that make you different from other leaders in the AI space? And how do you think those personal traits have affected your projects and journey so far?

Jeremy Nixon: It's strange to be talking about myself, so, um, fascinating. Yeah, I guess I really, really love ideas, to, like, a degree that it might be hard to believe. And so primarily I act out of a curious instinct. Like, there's very little that I do that isn't, like, secretly a learning project, for my own, like, fascination being fulfilled by the project.

So, like, we're here at an AI physics hackathon that I'm throwing, and a bunch of people are building projects, but, like, a big secret agenda is I really want to personally do automated physics research. And so I'm, like, extending Steve's work on the Schrödinger equation. And this was really consistent.
So when I was at Google Brain, a lot of my research was just, like, I wanted to learn the subfield that I was doing the research in more than I wanted, like, a result. I was really curious. I run this, like, reading group. I try to read in almost every domain, and so I guess I'm obsessed with this concept of technical omnicompetence.

That's also part of why AI x Physics exists. It's, like, a part of this sort of Kardashev Society. Like, I want to have a society where everyone has kind of incoming-PhD-level understanding of every major technical field. Like, not just the main sciences, like math, physics, biochem, but, like, the engineering disciplines, mechanical and electrical engineering, like, bioengineering. Yeah, computer science, obviously, but also, like, functional, practical working knowledge of economics, and where applied mathematics is taken as seriously as pure mathematics. And so a part of the entire agenda of, like, the sequence of group houses, from Neogenesis to AGI House, is to build a society that is engaged in collective creative technical invention.
And I really believe in the inventor as, like, an identity type, as an archetype. I think that, like, the absence of Teslas, of people who live creativity as a lifestyle as opposed to sort of, like, wearing the clothing of creativity, like, pretending to be creative, in my opinion, I think there just aren't enough of them.
I think there are some people who, like, have 10 ideas every day, that, like, if they kind of, like, hypothetically executed on them, would just have this sort of Da Vinci-style, like, notebook of their life. And I think it's really tragic that that identity is economically and financially dysfunctional.
I really think, like, the vast majority of human capital and capabilities are wasted because incentive structures disallow the space of identities that are most glorious, that are most worth living out. And yeah. Yeah, I guess a lot of stagnation comes out of the comprehensive crushing of the creative spirits of intelligent individuals.

And a big fraction of the purpose of the existence of AGI House is to be this context where creative invention is centered and held high and is preserved, and where you're expected, actually, to creatively invent all the time. And I don't think that's consistent with the kind of financial objectives of, like, corporate America or education systems or religious institutions. Part of what's interesting to me about AGI House and Neogenesis and these kind of interstitial organizations is it's a kind of invention of mechanisms. It's a new way to cooperate with people, to, like, show up at a group house and, like, code something that maybe you want to do just of its own accord. Like, a lot of it's just genuine. You just are curious, or you want to know if you can build something, or you wanna build a new skill set, right? And you want to kind of let newness be normal. And I think, like, a lot of software engineers kinda get stultified in their, like, company's code base, working on the same thing every single day.

It's so refreshing to, like, start with a blank slate. I think that's part of why vibe coding took off, is, like, it allowed people to kind of express their creativity. But I think that makes me pretty different as a leader. Like, my objective is, in a lot of ways, the total expression of, like, my creative instincts and the creative instincts of everyone who's at the series of events or in the community and the context that I'm in. And so it's in many ways, like, a kind of a non-economic frame for things.

Like, a lot of people who write code for fun are really serious about it actually being for fun. So I think I just love the freedom to be able to, like, think about and build whatever you want and not care about the incentive structures around you.

John: Beautiful. Besides passion for ideas and invention, what other aspects about yourself do you think have driven your success so far, and what aspects do you think are perhaps kind of impeding you a little bit?

Jeremy Nixon: I guess the implication is that, like, they're not, you know, as sort of, like, economically focused. So it's really easy, if you're, like, living a hyper-creative lifestyle, to be unfocused. It's really easy, if you are, like, working with totally different people on different projects, to end up with, like, tons of partial projects or unfinished projects or, like, partial inventions. And so I think that's the obvious failure mode, to directly answer that question. But I think one interesting thing about my world is it's very intellectually rich, and so the people who I talk to are fascinated by, like, me and this world and the people in it. And so I can attract some of the smartest people to commune with us for fun.

I don't know, like, you and I were talking about Shun'ichi Amari's information geometry, and, like, we had an event around information geometry last year as a reading group. In general, like, who thinks about, like, applications of information geometry for fun on the weekend? Well, we do, and there aren't actually that many places that you can go to to have a conversation even remotely like that. And so one upside is, like, people like, you know, Anton Osika, like, people who like us will want to be in that environment. Steve Hsu, like, we love this sort of physics dinner that we had last night. There are all of these really renegade intellectuals who care about thinking and learning for its own sake, in a lot of cases, who, like, love this place. And it just so happens that that kind of person also often reliably builds multi-billion-dollar companies, because as soon as you get that level of polymathic capability and you attach, like, any reliability to it, it's crazy how much people can accomplish.

John: When you said one of the important aspects of the house, the identity, is, like, being very much attracted to these kinds of ideas, basically truth seeking and intellectual excitement, I can very much relate to that. And yes, I very much enjoyed our conversations today about everything, especially the information geometry bit. That was a pleasant, pleasant surprise.

Jeremy Nixon: Yeah, yeah. Yeah. I implemented, like, manifolds of normal distributions, like a visualization.

John: Yes.

Jeremy Nixon: Because it's actually so conceptually expanding to operate on your operators, and, yeah, that frame is really gorgeous. So I love that too.
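For readers curious about the object being visualized: the univariate normals N(mu, sigma^2) form a two-dimensional manifold, and under the Fisher information metric it is a scaled copy of the hyperbolic plane, so the Fisher-Rao distance between two normals has a closed form. A minimal sketch, assuming NumPy; this illustrates the underlying geometry rather than reproducing the visualization mentioned above:

```python
# The manifold of univariate normals N(mu, sigma^2) with the Fisher information
# metric is a scaled copy of the hyperbolic plane. Mapping (mu, sigma) to the
# Poincare half-plane via (mu / sqrt(2), sigma) gives a closed-form distance.
import numpy as np


def fisher_rao_distance(mu1, sigma1, mu2, sigma2):
    num = (mu1 - mu2) ** 2 / 2.0 + (sigma1 - sigma2) ** 2
    arg = 1.0 + num / (2.0 * sigma1 * sigma2)
    return np.sqrt(2.0) * np.arccosh(arg)


if __name__ == "__main__":
    # The same shift in the mean "costs" less when both distributions are wide:
    print(fisher_rao_distance(0.0, 1.0, 1.0, 1.0))  # narrow pair, ~0.98
    print(fisher_rao_distance(0.0, 5.0, 1.0, 5.0))  # wide pair, much smaller
```

The numbers show one reason the frame feels conceptually expanding: distance between distributions depends on how spread out they are, not just on how far apart their parameters sit.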

John: I also want to point out that I find you to have this rare combination of technical, like, solidness, and ideological groundedness, and also real-world impact. And to me, that seems like a Renaissance man, and this house is on its trajectory to be a Renaissance house. So I very much want to, you know, basically hang out more here in the future. Sure. So this is, like, I just wanna share one tidbit of how I see you and the rest of the, you know, people slash houses being different in this space.

Jeremy Nixon: Yeah, I mean, I guess that I am pretty obsessed with ideological groundedness. So I think that's accurate. I do think that belief systems have impacted me really deeply, and so I feel, like, the need to be hyper-aware about them. And, yeah. I'm also, I'm quite charismatic, so I was in this unusual scenario of being phenomenal at, like, both debate, so when I grew up I was, like, an impromptu speaker in forensics, and I was sort of a state-competitive debater, and then at Harvard I was in parliamentary debate, but I studied applied math, right? So this gives you this complex of skills, like, you know, great speeches, but they're about, you know, uncertainty and, like, the nature of the distribution, detection, whatever.
And so, yeah, I do have a very unusual vibe, where it's like, you know, clearly this person could be giving speeches at some political convention, but actually all of it's hyper-technical and about, like, the details of some problem in physics or machine learning, whatever it might be. That was intentional.
So when I got to Harvard, I primarily concentrated in applied mathematics, because I felt like it was very high leverage. Like, it's harder to learn math outside of school. Everything else is easy for me to learn from books. Like, I just love reading. I self-studied, like, a huge number of AP exams when I was in high school.

So it was clear to me that I could learn from books, for the most part, for everything that wasn't, like, math-heavy. And so I did this kind of boosting. You may be familiar with gradient boosting algorithms: they basically, like, model their residuals. So I basically was like, oh, well, here's where my error is greatest.

Let me train a new model that's for basically fixing this sort of error. And in that context, it basically was about finding every major, like, hole in my technical skill set and spending enough time on it to become functional.
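For reference, a toy version of that residual-fitting loop, assuming scikit-learn and NumPy; real gradient boosting libraries add shrinkage schedules, many more stages, and general loss functions, so treat this as a sketch of the analogy rather than a production implementation:

```python
# Toy gradient boosting for squared error: fit a model, then fit the next
# model to the residuals (where the current model's error is greatest),
# and add the corrections together.
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def boost(X, y, n_stages=5, learning_rate=0.5, max_depth=2):
    prediction = np.full(len(y), y.mean())  # stage 0: just predict the mean
    stages = []
    for _ in range(n_stages):
        residuals = y - prediction          # where is the error greatest?
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)              # new model trained on the residuals
        prediction += learning_rate * tree.predict(X)
        stages.append(tree)
    return stages, prediction


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(300, 1))
    y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=300)
    _, fitted = boost(X, y)
    print("training mean squared error:", np.mean((y - fitted) ** 2))
```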
And so, yeah, I don't know. I really believe that it's important to be able to become anything. And if you can't, then you're making your decisions about how to live life as a function of your limited skills. And I read this book called The Art of Learning, by Josh Waitzkin, and it was a true inspiration. That was, like, back when I was 17, and the ideas in that book are about decomposing learning into its sort of finer details.
So he'll talk about making smaller circles, which is, for him, the way he describes the recursive decomposition to the very details of a throw in Tai Chi push hands, or a position in chess. And he was simultaneously, like, a Tai Chi world champion and a junior chess world champion. So he demonstrated those learning principles in distinct domains, and I used a lot of his ideas, and some of the inspiration from him, to become a top-10 ultimate Frisbee player when I was at Harvard.
So I was playing professionally for Boston, I was competing for the Callahan, and a lot of the practice processes that I kind of internalized from that book, and the kind of belief in myself that I internalized, came from that frame. So, yeah, I guess I'm pretty intentional about finding some angle on life that I don't understand and inhabiting that identity for long enough to know whether or not I should spend my entire life in it.

And when we talk about technical omnicompetence, it's kind of a desire to live in an entire society that has total technical competence. I really want to live in a country where that's what's conceived of as literacy. Like, you're illiterate if you don't know math. The expressive version of this is something like, you know, humans typically speak languages like English, but mathematics is the language of reality, the language of God. And why be illiterate in the language of God? I feel like I was born in an illiterate era where nobody knows how to communicate. I think that people who have BCIs and speak in AI-ese will feel this way more intensely. Like, how could you have lived in the illiterate era? And as soon as write access works, and all of the knowledge of the foundation models is internalized in the mind of anyone who installs a brain-computer interface, there will be a sense that we are part of a benighted people, like a barbarian culture.

So, you know, it's plausible that, like, I'm one of the last generations of people to speak this way. I hope that's true, but I also think it's plausible that none of that happens, and that we have to build a society that's technically competent, and that that's actually an important prerequisite to making any of those dreams real.

So I actually, I feel like this year I'm really setting about to build that society, and the AI x Physics event, too, is definitely a part of what I'm calling the Kardashev Society, which is a part of what I'm calling Ascension, which is a part of the sort of umbrella structure of, like, a hierarchical, comprehensive representation of all technical knowledge, in an event series and a body of people that allow everyone I know to partake in a kind of technically omnicompetent culture.

John: I'm curious. Earlier you talked about how you have this, like, gradient, this, sorry, gradient boosting analogy.

Jeremy Nixon: Yeah, yeah.

John: Find the residual and try to fix that, you know?

Jeremy Nixon: Yeah.

John: In terms of personal improvement, where do you want to invest the most effort? I.e., where do you see the biggest residual? Which aspect?

Jeremy Nixon: Well, recently I've been pretty obsessed with capital. So I feel like I've been deeply undercapitalized. These last few months I've read...

Creators and Guests

Host: Stephen Hsu
Steve Hsu is Professor of Theoretical Physics and of Computational Mathematics, Science, and Engineering at Michigan State University.

© Steve Hsu - All Rights Reserved