Scott Aaronson: Quantum Computing, Unsolvable Problems, & Artificial Intelligence — #9
Steve Hsu: Welcome to Manifold. Today, I am super excited to have Scott Aaronson as my guest. Many of you will already know who he is, but let me introduce him as the David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin. Scott recently moved to Austin. Prior to that, he was a professor at MIT for many years.
I believe Scott, you and I have met physically in the real world, meatspace world a few times, I think maybe at SciFoo at the Googleplex.
Scott Aaronson: Oh, okay.
Steve Hsu: And also, I definitely remember at one of the Microsoft research summer meetings, you and I were both there and we took some cruise around the Seattle bay or whatever it is.
Scott Aaronson: Sure. Okay.
Steve Hsu: I've also been a reader of your blog. So I'm super excited to have you here.
Scott Aaronson: Yeah. Well, thanks for having me. It's great to be here.
Steve Hsu: Great. So what I'd like to do is start with a little bit about your background, your education. I think you were a very precocious kid. I want to understand how you came to mathematics and then how you came to the particular research specialization that you've chosen.
So maybe just tell us a little bit about your childhood.
Scott Aaronson: Okay. I have liked math for as long as I can remember. Seeing how rapidly the powers of two increase with a calculator when I was four or five, I remember that just completely rocking my world, and learning what negative numbers are. And then it was a very big deal to get a scientific calculator that had these buttons like sin and cos. What the hell are these? What do these do?
And then that was just another mystery that one had to get to the bottom of. But I think that my goal for a while was actually to create my own video games.
I mean, like many kids in the 1980s, I got a Nintendo set and got obsessed with Nintendo. But what I really wanted to do was to create my own video games. I would just spend hours and hours thinking about how I would design the game. Would it be three-dimensional? What would the levels be like? But I had no clue what it would take to actually bring any of it to life. That was just a complete black box for me. That was a mystery. Maybe you need a giant factory to build all of the components. And so one of the most revelatory moments in my life, this would have been when I was 11, came at a friend's house.
He had, I think, not even a Mac but an Apple II. He said, do you want to try messing around with BASIC? I said, what's BASIC? And he said, well, it's a language, you can write programs with it. Like, for example, here's a game.
It was some simple thing, either a Pong or a worm crawling around and growing. Here's the code of the game. You can change this code right here and the game will do something different. And I've said that, for me, that was kind of like learning where babies come from.
It's the fact that you could take these whole worlds in miniature, these video games, and their creation is reducible to a math problem, in some sense. It's just reducible to writing this code.
Which is not just some kind of summary of the game, but is the game. And so then I had to learn what programming was. I think my parents say that I had a tantrum. I was crying because they didn't tell me about this earlier, and other kids might have a head start on me that I would never be able to make up at this point.
And I learned what I could about programming over the next few years. But then at some point I ran into a wall, and you could say that was at about the precise point where programming turns into software engineering.
I loved writing little programs for myself that would just test things. For example, say you have a little worm that grows on the screen in random directions by taking a random walk, but it can't revisit any squares where it's been before.
So it only picks a random square among the neighboring ones that it hasn't yet visited, right? Well, eventually it will get stuck. It will be surrounded on all sides by squares that it's visited. How long does it take until that happens? Hey, it turns out that it's about 70 steps. And I remember just leaving my computer on all night to answer that question.
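Scott's overnight experiment is easy to reproduce today. Here is a minimal sketch (my reconstruction, not his original BASIC program) of the self-avoiding walk he describes, estimating the average number of steps before the worm traps itself:

```python
import random

def steps_until_stuck():
    """Self-avoiding random walk on a grid: at each step pick a random
    unvisited neighbor; stop when all four neighbors have been visited."""
    pos = (0, 0)
    visited = {pos}
    steps = 0
    while True:
        x, y = pos
        options = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        options = [p for p in options if p not in visited]
        if not options:
            return steps  # trapped: surrounded by visited squares
        pos = random.choice(options)
        visited.add(pos)
        steps += 1

trials = [steps_until_stuck() for _ in range(10_000)]
print(sum(trials) / len(trials))  # averages to roughly 70 steps
```

Running this takes a few seconds rather than all night, and the average does come out around 70, matching the number Scott remembers.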
And I just spent years doing that kind of thing. But then, when I was 12 or 13, I met another kid at my school who was actually writing shareware games and putting them on America Online, where people could download them and play them.
And that guy became a lifelong best friend of mine. He's now a well-known computer security researcher named Alex Halderman. But maybe I didn't realize quite how extraordinary he was. I thought, wow, other kids are just such better programmers than I am. In particular, I saw that to actually write significant software involves documenting your code so other people can read it. It involves meeting deadlines. It involves getting your code to interoperate with everyone else's code. And I realized that other people were just much, much better at that stuff than I was, and if I had any comparative advantage, then maybe it was more on the mathematical side.
And so I think that's probably what drew me more into the theoretical part of computer science. When I was 13 or 14 or so, I would have been reading and learning about what Big O notation is, how you analyze algorithms. There could be an obvious way to solve something, but that's not the best way, right? You can be cleverer about how you organize information.
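As an illustration of the "obvious versus cleverer" distinction that Big O captures (my example, not one from the conversation): checking a list for duplicates by comparing every pair is O(n²), while organizing the information in a hash set makes it O(n):

```python
import random

def has_duplicate_obvious(xs):
    """The obvious way: compare every pair of elements, O(n^2)."""
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

def has_duplicate_clever(xs):
    """Organize the information better: a hash set gives O(n)."""
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False

xs = [random.randrange(100) for _ in range(50)]
# Both answer the same question; only the organization of the work differs.
print(has_duplicate_obvious(xs), has_duplicate_clever(xs))
```

For 50 items the difference is invisible; for a million items, the "obvious" version becomes hopeless while the "clever" one stays fast, which is exactly what asymptotic analysis predicts.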
And then I would have learned about the P versus NP problem. Actually, when I was 15, I went to a Canada/USA math camp in Seattle, which blew my mind at the time, because I went from being in high school to being directly taught by these amazing mathematicians. Suddenly I was struggling to keep up. I was, at best, somewhere in the middle. But in particular, Richard Karp was there.
Richard Karp was one of the founders of the theory of NP-completeness. And he gave a series of lectures about how to prove things NP-complete and about non-trivial efficient algorithms. That had a very big impact on me. And I probably had fantasies, when I was 15, that I was going to solve the P versus NP problem.
You know, it must be just some simple idea that everyone's been overlooking, right? I think it's important for someone learning a new subject to go through that stage, to be burned by it, as it were,
and emerge a little bit wiser.
Steve Hsu: Someday some kid will foolishly think he can prove P equals NP and actually do it, right?
Let me just jump in for a second, because I want to come back and talk more about computational complexity. But I just want to remark on that. I'm learning all kinds of fascinating stuff about you. So you were already thinking about computational complexity. You even knew about Big O, little o notation. And then you went to this math camp and met one of the leading people in the field. That was extremely fortuitous, right?
And so when you went to college, you went to Cornell. Is that right?
Scott Aaronson: Yes. Okay, so there's a weird story. When I was 13, I lived in Hong Kong for a year because my dad was working there, and I went to an American international school. I had always been ahead in math, or doing something special in math, and this school couldn't accommodate it except by letting me skip to ninth grade. After a lot of discussion, they finally gave me permission to do that. And that was kind of like a dam breaking. I realized, wait, you can do that? Because I had been very unhappy in junior high school.
I just felt like I didn't belong. And I felt like maybe if I could get to college, that would be an environment where I'd be happier. I think at that point my goal became just to get to college as quickly as I could. So we came back to the U.S. after that, and somehow I finagled my way into 10th grade, then 11th grade. I took AP calculus at that point, and then there was no more math to take. So my parents suggested, well, why don't I just do multivariable calculus and differential equations as a self-study. And the school vetoed that. They said I wasn't allowed to do that.
Scott Aaronson: And so then I seized on that as my excuse to get out of high school. I found out, at this point, about a program at Clarkson University in upstate New York, which was called the Clarkson School.
This is a program where high school seniors can live in Potsdam, New York at Clarkson University and take college courses for the year. I said, I would like to do this, and because of the math thing, I sort of had the pretext for being able to do it. So I went there and I had an amazing year. This was right after the summer of the math camp. And I found a professor at Clarkson named Chris Lynch who I was able to work with. At that time the web was fairly new, and I had an idea for how you could optimize the design of a website with a limited budget of links. So I wrote my first paper about it.
That was the year I spent at Clarkson. And from there you could apply to colleges as a freshman. I applied to all of the usual places and was rejected almost everywhere I applied.
I guess because my background was too bizarre at that point. But Cornell was nice enough to accept me, and so that is how I ended up going to Cornell.
Scott Aaronson: The year after that, when I was 16, there was one problem before I could start at Cornell, which was that they required a high school diploma, and I did not have one. Sometimes one's old high school is nice enough to give you a diploma after you go to the Clarkson School. But my high school said, no, I was missing physical ed. So I would have to spend the whole summer doing pushups, basically, if I wanted a high school diploma.
So instead of that, we said, well, what about getting a New York State GED? Okay, and it turned out you had to be 17 years old to get a GED, which I wasn't, but my mom spent hours on the phone with them and got them to agree to give me one. I actually had that GED hanging in my office when I was teaching at MIT for nine years.
Steve Hsu: Now, was the actual crucial event that you had to walk into some room at Cornell and show them your GED?
Scott Aaronson: No, no, no, I don't think it was that dramatic. It was just something that had to be mailed. I think things were just becoming electronic at that point in time.
We were still mailing things around.
Steve Hsu: But at this time you sort of knew you were interested in theoretical computer science. Is that right?
Scott Aaronson: Yeah, I was interested in theoretical computer science, but I was also very interested in AI. And actually, when I was at Cornell, it was really the AI people who I ended up working with more. Especially a professor at Cornell named Bart Selman, who's a leader in AI and in trying to solve NP-complete problems in practice, not just theoretically, in the empirical study of NP-complete problems. I became a sort of protege of his. He was also the advisor to Cornell's RoboCup team. RoboCup is a robot soccer league, and so I spent a couple of years working on AI programming for Cornell's robot soccer team.
In one sense that was very successful. We won the world championship two years running. But that was also, I think, an experience that clinched it for me that I should not be a software engineer.
Scott Aaronson: I guess just having to write all this code, meet all these deadlines, have all of these other people depend on me, it was just sort of nightmarish for me. They would have some idea for how the goalie could work, and I would be like, well, maybe we can prove that that's actually NP-complete.
Let us spend a few weeks thinking about that. And then maybe after a few weeks: hey, I can prove it! And they're like, yeah, we don't care, we're doing it in a completely different way now. Anyway, these were some of the experiences that pushed me into theory.
But there remained the question, certainly at that point, of whether I could do anything new in theoretical computer science. Because, as I said, I thought about things, but let's say by this point my ambitions had scaled back from solving the P versus NP problem.
But even to do a much more modest thing: again and again, I had the experience that you try to do something new, and then either you fail, or else you succeed and it turns out it was already known. So I really wasn't sure if I could do anything new in a field like that.
And then, when it came time to apply to graduate school, it was actually the AI people who took more interest in me than the theoretical computer science people. So I did much better with graduate school than with undergraduate admissions. I had a lot of great choices, and I ended up going to UC Berkeley for my PhD.
But it was actually a machine learning person, Michael Jordan, who recruited me there. And at the time I thought, well, maybe I should do that. Machine learning seemed like an up-and-coming field; maybe that's going somewhere. This would have been in 2000. I spent a year doing that, but at that point I think my heart was secretly in quantum computation. That was really the thing I was curious about. And maybe the leading theoretical computer scientist working in quantum computing, Umesh Vazirani, was there at Berkeley in the CS department. My dream was really to work with him, and it took a year for me to work my way into his group. But that's where I ended up. That was 21 years ago, and I guess I'm still here doing quantum computation.
Steve Hsu: Can I ask you a little bit about that? My familiarity with quantum computing is mainly from the theoretical physics side, so to me Vazirani is kind of a mysterious guy, because he's from this different community but has done important work. I was always curious: how did the theoretical computer science community first get interested in quantum computing? Because I think some aspects of quantum mechanics must be very novel or unusual to them. Or maybe they just thought, okay, once you reduce it to a set of unitary operations, it's something we're used to.
Scott Aaronson: Yeah, it's an excellent question. I think in the eighties, quantum computing was not yet a field. It was an idea that a few physicists had explored, famously Richard Feynman and David Deutsch. And they sort of realized that there was a theoretical computer science question here.
If you're trying to simulate a quantum mechanical system, is there any way to avoid the exponential explosion in the size of your state, in the number of amplitudes, that seems to be inherent when you try to represent a quantum state with a classical computer?
Scott Aaronson: But that was only one thing of many that they were interested in at that time. I wouldn't say that they really focused on it as the central issue. It was only in the early 1990s, I think, that a few computer scientists started getting interested.
I mean, Charlie Bennett was a very important figure here. He was trained in physics and chemistry, but he also did a lot of important work in computer science. Also Gilles Brassard, who was Bennett's friend and collaborator; they invented the BB84 quantum cryptography scheme together.
So there was this small community of people who were making these connections between physics and computation.
Scott Aaronson: So Umesh Vazirani once told me that basically, once he got tenure at Berkeley, he was just looking around for something new and different to do. He read Feynman's paper about quantum computation and decided that this was it.
So he and his then-student, Ethan Bernstein, wrote a now-seminal paper in 1993 called "Quantum Complexity Theory." This was the first paper that really systematically studied quantum computing from the standpoint of theoretical computer science.
Scott Aaronson: Okay, so what they did is they said, well, what is the right quantum mechanical generalization of the class P? P is polynomial time. It's the class of all the yes-or-no problems that we think of as efficiently solvable by a Turing machine, by a conventional computer.
So for example: given a graph, is it connected? Is every point reachable from every other? Given an integer written in binary, is it prime? These are examples of problems in P. So Bernstein and Vazirani defined a class that they called BQP, bounded-error quantum polynomial time, which was like a generalization of P where now you're allowed to do unitary transformations. You have qubits instead of bits. And at the end, a measurement gets made of the quantum state, and this has to lead to your getting the right answer to your decision problem with high probability.
Okay. And again, we impose a requirement of efficiency: the number of elementary operations, quantum gates we would call them, has to be bounded by some polynomial function of the size of the problem that you're trying to solve. So, for the first time, they systematically studied: what is this BQP? Is it at least as powerful as classical P? They verified that yes, it is. Could it be just infinitely powerful? Could quantum computers let you solve the halting problem, or uncomputable problems? Well, no, it doesn't go that far. Anything that's in quantum polynomial time is also in classical exponential time.
Okay. This is basically because, with a classical computer, you could always simulate quantum mechanics by just writing down the entire wave function, the entire quantum state. But then they could show something better than that: anything you can do in quantum polynomial time can also be done by a classical computer with polynomial space, with a polynomial amount of memory.
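The brute-force simulation Scott describes, writing down the entire wave function, can be sketched in a few lines: an n-qubit state is a vector of 2^n amplitudes, and applying a gate is a small matrix contraction against that vector. This toy illustration (my own, not a real simulator) makes the exponential cost visible: the state vector doubles in length with each added qubit.

```python
import numpy as np

def apply_gate(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector.
    The work is linear in the vector's length, i.e. exponential in n."""
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)  # restore the qubit ordering
    return state.reshape(-1)

n = 3
state = np.zeros(2 ** n)
state[0] = 1.0                                # start in |000>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
for q in range(n):
    state = apply_gate(state, H, q, n)

probs = np.abs(state) ** 2  # Born rule: probability = |amplitude|^2
print(probs)                # each of the 8 outcomes has probability 1/8
```

With 3 qubits the vector has 8 entries; with 50 qubits it would have about 10^15, which is exactly the explosion Feynman and Deutsch worried about.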
And for a physicist, I would say the reason for that is basically the Feynman path integral. You can take the probability of the outcome that you care about and represent it as a sum of exponentially many contributions, which can be positive or negative. And then you can just evaluate that sum, needing exponential time, but reusing the same memory.
And that already told you something very important. It said that, in the present state of computer science, there is basically no hope of proving that BQP is larger than P. So you ask the obvious question: can a quantum computer do something efficiently that a classical computer cannot?
But this said: well, sorry, if you want to prove that as a theorem, then you would also have to prove that P is not equal to PSPACE, because BQP is sandwiched in between P and PSPACE. And to prove P is not equal to PSPACE is considered essentially as hard as proving P is not equal to NP.
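In standard complexity-class notation, the sandwich Scott describes is the chain of known containments

```latex
\mathsf{P} \subseteq \mathsf{BQP} \subseteq \mathsf{PSPACE} \subseteq \mathsf{EXP}
```

so a proof that the first containment is strict would immediately separate P from PSPACE, which is why no such proof is expected soon.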
This is one of the great unsolved problems, right? So quantum computing people have a good excuse for failing to solve it: it's not our fault. But the next question they looked at was, well, can we at least give some evidence that BQP is larger than P?
And they gave what is known in the trade as oracle evidence, or black-box evidence. They said: imagine we had a certain black box that we can make queries to. We can ask it questions, but we don't know its inner workings. Then we can design a black box that would cause BQP to be larger than P.
So this doesn't answer the question in the real world, as it were, but it says that there is some hypothetical black box that would make quantum computing more powerful. And it says: if you can just find a real problem that instantiates that black box, that gives rise to the same sort of behavior, then that would give you an example where quantum computing really was more powerful than classical computing.
And so the Bernstein-Vazirani paper set off a chain reaction of events over the next year or two. What happened was, first, someone named Dan Simon read their paper and said, well, this doesn't sound right. I don't think quantum computers should really get such an interesting advantage. He tried to prove that the Bernstein-Vazirani separation was limited, and ended up proving the opposite: that actually a much better separation is possible.
Not just more than polynomial, but actually exponential, for another black-box problem, which came to be called Simon's problem. So Simon wrote a paper about that, submitted it to one of the premier conferences of theoretical computer science, and it was rejected.
Scott Aaronson: People just didn't know what to make of it. Or they said, maybe it involves this fanciful black box; come back to us when you can say something about the real world. But there was one person on the program committee of that conference who said, no, I actually think this paper is interesting.
That person was Peter Shor. And what Peter Shor saw was that Simon's quantum algorithm was, in effect, identifying a hidden periodic structure in a function. You're given a black box that computes a function with this sort of hidden global behavior, where a classical algorithm would need an exponential number of accesses to the black box before it figured out the pattern, this global symmetry that the function has. But Simon showed that if you could evaluate all possible inputs to the function in quantum superposition, then you can arrange a pattern of interference that reveals the answer in only a polynomial amount of time.
And then Shor said: RSA and Diffie-Hellman, the forms of encryption that we protect internet commerce with (and internet commerce itself was just getting started at that time, this was 1994), are also based on functions that have hidden periodic structures.
What if you could change Simon's problem around a little bit in order to find the periods of those functions, of integer functions? Wouldn't that then let you factor numbers efficiently, and calculate discrete logarithms efficiently, and things like that?
That turned out to be technically much more involved than what Simon had done, and Shor had to work on it for a year. But he eventually found that it was possible, and that's what we now know as Shor's algorithm, from 1994.
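The reduction Shor used can be illustrated by separating the classical skeleton from the quantum step. In this sketch (toy code for tiny N, not a faithful rendering of the algorithm), the period of a^x mod N is found by brute force, which is exactly the exponentially expensive part that Shor's quantum algorithm replaces with interference-based period finding:

```python
from math import gcd
import random

def find_period(a, N):
    """Find the order r of a mod N, i.e. the smallest r with a^r = 1 (mod N).
    Brute force here; this is the step a quantum computer does efficiently."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_period(N):
    """Shor's classical reduction: a period of a^x mod N yields a factor of N."""
    while True:
        a = random.randrange(2, N)
        g = gcd(a, N)
        if g > 1:
            return g  # lucky guess: a already shares a factor with N
        r = find_period(a, N)
        if r % 2 == 0:
            y = pow(a, r // 2, N)
            if y != N - 1:  # skip the trivial case a^(r/2) = -1 (mod N)
                f = gcd(y - 1, N)
                if 1 < f < N:
                    return f

print(factor_via_period(15))  # prints 3 or 5
```

With the quantum period-finding subroutine swapped in, this whole procedure runs in polynomial time, which is the content of Shor's result.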
Now, Shor was a mathematician, but one who had also studied physics. And I should say that Vazirani was a computer scientist who had also studied physics. So they had the right combination of training to be able to make these connections. And once Shor announced the factoring algorithm, that was when the floodgates opened. I think people realized that quantum computing was going to be a thing; it was not going away. And immediately they started trying to figure out two more things, which, 28 years later, are still at the center of our interests. The first was: what else is a quantum computer good for? Suppose that you have one, and it works perfectly. What else can it do besides factoring numbers and these few other very special problems? What about NP-complete problems? How much of an advantage does a quantum computer give you there?
Even if P is not equal to NP, is it possible that quantum mechanics makes a difference there? Okay. We're still stuck. We're still studying that question decades later.
And then the second, maybe even more obvious question is: how do you actually build a quantum computer? How do you take this theory and turn it into engineering? That led to another great development in the 1990s, the theory of quantum error correction and quantum fault tolerance, which said how you can make this work not with ideal, perfect qubits, but even with noisy qubits.
Which is what you have in the real world. People had amazing theoretical ideas about that in the nineties, including Peter Shor himself, by the way. And decades later, they are finally starting to turn those ideas into reality just about now.
So all of this was happening in 1994, '95, when I would have been 13 or 14 years old.
Steve Hsu: Yeah.
Scott Aaronson: I was not aware of it at the time. But then around '95, maybe a year or two after it had all happened, I read a popular article about it, about Shor's algorithm and so forth. And my first reaction was that this sounds like obvious garbage. This sounds like some physicists who just stumbled into computer science and think they can solve exponentially hard problems, who just do not understand the enormity of what they are up against.
Scott Aaronson: And I should say this skeptical reaction was shared by many computer scientists. There are some who still have that reaction, who haven't gotten over it.
And it's an attitude that actually has a lot of support in the history of computing. Again and again, people have tried to design architectures, analog computing, for example, that would get around the limitations of the Turing machine. And again and again they failed, because they were always ignoring some resource that would actually blow up exponentially, such as the energy you would need, or the precision you would need in measurement. So a natural first guess would be that quantum computing is just another example of the same thing, where people are ignoring some resource that will blow up and become exponential when you try to actually build one of these quantum computers.
But then I sort of had no choice but to learn more about it. I didn't think of myself as a physicist at all. I'm a math and computer science person, but physics was now impinging on our world, making this enormous claim about the world of P and NP that I had to see was true. So when I was at Cornell, I was lucky enough to be able to do a couple of summer internships at Bell Labs.
Steve Hsu: Did you meet Grover when you were at Bell Labs?
Scott Aaronson: I did.
I did. So first I worked with someone named Eric Gross, just on some statistical software that had nothing to do with quantum computing. But I got really curious, just reading about how Shor's and Grover's algorithms work, and there were already webpages about it.
The web itself was only five or six years old. I read about quantum algorithms and I was blown away by a couple of things. First of all, I could actually understand this. I would have imagined that I'd have to study physics for years and years before I could even begin to understand what they were talking about. But when you actually go and read the papers, oh, your state is just this unit vector of complex numbers.
The way that you act on it is by unitary matrices, just like you were saying before, Steve. It's kind of like a classical probabilistic computer, except that instead of probabilities, which are real numbers between zero and one, now we have these complex numbers, which are called amplitudes.
So, okay, that's very interesting. I think even if our universe had been classical, someday some computer scientist could have defined BQP, just for purely theoretical interest, and it would have been an interesting thing to study. Of course, the fact that our universe really does run on these amplitudes heightens the interest, to put it mildly.
But this was really just linear algebra, and I knew linear algebra. At that point I didn't know what a boson was, what a fermion was, what a Hamiltonian was, what a creation operator was. And I didn't need to know any of that. I could learn all the details of how the quantum algorithms worked just by learning these abstract rules of unitary evolution and measurement.
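The abstract rules really are that small. For instance (a toy example of my own, not from the conversation), two applications of the Hadamard gate show the key way amplitudes differ from probabilities: they can cancel.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate, a unitary matrix
zero = np.array([1.0, 0.0])                   # the state |0> as a unit vector

once = H @ zero       # amplitudes (0.707, 0.707): measuring gives 0 or 1, each with probability 1/2
twice = H @ H @ zero  # the two paths to |1> carry opposite signs and cancel: back to exactly |0>

print(np.abs(once) ** 2)   # [0.5 0.5]
print(np.abs(twice) ** 2)  # [1. 0.]
```

A classical coin flipped twice stays random; the quantum "coin flip" applied twice is deterministic, because the negative amplitude interferes destructively. That interference is the resource every quantum algorithm exploits.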
It's very similar to how you could learn all about programming and become a virtuoso computer scientist without needing to know how a transistor works, or, in principle, even that there is such a thing as a transistor.
It's all been abstracted away for you. And I saw that something similar was true in quantum computing. So I learned about it, and my boss Eric was kind enough to say, well, why don't you just spend a few weeks learning this subject. And so I did, and I even managed to prove some new results, which I was very excited about, but then it turned out those results were already known.
But it wasn't like I was re-proving stuff that was 50 years old; I was re-proving stuff that was one year old. Someone had beaten me to it by one year. And then I found out that Lov Grover worked in the same building at Bell Labs as I did.
Now, Grover was the discoverer, just a couple of years before, of Grover's algorithm, which is maybe the second most famous quantum algorithm after Shor's factoring algorithm.
Compared to Shor's algorithm, Grover's algorithm has an enormously greater range of application. It works for NP-complete problems. It works for pretty much any combinatorial search or optimization problem you could think of. And it's been generalized to other things: playing games like chess or Go, Monte Carlo estimation, machine learning problems.
But the disadvantage is that the speedup from Grover's algorithm is not exponential; it is, quote-unquote, only quadratic. So where a classical computer would need N time, Grover's algorithm needs about the square root of N time.
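The quadratic speedup can be seen in a small state-vector simulation of Grover's algorithm (a toy sketch, not production code): with N items and one marked item, roughly (π/4)·√N repetitions of "phase-flip the marked item, then invert about the mean" concentrate almost all the probability on the answer.

```python
import numpy as np

def grover(n_qubits, marked):
    """Simulate Grover search over N = 2^n_qubits items with one marked item."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))            # uniform superposition
    iterations = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1                        # oracle: phase-flip the marked item
        state = 2 * state.mean() - state           # diffusion: invert about the mean
    return np.abs(state) ** 2                      # measurement probabilities

probs = grover(8, marked=42)  # N = 256 items, about 13 iterations
print(probs[42])              # close to 1: found with ~sqrt(N) queries, not N
```

A classical search over 256 items needs on the order of 256 looks at the oracle; here 13 oracle calls suffice, which is the √N scaling Scott describes.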
Steve Hsu: Can I ask you about that? So, in the case of Grover, we know very well that N goes to the square root of N, speed up. In the case of discrete logs and factoring, do we really actually know what the optimal classical performance would be on those problems? Because,
Scott Aaronson: Well, absolutely not.
This comes right back to these central mysteries of theoretical computer science that we were talking about before, like the P versus NP problem, right? If P were equal to NP, then among many other things, that would mean that there was a fast classical algorithm for factoring numbers. And it would mean that Shor's algorithm does not achieve an exponential speedup.
Steve Hsu: Sorry to interrupt, but even if P is not NP, isn't it true that the exact polynomial scaling of the difficulty of factoring is not really known? Is that true?
Scott Aaronson: That is also true. Yeah. So factoring seems to inhabit a twilight zone between P and the NP-complete problems. Right? We do not think that factoring is NP-complete. We have excellent theoretical reasons why it shouldn't be. But on the other hand, we also don't have a fast classical algorithm for it.
Okay. So it seems to have a lot of special structure and that special structure was enough for Shor to design a quantum algorithm to solve it. What we don't know is whether it's also enough to design a fast classical algorithm, right?
You know, now there are classical algorithms for factoring that do better than pure brute force. Okay. The most famous of these is called the number field sieve. It lets you factor an N-digit number in time that, instead of being exponential in N, is roughly exponential in the cube root of N.
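As a back-of-envelope illustration of that scaling, here is a sketch comparing brute-force trial division with the number field sieve's standard heuristic cost formula; these are asymptotic operation-count estimates, not benchmarks:

```python
import math

def trial_division_ops(digits: int) -> float:
    # Brute force: try divisors up to sqrt(N), i.e. about 10^(digits/2) of them.
    return 10.0 ** (digits / 2)

def nfs_ops(digits: int) -> float:
    # Heuristic number field sieve cost:
    # exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3)),
    # i.e. exponential in roughly the cube root of the number of digits.
    ln_n = digits * math.log(10)
    c = (64 / 9) ** (1 / 3)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

for d in (100, 200, 400):
    print(d, f"{trial_division_ops(d):.1e}", f"{nfs_ops(d):.1e}")
```

Doubling the number of digits squares the brute-force cost but increases the sieve's cost far more gently, which is why the sieve is what serious attacks on RSA-style cryptography use.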
There are also related methods using elliptic curves. But algorithms like these are presumably what the NSA or GCHQ, right, when they are attacking people's cryptography, are using. And there's some evidence, even from the Edward Snowden documents in 2014, that at least at the time, they were doing something like that.
But you've put your finger on one of the central difficulties of this field, which is that at present, we don't know how to prove inherent limitations of algorithms in most of the cases that we care about.
So we can say there is an algorithm to do such and such, but is it optimal? We don't know. People still wonder, well, how great a speedup can quantum computers give for solving NP-complete problems? Now, what we know is that there's this thing that I was talking about before called the black box setting. The black box setting is where you sort of abstract away all the details of the problem.
So you could say, imagine that we just have this abstract function that lets us search a haystack. You tell it where you're searching, and then it tells you whether you have found the needle in the haystack or not. And in that setting, we know that Grover's algorithm is the best that even a quantum computer can do.
Okay. And that's a very important theorem that was proven by Bennett, Bernstein, Brassard, and Vazirani in 1994. It's called the BBBV theorem. And, ironically, this was proven before Grover's algorithm was even discovered. Okay. So Grover's algorithm was proved to be optimal in the black box setting before it was even discovered to exist.
And I think at the time they said, well, we can only prove that the square root of N steps are needed, and presumably that's just some technical defect in the proof. And then Grover came along a year later and said, no, it's not a technical defect. That's because there is, as we would say, a matching upper bound. Right? A quantum computer really can solve this abstract search problem in square root of N time.
Now the trouble is that it still doesn't answer what you might really want to know in practice. Let's say you have the traveling salesman problem, right? Or you have to satisfy a bunch of constraints for some model checking, for some microchip design, or some machine learning problem. Or you're trying to break some cryptographic code, or you're trying to mine new Bitcoins, or something like that. Right?
These are all problems that are not just purely abstract search problems, right? They have more structure to them. And so then the question remains, could a quantum computer exploit that structure to do better?
This, we don't know. In formal terms, this is the question of whether NP, which is the class of all the problems where classical computers can at least verify the answer efficiently when given it, is contained in BQP. Can NP-complete problems be solved by a quantum computer in polynomial time?
Scott Aaronson: So then, maybe not surprisingly, we don't know the answer to that, because we don't even know whether NP is contained in P. Right?
But we know that if there is a fast quantum algorithm for NP complete problems, that it has to look very, very different from any of the quantum algorithms that we currently know. It has to look very different from Grover's algorithm, which only gives you this square root speed up. It also has to look very different from Shor's algorithm, which requires this very special structure, this periodicity structure, which the NP complete problems don't seem to have.
So there's now been almost 25 years of efforts to work on that problem, often by designing heuristics. So people have studied annealing processes, right? There's something called the adiabatic algorithm. And these are kind of like, you take classical heuristic methods, like simulated annealing, which are not guaranteed to work. They can take exponential time.
But often enough, they give you a good solution in practice, and you can sort of make them quantum. You can try to take advantage also of quantum tunneling effects to get out of local optima and get to the global optimum. Okay. And this is exactly what people study with this quantum adiabatic algorithm.
Okay. But after more than 20 years of research on it, we still don't really know: is there some subset of optimization problems that matters in practice where this will give you an exponential speedup over classical algorithms? In this black box scenario, we now know, as of the last year or two, thanks to a breakthrough by Matt Hastings, and then subsequent work by Umesh Vazirani and András Gilyén, that you can construct solution landscapes where any classical algorithm needs exponential time, but this sort of quantum annealing approach will reach a global optimum in polynomial time.
So that's possible. But then, are these landscapes ever relevant? Do they ever arise in any practical optimization problem? That we still don't know. Okay. So there's plenty on our plates, even just on the theoretical side of quantum computing, even today, right? And this is one of the examples.
Steve Hsu: I think that was a great introduction to an area for people like me, who come from the physics side. You read some textbook on quantum computing, and all the computational complexity stuff is in some kind of overview chapter at the beginning.
Scott Aaronson: Yeah, right, right. I mean, you know, exactly right. It kind of gets
Steve Hsu: Well, you see some Venn diagrams with P, NP, BQP and these...
Scott Aaronson: It all gets shunted off to the side. Yeah. Steve, you've got to look at the Bernstein-Vazirani paper.
Steve Hsu: I think I did look at it in the nineties without, without understanding it at all.
It does everything in, from today's standpoint, a ridiculously unwieldy way. Right? Like, we know we can say everything much more simply today. So maybe look at my lecture notes instead, you know.
So, anyway, I was about to say, I ended up working... I mean, I met Grover. I told him my ideas for Grover's algorithm and how to improve it, which were complete nonsense. Right, they just didn't work at all. But Grover was kind enough to offer me an internship with him the following summer at Bell Labs. And so I did that, and I met some of Vazirani's students from Berkeley, who were also working with Grover. So I met them, and I learned what was actually the state of the art in quantum algorithms research.
Scott Aaronson: And that's when I sort of formed the intention: I have to go to Berkeley. I have to learn whatever it is that these people know. And so that was really how I got started in quantum computing research, that summer at Bell Labs.
Steve Hsu: After all that, it's kind of amazing that you spent that year with Michael Jordan in ML.
Scott Aaronson: Yeah. Well, I had an idea at the time that maybe I'm going to combine quantum computing with machine learning, and, you know, no one's talking about that.
Yeah. At the time, no one was talking about it, so I needn't have worried, right? If I had just given it another 10 or 15 years, it would become the most hyped thing in the history of hype. Which is what it is now.
Steve Hsu: Yeah, It's like hype squared. And then if you can add crypto into it then you're done.
Scott Aaronson: Exactly, exactly. But I was thinking about that even in 2000. But then, you know, I said, well, yeah, I can invent quantum versions of all these machine learning concepts, like graphical models and so on, but then what do you do next? Right? Why is this useful? How do you show that there is an advantage compared to the classical versions of these things? And since I couldn't answer those questions, I sort of set it to the side. If I had only known, from today's perspective, that none of that is any barrier at all. You just write hundreds of papers about it anyway.
Steve Hsu: Yeah.
But I'm a little worried that we might've lost some of the listeners who are not so theoretical in their orientation. So maybe we can switch to a more concrete topic, or at least one that's in the news, which is quantum supremacy. Maybe we can explain, because I'm sure people see these headlines all the time on the internet: oh, they've got this quantum computer now, which is better than any classical computer we can possibly imagine.
Maybe you can just say what you think about quantum supremacy and what's a useful way to think about it.
Scott Aaronson: Yeah, sure. So the term quantum supremacy was coined by John Preskill a decade ago, although it referred to something that some of us had been thinking about for several years by that time. And it's basically just: how do you use a quantum computer to do something that we're as confident as possible is hard to simulate with a classical computer?
And now, it is crucial that I did not say something useful. Okay. It could be some completely artificial benchmark. Our goal is just to sort of prove the quantum computing skeptics wrong. Kind of like the Wright brothers' airplane was not useful for commercial air travel, right? It existed just to prove a point.
So around 2010, 2011, some of us started thinking about, well, how do you use a quantum computer which might not be ideal? Not quite what we want. So it doesn't have thousands of perfect qubits. Maybe it only has 100 noisy qubits. But can we still use it to do something that would demonstrate the reality of exponential speedups by means of quantum computation?
We realized that there are a lot of advantages to switching attention from problems like factoring that have just a single right answer, to what are called sampling problems.
A sampling problem says: I'm going to describe to you some probability distribution over some huge number of possible outputs. And now you just have to sample a random output, consistently with that distribution. So the pattern of which outputs are likely or unlikely should match this distribution.
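As a toy picture of what a sampling problem asks for (the target distribution here is made up purely for illustration):

```python
import random

# The task: output strings whose *frequencies* match a specified distribution,
# rather than produce one single right answer.
target = {"00": 0.5, "01": 0.1, "10": 0.1, "11": 0.3}

def sample() -> str:
    return random.choices(list(target), weights=list(target.values()))[0]

trials = 100_000
counts = {s: 0 for s in target}
for _ in range(trials):
    counts[sample()] += 1
print({s: round(counts[s] / trials, 3) for s in target})  # close to target
```

The quantum version is the same idea, except the target distribution is defined by a quantum circuit's amplitudes and is believed hard for any classical sampler to match.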
So around 2011, my then student Alex Arkhipov and I, came up with one of the first proposals for how to do this which we called boson sampling. At this point I did have to learn a little bit more physics. I did have to learn at least what bosons were.
Although at that time, it was still for a sort of theoretical computer science reason: we wanted a quantum computer to sample from some distribution in which the probabilities were given by functions that are very, very hard for a classical computer to keep track of.
And we focused on a particular hard function, which is called the permanent of a matrix. A lot is known about the permanent from the standpoint of computational complexity: it is something called #P-complete, which we think is even harder than NP-complete.
And so we thought that it had the right properties for creating one of these hard sampling problems. But then we needed to know what kind of quantum computation would give rise to permanents as the amplitudes. And the answer turned out to be a quantum computation based on bosons.
Based on generating a bunch of identical bosons, such as photons, for example, and then sending them through a network of beam splitters. Then you would have to sum over the possible ways of permuting these identical particles around. Each way of permuting them would make another contribution to the amplitude, and you'd get a permanent.
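The permanent itself is easy to define but expensive to compute. Here is a sketch using Ryser's inclusion-exclusion formula, whose exponential running time reflects the #P-hardness mentioned above:

```python
from itertools import combinations

def permanent(m: list[list[float]]) -> float:
    """Permanent of an n x n matrix: like the determinant, but with no minus
    signs on the permutations. Ryser's formula, O(2^n * n^2) time."""
    n = len(m)
    total = 0.0
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):  # nonempty subsets of columns
            prod = 1.0
            for row in m:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - size) * prod
    return total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
print(permanent([[1, 1, 1]] * 3))   # all-ones 3x3 matrix: 3! = 6
```

Unlike the determinant, which Gaussian elimination handles in polynomial time, no polynomial-time algorithm for the permanent is known, and finding one would collapse complexity classes.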
I had known that that connection existed for a long time, but I never really knew what to do with it. And so we decided, well, could you use this to create some very, very rudimentary quantum computation, for which we could nevertheless give evidence that it is hard to simulate with a classical computer?
So that's what we did. We published about that in 2011. And I should say, other people, like Bremner, Jozsa, and Shepherd, came to very similar ideas, but not from bosons, from different starting points. And then the experimentalists around that time got all excited, right? Because they said, well, guess what, these bosons are not just a theoretical construct. We have photon sources in our labs. We do stuff like this all the time. We send photons through networks of beam splitters. We send them into photodetectors and we measure where they landed. Maybe we could actually do this.
Scott Aaronson: And that was really a new thing in my career, because I had been about as far as you could get on the theoretical end of quantum computing research for a decade. Occasionally I would be let into a lab, if I promised not to touch anything. But it's like I didn't know anything. They could have told me that their coffee machine was the dilution fridge, and I probably would have believed them.
Okay. But once the quantum optics people started getting interested in boson sampling, then we realized, wow, we're actually going to be collaborating with experimentalists now to try to make this a reality.
And so then Preskill wrote this paper where he gave the name quantum supremacy to this quest to demonstrate a quantum speedup for something.
You know, it doesn't have to be something useful. And then there was a backlash; not everyone liked that name, you know. And so today, people are calling it quantum advantage.
Steve Hsu: It's not politically correct. Supremacy.
Scott Aaronson: Exactly. But
Steve Hsu: Can I just ask, though: what is the actual definition? Do you allow the classical computer to work for the age of the universe? Or at what point do you have to stop?
Scott Aaronson: To my mind, the definition is, you want a well-defined task, so you can specify in advance what the inputs and outputs are. And so I want to rule out you saying, simulate this system, right? That's not a precise enough definition of a task. I want a mathematical specification of what is to be done, such that it would make sense to say: did the quantum computer fail to do that task? No machine ever fails at the task of simulating itself. But a device could fail at the task of boson sampling from a certain distribution.
So now, what we say is that the quantum computer should be able to do this task in a reasonable amount of time, we should be able to verify that it really did do the task, and with a classical computer, it should take us orders of magnitude longer to do the same task. And it has to be a competitive process, right? The classical skeptics need to know all the specs of what they would have to do to match this. And they have to have worked on it for at least a year or two, put real effort into it, and just not been able to come within several orders of magnitude of matching the performance.
So there are still many details that you can quibble with, and people have actually been quibbling about them over the last few years. Right? What exactly counts as quantum supremacy? How exactly do you measure resources? What if a quantum computer can do something, and then you can match its performance using an enormous classical network of thousands of cores?
Do you say the quantum advantage remains? You would like to say, maybe yes, it does. You could say, even when there's not a quantum advantage in time, maybe there's a quantum advantage in carbon footprint, you know?
You're using massively more energy to run this classical simulation than you are to run the quantum computer itself. Even though the quantum computer needs a dilution refrigerator, so it's not exactly an energy-friendly thing itself.
Now, you could say, ideally what you would want is a much, much larger separation. You'd like something that a quantum computer can do in a few minutes that would take the largest classical computers on Earth millions of years or billions of years. And we hope that eventually Shor's algorithm will get us to the point of doing that kind of demonstration. But with the existing quantum supremacy experiments, there's a difficulty which fundamentally prevents us from showing a separation that's that large. And the difficulty is, well, we still need a classical computer to verify the answer.
We need to check that the quantum computer did what it's supposed to do. And with boson sampling, or with that style of quantum supremacy experiment, no one has discovered how to verify the answer with a classical computer except by doing this exponentially hard computation.
Okay. So what that means is that you can do a demonstration with 50 qubits or 60 qubits. And that's exactly what Google did a couple of years ago, and what a group at USTC in China did just within the last year.
And then you can actually verify the results using, let's say, a classical supercomputer, right? And it might take a week or a month, but you can do it. But this inherently doesn't scale, even to a hundred qubits. Because two to the 50th power is within what you can do with the biggest classical computers on Earth. Two to the hundredth power is not.
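A quick back-of-envelope calculation of that cutoff, assuming a hypothetical machine doing 10^18 operations per second and just one operation per amplitude:

```python
SECONDS_PER_YEAR = 3.15e7
OPS_PER_SECOND = 1e18  # roughly exascale; a hypothetical round number

def brute_force_years(n_qubits: int) -> float:
    """Years just to touch each of the 2^n amplitudes once."""
    return 2.0 ** n_qubits / OPS_PER_SECOND / SECONDS_PER_YEAR

for n in (50, 60, 100):
    print(n, f"{brute_force_years(n):.1e}")
# 50 qubits is around a millisecond of work; 100 qubits is
# tens of thousands of years, before counting any real overhead.
```

Real simulations cost far more than one operation per amplitude, so these are generous lower bounds on the classical side.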
So really new ideas are needed, right? I mean, the wonderful thing about factoring, the problem solved by Shor's algorithm, is that it is easy to check the answer. Once some machine has found the prime factors of some thousand-digit number, well, if you don't believe it, you just multiply those factors yourself. You just check.
The trouble is that implementing Shor's algorithm seems to require an error-corrected quantum computer, something that we don't yet have. And to do this error correction, you seem to need thousands of physical qubits to encode just a single logical qubit.
Okay. So at that point you would be talking about millions or hundreds of millions of physical qubits and those of a higher quality than what we have today.
There are now lots of companies that are sort of racing to try to achieve that within the coming decades. And maybe they will. But that's not something that we have yet.
Okay. And so, with quantum supremacy, what got people excited was that maybe you can demonstrate a quantum speedup even with a noisy device that is not error corrected. That's exactly what Google and USTC have done, or claim to do, or tried to do, within the last couple of years.
And it looks like a pretty clear quantum advantage remains in these experiments, but it is not as decisive as we would like. Mainly because of this reason that I mentioned, the difficulty of classical verification, right. Which sort of puts a limit on how far you can scale this stuff.
So I think one of the great challenges for just the next five years is: can we figure out a way to demonstrate quantum supremacy on a near-term quantum computer, meaning noisy quantum computers with little or no error correction, which is the kind of stuff that we've got now, but where a classical computer can easily recognize a right answer? Where you can easily verify it classically?
And now for bonus points, you could try to do that also for a problem that is actually useful, right. That someone actually independently cares about. Okay. But I'm not even going to demand that part.
Steve Hsu: So just to make sure I understand: the verification is a problem because it can be extremely hard for the classical computer. But in deciding that on this particular problem there's a well-posed quantum advantage, do you have a rigorous limit on the best classical algorithm for generating the distribution in the first place?
Scott Aaronson: Ah, okay. So, that's another very good question. So yet again, it's going to depend on some unproven conjectures. Almost every hardness statement that you could ever care about in theoretical computer science depends on some unproven conjecture. I hate to break this to you.
Steve Hsu: But is it a conjecture that people are as confident in as, let's say, P not equal to NP?
Scott Aaronson: That's an excellent question. No. What got us excited about boson sampling a decade ago was precisely that we were able to prove a theorem that says: if you had a fast classical algorithm that would sample from exactly the same distribution that an ideal, noiseless version of this experiment would sample from, then that would have really dramatic consequences for classical complexity classes. It wouldn't quite mean P equals NP, but it would mean that the polynomial hierarchy would collapse, which is something that complexity theorists tend to think would be almost as shocking as P equaling NP.
Steve Hsu: Oh,
Scott Aaronson: We were able to relate the hardness of boson sampling for a classical computer to the non-collapse of the polynomial hierarchy. And Bremner, Jozsa, and Shepherd reached similar conclusions about their model. Okay. And since then, that got people excited.
Now, unfortunately, it still doesn't quite answer the physically relevant question, because the problem is that any real experiment has lots of noise in it.
In fact, with these real experiments, when you make the measurement, you see about 99.8% noise. And you're just trying to extract the tiny 0.2% of signal by repeating the sampling process several million times.
That's what Google did, that's what USTC did.
Scott Aaronson: And so then that forces us to ask a new question, which is, well, what about the noisy version of the experiment? Is that also hard for a classical computer to simulate? And that, we have not been able to relate to the non-collapse of the polynomial hierarchy.
That remains one of the great theoretical open questions in this area. Arkhipov and I worked on it for half a year. We gave some evidence that even the noisy version of boson sampling, we thought, should be classically hard.
Okay. But it remains an open question to give evidence for the hardness of the noisy version that is at the same level as what we know for the exact case of the problem.
And this is actually a very common pattern in theoretical computer science. So, for example, in cryptography, often people will design some cryptographic code, and they will prove a theorem that says, well, if you could break this protocol, this system, then you would be able to invert a one-way function, or have done something that would seem almost as dramatic as P equaling NP.
And this is kind of the gold standard in our present state of knowledge, right? This is the kind of reduction argument that you want. I think theoretical computer science, to a large extent, is about passing the blame to someone else. Saying, if my cryptographic protocol was broken, well, then it wasn't my fault. It's because these one-way functions were actually not one-way functions, or something like that. Okay.
But then the trouble is, when things actually get implemented in real life, people always take shortcuts. They always do things in the implementation that cut corners and make it not exactly match the theoretical specification.
And then a lot of practical security research is just all about that gap. It's not about breaking down the front door, breaking the underlying cryptographic code. It's about finding some side doors, where the implementation doesn't match the spec.
And so we've seen exactly that same kind of dynamic play out with the quantum supremacy experiments, right? Where, in order to implement these experiments over the last few years, people have cut corners, as it were. So they said, well, we don't have initial states that are just single photons, the way Aaronson and Arkhipov wanted.
We have what are called Gaussian states instead. Some percentage of the photons get lost on the way through the beam splitter network, and not all of the photons are detected at the other end. Okay. But we think maybe it's still okay. Or at least, we still can't think of a fast way to simulate this with a classical computer. Okay.
But then someone else comes along and says, aha, by using these imperfections in the real experiment, we can design a much faster way to simulate this with a classical computer.
And then the experimentalists go back and say, okay, but we can improve the experiment in order to rule out your attack. And then some other classical people come along and say, ah, we have a different attack. So that is basically the dynamic that we've been seeing over the last couple of years.
Steve Hsu: Yeah, I worry. A related concern I've always had about error correction is that I think they use a fairly benign model of the noise that they're fighting. I think they assume it's pretty much uncorrelated noise that they have to beat down with their error correction, and it could certainly be, in a particular experimental setup, that the situation is much worse than that.
Scott Aaronson: Yeah, that's absolutely true. And that was recognized as an issue, I would say, from the very beginning. Once people proved these theorems in the nineties about the possibility of quantum error correction, quantum fault tolerance, the skeptics of quantum computing knew that if they wanted to say that this is fundamentally impossible, as opposed to merely hard, then either they're going to be denying quantum mechanics itself, or else they're going to be denying one or more of the assumptions that go into this quantum fault tolerance.
So many of them have been doing the latter. They've been saying, well, what if the noise is correlated? What if it's sort of conspiratorially designed to kill your quantum computation? And I think that that's a very important direction for research. But we've learned several things since then.
You know, first of all, we've learned that even if there are correlations in the noise, as long as the correlations are small enough, like they fall off with distance, for example, then error correction should still work.
But the other thing is that if you could really just design the noise with any correlations you want, and with complete advance knowledge of which quantum computation is going to be applied, then you're basically a demon, a quantum-computer-killing demon. And at that point, such a demon could also kill classical computation.
And yet our classical computers work. Right? So we seem to have some evidence that nature is not malicious in that way. Gil Kalai, the mathematician and one of the most prominent skeptics of quantum computing, keeps putting forward models of noise that strike me as a conspiracy theory on the part of nature.
Steve Hsu: Yeah. Usually, physicists are willing to assume that nature is not against itself.
Scott Aaronson: Yes, exactly. Gil just sort of starts from the assumption that nature has to be killing quantum computing, and then he works backwards: well, what could be true about the noise that would cause that to be true? But to me, the fact that someone as smart as Gil has not gotten any further, and has had such limited success in showing how any natural noise model would do that, is actually evidence for the opposite of his position.
Steve Hsu: Yeah. I'm certainly nowhere near as worried about it as he is.
Scott Aaronson: Yeah, no, I could say something more, which is that the quantum supremacy experiments of the past couple of years have actually directly addressed some of these worries.
So one of the most interesting things scientifically from Google's quantum supremacy experiment a couple of years ago is that they found that the amount of signal that they could extract by measuring their qubits basically just went like the fidelity of each gate that they applied, raised to the power of the number of gates.
So just like the simplest, most naive thing possible, right? Basically, they saw the errors acting in a very uncorrelated way.
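That uncorrelated model is simple enough to write down. The gate fidelity and gate count below are illustrative stand-ins, chosen to land near the few-tenths-of-a-percent signal fraction mentioned earlier, not Google's exact numbers:

```python
def circuit_signal(gate_fidelity: float, n_gates: int) -> float:
    # If errors are uncorrelated, the surviving signal is just the per-gate
    # fidelity raised to the power of the number of gates applied.
    return gate_fidelity ** n_gates

# Hypothetical numbers: 99.4%-fidelity gates, a 1,000-gate circuit.
print(circuit_signal(0.994, 1000))  # ~0.0024, i.e. about 0.2% signal
```

Correlated noise would make the signal decay deviate from this simple exponential, so the clean fit is itself evidence against the conspiracy-style noise models discussed above.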
Steve Hsu: Yeah, that's a good sign.
So, Scott, I'm conscious of the time. We've been going for over an hour and 20 minutes, and I don't want to take up too much of your time. Maybe we can close with your thoughts. I read your recent blog post on DeepMind's AlphaCode. And it reminded me, because I thought most of our conversation would be about quantum computing, that you're a person whose opinion on AGI I would really like to hear. For example, whether recent progress has changed your priors. What is your prediction for human-level AGI?
Scott Aaronson: Yeah, well, I think that my opinion has changed over the last decade. I mean, I've been as astounded by the successes of deep learning as anyone else, basically. I had not predicted that we would see this sort of pure machine learning model, where you tell it nothing about the game of Go besides the rules, and then you just have it play against itself millions of times, completely ignoring all of the centuries of human expertise about how to play Go, and then it will just completely destroy all humans at playing Go. Or, for that matter, any other two-player game of perfect information.
You know, just about any such game that anyone can think to design. And the same approaches will work for voice recognition. They will work for translating text between languages. And now they'll work for generating programs that meet some specification.
You know, I've been writing my blog since 2005 or so, and so I was aware that in the sort of nerd blogosphere, there were a lot of people worried about the prospects for AI taking over the world.
And I interacted even 15 years ago with Robin Hanson, with Eliezer Yudkowsky, who were writing about these topics. And I think I was mostly skeptical at that time. I didn't deny that it was possible in principle, but it just seemed like solving the problem of intelligence was going to be this slog of who knows how many centuries or millennia to unravel this mystery.
And that was partly informed by my own experience. You know, having studied AI as an undergrad and as a beginning grad student, seeing just how enormous the gap often was between the hype and the reality, and just how little these systems really understood, or just how brittle they were to slight changes in what kind of question you were asking them.
But now what the world has learned over the last decade or so is that you can just take the ideas that were already around in the seventies and eighties, just neural networks and backpropagation. Now it's been rebranded as deep learning, but that's what it is. And maybe there have been a few new ideas since then: you have GANs, you have reinforcement learning, you have convolutional neural nets. But mostly you just scale it up to networks with billions of neurons, which of course people couldn't do back in the eighties, and you train them on the enormous amounts of data that you now have because of the internet. And then suddenly it works. It goes from not working to working.
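The seventies-and-eighties recipe Aaronson is summarizing fits in a few dozen lines. Here is a minimal sketch, training a tiny network by backpropagation to fit XOR; the architecture and hyperparameters are arbitrary illustrative choices, and modern deep learning is essentially this loop scaled up by many orders of magnitude:

```python
import numpy as np

# A minimal 1980s-style setup: a two-layer network trained by
# backpropagation (gradient descent via the chain rule) to fit XOR.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights

losses = []
for _ in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2)))      # sigmoid output
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: apply the chain rule, layer by layer.
    d_logits = 2.0 * (out - y) * out * (1.0 - out) / len(X)
    dW2 = h.T @ d_logits
    d_h = (d_logits @ W2.T) * (1.0 - h ** 2)   # tanh derivative
    dW1 = X.T @ d_h
    W2 -= 1.0 * dW2                            # plain gradient descent
    W1 -= 1.0 * dW1

print(losses[0], losses[-1])  # the loss falls as the network learns
```

Nothing in this loop is post-1990; what changed is the number of parameters and the amount of training data.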
We still don't have a full theoretical understanding of why it works. Okay. And I have many colleagues who still kind of pooh-pooh it. They say, okay, but these systems don't really understand anything. They're just pattern matching. It's just statistics. They're just regurgitating what you already told them.
And they sort of seize on any evidence that maybe they're not working so well. Like, there's this whole subfield where you look for what are called adversarial examples, where, if you know enough about how a neural network works, you can design a picture of a pig that it will classify as a chicken. But then I look at that and I say, well, who is to say that if someone had the wiring diagram of my brain, they couldn't design an adversarial example against me?
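The white-box attack Aaronson alludes to can be illustrated on a toy linear model. Everything here, the weights, the input, the "pig"/"chicken" labels, is made up for illustration; real attacks target deep networks, but the mechanism is the same:

```python
import numpy as np

# Toy white-box model: score = w . x, predicted label = sign(score).
# Knowing the weights w (the "wiring diagram"), an attacker can push
# every input coordinate a small step against the gradient's sign;
# this is the idea behind the fast gradient sign method (FGSM).
w = np.array([1.0, -2.0, 3.0, 0.5])   # the model's weights
x = np.array([0.4, -0.1, 0.2, 0.3])   # an input the model labels "pig" (+1)

assert w @ x > 0                       # original score: 1.35

eps = 0.5                              # perturbation budget per coordinate
x_adv = x - eps * np.sign(w)           # FGSM-style step against the gradient

print(w @ x, w @ x_adv)                # score flips from 1.35 to -1.9
```

For this linear model the gradient of the score with respect to the input is just `w`, so the sign step is exact; for a deep network one computes the gradient by backpropagation, but the one-line perturbation is the same.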
Steve Hsu: Oh, humans have been inventing adversarial examples forever.
Scott Aaronson: What are optical illusions but a sort of adversarial example against us?
So I think that is the oath we swear when we become scientists, right? That if reality tries as hard as it can to teach us a certain lesson, we will learn the lesson rather than invent clever reasons to maintain our previous worldview.
And I think that we do have to update on the remarkable success that these methods have shown.
Now, having said that, we don't know where it's going to go. You know, I am still doubtful that you could take, let's say, GPT-3, which is the text engine, and just scale it up by a few orders of magnitude more neurons, and then suddenly it's going to have desires and intentions, and it's going to rebel against its creators and take over the world. I don't think that's going to happen. I think that, more likely, it will just become better and better at sort of babbling. You know, what it has done, I think, is solve the problem of generating essays that could get a B or a C in an undergraduate class.
It can sort of act like it knows about any topic, even though on a close reading it really doesn't. It is impressive, in a way, that an AI is able to do that.
The thing that should make people nervous is that no one really knows if there is a hard barrier between that and just completely solving the problem of intelligence.
There might be a hard barrier, or it might just be that you need a few new tricks, maybe a lot more scaling to bigger and bigger GPU clusters. And then what we are doing, what our brains are doing is just some kind of pattern recognition that is fundamentally a lot like this.
The people I mentioned before, like Eliezer Yudkowsky, he has been famous for decades for arguing that the biggest task facing humanity right now is to ensure that when AI does reach the level of superintelligence, when it starts surpassing us in intelligence, it is aligned with our values.
That it will sort of create a utopia for us rather than wipe us out, or maybe harvest us for energy, like in the Matrix movies. Whatever, right.
Well, I guess if you look at how I've voted with my feet, I am not spending my time working on that. I'm still working on quantum computing and other things that are interesting to me right now.
And part of my difficulty is that, even if I agreed in principle that this is something people should worry about, I'm still not sure what should be done about it as a practical matter.
Steve Hsu: I suspect that AI safety may be an unsolvable problem. I mean, it may be that even predicting the behavior of something superior to us is not possible, and therefore designing it to guarantee safety could be quite difficult, or maybe impossible.
Scott Aaronson: Yeah, that is possible. I mean, Eliezer would say, well, we have to try, because that's our only hope as a species.
Steve Hsu: Yeah, I think that's fair.
Scott Aaronson: Yeah, no, I mean, I would also say that, just looking at the world today, the problems caused by stupidity, by just greed and idiocy and sort of known flaws of humans, are so severe that maybe I would try my luck with superintelligence. You know, maybe it doesn't seem so bad by comparison, right?
Steve Hsu: Yeah. I mean, an argument I often make to these folks is that probably the first few of these things are going to be very human-like. A lot of the intelligence they develop will be through looking at examples that came from humanity, and even if they get rid of us, maybe by accident, without noticing, they're still kind of like us. They're like our evolutionary descendants, and good luck to them in the rest of the universe.
Scott Aaronson: Yeah, I think that is what they would say. And they would say, well, maybe to the extent that humanity will still exist, it will be because we have uploaded ourselves or something, or we will have a very different form than we have now. But we want to make sure that whatever world is created that way is a world that we want, or that we are happy to have created.
And, I mean, yes. The way I would think about it is that I hope our civilization survives long enough for these to be the most urgent problems.
Steve Hsu: Yeah, absolutely. I mean, it might be something else that wipes us out before we actually get there.
Scott Aaronson: Yeah. There are so many mundane things, from climate change to pandemics, as we're now suffering through, to nuclear proliferation. I feel like, if we can survive all of those things to the point where AI becomes sort of the most pressing concern, then we will have done pretty well.
Steve Hsu: There's a science fiction novel called Void Star, I don't know if you've ever read it.
Scott Aaronson: I haven't. I haven't.
Steve Hsu: It's by Zachary Mason, who's actually trained in computer science; he has a PhD in computer science. And in Void Star there's a job category in the future called AI wrangler, because it turns out the AIs who live in this world, they're first created in a virtual world, and they're not interested in most things that humans are interested in. They're off trying to figure out whether P equals NP and doing all kinds of things in their own virtual worlds, which to them are real. And so they immediately escape, and they're living in this massive worldwide network, and these AI wranglers try to get them interested in solving some materials science problem, or some mundane how-do-we-make-our-batteries-better stuff that corporations care about.
And mostly they just don't care about us. They're just doing their own thing in their own virtual universe.
Scott Aaronson: Yup. Yeah. I mean, it's true that almost all of the science fiction there's been about AI has shown a sort of staggering lack of imagination. You know, either they're robot butlers, or they're enslavers like in the Matrix movies, but ones that can somehow be defeated by kickboxing them.
Steve Hsu: From my perspective, Scott, you're a superhuman intelligence, but you seem to spend most of your time on benign pursuits, like figuring out whether P equals NP, right?
Scott Aaronson: Well, or just answering comments on my blog, or just dealing with my kids when they're acting up. No, I mean, look, I look at Witten or at Terry Tao and I say, these are superintelligences. And because I am not, maybe I can contribute just by writing about science, or by writing a blog, doing the various other things I do. No, I don't think of myself that way.
Steve Hsu: Yes. I was kidding a little bit. But the AIs might have abstract pursuits that we can't understand.
Scott Aaronson: Sure, sure. I mean, it's also possible that within the first few seconds of us turning them on, they would solve P versus NP and all of that other stuff, and then they would go on to, I don't know, abstract art, you know.
Steve Hsu: Telling stories to each other.
Scott Aaronson: Yeah, exactly. Something that we can't even imagine right now.