Bing vs. Bard, US-China STEM Competition, and Embryo Screening — #30

Welcome to Manifold. I got very positive feedback from the previous episode, which was about LLMs and AI and which I did by myself, so I'm going to do a sequel episode in which I discuss four different topics. The first is large language models like ChatGPT, their tendency to hallucinate, and how this manifested itself in the widely watched Bing/GPT and Google/Bard demos that happened last week.

That's the first topic. The second topic will be the demographics of China, and in particular their rate of production of individuals trained, i.e. with college degrees, in science, technology, engineering, and medicine, also known as STEM. The shocking conclusion is that China is currently out-producing the United States in STEM college graduates by roughly 10x per year. So I'll talk a little bit about the projections related to that observation and what it means for competition between the US and China.

The third topic I'll cover, again related to China demographics, is an announcement that the Chinese national healthcare system will now provide free IVF, and I'll discuss the consequences of that. The fourth topic is a recent survey whose results were published in Science magazine last week, about attitudes toward the use of polygenic scores in embryo screening, and I will discuss those results. So, four topics. I hope you enjoy this episode.

Topic one, large language models, hallucination and the Bing and Bard demos.

You may have heard that last week both Microsoft, with Bing, and Google gave demonstrations of how they would use large language model technology in search going forward. This is widely thought to be a revolution in internet search. There are billions of searches every day; people use search constantly to find important information.

And with large language model capabilities, we expect that whole experience to be transformed. Investors obviously care a lot about the outcome of this AI war between Google, Microsoft, and other competitors. Because the Google demo was found to be flawed, or slightly rushed, Google's stock dropped, I believe, almost 9% in a single day, so their market cap went down about a hundred billion dollars. So the consequences of these demos, and their impact on investor psychology, are not to be neglected.

Now, in the previous episode of Manifold, I talked about a phenomenon called hallucination, which is common in large language models. Let me remind you that these large language models are trained on huge amounts of data, often scraped from the internet, and the model is trained using an objective function; that's a term from machine learning. The objective function tests whether the model can predict the next word after being given the earlier words in a sentence or paragraph. So you could call that task sentence completion, or call the objective function a score on an evaluation of sentence completion.
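For those who like to see it written down, the standard form of this next-word objective for autoregressive language models, stated generically rather than for any particular product, is just the summed log-probability of each word given the words before it:

$$ \mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_{\theta}\!\left(w_t \mid w_1, \dots, w_{t-1}\right) $$

Here $w_1, \dots, w_T$ is a training sentence and $\theta$ are the model's weights; training adjusts $\theta$ to make this loss small over a huge text corpus.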

It turns out that task is difficult enough that it forces the LLM neural net to encode all kinds of understanding about human language, the relationships between concepts used by humans, et cetera. A consequence of really large-scale training with sentence completion as the objective function is that these large language models really do have a good sense of English, or of human natural language generally: they can translate from natural language into what's called an embedding space of concepts, and then translate back.

These capabilities were first used for translation, because if you take a human natural language like German, map it into the concept space, and then map from the concept space back to some other language, like French, you have a translator between German and French. And as you've probably noticed, machine translation has gotten very, very good of late.

A problem with using sentence completion as the objective function is that you produce a model that understands human language and can do this mapping between natural language and the embedding space of concepts, but the model is trained to create plausible answers to questions, or plausible completions of sentences. And by plausible I mean that the distribution of sentences generated by the model, given some input, is going to be more or less what was seen in the training data. So if there's a common way that humans would respond to a particular question, the frequency of different possible responses is reflected in the tuning of the large language model; it's actually encoded in its neural network connections.

But that means the model may not be good at factually answering a question when there's only one precisely right answer, because it may, quote, hallucinate, and give a plausible but incorrect answer. So this phenomenon of hallucination is a real problem, and it was exhibited in the live demos that were done by Microsoft and also by Google.
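To make that concrete, here's a toy sketch, purely illustrative and not based on any real model, of why sampling from a distribution of plausible completions occasionally produces a fluent, confident, wrong answer:

```python
import random

# Toy illustration: imagine the model has learned, from training text, a
# distribution over plausible completions for some factual question.
# Only one completion is actually correct, but several are "plausible".
completions = ["correct answer", "plausible but wrong answer A", "plausible but wrong answer B"]
weights = [0.5, 0.3, 0.2]  # roughly how often each phrasing appeared in similar contexts

# Sampling in proportion to plausibility means the model will sometimes emit a
# fluent but false statement, which is exactly what hallucination looks like.
for _ in range(5):
    print(random.choices(completions, weights=weights, k=1)[0])
```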

In the case of Google, the mistake that I believe Reuters seized on was in a response to a question about the James Webb Space Telescope. One of the responses about the accomplishments of that telescope was slightly wrong. I think it said that James Webb was the first telescope to have seen an exoplanet of some sort, when in fact it wasn't; a different telescope had done that. But that response would've been a plausible one given much of what is written about James Webb. Things like being the first to see exoplanets, or the first to see something in a distant galaxy, et cetera, are plausible sentence completions on the topic of the James Webb Space Telescope.

So there was a very mild kind of hallucination in the Google demo that Reuters seized on, and I don't think that was the only reason the stock price took such a beating that day. There were other perceptions: that Google had been caught flat-footed by Microsoft's OpenAI collaboration, and that Google was a little too comfortable with its monopoly over search and maybe wasn't going to respond as energetically or as aggressively as Microsoft and Bing with regard to integrating LLM technology. So a number of things contributed to the nosedive the stock took that day, but definitely contributing to it was the fact that they were caught in a hallucination.

Now, the functionality we're talking about, when it's incorporated in search, is that instead of the usual experience of just giving you ranked search results based on your query, the query could be submitted directly to a chatbot, to an LLM. The LLM would then be asked to look at the top-ranked search results and formulate a more human-readable summary or answer based on those results. And that's the functionality that I think rightly would be considered revolutionary once they get it working. It really will change the way that people interact with internet search.

Now, in the Bing demo there were three examples of hallucination that were caught, and these were very big hallucinations. They were not caught by the media; unlike in the case of Google, the media didn't figure out that Bing had made up a bunch of stuff. I found the information on the website of a blogger who had actually done the research: he looked at the top search results for the relevant queries and compared those results to what Bing/GPT actually said, and he found that Bing/GPT had not faithfully reported what was in the top search results and had just made a bunch of stuff up. So the credit goes to this person, and in the show notes I'll put a link to his analysis, which I have not double-checked. I hadn't even looked into the performance in these demos, because I didn't think what happened in the demos would be a good guide to the current state of Google and Microsoft technology; I thought they would just cherry-pick the examples. That's typically what's done in a product demo: you show things that make your product look strong, and you try to avoid showing things that reveal its flaws.

Something went wrong. I'll speculate a little bit on that later, but I think it's basically because the senior execs at these companies, despite having very strong technical backgrounds in some cases, possibly didn't really understand what the true limitations of LLMs are. They probably didn't understand the hallucination phenomenon as well as they should have, because if they did, they would've forced their underlings to do a better job of cherry-picking the examples used in the demo.

So let me go through three of the use cases that were shown in the Bing demo. One example was an individual searching for more information about a consumer product.

In this case, I think it was a kind of vacuum, actually a pet vacuum. Just to give you an example of the hallucination: Bing went and found search results relevant to that query and then wrote something about the product. It hallucinated that this particular product had a 16-inch cord, and it said something like, many reviewers were unhappy that the cord wasn't longer, that it was inconvenient that the cord wasn't longer for this small vacuum. This is all a hallucination because, in fact, the vacuum in question is cordless. It doesn't even have a cord. So Bing was definitely hallucinating in talking about the length of the cord of this product.

Now, of course, it's a plausible thing for a little vacuum to have a cord, and so it's not surprising that this hallucination happened. But it's an example of the AI, even though it was presented with a bunch of information about the product, not integrating that information correctly and hallucinating some incorrect statements about the product. So that was example one.

Example two involved a user wanting to book a trip, in this case, I think, to Mexico City. Many of the answers describing a possible itinerary of places to go in Mexico City were wrong; they gave wrong details about certain bars and restaurants there. And again, it's understandable, because the comments that were given about a particular bar were plausible. They were the types of comments that people make about bars; they just didn't happen to be true descriptions of the bar that Bing was describing in that particular paragraph.

So you have another example of hallucination, and you see how difficult it will be to have Bing, or Bing/GPT, plan a travel itinerary for you, because it doesn't really get a firm grip on the facts. If it's generating a short story about an imaginary bar in Mexico City, that's one thing it may do a perfect job on. But if it's trying to tell you something factual about a specific bar in Mexico City, it may get it completely wrong, as was shown in this use case.

When it comes to travel, you can imagine huge problems the AI is going to have, because if you ask it about a plane ticket price, there's a range of plausible values that it's seen many, many times in the training text. It's seen plane tickets bought for $200, $250, $500. But if the user is trying to price-shop and really cares, down to the penny, what a particular ticket costs, and Bing/GPT or Google just gives an approximate answer, or hallucinates and gives a totally wrong answer, then it really defeats the purpose of using the search engine to plan a trip.

The final example had to do with a finance use case, in which I think the user wanted to compare the financial numbers, the accounting numbers, of The Gap and Lululemon, which makes women's athletic wear. In that case, I believe many of the numbers were totally fabricated, even though the search results readily returned numbers like the quarterly sales or quarterly earnings of The Gap or Lululemon. Nevertheless, what the AI actually wrote out for the human to read were just made-up numbers, completely wrong numbers, or sometimes it confused two different categories of numbers.

So these are all examples of hallucination, and of why it will be challenging to get useful work out of LLMs. One way to think about it is that an LLM is a kind of child who speaks English perfectly, or writes English perfectly. The child actually has at its command a potentially huge number of facts about the world, but the child doesn't always do exactly what it's supposed to do. And figuring out how to get the LLM, or the child, to do what the human, or the human coder, would like it to do is actually a challenge. And it's a challenge that needs to be solved with new technology.

The stealth startup that I mentioned in the last podcast, the one I'm working on, has really been focusing on this problem. We actually call the solution to the hallucination problem focus. Our goal is to create AIs which have LLMs embedded in them, but in which the LLM is forced to focus on a chosen ground-truth corpus of information and only answer questions based on what's in that ground-truth corpus. In the Microsoft and Google use cases, the ground-truth corpus would be, for example, the top-ranked search results for the human query: say, what are good places to go out at night in Mexico City, or what are people saying about this consumer product, this little vacuum?

If the AI could be focused, if the LLM could be focused and forced to formulate its response based only on facts coming from those search results, then it would've performed much better than what was actually shown at the demo.
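To give a flavor of the general idea, here's a minimal, generic sketch of grounding a model on retrieved passages. This is not our actual system, nor Microsoft's or Google's; the `call_llm` function is a hypothetical placeholder for whatever model API you have access to.

```python
# Minimal sketch of grounding an LLM on a chosen corpus (e.g., top search results).
# Purely illustrative; not a description of any particular production system.

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Build a prompt that restricts the model to the supplied passages."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the passages below. "
        "If the passages do not contain the answer, say 'Not found in sources.'\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder; swap in a real model call here.
    return "(model response would go here)"

if __name__ == "__main__":
    search_results = [
        "Snippet from a top-ranked search result about the product.",
        "Another snippet, e.g. a review noting that the vacuum is cordless.",
    ]
    prompt = build_grounded_prompt("Does this vacuum have a cord?", search_results)
    print(call_llm(prompt))
```

Whether this kind of prompt-level restriction is enough in practice is exactly the open question; the point is simply that the model's answer should be tied to a specified ground-truth corpus rather than to whatever is most plausible in general.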

Now, in our own testing, as I mentioned, we've actually done a lot of testing using textbooks. Textbooks are convenient because the professors typically write a series of questions at the end of each chapter which test the student's reading comprehension. If the student has read the chapter, understood it, and integrated the information, the student should be able to answer these questions. So they're well-posed questions about the material in the textbook.

And we find that after focusing we can get very close to a hundred percent performance, where the AI with the LLM embedded in it can understand the query and then write a nice, well-written answer to the query, but based only on the facts that are in the chapter, the appropriate material in the textbook.

So we have been successful doing that, and we now know enough about the problem that we think we can apply it to lots of different use cases. We're actually talking to a number of big companies about deploying this kind of solution for them.

So it's early days with LLMs. It's possible that the hallucination problem will be improved somewhat. I was told by someone who'd spent a lot of time with GPT-4 transcripts that the hallucination problem is reduced significantly in GPT-4 versus GPT-3 or 3.5. I think that's possible, but I think the fundamental issue of using sentence completion as the objective function in the basic training of these models means that it's going to be very difficult to fully solve this problem. And I don't think the LLM by itself, without the extra focusing technology that we've built, will stop hallucinating, even in GPT-4 or GPT-5. I think you'll still have some level of hallucination, simply because the completions are generally going to be drawn from a probability distribution. When there are a number of plausible completions, maybe ones that even explicitly occur in the training data relevant to the query, there's just no way for the model to know what it's supposed to say, and so focusing the model on some ground-truth reference is the crucial point.

So stay tuned for more on this. If you're a stock picker, if you trade Google stock, I would say don't be too down on Google. I think Google is still more advanced in these AI capabilities than even the combination of Microsoft and OpenAI. So I don't think Google became an 8 or 9% less valuable company overnight.

But I do think that Microsoft and OpenAI are more motivated and more entrepreneurial in launching something that will compete with Google Search. The winner here will be consumers, all of us, because at the end of this we'll have just generally better search capability.

Topic two, China demographics and STEM competition with the US. I tweeted about this topic last week, so I'll put a link in the show notes to those tweet threads. I started out by referencing a National Science Foundation report which characterizes the US STEM workforce. Again, STEM here means science, technology, engineering, and an M that sometimes means mathematics and sometimes medicine. But basically it means people with some kind of quantitative scientific training.

Now, it turns out that in the entire US workforce there are only about 16 million individuals who have at least a bachelor's degree in a STEM subject, so you could call that, roughly speaking, our STEM workforce. Of course, I realize there are people who maybe didn't complete their degree but are nevertheless valuable in STEM, and people who don't necessarily have college training but work in engineering-adjacent or high-tech manufacturing fields. But as a reasonable metric of how many people have a sophisticated level of training in science and technology and are working in the US economy, that number is about 16 million.

And I believe that number actually includes immigrants, so it includes the many, many immigrants we get from, say, India and China and other countries who come to the US. So 16 million is the number.

Now, if you look at China 20 years ago, China had a much smaller STEM workforce. The number of people in China who had gotten a bachelor's degree in one of these subjects and were working in science or technology was much smaller than 16 million. It's only recently that the total number of STEM workers in China has equaled and somewhat surpassed the number in the US. So imagine a curve that's growing from a low base and only just recently equaled and slightly surpassed the US total.

In the US case, the age distribution is pretty uniform, whereas in China most of the STEM workers in their economy are young; they got their college degrees relatively recently. You can see this if you look at any of their space launches, or the engineers who are building their new generation of EVs, or the engineers at DJI, the company that makes drones, et cetera. The teams are incredibly young. You'll often see mission leaders who are only about 30 or younger. And that's just because of the demographics of their STEM workforce: people tend to be young.

Now we can compare the rate of production of new STEM workers going forward. In the US we graduate about half a million, 500,000, new STEM degree holders per year. The number in China is almost 10 times larger; I think it's 4.7 million, so almost 5 million. Roughly speaking, it's about 10 times higher than in the US, and that's a huge difference. It means that every year about 4 million more new STEM grads are produced in China than in the United States.

So if I take that 4 million per year and multiply it by four years, that's 16 million. That means every four years going forward, at the current rate of production, China will produce a new pool of STEM workers as large as the aggregate pool for all of the US. Our entire STEM workforce is pretty much steady at around 16 million, while, in excess of the number we graduate each year, they're producing another 16 million every four years. So if you flash forward roughly 20 years, to around 2040, they could have about six times our total STEM workforce, approaching an order of magnitude larger than that of the United States.
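For those who want to check the arithmetic, here's the back-of-envelope calculation using the round numbers I just quoted. Treating the US pool as flat and letting Chinese graduates simply accumulate is a deliberate simplification, not a careful demographic model.

```python
# Back-of-envelope projection using the episode's round numbers (rough inputs only).
us_workforce_now = 16e6        # approximate current US STEM workforce (bachelor's or higher)
china_workforce_now = 16e6     # China has recently caught up to roughly the same level
china_grads_per_year = 4.7e6   # ~4.7M new Chinese STEM graduates per year

years = 2040 - 2023            # roughly 17 years out

# Crude assumptions: the US pool stays roughly steady (new grads offset retirements),
# while China's young workforce mostly accumulates with few retirements before 2040.
china_2040 = china_workforce_now + china_grads_per_year * years
us_2040 = us_workforce_now

print(f"China ~{china_2040 / 1e6:.0f}M vs US ~{us_2040 / 1e6:.0f}M "
      f"=> ratio ~{china_2040 / us_2040:.1f}x")
# Prints roughly: China ~96M vs US ~16M => ratio ~6.0x
```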

And my point in the tweet thread was just to lay out these basic numbers and to point out that, as I think some Russian general, perhaps Zhukov, or maybe it was Stalin, said: quantity has a quality all its own.

Even if you think that, oh, on average the US STEM graduates are better, or that the top STEM grads from China will always come to the United States (both questionable assumptions), the sheer weight of numbers will make a qualitative difference, and we'll see this qualitative difference growing and growing between today and 2040.

Now, I chose 2040 specifically because there's a kind of meme going around, mostly among non-quantitative people, about the demographic challenges facing China. And it is true that they have a low TFR; the average number of children per woman in the population is low. It's also pretty low among educated Americans, and it's low among Europeans and Japanese. So this is a general problem for advanced economies.

The TFR for China has dropped significantly in the last few years; some of that is COVID-related. And if we extrapolate that low TFR well into the future, it is eventually going to affect the production of STEM talent.

Now, a baby that's born today will not graduate from college for another 22 years. So the recent demographic decline in China, the decline in TFR, isn't really going to affect the calculation I just gave of around 5 million new STEM grads per year from Chinese colleges. That number won't be negatively affected by the recent decline in TFR until about 2040. All of the kids who will graduate in STEM from Chinese universities between now and 2040 have already been born, so we don't really need to speculate about those numbers.

And so, in a follow-up set of tweets that I made for the wordcels, who can't really grasp numbers, I had a plot showing what the TFR actually was over the last 20 years, because those are the kids who are graduating from college in China this year, next year, and every year until 2040. So this is not something we need to extrapolate; it's something that's actually knowable.

What's not knowable is the TFR and the size of birth cohorts in China in subsequent years, and therefore the STEM graduate pool in the years beyond roughly 2040. That's not knowable without extrapolation. But we are capable of making reasonable projections of how many STEM graduates there will be in China between now and 2040, and that number is about 5 million per year, because the TFR has been roughly stable going back to around 2000, except for the large drop in the last few years.

So I'll put a link to that graph up. But basically, this ratio of 5 million STEM graduates per year in China versus 500,000 per year in the US will most likely persist until about 2040. That's knowable without any strong assumptions. What happens after 2040? We don't know. The recent drop in TFR in China is on the order of 20%, which is a significant drop. It may not be permanent; some of it may be due to COVID. But a 20% reduction in the number of STEM grads per year would only reduce their output from 5 million a year to 4 million a year, and 4 million a year is still quite a bit larger than our 500K a year. Even if the number drops to 3 million a year in China, that's still six times, on a per-year basis, the number of STEM graduates we produce in the United States.

So these numbers may sound fantastic, but again, I'll put links to the data in the show notes so you can go look at it yourself. It's all sourced from conventional sources like the World Bank, et cetera; I'm not relying on, quote, Chi-Com government numbers. Sorry if I speak that way, but unfortunately, because of the nature of Twitter, I'm actually forced to interact with people who use that kind of terminology.

In any case, the point is that the center of mass of scientific research, research and development, advanced manufacturing, all of that, is solidly shifting not just toward Asia but actually toward China. And this is reflected in every metric you look at: international patents issued, number of scientific publications, number of scientific publications in the top 10% by citations, number in the top 1% by citations. By all of these metrics, the Chinese have pretty much caught up with the US and are starting to surpass it.

It may even be the case that if you take the aggregate number of highly trained STEM workers in China and compare it not to the US but to the entire world ex-China, so you aggregate the STEM workforce of the US, Russia, Western Europe, et cetera, then by 2040 China by itself is equal to, or at least comparable to, the entire world ex-China.

So this is a trend that's lost on most people. I think even people who consider themselves well informed about scientific and technological competition or demographics don't really realize that this is what's happening.

Now, another way to look at this is that in China you have a country with about four times the population of the US. College students there are more likely to study STEM: something like 50% of college students there study STEM, whereas in the US it might be more like 15%, or 20% at most. So that right away tells you that the country is four times as big, so the college-age population is about four times as big, and if students there are more likely to study STEM, then instead of a 4x advantage in STEM graduates per year, maybe it's an 8x or 10x advantage. And that's just what we see in the data.
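As a rough sanity check, using approximate inputs, a 4x population ratio and STEM fractions of roughly 50% versus 20%:

$$ \underbrace{4}_{\text{population ratio}} \times \underbrace{\frac{0.50}{0.20}}_{\text{STEM-fraction ratio}} \approx 10 $$

which is consistent with the roughly tenfold gap in annual STEM graduates. This ignores differences in college attendance rates and other details, so it's only meant to show the orders of magnitude.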

So if you think about it properly, it's not a very surprising result. The problem is that people are used to ideas that were true 20 years ago: that China is poor, that only a very small fraction of students there can go to college, that the colleges are not very good, that the professors teaching the students are not really at a world-class standard, et cetera. None of those statements are true anymore. The fraction of college-age kids in China who get to go to college is as high as in the United States; I believe it could actually be slightly higher. The quality of the professors is better; they've invested a lot of money in their universities, and the number of world-class researchers there is extremely high now. So none of this should surprise us.

And the main point I want to make is just that one can extrapolate to 2040 without making any kind of speculative assumptions and conclude that there's going to be almost an order of magnitude more scientific talent in China than in the United States. I just don't see any way of avoiding this, barring a nuclear war or some kind of economic collapse.

Topic three, China demographics and IVF.

Now, as I mentioned in the last segment, China has recently had a big drop in its TFR, that is, the average number of kids per woman in the current generation. The government there is very worried about this, and they're trying to enact lots of social policies to make it more attractive for families to have more kids. The most recent indication of this is that the National Healthcare Security Administration of China said last week that it would extend its coverage to the cost of IVF for families in China.

So this is rather interesting. The number of countries that actually do this right now is very limited: Israel does it, and some Scandinavian countries. IVF can be very expensive, although it is much less expensive in China than in the United States. One of the articles I looked at said that the average cost of a cycle of IVF in a wealthy Chinese city like Shanghai is between four and five thousand dollars, which I think is about a third of what it costs in the United States. That's not inconsistent with the typical purchasing power parity (PPP) adjustment one has to make between Chinese cities and, say, Western economies.

Currently about a million cycles of IVF are done in China each year, and I can easily imagine that if it becomes free, IVF will be much, much more widely used in China. There was an article in Nature in 2017 about IVF and about attitudes in China toward polygenic screening, or pre-implantation genetic testing of embryos. The people interviewed in that article said they were just shocked at how advanced and well developed the IVF industry in China was: big IVF centers with modern equipment and lots of trained personnel. And furthermore, individual families were much more favorable toward the idea of embryo screening to improve the health of their children.

And the Nature article, characteristically, emphasized the idea that some of these uses of pre-implantation testing smacked of, quote, eugenics. But the article actually pointed out, correctly, that in China eugenics is seen as a positive thing. It doesn't have the negative association that it does in the West with Adolf Hitler and the Nazi party. That cultural association with something like embryo selection, pre-implantation genetic testing, or prenatal testing just isn't made in any part of Asia. So the cultural attitudes are just different in Asia.

And it's funny, because the wokest people, the ones who are the most, quote, anti-eugenics, are often also the most culturally insensitive, because they feel that whatever their woke feelings are, all of these billions, two or three billion people in Asia, should have the same feelings and the same cultural associations as these white Westerners have in America. That has always struck me as very, very strange and actually quite ironic.

Now, if I had to predict the consequences of free, government-funded IVF in China: well, as I said, usage rates are going to go way up, and the most advanced technologies are going to be readily adopted by Chinese families who are going through the IVF process. As has been discussed in other podcast episodes, the cost of pre-implantation genetic testing, or embryo screening, is very low; it's a small fraction of the cost of the IVF cycle itself. And all of these things have price economics that improve with scale. So if you're dealing with millions of cycles of IVF per year, and therefore maybe tens of millions of embryos, everything, the basic steps in the IVF process and the genetic testing of biopsies taken from those embryos, can be done very inexpensively, and certainly far less expensively than a year of preschool education for one of these kids, for example.

So my prediction is that over time, very advanced technologies will be employed in reproduction in China, particularly by families going through IVF. The cost to the government there will not be particularly great, especially compared to the value of the additional human capital created when families have more kids, more healthy kids. And I think Western countries are just going to be left behind on this if they don't feel comfortable adopting these new technologies.

This may be a long-term prediction. It may take a decade or two for this to play out, or it may come a little bit faster. It'll be very interesting to see what happens.

Topic four. In this final segment, I'm going to discuss an article that appeared in Science. It's a research study based on a survey of, I believe, something like 6,000 individuals. The survey asked questions about the willingness to use technologies like gene editing or polygenic screening of embryos, and about attitudes toward the moral acceptability of those technologies.

The authors specifically used the example of gene editing, or of polygenic screening (PGT-P), to improve cognitive traits, concretely, to improve the chances that the child born from the process would be able to attend a top university. That's the way the question was phrased in the survey. And of course, improving cognitive ability by gene editing or through polygenic selection of embryos is the most controversial application of this technology. In order of how controversial they are, I would say cognitive and behavioral traits are probably the most controversial; in second place would be cosmetic traits like eye color, skin tone, or height; and least controversial would be screening against highly impactful disease risks, like the risk of breast cancer or diabetes.

My startup Genomic Prediction offers only the third category of embryo screening, that is, screening embryos against disease risk. Both of the first two, screening based on cognitive ability or behavioral traits and screening based on cosmetic traits, are technologically possible, but we don't offer them because we feel they're just too controversial. I've always said that I would not favor offering genetic screening of type one or type two until, first, there's been a society-wide discussion in which the general public's understanding of the technology is advanced. Of course, we can't ask that everyone fully understand these technologies, but at least some broad education process should take place so that people roughly understand what's being referred to. And second, some kind of society-wide process should be followed so that we can be sure a strong majority of people in the country are actually in favor of the use of, in this case, polygenic screening for type one or type two.

Now, this single survey, although it seems to have been very professionally done, very well conceived and executed, does not by itself, in my mind, support the statement that our society has made a decision, is well informed, and therefore it's okay to start offering genetic screening of type one or type two. But it is an important step along the way, and I want to congratulate the authors for producing a nice paper, for conducting an intelligently designed study, and for providing an important service to society in advancing public understanding of this important set of technologies, which ultimately is going to significantly alter the way humans reproduce in the future.

So let me go to the results of the survey. When it comes to willingness to use each technology, or each service as the authors refer to it, about 34% of all people said they were willing to use gene editing, and a slightly higher percentage of people under 35, 41%, were willing to use it.

And remember, this is gene editing to improve the cognitive ability of their children.

When it comes to polygenic screening, which in the IVF industry is called PGT-P, the fraction of people under 35 who were willing to use it was almost 50%; it's 48%. Among people of all ages, 43% were willing.

So it's approaching half of all people under 35. People under 35 are the ones who are likely to be actually using this because they're the ones that are gonna be using IVF in the near future.

Now, the authors of the survey cleverly also added a non-genetic service to the mix and asked people whether they would be willing to use SAT prep, i.e. coaching or tutoring to help their child do better on the SAT so that they could go to a top university, and about 70% of all people said they were willing to use it. So SAT prep is about 70%, polygenic screening just under 50%, and gene editing something like 35 to 40%. Those are the main results for the willingness-to-use questions.

In terms of moral acceptability, about 30% of the population said gene editing to improve cognitive ability in children is morally wrong. For polygenic screening, only 17% found it morally wrong. And for SAT prep, about 7 or 8% found it morally wrong.

So it's interesting, because you could choose to exclude these, I don't know what to call them, radical egalitarians, people who are against even kids studying or prepping for the SAT. I imagine that the people who found SAT prep morally wrong also probably found PGT-P morally wrong. If you remove those people, whom I would frankly classify as a bit crazy, not crazy for declining SAT prep for their own kids, but crazy for calling SAT prep morally wrong, then only about 10% of the population finds PGT-P, polygenic screening of embryos, to be morally wrong.
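Just to spell out the arithmetic behind that rough 10% figure, under my assumption that the SAT-prep objectors are essentially a subset of the PGT-P objectors:

$$ 17\% - (7\text{--}8\%) \approx 9\text{--}10\% $$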

So to me, if these survey results hold up, if there were a larger, better-powered, representative survey of all Americans and the numbers came out roughly like this, I would say that yes, based on that, in my mind it is reasonable to start offering genetic screening, that is, polygenic screening of embryos, of this type.

So the results of this survey are very intriguing and I will put a link in the show notes to the Science article and the specific graphs, which summarize the information that I've just discussed.

Thanks for listening. I hope you enjoyed this episode of Manifold.
