The closer you are to the machine God, the more its voice whispers in your ear. That’s right. Yeah, I don’t think that Beijing is AGI pilled.

Kyle Chan, welcome to “Interesting Times.”

Great to be here.

So at the moment, there are really only two countries that matter for the A.I. future: the United States and China. Their leaders are meeting in Beijing, and the atmosphere is similar to a kind of Cold War atmosphere, where people think and argue and talk about them being in a kind of arms race. “We’re leading China. We’re leading China by a lot. China knows that.” “I think at the moment, China is winning.” “There’s no second place. It’s either going to be the United States or China.” You are an expert on China and A.I., and we’re going to talk about that race: who’s winning, what winning even means, whether it even makes sense to talk about the U.S. and China in terms of a race. But I want to just start with a basic question: How is China’s current approach to A.I. different from the American approach?

It’s quite different, actually. So in the U.S., there’s a particular focus on AGI, artificial general intelligence, and on creating something approaching an artificial superintelligence, some kind of almost machine God that can do virtually everything that any human can do, at least on a computer, and more and more.

That’s right. You want to get more. That’s the “super” part.

Absolutely. And you can see that the amount of spending, the amount of investment, the amount of effort that the American big tech companies and their quote unquote startups like OpenAI and Anthropic, which are now close to $2 trillion each, are pouring into this is an indication that they’re making a big bet that they can get there at some point, maybe in the near future. That’s the race to AGI in the U.S. China is running a different kind of race. I would argue they’re running multiple races. On the one hand, they are trying to produce better and better A.I. models.
They do want to try to keep pace with their American competitors, but that’s not all they’re focused on. They’re also focused on efficiency: making these models smaller, cheaper to run, easier to deploy. That’s one area. Another area they’re focused on is diffusion, trying to get A.I. into the hands of as many users as possible. And part of that strategy involves open source. So this involves giving away your models for free. And that allows other people around the world, including in Silicon Valley, to download Chinese models and to also customize them and tweak them based on their own data, and to make them work in a way that’s more tailored to their own needs. So that’s the advantage of open source. And another major area that China is focused on is applications. Specifically, robotics is a huge area of focus, both for the government and for Chinese A.I. companies. But you don’t really hear so much about AGI. You might hear some of the Chinese tech founders talk about this, and they sometimes sound a little similar to their counterparts in the U.S., but overall, they’re much more focused on these nuts-and-bolts uses and applications of A.I. in people’s daily lives. That’s the key priority.

So if I went to Shanghai or Beijing right now and spent a couple of weeks there interacting with physical reality and digital reality, do you think I would notice a big A.I.-driven difference versus life in the United States? Just describe the everyday experience of this strategy, to the extent that it makes a difference in how people are living.

So in the larger cities in China, you might see autonomous delivery robots handling package deliveries and food deliveries. You might see, in a restaurant, a waiter robot bringing your food. This is not super, super widespread yet, but it’s starting to come about. In hotels, rather than having room service delivered by a person pushing a cart coming up the elevator, it might be a delivery robot. You have, of course, the self-driving cars.
You might even have drone delivery for coffee or food. But it would be a subtle but probably surprising difference from what most Americans experience in terms of their interaction with A.I. in the physical world.

So let’s just pause for context, because you talked about the government versus the Chinese A.I. companies. And I think most viewers and listeners are accustomed to the American situation, where you have a set of big companies that have been extremely lightly regulated by Washington, D.C. And just in the last year, we’ve started to get into dynamics where the Pentagon especially seems concerned about their national security implications. There’s talk about regulations, screening of models and so on. But basically, it’s been a very traditionally American capitalist environment, not a Manhattan Project or anything like that. To what extent is China similar or different, just in the relationship between the companies and what is obviously a much more powerful and often repressive state?

So in China, the state is in charge, or specifically, I should say, the party-state: the Chinese Communist Party and the various government agencies that they oversee. They’re the ones who set the rules. They’re the ones who ultimately are shaping the trajectory of China’s A.I. industry. They have quite strict regulations, for example, requiring A.I. models to be registered in advance. They have certain content and censorship rules that must be followed. They have a whole host of ways to enforce their rules and have leverage over Chinese A.I. companies. And there are echoes back to a previous era, where Chinese regulators cracked down on Chinese internet companies, for example. So that’s the overarching relationship. But that doesn’t mean that the Chinese A.I. labs themselves are just in lockstep, following whatever Beijing says. Ironically, China tried a more top-down approach to technology in a previous era, and that failed miserably.
It did not produce the kind of innovation and flexibility and agility in the marketplace that you would need to have cutting-edge technology.

What era are we talking about with the more top-down approach?

So, I mean, that was, I would argue, going back to the Mao era.

This is pre-Deng, pre-1980s.

Exactly. Yeah, that’s almost a Soviet, command-economy-style approach. So what you have is a hybrid model in China, if I could characterize it in a single word. And that means this broader direction and guidance and certainly support from the central government in China, as well as local governments, on the one hand, but then also trying to create space for competition and innovation from the Chinese A.I. labs themselves, whether you’re talking about China’s equivalents of big tech, like Alibaba or Tencent, the maker of WeChat, the popular super app, or you’re talking about China’s own A.I. startups like Moonshot, which have become actually quite popular around the world.

So what are the Chinese equivalents, to the extent there are any, of an Anthropic or an OpenAI right now?

That’s a good question. So maybe DeepSeek would be the closest. And then you have the smaller startups. And by small I mean like on the order of $40 to $50 billion market cap. And those are some of the more successful ones, but it’s hard to find that kind of middle ground. DeepSeek now is preparing to take in outside investment. Remember, they were actually not originally an A.I. company. They were part of a hedge fund, actually, that was trying to use A.I. to develop more sophisticated financial models. So they’re a category unto themselves.

And all of these companies, though, are operating under some basic constraints that don’t apply to U.S. companies right now, mostly around chips. So can you just describe the landscape of constraint in China and what it means?

So I had mentioned earlier that Chinese A.I. companies are trying to run different races. One of those was efficiency.
And part of that is in response to the constraints that they’re under, in particular around compute and chips. So remember, right now the U.S. has export controls on our most advanced semiconductors, made by NVIDIA, and we stop those from officially being sold in China. We allow the sale of watered-down versions, but the idea is that we keep the best and the most advanced chips for American A.I. companies in the United States, and for allies and partners. For China, that means that they don’t have access to the most cutting-edge A.I. chips. They have some Chinese domestic alternatives. And this is a big part of the story. One of the leading players in this space is Huawei, the heavily sanctioned Chinese tech giant that rose first in the telecom space, branched into smartphones and is now in pretty much every other industry: electric vehicles, clean technology, and certainly now A.I. and chips. So China is trying to build up their own capacity for developing A.I. chips on their own, not just designing them but actually producing them. But the problem is they’re just not quite as good as the NVIDIA chips. And without that, it does put a lot of constraints on what they can do. So they’re trying to squeeze more out of very limited compute.

Why aren’t their chips as good? I know this is a simple-minded question, but is it just that NVIDIA is so awesome at engineering, and China’s engineers, even if they have an NVIDIA chip, can’t quite get there themselves? Like, talk to me. Talk to me as the chip specialist.

This is the $5 trillion question, which is currently, I think, roughly the market cap of NVIDIA today. There are a couple of different aspects to this. One is actually the chip fabrication, that is, producing the chips. Remember, NVIDIA doesn’t make their own chips. TSMC in Taiwan, they’re the ones that make the chips.

Conveniently located, not that far from China.
That’s right, that’s right, to the consternation of probably a lot of folks in Washington and maybe other folks dependent on those supply chains. But TSMC has been pushing the boundaries for increasingly advanced semiconductors in a whole range of areas, and that includes A.I. And NVIDIA, by partnering with TSMC, can combine some of the best design work out there with some of the best production capabilities. For example, ASML, a Dutch company that maybe some people have heard of, is actually one of the biggest tech companies in Europe now. They make these extremely precise, extremely expensive lithography machines for basically printing chips. And they’re the only ones in the world that can make this kind of machine. They sell those to TSMC. TSMC can use that cutting-edge technology, combined with their own cutting-edge manufacturing processes, and work with NVIDIA to produce these incredible state-of-the-art chips that keep getting better and better.

So essentially, then, when we talk about the U.S. not allowing NVIDIA to sell to China, we’re effectively talking about the U.S. cutting China out of a larger supply chain.

Absolutely.

That runs through Taiwan, through the Netherlands, through all around the world.

Absolutely.

O.K., that’s interesting and very helpful. What does China have going for it, then, in terms of A.I. build-out that the U.S. doesn’t have?

Energy is absolutely, absolutely huge in China. And this is something that, if you’re thinking about the broader A.I. stack, that is not just the chips or the models themselves but deeper down in the layers, energy is perhaps the most important and least talked about. For the U.S., it’s a major bottleneck. It’s very hard now for data centers to build out the power capacity to power all those chips that they’re putting together. In China, interestingly, they’ve been building out energy at a very rapid pace.
Clean energy: solar, wind, batteries. And they’re trying to leverage that ongoing energy build-out to feed into their compute build-out, which then feeds into their A.I. development. And so you see really interesting strategies that the Chinese are taking. For example, they have this effort to try to build data centers out in the western provinces, away from the high-population urban areas in China. And at first that might not make any sense. Don’t you want to have your data centers close to where people are actually using them? Don’t you want to have that low latency, high response time? And what China’s trying to do is leverage a lot of their renewable energy resources out in those farther-off regions. They’re also trying to do just good old-fashioned geographical redistribution, concerned as always about having these poorer provinces remain poor while the high-tech Shenzhens and Shanghais speed on ahead. So this is another area where they’re trying to leverage some of their strengths to feed into maybe areas where they’re weaker.

So then China, to simplify, is imagining a future where they’re only a little bit behind the U.S. And actually, say what that means. People talk about the best Chinese models being three months behind the U.S. or six months behind the U.S. How far behind are they, and what does that mean in practice?

Overall, I think the consensus is Chinese models are somewhere between three and six to nine months behind, depending on the time of year and which was the latest model that just came out. What that means is that when you look at specific benchmarks, specific evaluations for trying to understand how well these models perform on, say, math or coding tests or even new agentic tasks, the Chinese models that are released today are starting to get close to the American models that were released a couple months back. So that’s what that lead time means.
But the thing is, it’s not just about having the absolute most cutting-edge model, because you can have very, very strong models that can do a lot of useful tasks: maybe create a whole PowerPoint presentation for you and do all the research and analysis that goes into that, or answer your emails. So there’s this strategy, I think, right now in China, where they’re hoping that it’s not just all about having the very best models, that it’s about trying to figure out where to make this work, and also to build the broader ecosystem for deploying these models, to integrate them into more and more services, into food delivery or into ride-hailing or into, again, much more practical, real-world applications.

So in the U.S., obviously, there’s just a lot of anxiety around A.I., to a greater degree than any big technological change in my lifetime. Certainly there are apocalyptic fears. There are economic fears about job displacement. There are social and cultural fears. There are people who just don’t want data centers built in their backyard. So there’s a whole range of different moods. If you were going to try and distill the mood in China, the public mood around A.I., how would you describe it, and how is it different from the U.S.?

I think the biggest anxiety right now in China is an anxiety around falling behind on technology. So I think in the U.S. there are a lot of worries about job displacement, of A.I. being a net negative force in society. In China, there are some of those concerns, and I can come back to that. But I think right now the fear among individuals and companies and workers is that they’re not keeping pace with A.I., that they’re not using it enough and they’re not savvy enough with this new technology, so that they won’t be competitive enough in the labor marketplace. And it’s interesting: this anxiety at the individual level kind of mirrors China’s anxiety at the national level, going back to when ChatGPT first came out.
And in fact, you can even go back to when AlphaGo first defeated the human world champion at Go.

Wow.

There was a lot of anxiety in China, among China’s A.I. industry and among policymakers in Beijing, worried that China was also falling behind, that they were not making the most of this new transformative technology. So it’s interesting to see this kind of mirroring, where it’s not about how do I keep this technology out of my life; it’s about how do I bring it in even more and integrate it and give myself that edge in a very, very crowded marketplace.

So I see that attitude in the U.S., but it is a very Silicon Valley, tech and tech-adjacent attitude. It’s spreading, but you see it in a pretty confined zone of the American economy. But are you saying that in China it is just much more widespread, that you don’t have to be working for DeepSeek or working for Alibaba or something to have this “Am I falling behind? I must adopt A.I.” mindset?

That’s right. So it’s interesting that A.I. is hitting at a time when China was already experiencing a whole bunch of anxieties around labor markets, especially for young college graduates. So, for example, the unemployment rate for young people in China is basically double what it is in the United States. It’s something close to 17 percent, which is extremely high. The number of new college graduates hitting the job market this year alone is 12 million plus in China. These are all people competing for many of the same jobs. They don’t want to work in the factories. They don’t want to have those blue-collar jobs or delivery jobs. They want, in their minds, the good jobs. And they’re worried that if they don’t keep up with A.I., they might not be able to get those. So it’s a longer-standing concern about this hypercompetitive environment in China that has been there as long as I’ve been going to China. But A.I. really amplifies and accelerates those anxieties.
And I mean, part of the debate in the U.S. has also been about the welfare state. And you have tech leaders talking about how the welfare state has to adapt if there is A.I.-driven unemployment. You have Elon Musk promising not universal basic income but universal high income. I’ll just say that China does not have a safety net to any degree like the United States or Western Europe. Is there a welfare state debate in China, a UBI debate, anything like that?

Increasingly so. I mean, the great irony here is, I was speaking about the Mao era earlier. That is the era of the iron rice bowl, of the idea that you are a worker at a state firm, at a state organization, and you basically had your job for life. And this idea of job security is no longer there in China, unless you’re working for, again, a state-owned enterprise or within the government. And so that concern is coming back. And there’s actually more discussion now, including among policy folks in Beijing, about the potential issues related to A.I. job displacement and what China should do about it from a welfare and policy standpoint.

I mean, are there actual policy ideas in the wind? Is there a UBI under Communist conditions?

It’s still early stages.

“From each according to his ability, to each according to his need” makes a comeback.

“To get rich is glorious.” But also.

But also, they are the Chinese Communist Party, after all.

Yeah, I think it’s still early days for that discussion. And there’s still a pivot that’s happening from the all-in, hit-the-gas-pedal approach to A.I. progress, including from the policymakers, where they were emphasizing all the new jobs that would be created by A.I.: Don’t worry about those other jobs that might be affected. That’s part of the industrial revolution that’s happening now, Industrial Revolution 4.0 or 5.0. But now that conversation is starting to shift.

And what about the central government’s concern about the social effects of A.I.?
Because one notable thing in China: you mentioned earlier the crackdown on internet companies, and there was and has been a deep anxiety about the internet’s effect on social life. You’ve had attempts to crack down on video gaming among young men. All of the things that American commentators worry about at a speculative level have actually sometimes been actual policies in China. And this is connected to the reality that China has a bigger problem than the U.S. with falling birth rates, falling marriage rates. Are China’s leaders looking at A.I. through that lens and worrying about the A.I. girlfriend, A.I. boyfriend future?

Definitely, they are very worried about that. And in fact, they are already rolling out policies and regulations around A.I. boyfriends and A.I. girlfriends.

It’s so funny.

They have a very negative view, the folks in Beijing, of wasting time, basically, of what they see as nonproductive activity. And in that earlier era of the tech crackdown, they saw video games as not really part of the Chinese vision for a high-growth, technologically powered future, when everyone’s at home playing video games. And they also cracked down on the education market. So there was a lot of private tutoring; ed-tech startups were sprouting up. And they saw that as also kind of wasteful, because it was a race to the bottom in terms of preparing for exams and feeding into that kind of cutthroat academic environment. So I think right now we’re seeing something similar happen again, with worries that A.I. companions could end up being a big time sink for Chinese youth when they should be engineering the future and building out the startups and the future Chinese versions of SpaceX, for example.

But is there also a sense that this is the solution? If China never fixes its birth rate, that robots are just the way that aging, low-birth-rate societies compete? Is that also part of the theory or the mindset?

Definitely, that’s a big part of the story.
So China has a shrinking workforce. I think their labor force size peaked actually over a decade ago. And they’re heavily dependent on manufacturing. They don’t want to let that go. They see that as the engine for the whole economy. So how do you reconcile those two factors, when people don’t want those factory jobs anymore, and young people want different jobs, and there are just not enough people to fill the factories? One solution is robots. One solution is to increasingly automate factory production, to put in robots of many different kinds, whether it’s your classic six-axis industrial robot arm, the kind that can lift up a car in one go, or now this big push with humanoid robots, which is seen as being yet another potential solution, if not a perfect solution, to this ongoing labor issue. So China wants to continue to become more and more competitive, to move up the value chain and to make better and more high-value stuff. But they don’t have the workforce. So A.I. and robotics is seen as the way to fill that in.

Yeah, it’s interesting, just thinking about the robot waiters you mentioned. So one thing that has been encouraging, I think, to people worried about job displacement in the U.S. is the extent to which robotics in restaurants, fast-food places, supermarkets and so on has not so far radically displaced human workers. And in fact, places like McDonald’s and Starbucks that have tried to really move to automatic ordering and so on have often found themselves maintaining human staff beyond what they expected, or expanding human staff. In a context, though, where the Chinese birth rate is maybe two-thirds the U.S. birth rate at this point, depending on which stats you look at, you’re just in a different landscape, where maybe you’re worrying less about whether the robot waiter displaces workers and more about whether you have a waiter at all. And so the robot waiter is welcome and necessary.
I mean, that seems like it could be a big point of divergence, ultimately, between how the U.S. and China relate to robots.

Yeah, definitely. It’s like you’re going to have to err on one side or the other. You’re going to have to err on the side of going too slow, and then you may not have the ability to do all these things because there are not enough workers there. Or you might err on the side of going too fast.

And I feel like that’s the concern in the U.S. more. Let’s pull back up to the AGI superintelligence question. How do you think China’s leaders actually think about the American fixation, or the tech world, Sam Altman, Dario Amodei fixation, on AGI? Are there two options here? You can tell me if there’s a third, right. One option is that the Chinese basically think that our tech companies are high on their own supply, that there’s never going to be some insane return to superintelligence, and it’s always going to be fine to be three to six months behind, because then you can catch up. Another option would be that China is actually worried about superintelligence and is basically trying to figure out, what are our contingency plans if the Americans seem to be pulling much further ahead? Do either of those describe China’s mindset, to the extent that you can read the tea leaves in Beijing?

So, I mean, one interesting corollary question is, is China trying to do an AGI Manhattan Project, somewhere buried underground in a bunker, with data centers that can’t be seen by satellites and powered by ...

Yes, are they?

And my inclination is no.

And do you think they could do something like that without the U.S. being aware of it?

So I don’t think that they would be able to do that without the U.S. being aware. I think that it would require such a scale of production, of amassing resources and construction, that we would detect something and we would start to wonder what is going on. And I mean, we already are watching everything about the nuclear build-out.
For example, in China, the nuclear weapons build-out. So I would be very doubtful that we would miss something of that scale, because you really would need massive scale in terms of compute and energy to power something that would be like a Manhattan Project for AGI.

So they’re not secretly trying to win the race. Whatever they’re doing, they are accepting this position of being in our draft on the racetrack, or whatever metaphor you want, for now. But is that just making a virtue of necessity, or do they think that we’re deluding ourselves in our race to superintelligence?

I think they just see the technology quite differently, and they just don’t have that kind of transcendent view of technology. I think that you can see this in other approaches that they’ve taken to the internet or to the IT revolution, which they were obsessed with as well. So they were really focused on just trying to integrate the internet and IT infrastructure into basic services: education, health, government services. And I think they see something similar with A.I. Now, one kind of thought experiment I often think about is, what would be the signs that they were trying to do a secret AGI program? And one of the signs, I think, would be about those NVIDIA chips that I mentioned earlier, where right now Trump has relaxed some of the export controls and allowed H200 NVIDIA chips to be sold to China. Those are better than what China had gotten before, but not the very best. And China has basically said, thanks but no thanks. The A.I. companies in China, to be sure, really, really want those chips. But here’s the divergence: Beijing doesn’t necessarily want to be dependent on the U.S., and they want to bolster their own semiconductor program. So if they were really sprinting today for AGI, I think they would have gobbled up those chips as quickly as possible, not knowing when that window might close.
So that is one indicator that they are seeing this as a medium- to long-term bet.

So there might be people at DeepSeek who believe in the superintelligence future more strongly than people in Beijing.

Yes, yeah.

I think the closer you are to the machine God, the more its voice whispers in your ear.

That’s right. Yeah, I don’t think that Beijing is AGI pilled. What about espionage, which obviously played a big role in the early Cold War arms race with nuclear secrets? Is there an equivalent spy-based solution for China if the U.S. seems to be pulling too far ahead?

So there is something called distillation, and that’s where you take a weaker model and you actually train it on the outputs of a stronger model. And distillation is a common practice for A.I. developers when it’s done with full knowledge and full disclosure and total authorization. What seems to be happening now is some of the Chinese A.I. labs seem to be distilling on American A.I. models without authorization, and they’re using, it seems, a number of different proxy accounts so that they can get around efforts to block these campaigns.

So that doesn’t require stealing secrets from Anthropic. It just requires using the Anthropic model in a way that you’re not supposed to be able to use it.

That’s right. It’s its own category. It’s not quite like outright IP theft. It’s not like taking the source code from Anthropic or OpenAI. It hearkens back a little bit to an era where Microsoft was always trying to cut down on black-market copies of Windows and Microsoft Office.

Does it work, in the sense that you can just have a distilled Chinese Claude that works as well as Claude?

So it can help somewhat, but you need to have that foundation to start with.
So I think that this is probably one area where it’ll be hard still to get concrete data on exactly what the net effect is. But I would say that if you or I were building a model from scratch, we would not be able to use distillation as a way to catch up to the frontier. If you were one of the better Chinese A.I. labs, you might be able to use some of this to improve your model, especially in areas where you’re weaker. On coding, for example, you might be able to use Anthropic’s Claude models to support your long-term coding capabilities. So there is that aspect to this whole A.I. race.

In a world where there is some kind of takeoff. And I should say, one of the theories that animates the American A.I. companies is the idea that at a certain level, the A.I.s start training the new A.I.s, and you get this kind of acceleration where suddenly, being three or six months behind, it becomes impossible to catch up again. This would be the theory. Suppose that starts to happen. Does China just invade Taiwan?

Well, seriously, I mean, it’s just a kind of fascinating circumstance that you have a kind of arms race, maybe China doesn’t think of it as an arms race, but it is sitting next door to a central hub in the supply chain that makes the arms race possible. Like, is that the natural Chinese move in the event that they seem to be falling incredibly behind?

So I think, ironically, if that were really starting to happen, taking over TSMC would be a move too late, because the chips are already made and installed and are already running and training the models and feeding into this feedback loop in the United States. So at that point, all bets are off, and you’re kind of out of options for what to do. The big question here is how fast that can happen and whether this could happen without being detected. There’s always speculation about, is there a version of the latest A.I. models that hasn’t been shared or even disclosed to the public, in, say, the U.S.
or maybe even in China, where they have gotten the inkling of this recursive feedback loop that will lead to this superintelligence explosion. So that question, it’s kind of hard to know. And then, how quickly can you actually get there?

But I want you to be prescriptive for a moment, because we’re having a summit. We’ve been talking about what China is doing, how China is thinking and so on. What does all of this mean for the United States in terms of our policies? Does it mean that we should treat China as a fundamentally more benign actor than our current policy treats them as? Or is it an indicator that, in fact, our policy is working by shaping a Chinese perspective that is not as engaged in the race as it could be?

Yeah, I think at this point, what we should do is take a step back from this all-out race framework, because I think right now that race mentality is driving a kind of recklessness, I would argue, from the American side. To bring up the threat of Chinese AGI: we should think about that, but I don’t think that that’s what they’re so focused on. But if we’re only focused on that, it means we need to get rid of the guardrails. We need to not bind ourselves. We need to not have any kind of regulation or restrictions. We need to have as many data centers as possible everywhere. And I think right now that approach is starting to run into some problems in the United States, whether you’re talking about the backlash to data centers or you’re talking about, now, some of these models getting so capable that they might not be at whatever AGI level, but they are at the level, potentially, of causing greater damage, either in terms of cyberattack capabilities or maybe even in terms of augmenting what a relatively unsophisticated group could do with bioweapons. So there are all these questions that the A.I.
community has been talking about for a long time. But certainly for the Trump administration, if you recall JD Vance’s speech last year, where he said, basically, we should not have hand-wringing over A.I. safety slow down the progress of A.I. development. In other words, in this trade-off, and he viewed it as a trade-off, we should err on the side of going faster rather than putting on a seat belt. And I think now we’re reaching the point where we need to think about still making progress as fast as possible, competing with China, making sure we do have the best A.I. models so that we can keep up. But does it have to come at the expense of wearing a seat belt, or having some basic safeguards? Would you also suggest that the U.S. should adopt a more Chinese vision of the goal of diffusion and building the best possible A.I.-enabled technology right now? Because, I mean, a different way to frame this is that the U.S. and China are in a race, but China thinks it’s running a race to build the self-driving cars and the robots that every single country in the world will use, and the U.S. will be stuck sitting here with its pretend machine God while China sells to India, Africa and Latin America successfully. Do you think the U.S., in being less breakneck, should also be pivoting to a strategy of essentially integration and sales? Yes, I think we need to focus a lot more on deployment. One of those areas is actually open source, which, because of the commercial incentives, is not a high priority for the top American A.I. labs. They’re focused on selling access to their models through subscriptions, through APIs. And the thing is, that open source approach has been really, really powerful for these Chinese A.I. models in gaining adoption, not just in China but around the world. And so it feels like right now the U.S. is ceding a really important channel of competition. An American model can be the most powerful A.I. model, but it’s so expensive that you don’t want to pay for it.
That can put limits on your growth. Do you think you get that shift organically if there is a slightly stronger regulatory hand? Because, again, the U.S. does not. We have industrial policy, I’ll put it in quotation marks, but we don’t have the kind of steering of economic strategy that China has. So it’s not as if you can say, oh, the United States should be more focused on deployment, and there’s a button to push in Washington, D.C., that makes that happen. But do you think it would happen naturally if it was a little bit harder and a little bit more challenging just to maximize compute and capacity for existing A.I. companies? I think there’s a way to tweak the incentives in a way that is not like the Chinese approach, that is not about a top-down steering of the whole industry, but is more about trying to create maybe some of that commercial or even research space for, say, open source models. Yeah, I just think right now you can think about a number of different markets where this is happening, where there’s a focus on the high end of the market, on consumers or businesses that are willing to pay a lot, but there’s less focus on mass adoption and that broader marketplace. And we’re seeing some of this. I should be clear that NVIDIA is trying to release open source models. They have a commercial incentive, because the more A.I. gets adopted, the more their chips are needed. So there’s that closed loop there. And Google DeepMind, they have some relatively good open source models. But the commercial incentives as they stand are not quite there. Do you think we should sell more chips to China, as a token of a different model? It’s a very difficult topic, because anyone who tells you yes or no on chips to China is really flattening the whole story. On the one hand, you do have real near-term effects on China’s ability to produce the most cutting-edge A.I. models. So by limiting chips, that does slow down China’s A.I. development in the near term.
And that can be useful, for example, for giving our companies that edge in cyber attack capabilities. With mythos coming out, even a few months of being able to test on our own systems first is very useful, versus a Chinese model having this capability and testing on our systems. So that’s important. But at the same time, there’s the other side of this whole equation, which is accelerating China’s own chip development. And that’s an area that they’ve been really focused on, and they’ve been focused on it because of our export controls. So it cuts both ways. In the near term, it will slow down their development. In the longer term, it could speed up at least their ability to have a more resilient, self-reliant semiconductor supply chain that is not as affected by U.S. actions. So somewhere in there is the sweet spot, and it’s really about where you draw the line, rather than just saying more chips or less chips, and also how short timelines are overall. Absolutely. And I’m just going to make the hawk’s case against your case and see how you respond, because the hawk says, look, we’ve been at this for an incredibly short amount of time. Since ChatGPT appeared during the pandemic, there’s been tremendous acceleration. The people who have predicted acceleration keep being vindicated, right? And yes, if you’re talking about a 20 to 25 year time horizon for the point at which we hit maximum superintelligence capacity, then yeah, you have a lot of room to figure out the optimal regulatory balance and all of these things. But if you’re talking about two to four to six years, then maintaining a three to six month lead over your leading rival, which, by the way, is an authoritarian government, seems like it may be really, really, really important. And the slowdown that you’re advocating is one that could give up that advantage.
So how would you respond to that kind of argument, which seems to be the mindset that certainly not just people at the Pentagon but a lot of people in Silicon Valley have? So that timeline comes up again and again, in so many different debates within the U.S., as it relates to the U.S.-China A.I. competition. And fundamentally, it’s impossible to say how that timeline will play out. Yeah, I’ve discovered that in interviewing people. Yes, it’s impossible to say on the timeline question. I mean, then it really boils down to what your views are about this AGI timeline and how likely this is to happen. And another factor that I will throw in there, as a thought experiment: imagine that China did have access to the most cutting-edge American A.I. chips. Would Beijing be more AGI pilled? Forget about DeepSeek or the actual tech founders themselves. And even on that, I’m not so sure that they would be so AGI pilled. My guess would be that they would try to deploy certainly better models, but basically run their current playbook, just amped up a whole bunch. And I think it goes back to that. But even their current playbook includes cyber warfare, includes a lot. Like you just mentioned, the fact that just a three-month advantage in the deployment of a cyber warfare capable model like mythos makes a big difference. So it’s not as though the current Chinese playbook is innocent of conflict with the U.S. That’s right. So that’s why I see it as different sets of risks. One is this AGI risk that you’re talking about, and that, I would argue, has been overblown. But what I don’t think has been overblown, and in fact maybe has even been underestimated up until recently, is the cyber risk and the biosecurity risk. I mean, it’s kind of crazy to say this, but those are more medium risks relative to the catastrophic total takeover by a superintelligence.
So those more intermediate risks I do worry about, and I do worry about U.S. competition vis-à-vis China. And so that would be, in my mind, a reason for maintaining the export controls that we currently have, and not fiddling with them, and not agreeing to these side deals with Xi Jinping, for example. So that’s why I try to find that balance. But in terms of the AGI question, that’s where I’m just less convinced that we’re really all in this sprint toward AGI, that China is really all in the sprint for AGI. But even on the medium risks, which I agree seem to me to be the most plausible risks, you are then making a calculation where you’re saying: what am I most afraid of? Am I most afraid of China with the capacity to do unprecedented cyber warfare against the U.S., or a rogue A.I. or disastrous A.I. model that crashes the entire U.S. power grid for some inscrutable A.I.-related reason? It’s that balance that you’re worrying about? Yeah, exactly. And it comes to this question, too, about how the U.S. should engage with China about A.I. Because if we are focused just on China’s cyber attack capabilities relative to our own, then you might say, don’t bother engaging, we’re both in this arms race, essentially, on cyber capabilities. But if you’re thinking about the rogue agent, or say a non-state actor using either a set of American models or a set of Chinese models, or maybe they even do arbitrage across them. This is maybe 4D chess, but they deliberately play the two sides of this geopolitical competition against each other and try to distribute an attack across all these different models in order to disguise its origins. Those are areas where I do think that, one, it would be useful to talk to the Chinese side about these, and two, where I think it would be in the U.S. national interest. It wouldn’t just be about binding ourselves and slowing ourselves down relative to China. It would be about this extra third factor that we want to take seriously.
And this is a good place to end, because a lot of people in Silicon Valley will say, oh yeah, in theory we could engage with China and negotiate a mutual A.I. slowdown. But in practice, either it’s not clear that China wants that kind of negotiation, or it’s just unimaginably complex to verify some A.I. control agreement in the way that we did with nuclear missiles during the Cold War. Do you think a Cold War style, ongoing A.I. control negotiation with China is possible? I think we should not have high expectations, and I certainly don’t. I think that we should start by talking. We should start by sharing our approach to A.I. safety and A.I. risk mitigation. We should try to convince the Chinese to take this more seriously, and they are starting to take this more seriously. We should also have a discussion about open source models, actually, because as those get better, on the one hand, we want them to diffuse more, but on the other hand, they could also pose a risk if they get into the wrong hands. So we can talk about all those areas. But I would be very hesitant, certainly at this stage, to even think about binding constraints, verification agreements, a kind of arms control treaty for A.I. between the U.S. and China. At this stage, it’s way too early. Let’s just start talking. If it’s too early for that, is it just because of the sheer difficulty of imagining such a thing? Or is it a dynamic where, precisely because Beijing’s attitude is that we’re not in some Cold War style race, they’re actually less interested than they otherwise would be in that kind of negotiation? I think overall it really boils down to one thing, which is an extremely low degree of trust between the U.S. and China, and an unwillingness on either side to subject themselves to invasive verification, monitoring, surveillance by the other party. And yeah, there could be interesting technical solutions that would make that more feasible.
But it boils down to this geopolitical reality where we don’t trust them and they don’t trust us. And so we might be able to make progress on areas that affect both of us, but when it comes to letting, say, Chinese regulators come into the U.S., or letting American regulators go inspect data centers in China, I think that is pretty far out there at this stage. And do you think that only changes on the far side of some disaster, some conflict, some event? Because, I mean, one theory that I don’t just toy with, I guess I hold, is that a lot of the negotiations around nuclear weapons were only possible because they had been used, and people were aware of how destructive they are. Is there a world where the only way that the U.S. and China come to terms is a world where something tragic has to happen first? Yeah, that’s a scenario I think about, too. And I think about what would be the level of incident and what the response could be. You can think about a most extreme case where you have some major cyber attack incident, or even a bioweapons incident, related to A.I., where there are real lives at stake, for example. And that could cause both countries to just unilaterally put a pause on all their A.I. development, because they realize that this is such a big issue with such huge risks. That is possible. So I do wonder, and I do worry, that we might be waiting for that incident to happen, rather than taking action in advance. Before you even start to talk to each other about how to take action. All right, on that somewhat dark note, Kyle Chan, thank you for joining me. Thank you.
