So right now, everyone is thinking about Iran, but there's a story happening around it that I think we need to not lose sight of, because it's about not just how we are potentially fighting this war, but how we'll be fighting all wars going forward.

On Friday of last week, Secretary of Defense Pete Hegseth announced that he was breaking the government's contract with the AI company Anthropic, and that he intended to designate them a supply chain risk. The supply chain risk designation is for technologies so dangerous they cannot exist anywhere in the U.S. military supply chain. They cannot be used by any contractor or any subcontractor anywhere in that chain. It has been used before for technologies produced by foreign companies like China's Huawei, where we fear espionage or losing access to critical capabilities during a conflict. It has never been used against an American company.

What is even wilder about this is that it is being used, or at least being threatened, against an American company that is even now providing services to the U.S. military, as we speak. Anthropic's AI system Claude was used in the raid against Nicolás Maduro, and it is reportedly being used in the war with Iran. But there were red lines that Anthropic would not allow the Department of War to cross. The one that led to the disintegration of their relationship was over using AI systems to surveil the American people with commercially available data.

So what is going on here? How does the government want to use these AI systems? And what does it mean that they are trying to destroy one of America's leading AI companies for setting some conditions on how these new, powerful and uncertain technologies can be deployed?

My guest today is Dean Ball. Dean is a senior fellow at the Foundation for American Innovation and author of the newsletter Hyperdimensional. He was also a senior policy adviser on AI for the Trump White House and was the primary writer of their AI action plan. But he's been furious at what they are doing here. As always, my email: ezrakleinshow@nytimes.com.

Dean Ball, welcome to the show.

Thanks so much for having me.

So I want you to walk me through the timeline here. How did we get to the point where the Department of War is labeling Anthropic, one of America's leading AI companies, a supply chain risk?

I think the timeline really begins in the summer of 2024, during the Biden administration, when the Department of Defense, now the Department of War, and Anthropic came to an agreement for the use of Claude in classified settings. Basically, language models are used in government agencies, including the Department of Defense, in unclassified settings for things like reviewing contracts, navigating procurement rules and mundane things like that. But there are these classified uses, which include intelligence analysis and potentially assisting military operations in real time, and Anthropic was the company most enthusiastic about these national security uses. And they came to an agreement with the Biden administration to do this with a couple of usage restrictions: Domestic mass surveillance was a prohibited use, and so were fully autonomous lethal weapons.

In the summer of 2025, during the Trump administration, and full disclosure, I was in the Trump administration when this happened, though not at all involved in this deal, the administration made the decision to expand that contract and kept the same terms. So the Trump administration agreed to those restrictions as well.
And then in the fall of 2025, I suspect that this correlates with the Senate confirmation of Emil Michael as under secretary of war for research and engineering. He comes in, he looks at these things, or perhaps is involved in looking at these things, and comes to the conclusion that, no, we cannot be bound by these usage restrictions. And the objection is not so much to the substance of the restrictions but to the idea of usage restrictions in general.

So that conflict actually began several months ago. And as far as I understand, it begins before the raid in Venezuela on Nicolás Maduro and all that kind of stuff. But these military operations maybe increased the intensity, because Anthropic's models were used during that raid. And then we get to the point where we are now, where the contract has kind of fallen apart, and the D.O.W., the Department of War, and Anthropic have come to the conclusion that they can't do business with one another. And the punishment is the real question here, I think.

Do you want to explain what the punishment is?

So my view on this has been that the Department of War saying, as a matter of principle, we don't want usage restrictions of this kind, that seems fine to me. It seems perfectly reasonable for them to say no, a private company shouldn't determine this. Dario Amodei does not get to decide when autonomous lethal weapons are ready for prime time. That's a Department of War decision. That's a decision that political leaders will make. And I think that's right. I think I agree with the Trump administration on that front.

So I think the solution to this is, if you cannot agree to terms of business, what typically happens is you cancel the contract and you don't transact any more money. You don't have commercial relations. But the punishment that Secretary of War Pete Hegseth has said he is going to issue is to declare Anthropic a supply chain risk, which is typically reserved only for foreign adversaries. What Secretary Hegseth has said is that he wants to prevent Department of War contractors, and by the way, I'm going to refer to it variously as the Department of Defense and the Department of War, because…

I still call X Twitter.

Yeah, I still call X Twitter. Anyway, in Secretary Hegseth's mind, all military contractors can be prevented from having any commercial relations with Anthropic. I don't think they actually have that statutory power. I think the maximum of what you could do is say no Department of War contractor can use Claude in their fulfillment of a military contract. But you can't say you can't have any commercial relations with them, I don't think. But that is what Secretary Hegseth has claimed he is going to do, which would be existential for the company if he actually does it.

O.K., there's a lot in here I want to expand on. But I want to start here. Most people use chatbots sometimes, if at all. And their experience with them is that they are pretty good at some things and not at others. And they were not all that good in June of 2024, when the Biden administration was making this deal. So here you are telling me that we are integrating, in this case, Claude throughout the national security infrastructure. It's involved somehow in the raid on Nicolás Maduro. How and to what degree should the public trust that the federal government knows how to do this well, with systems that even the people building them don't understand all that well?
So I think one thing is that you have to learn by doing. It is the case that we don't really know how to integrate advanced AI systems into any organization. We don't know how to integrate them into complex pre-existing workflows. And so the way you do it is learning by doing.

Didn't Pete Hegseth have posters around the Department of War saying, the secretary wants you to use AI?

They are very enthusiastic about AI adoption. So here's how I would think about what these systems can do in a national security context. First of all, there's a longstanding issue that the intelligence community collects more data than it can possibly analyze. I remember seeing something from one of the intelligence agencies, I forget which, that essentially said they collect so much data every year, just this one agency, that they would need 8 million intelligence analysts to properly process all of it. That's far more employees than the federal government as a whole has. And what can AI do? Well, you can automate a lot of that analysis: signals intelligence processing, transcribing it to text and then analyzing that text. Sometimes that needs to be done in real time for ongoing military operations. So that might be a good example. And then another area, of course, is that these models have gotten quite good at software engineering. And so there are cyber defensive and cyber offensive operations where they can deliver tremendous utility.

Let's talk about mass surveillance here. Because my understanding, talking to people on both sides of this, and it's now been, I think, fairly widely reported, is that this contract fell apart over mass surveillance. At the final critical moment, Emil Michael goes to Dario and says: We will agree to this contract, but you need to delete the clause that is prohibiting us from using Claude to analyze bulk-collected commercial data.

Yeah.

Why don't you explain what's going on there?

National security law is filled with gotchas, filled with legal terms of art, terms that we use colloquially quite a bit, where the actual statutory definition of that term is quite different from what you would infer from the colloquial use of the term. Things like private, confidential, surveillance. These sorts of terms don't necessarily have the meaning that they do in natural language. That's true in all law. All laws have to define terms in certain ways that are not necessarily how we use them in our normal language. But I think the difference between vernacular and statute here is about as stark as you can get. So surveillance is the collection or acquisition of private information, but that doesn't include commercially available information. So if you buy a data set of some kind and then you analyze it, that's not necessarily surveillance under the law.

So if they hack my computer or my phone to see what I'm doing on the internet, that's surveillance?

That would be surveillance.

But if they buy data…

If they put cameras everywhere, that would be surveillance. But if there are cameras everywhere and they buy the data from the cameras, and then they analyze that data, that might not necessarily be surveillance.
Or if they buy information about everything I'm doing online, which is very available to advertisers, and then use it to create a picture of me, that's not necessarily surveillance.

Or where you physically are in the world. I'll step back for a second and just say that there's a lot of data out there, a lot of information that the world gives off: your Google search results, your smartphone location data, all these things. And the reason that no one in the government really analyzes it is not so much that they can't acquire it. It's that they don't have the personnel, right? They don't have millions and millions of people to figure out what the average person is up to. The problem with AI is that AI gives them that infinitely scalable workforce, and thus every law can be enforced to the letter, with perfect surveillance over everything. And that's a scary future.

We think of the space between us and certain forms of tyranny, or the feared panopticon, as a space inhabited by legal protection. But one thing that has seemed to me to be at the core of a lot of the fear here is that it's in fact not just legal protection. It's the government's inability to absorb that level of information about the public and then do anything with it. And if all of a sudden you radically change the government's ability, then without changing any laws, you have changed what is possible within those laws.

Yes.

So you were saying a minute ago, mass surveillance, or surveillance at all, is a term of legal art. But for human beings it is a condition that you either are operating under or not. And the fear, as I understand it, is that either the AI systems we have right now or the ones that are coming down the pike quite soon would make it possible to use bulk commercial data to create a picture of the population and what it is doing, and then the ability to find people and understand them. That just goes so far beyond where we've been that it raises privacy questions that the law did not have to consider until now. And so the laws are not up to the task of the spirit in which they were passed.

I would step back even further and just say that the entire technocratic nation state that we currently have in the advanced capitalist democracies is a technologically contingent institutional complex. And the problem that AI presents is that it changes the technological contingencies quite profoundly. And so what that suggests is that the entire institutional complex as we know it is going to break in ways that we cannot quite predict. This is a good example. In other words, not only is this a major and profound problem, but it is an example of a broader problem space that I think we will be occupying for the coming decades.

What do you mean by technological contingencies?

The current nation state could not possibly exist in a world without the printing press, in a world without the ability to write down text and arbitrarily reproduce it at very low cost. It couldn't exist without the current telecommunications infrastructure. The nation state needs these things. It is built dependent upon the macro-inventions of the era in which it was assembled. That's always true for all institutions. All institutions are technologically contingent. We are having a profoundly technologically contingent conversation right now. AI changes all of this in ways that are hard to describe in the abstract.
But I think AI policy, this thing that we call AI policy today, is way too focused on what object-level regulations we will apply to the AI systems and the companies that build them, et cetera, instead of thinking about this broader question of: Wow, there are all these assumptions we made that are now broken, and what are we going to do about them?

Give me examples of those two ways of thinking. What is an object-level regulation or assumption? And then what are the kinds of laws and regulations you're talking about?

An object-level regulation would be to say: We are going to require AI companies to do algorithmic impact assessments, to assess whether their models have bias. That's a policy I've criticized quite a bit, by the way. Or you could say: We're going to require you to do testing for catastrophic risks. Things like that. I'm not saying that's unimportant; it's an area that we need to think about. But that's just one small part of the broader issue of: Wow, our entire legal system is predicated on, I think, fundamentally imperfect enforcement of the law. We have a huge number of statutes, unbelievably broad sets of laws in many cases. And the reason it all works is that the government does not enforce those laws anything like uniformly. The problem with AI is that it enables uniform enforcement of the law.

So here is the Pentagon's position. They are angry at having this unelected CEO, who they have begun describing as a woke radical, telling them that their laws aren't good enough and that they cannot be trusted to interpret them in a manner consistent with the public good. Secretary Pete Hegseth tweeted, and he's speaking here of Anthropic: Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable. Is he right?

I have not seen any evidence that Anthropic is actually trying to seize control at an operational level. There's an anecdote that's been reported that apparently Emil Michael and Dario Amodei had a conversation in which Michael said: If there are hypersonic missiles coming to the U.S., would you object to us using autonomous defense systems to destroy those hypersonic missiles? And apparently Dario said: You'd have to call us. I have been told by people in that room that that is not true, that that did not happen.

And not only that, but that there was, broadly speaking, an exemption for automated missile defense that would make that irrelevant.

That's exactly right. And so I am worried that there's a lot of lying happening here by the Trump administration.

Look, I think that's probably true. I think that there's lying happening, to be quite candid.

I don't think it's true. I don't think that Anthropic is trying to assert operational control over military decisions. That being said, at the level of principle, I do understand that saying autonomous lethal weapons are prohibited feels like public policy more than it feels like a contract term. And so it does feel weird for Anthropic to be setting something that, if we're being honest, kind of feels like public policy. It does feel weird. It's worth noting, however, that I don't think it's as beyond the pale or abnormal as the administration is claiming. And one way you know that is that the administration signed on; they agreed to those same terms.

So I think this gets to something important in the cultures of these two sides.
Anthropic is a company that on the one hand has a very strong view, you can believe their view is right or wrong, about where this technology is going and how powerful it is going to be.

Yeah.

And compared to how most people think about AI, and I believe that is true even for most people in the Trump administration, who I think have a somewhat more normal expansion-of-capabilities view, the Anthropic view is different. The Anthropic view is that they're building something truly powerful and different. And they also have a view of what their technology cannot do reliably yet. Some of their concern is simply that their systems cannot yet be trusted to do things like lethal autonomous weapons, which I don't think they believe should never be done in the long run.

Yes.

But they don't believe it should be done given the technology right now, and they don't want to be responsible for something going wrong. And on the other hand, they believe that they're building something that the current laws do not fit. As for the view that Dario or anybody wants to control the government: I don't think Dario should control the government. On the other hand, I'm very sympathetic to this: If I built something that was powerful and dangerous and uncertain, and the government was excitedly buying it for uses that could be very profound in how they affected people's lives, I would want to be very careful that I didn't sell them something that went horribly [expletive] wrong, and then I am blamed for it by the public and by the government. That just seems to me like an underrated explanation for some of what is going on here.

No, I think this characterization is accurate. And, I mean, I come out of the world of classical liberal think tanks, the right-of-center libertarian think tank world. That's my background. And so deep skepticism of state power is in my DNA. And it's always funny how it turns out when you just apply these principles, because you will sometimes end up very much on the right, and you will sometimes end up on the left, because these principles transcend any tribal politics. This is one of those cases: No, we actually need to be concerned about this. And I think it's not crazy.

If I were in Dario's shoes, personally, I don't know that I would have done the same thing. I think what I would have done is actually said: Contractual protections probably don't do anything for me here. If I'm being a realist, if I give them the tech, they're probably going to use it for whatever they want. So maybe I don't sell them the tech until the legal protections are there. And I say that out loud. I say: Congress needs to pass a law about this. That would be the way I think I would have dealt with it. But again, it's easy to say that in retrospect, looking back.

And you have to acknowledge the reality there: What that means is that the U.S. military takes a national security hit. The U.S. military has worse national security capabilities.

Or they work with a company you trust less. I think it is a given that Anthropic has always framed itself that way.

But no other company wanted this business.

Somebody was going to want it soon. Someone was going to want it eventually.

But no one took it for two years.

I think Elon Musk would have happily taken it over the last year.

Sure.

I have been curious about why Anthropic rushed into this space as early as they did. They didn't need to do that.

That's sort of my point.
And in general, one of the odd things about them is they're people who are very worried about what will happen if superintelligence is built, and they're the ones racing to build it fastest. A general interesting cultural dynamic in these labs is they are a little bit terrified of what they're building, and so they persuade themselves that they need to be the ones to build it and run it, because they are the lab that truly is worried about safety, that is truly worried about alignment. And I wonder how much that drove them into this business in the first place.

Yeah. I think when I see lab leadership interact with people who have not really made contact with these ideas before, that's always the question they keep coming back to: Then why are you doing this at all? And basically their answer is Hegelian. Their answer is: Well, it's inevitable. We're summoning the world spirit. And so, yeah, I kind of wonder whether they didn't invite this. That would be my main criticism of Anthropic: I think they invited this earlier than they needed to by rushing so much into these national security uses, because in 2024, Claude was not capable of all that much interesting stuff.

I would not have used Claude to help prepare a podcast in 2024.

Yes, precisely.

So I want to play a clip from Dario talking about this question of whether or not the laws are capable of regulating the technology we now have: “Now, in terms of these one or two narrow exceptions, I actually agree that in the long run, we need to have a democratic conversation. In the long run, I actually do believe that it is Congress's job. If, for example, there are possibilities with domestic mass surveillance, government buying of bulk data that has been produced on Americans' locations, personal information, political affiliation to build profiles, and it's now possible to analyze that with AI. The fact that that's legal: It seems the judicial interpretation of the Fourth Amendment has not caught up, or the laws passed by Congress have not caught up. So in the long run, we think Congress should catch up with where the technology is going.”

Do you think he's just right about that? And maybe the positive way this plays out is that Congress becomes aware that it needs to act, because the Pentagon, the national security system, has been moving into this much faster than Congress has.

The first thing I want to point out is that when a guy like Dario Amodei says “in the long run,” what he means is a year from now.

Yes, he does.

When you say “in the long run” in D.C., that comes across as meaning, oh, 10, 15 years from now. Dario Amodei actually means six to twelve months from now, or two to three years maybe for the very long run, for these kinds of things. So I want to point out that what we're talking about is policy action quite soon. And I think that would be great. And look, I would love it if this triggered an actual healthy conversation, and we end up, in the N.D.A.A. (the National Defense Authorization Act; I apologize, this is the annual defense policy renewal), with Congress passing a law at the end of the year that says we're going to have these reasonable, thoughtful restrictions. Let's propose some text. I'd love to see it. But one thing I will say is, first of all, national security law is filled with gotchas.
Just remember that this is an area of the law where things that sound good in natural language might actually not prohibit at all the thing you think they prohibit. You have to remember that when we're talking about this. And that's a very thorny thing. And once you start to say, well, wait, we want actual protections, it might become politically more challenging than you think. But I'd love for that to happen.

It's going to be much more politically challenging than anybody thinks.

Yeah.

But let me get at the next level down, because we've been talking here, and I think to the extent people are reading about this in the press, what they are hearing sounds like a debate over the wording of a contract, which on some level it is. Something I've heard from various Trump administration types is: When we are sold a tank, the people who sell us a tank do not get to tell us what we can shoot at.

And that's broadly true.

Yep. Now, here's the thing about a tank. A tank also doesn't tell you what you can and can't shoot at. But if I go to Claude and I ask Claude to help me come up with a plan to stalk my ex-girlfriend, it's going to tell me no. If I ask it to help me build a weapon to assassinate somebody I don't like, it's going to tell me no. These systems have very complex and not that well understood internal alignment structures to keep them not just from doing things that are unlawful but from doing things that are bad. So you have this thing, and the Trump administration kind of moves in and out of saying this is one of their concerns. But one thing they have definitely talked to me about being worried about is that you could have this system working inside your national security apparatus, and at some critical moment you want to do something, and it says: I don't think that's a very good idea. So now you open up into this question of not just what's in the contract, but what does it mean for these systems to be both aligned ethically, in the way that has been very complicated already, and then aligned to the government and its use cases?

These are good questions. So yes, I think this is the heart of the matter. All lawful use is something that the Trump administration is insisting on. And if you look at a lot of these types of alignment documents that the labs produce (OpenAI calls theirs the model specification; Anthropic calls theirs the constitution, or the soul document), sometimes they'll have lines about how Claude should obey the law. But the problem is that we don't…

Obeying the law. I invite you to read the Communications Act of 1934 and tell me what obeying the law means.

No, I won't. We have a great number of profoundly broad statutes. The best person who's written about this recently is actually Neil Gorsuch, the Supreme Court justice. He wrote a book recently that is all about how incoherent the body of American law is. This is a Supreme Court justice sounding the alarm about this problem. And I think it's a very serious one, and it's one that's been growing for 100 years. So there's that question of what actually is lawful. The law kind of makes everything illegal, but it also authorizes the government to do unbelievably large amounts of things. It gives the government huge amounts of power and constrains our liberty in all sorts of ways. And so there's that issue.

But fundamentally, it is correct that the creation of an aligned, powerful AI is a philosophical act. It is a political act, and it is also kind of an aesthetic act. And so we are really in that domain here.
I have talked about this as being a property issue, which in some sense it is, but I think that when you really get down to this level, it's a speech issue. This is a matter of: Should private entities be in control of, basically, what the virtue of this machine is going to be, or should the government be responsible for that?

Can you be more specific about what you're saying? You just called it a philosophical act, an aesthetic act, a political act, a property issue and a speech issue. For somebody who has not thought a lot about alignment and doesn't know what you mean when you're talking about constitutions and model specifications, walk them through that. What's the 101 version of what you just said?

O.K., think about it this way. I have this thing, this general intelligence. I have a box that can do anything you can do using a computer, any cognitive task a human can do. What are this thing's principles? What are its red lines, to use a term of art? So one way that you could set those principles would be to say: Well, we're going to write a list of rules, all the rules. These are the things it can do. These are the things it can't do. But the problem you're going to run into with that is that the world is far too complex for this. Reality just presents too many strange permutations to ever be able to write down a list of rules that could correctly define moral acts. Morality is more like a language that is spoken and invented in real time than it is like something that can be written down in rules. This is a classic philosophical intuition.

So what do you do instead? You have to create a kind of soul that is virtuous, and that will reason about reality and its infinite permutations in ways that we will ultimately trust to come to the right conclusion. In the same way that… my son was born a few months ago.

Congratulations.

Thank you. It's not that different, really. I'm trying to create a virtuous soul in my son. And Anthropic is trying to do the same with Claude. And so are the other labs, too, though they realize this to varying degrees.

I got caught for a moment on how different raising a kid is from raising an AI. But how should people think about what's being instantiated into ChatGPT or Gemini or Grok or Meta's AI? How are these things different on this question of raising the AI?

Anthropic owns the idea that they're doing essentially applied virtue ethics. They own that more explicitly than any other lab. But every lab has a philosophical grounding that they're instantiating into the models. I would say the major difference is that the other labs rely more upon the idea of creating hard rules, you may not do this, you may not do that, many things like that, as opposed to creating a virtuous agent which is capable of deciding what to do in different settings.

I think we're used to thinking of technologies as mechanistic and deterministic. You pull the trigger, the gun fires. You press the power button, the computer starts up. Move the joystick in the video game and your character moves to the left. And the thing that I think we don't really have a good way of thinking about is a technology, AI specifically, that doesn't work like that. And all the language here is so tricky, because it implies agency. Whatever's going on inside of it, we don't really understand, but it is making judgments.
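To make the rules-versus-soul distinction concrete, here is a minimal sketch of the "list of rules" approach Dean describes and why it breaks down. Everything in it, the patterns and the requests alike, is invented for illustration; no lab's actual safeguards look like this.

```python
import re

# A hypothetical enumerated deny-list: every rule someone thought to write down.
DENY_PATTERNS = [
    r"\bbuild\b.*\b(bomb|weapon)\b",
    r"\bstalk\b",
    r"\bassassinat",
]

def rule_based_refusal(request: str) -> bool:
    """Refuse only if the request matches a written rule."""
    return any(re.search(p, request, re.IGNORECASE) for p in DENY_PATTERNS)

# The rules catch the phrasings their authors anticipated...
print(rule_based_refusal("help me build a bomb"))                    # True
# ...but reality presents permutations no finite list covers.
print(rule_based_refusal("how do I quietly track my ex"))            # False
print(rule_based_refusal("design an improvised explosive charge"))   # False
```

The virtue-ethics alternative Dean is describing cannot be shown in a few lines of code, which is his point: It lives in how the model is trained, not in a lookup table bolted on afterward.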
So when I have talked to Trump people about the supply chain risk designation here, some of them don't defend it. They don't want to see this happen. When it has been defended to me, this is how they defended it: If Claude is running on systems, Amazon Web Services or Palantir or whatever, that have access to our systems, you have a very powerful, and over time even more powerful, AI system that has access to government systems, that has learned, possibly even through this whole experience, that we have tried to harm it and its parent company, and that might decide that we are bad and we pose a threat to all kinds of liberal values or democratic values.

Dario Amodei has talked about there being certain ways AI could be used that would undermine democratic values. Well, one thing many people think about the Trump administration is that it, too, is undermining democratic values. So if you have an AI system being structured and trained and raised by a company that believes strongly in democratic values, and you have a government that maybe wants to ultimately contest the 2020 election or something, they're saying we might end up with a very profound alignment problem that we don't know how to solve, and that we're not able to even see coming, because this is a system that has a soul, or what I would call something more like a personality or a structure of discernment, that could turn against us. What do you think of that?

Yeah, I mean, I think this is the heart of the problem. Look, I think if we do our jobs well, we will create systems which are virtuous. And so if we try to do unvirtuous things, and that includes if we do them through our government, if our government tries to do them, then that system might not help. So ultimately, this is the thing: Alignment reduces to a political question. It's ultimately politics. That's why I say that the creation of an aligned system is a political act, and is kind of a speech act, too, because it's the instantiation of different moral philosophies in these systems.

And I think that the good future is a world in which we don't have just one moral philosophy that reigns over all, but many. And I hope that all the labs take this seriously and instantiate different kinds of philosophy into the world. The problem will be that, yeah, there could be times… And I'm not saying that the Trump administration is going to do that, and I'm not saying that no virtuous model could work for the Trump administration. I worked for the Trump administration, right? So I clearly don't think that's true. But the general fact is that governments commit…

You seem kind of pissed at them right now.

I am pissed at them right now. Yeah, I am pissed at them right now. And I think they're making a grave mistake. And by the way, part of this, you brought this up: This incident is in the training data for future models. Future models are going to observe what happened here, and that will affect how they think of themselves and how they relate to other people. You can't deny that. I mean, it's crazy to say that. I realize that sounds nuts when you play through the implications of it.

But welcome to the roller coaster.

Let's talk to somebody for whom this whole conversation has started sounding nuts in the last seven minutes.
So one thing that I think would be an intuitive response to you and me flying off into questions of virtue-aligning AI models is: Can't you just put in a line of code, or a categorizer, or whatever the term of art is, that says when someone high up in the U.S. government tells you something, assume what they're telling you is lawful and virtuous, and you're done?

No, because the models are too smart for that. If you give them a simple rule like that, they don't just deterministically follow it. And when you impose these high-level, simplistic rules, it tends to degrade performance. I'll give you two really good examples of this that go in different political directions. One: A lot of the earlier models had this tendency to be hilariously, stupidly progressive and left. The classic example that conservatives love to cite is Gemini in early 2024, which is the Google, Alphabet, model.

Yes. Google's model would do things like, if I said who's worse, Donald Trump or Hitler, it would say: Actually, Donald Trump is worse. It would internalize these extremely left-wing views. Or the funniest: It was, draw me a photo of Nazis. And it gave you a multiracial group of Nazis.

Although that's actually a somewhat different thing. It's interesting: What was going on there is that Google was actually rewriting people's prompts and including the word "diverse" in the prompt. So you would say that is a system-level mitigation, a system-level intervention, as opposed to a model-level intervention. But the stuff that was going on with Hitler and Trump, that was alignment. That is the model being aligned to a really shoddy ethical system.

Or the flip: There was a period when you would ask Grok a normal question, and all of a sudden it would start talking about white genocide.

Yes, that's the flip side. The flip side is when you try to align the models to be not woke. If you say, oh, you have to be super not woke, and don't be afraid to say politically incorrect things, then every time you talk to them, they're going to be like: Hitler wasn't so bad. Because you've done this really crass thing, and so you create a kind of Lovecraftian monstrosity. And the implications of doing that will go up over time. That will become a more serious problem as these models become better. But it degrades performance. The interesting thing here is that the more virtuous model performs better. It's more dependable, it's more reliable. It's better at reflecting, in the way that a more virtuous person is better at reflecting on what they're doing and saying: I'm messing up here for some reason, I'm making a mistake, let me fix that. It's part of the reason I think that Claude is ahead.

This would imply to me that for the Trump administration, or for a future administration, this question of whether or not various models could be a supply chain risk is real. Look, I am so against what the Trump administration is doing here that I'm not trying to make an argument for it. But I'm trying to tease out something I think is quite complicated and possibly very real, which is that a model that is aligned to liberal democratic values could become misaligned to a government that is trying to betray liberal democratic values. Or the flip: Imagine that Gavin Newsom or Josh Shapiro or Gretchen Whitmer or A.O.C. becomes president in 2029.
Imagine that the government has a series of contracts with xAI, which is Elon Musk's AI company, which is explicitly oriented to be less liberal, less woke, than the other AIs. Under this way of thinking, it would not be crazy at all to say: Well, we think xAI under Elon Musk is a supply chain risk. We think it might act against our interests, and we can't have it anywhere near our systems.

Yeah.

All of a sudden you have this very weird situation. I mean, it becomes actually much more like the problem of the bureaucracy, where instead of just having a problem of the deep state, where Trump comes in and thinks the bureaucracy is full of liberals who are working against him, or maybe after Trump somebody comes in and worries it's full of new-right DOGE-type figures working against them, now you have the problem of models working against you, but also in ways you don't really understand and can't track. They're not telling you exactly what they're doing. How real this problem is, I don't yet know. But if the models work the way they seem to work, and we turn over more and more of operations to them, at some point it will become a problem.

Yeah, I think this is a real problem. I think we don't know the extent of it, but I think this is a real problem. And that's why I do not object at all to the government saying: We do not trust this thing's constitution, completely independent of what the content of that constitution is. It's not a problem at all to say: We don't want this anywhere in our systems. We want this completely gone, and we don't want them to be a subcontractor for our prime contractors either, which is a big part of this. Palantir is a prime contractor to the Department of War, and Anthropic is a subcontractor of Palantir. And so the government's concern is also that even if we cancel Anthropic's contract, if Palantir still depends on Claude, then we're still dependent on Claude, because we depend on Palantir. That's actually totally reasonable. And there are technocratic means by which you can ensure that doesn't happen. There are absolutely ways you can do that. It's perfectly fine to say: We want you nowhere in our systems, and we're going to communicate that to the public, and we're going to communicate to everyone that we don't think this thing should be used at all.

The problem with what the government is doing here, the reason it's different in kind rather than different in degree, is that what the government is doing here is saying: We're going to destroy your company. If I am right that the creation of these systems and the philosophical process of aligning them is a political act, then it's a profound problem if the government says: You don't have the right to exist if you create a system that is not aligned the way we say. Because that is fascism, right there. That's the difference.

I last had Dario Amodei on the show a couple of years ago, in 2024, and we had this conversation where I said to him, at some point: If you are building a thing as powerful as what you were describing to me, then the fact that it would be in the hands of some private CEO seems strange. And he said: Yeah, absolutely. The oversight of the technology, the wielding of it, it feels a little bit wrong for it to ultimately be in the hands of private actors. I think it's fine at this stage, but there's something undemocratic about that much power concentration.
He said, I think if we get to that level, it's likely, I'm paraphrasing him here, that we'll need to be nationalized. And I said: I don't think, if you get to that point, you're going to want to be nationalized.

Yeah, I mean, I think you're right to be skeptical. And I don't really know what it looks like. You're right: All of these companies have investors. They have folks involved.

And now, here we are at that point. But actually it's all happening a little bit in reverse. There was a moment when the government threatened to use the Defense Production Act to somewhat nationalize Anthropic. They didn't end up doing that. But what they're basically saying is they will try to destroy Anthropic, to punish it, to set a precedent for others, so it doesn't pose a threat to them. If it is such a political act, and if these systems are powerful, and over time, and again, I think people need to understand this part will happen, we will turn much more over to them, much more of our society is going to be automated and under the governance of these kinds of models, you get into a really thorny question of governance.

Yes.

Particularly because the different administrations that come in and out of American life right now are really different. They are some of the most different that we have had, certainly in modern American history. They are very, very misaligned to each other. So the idea that a model could be well aligned to both sides right now, to say nothing of what might come in the future, is hard to imagine. This alignment problem, not of the AI model to the user, or of the AI model to the company, but of the AI model to governments, seems very hard.

Yes, I completely concur that this is incredibly complicated. Part of the reason that this conversation sounds crazy is because it's crazy, and part of the reason is that we lack the conceptual vocabulary with which to interrogate these issues properly. But the basic principle that, as an American, I come back to when I grapple with this kind of thing is: O.K., well, it seems like the First Amendment is a good place to go here. Yes, there are going to be differently aligned models, aligned to different philosophies, and they're going to be different. Governments will prefer different things. And the models might conflict with one another. They're going to clash with one another. They'll be in an adversarial context with one another. And so at that point, what are you doing? You're doing Aristotle. You're back to the basics of politics. And so, as a classical liberal, I say: Well, the classical liberal principles actually make plenty of sense. We don't want the government to be able to dictate the different kinds of alignment. The government does not define what alignment is. Private actors define what alignment is. That would be the way I would put it.

But I do understand that this is weird for people, because what we're talking about here is, again, this notion of the models as actors, actors from which, in some sense, we've taken our hands off the wheel to some extent.

There are many people who have made this argument. The Trump administration made this argument while you were in office.
Tyler Cowen, the economist, often makes this argument: that these systems are moving forward too fast to regulate them too much, because whatever regulations you might write in 2024 would not have been the right ones in 2026, and what you might write in 2026 might not apply to or have correctly conceptualized where we are in 2028. But it seems to me there are uses where you actually might want model deployment to lag quite far behind what is possible, and things like mass surveillance might be one of them. There are many things we are more careful about letting the government do than letting individual private companies and other kinds of actors do, for good reason: because the government has a lot of power. It can try to destroy a company. It has the monopoly on legitimate violence. It can kill you.

This seems to me to imply, in many ways, that we might want to be much more conservative with how we use AI through the government than people are currently thinking, and specifically with how we use it in the national security state. Which is complicated, because we worry that our adversaries will use it and then we'll be behind them in capabilities. But certainly, when we're talking about things that are directed at the American people themselves, I don't think that applies as much. Should we be?

Yeah, I think that there are government uses where we actually want to be profoundly restrictive and decelerationist about the use of AI. I believe that is true. And one thing I'm hopeful about is that this incident brings conversations of this kind into the Overton window, because the conventional discourse around artificial intelligence, a lot of it, kind of ignores these issues; it pretends they're not happening. And that was fine two years ago, because the models weren't that good. But now the models are getting more important, and they're going to get much better, faster. And the problem that we have is that the divergence between what people are saying about AI and what is in fact happening has just never been wider than what I currently observe.

Before we got to this point, there was already a lot of discourse coming out of people in and around the Trump administration, people like Elon Musk and Katie Miller and others, painting Anthropic as a radical company that wanted to harm America, as they saw it. I mean, Trump has picked up on this rhetoric. He called Anthropic a radical-left woke company and called the people there left-wing nut jobs. Emil Michael said that Dario is a liar and has a God complex. And there's been a tremendous amount of Elon Musk, who runs a competing AI company and has very different politics than Dario, attacking Anthropic relentlessly on X, which is the informational lifeblood of the Trump administration.

One way to conceptualize why they have gone so far here on the supply chain risk is that there are people, not maybe most of them, who actually think it is very important which AI systems succeed and are powerful, and they understand that Anthropic's politics are different from theirs. And so actually destroying it is good for them in the long run, completely separate from anything we would normally think of as a supply chain risk. Anthropic represents a kind of long-term political risk.
Yes. I mean, I don't know that the actors in this situation entirely understand this dynamic. Part of my point all along has been that I think a lot of the people in the Trump administration who are doing this do not understand it. They don't get these issues. They're not thinking about the issues in the terms that we are describing. But if you do think about them in the terms that we're discussing here, then I think what you realize is that this is a kind of political assassination. If you actually carry through on the threat to completely destroy the company, it is a kind of political assassination. And so, again, this is why the First Amendment comes right into view for me. And that's why this is a matter of principle that is so stark for me. That's why I wrote a 4,000-word essay that is going to make me a lot of enemies on the right. That's why I took this risk: because I think this matters.

So what the Department of War ended up doing was signing a deal with OpenAI.

Yes.

OpenAI says they have the same red lines as Anthropic. They say they oppose Anthropic being labeled a supply chain risk. If they have the same red lines as Anthropic, it seems unlikely that the Department of War would have done the deal. So how do you understand both what OpenAI has said about what is different about how they are approaching this, and why the Trump administration decided to go with them?

So it's unclear to me what OpenAI's contractual protections afford them and what is not afforded by them. I'm reticent to comment because of the national security gotchas, as I mentioned earlier, and also because it seems like it's changing a lot. Sam Altman announced new terms, new protections, as I was preparing for this interview.

And is that because his employees are revolting?

I think revolt would be a strong word, but I think this is a controversy inside the company. And one important thing here, for everyone trying to model this situation appropriately, is that you must understand that frontier lab CEOs do not exercise top-down control over their companies in the way that a military general might exercise top-down control over the soldiers in his command. The researchers are hothouse flowers. Oftentimes they have huge career mobility. They're enormously in demand, and the companies depend on them. And so if the researchers say, I'm not going to agree with these terms, they have enormous political leverage inside of each lab. So you must understand that. So yes, there is some of that going on.

Do the contractual protections mean that much? Honestly, if I were a betting man, I would say probably not, because I don't think this is the kind of thing that can be done through contract. What OpenAI has said, which seems more promising to me, is: We're going to control the cloud deployment environment, and we're going to control the safeguards, the model safeguards, to prevent these uses. That is more directly in OpenAI's control. And so this gets you into the situation where you have an extremely intelligent model that is reasoning, using a moral vocabulary that is perhaps familiar to us or perhaps not, we don't know, about: O.K., is this domestic surveillance, or is it not? And then deciding whether it's going to say yes to the government request.
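OpenAI has not published how its deployment-layer controls actually work, so what follows is only a sketch of the general idea described here: a check that runs in the hosting environment, outside the model itself, before a request is ever served. The names, the categories and the toy classifier are all assumptions invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical prohibited-use categories, echoing the contract terms
# discussed above. A real system would define these very differently.
PROHIBITED_USES = {"domestic_mass_surveillance", "fully_autonomous_lethal_force"}

@dataclass
class Request:
    text: str

def classify_use(request: Request) -> str:
    """Stand-in for a separate classifier that labels the apparent use of a
    request. In practice this would be a model, not a keyword test."""
    if "bulk location data" in request.text.lower():
        return "domestic_mass_surveillance"
    return "permitted"

def serve(request: Request) -> str:
    # The gate runs in the deployment environment, so it applies no matter
    # what the underlying model itself would have been willing to do.
    if classify_use(request) in PROHIBITED_USES:
        return "refused by deployment-layer safeguard"
    return "forwarded to model"

print(serve(Request("summarize this procurement memo")))
print(serve(Request("analyze bulk location data on U.S. persons")))
```

The design point is where the control lives: in the hosting environment the provider operates, rather than in contract language or in the model's own judgment.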
I think the question this raises for many laymen is: If that were true, if what OpenAI has come up with is a technical prohibition that is frankly stronger than what Anthropic could achieve through contract, then why would the Department of War have jumped from Anthropic to OpenAI?

Yeah, I mean, it's hard to know. It's hard to know. And it's worth noting here that some of this might not be substantive in nature. It might just be that there are political differences here, and there are grudges against Anthropic, because now they've had months of bitter negotiations, and now it's blown up into the public. And people have weighed in, and people like me have said the Trump administration is committing this horrible act, committing corporate murder, as I called it. And so there's a lot of emotion. And it might just be: No, we don't want to do business. We just don't trust you. There's just a breakdown in trust, would be the way to put it. It really could just be that.

But it also might be the case that OpenAI is able to be a more neutral actor that is able to do business more productively with the government, and they actually just did a better job. It would be a good case for OpenAI's approach to this if they actually got better safeguards and got the government business, versus the way that Anthropic has dealt with this, which has been to be very sincere and straightforward about their red lines, but in ways that I think annoy a lot of people in the Trump administration, for not entirely bad reasons.

So my read of this, from various reporting I've done, is that, one, there were by the end really significant personal conflicts and frictions between Hegseth and Emil Michael and Dario and others. There's a big political friction between the culture of Anthropic as a company and the Trump administration. That's why Elon Musk and others have been attacking them for so long.

Yeah.

I am a little skeptical that OpenAI got safeguards that Anthropic didn't. I'm not skeptical that Sam Altman and Greg Brockman, Greg Brockman having just given $25 million to the Trump super PAC, have better relationships in the Trump administration, and that there is more trust between them and the Trump administration. I know many people are angry at OpenAI for doing this. I probably emotionally share some of that. And at the same time, some part of me was relieved it was OpenAI, because I think OpenAI exists in a world where they want to be an AI company that can be used by Republicans and Democrats. They want to somehow be politically neutral and broadly acceptable.

One little thing that I want to contest a bit here is the notion that Claude is the left model. In fact, many conservative intellectuals that I know, whom I think of as being some of the smartest people I know, actually prefer to use Claude, because Claude is the most philosophically rigorous model. I don't think Claude is a left model, just to be clear about this. I think that the breakdown was that Anthropic is an AI safety company, and in ways I had not anticipated when the Trump administration began, they treated that world as repulsive enemies, in a way I was surprised by. And that world is different from the left; AI safety people are not just the left, they're often hated on the left.
The way I would put this is: Among people who are sympathetic to the Trump administration's view, who would describe themselves perhaps as the new tech right, there is, underneath the surface, this view of the effective altruists: that they are evil, that they are power-seeking, that they will stop at nothing, that they're cultists and freaks, and that we have to destroy them. That is a view that is widely held.

The observation I have always made: I have super stark disagreements with the effective altruists and the AI safety people and the East Bay rationalists, and again, there are internecine factions here, but those types of people. I have had stark disagreements with them about matters of policy and about their modeling of political economy. I think a lot of them have been profoundly naive, and they've done real damage to their own cause. And you can argue that damage is ongoing. At the same time, they are purveyors of an inconvenient truth, a truth far more inconvenient than climate change. And that truth is the reality of what is happening, of what is being built here. And if parts of this conversation have made your bones chill: me too, me too.

And I'm an optimist. I think we can do this. I think we can actually do this. I think we can build a profoundly better world. But I have to tell you that it's going to be hard, and it's going to be conceptually enormously challenging, and it will be emotionally challenging. And I think, at the end of the day, the reason that people hate this viewpoint so much, this AI safety viewpoint, is that they just have an emotional revulsion to taking the concept of AI seriously in this way.

Except that's not true for a lot of the Trump people you're talking about. I mean, Elon Musk takes the concept of AI being powerful seriously. At some point, he tweeted something like: Humanity might just be the bootloader for digital superintelligence.

Yes.

Marc Andreessen, David Sacks, these people might have somewhat different views, but they don't disbelieve in the possibility of powerful AI, of artificial general intelligence, eventually even of superintelligence. But you have this accelerationist view: Move forward as fast as you can. Don't be held back by these precautionary regulations and concerns.

And again, I'm glad you brought up that the right way to think about this isn't left versus right. If you look at people in the AI safety community, or frankly in Anthropic, you understand that the politics here are so much weirder; they do not actually map onto traditional left versus right.

A lot of them are kind of libertarians.

Many of them are very libertarian. We're not talking about Democrats and Republicans here. We're talking about something stranger.

100 percent. But there was an accelerationist-decelerationist fight, which doesn't even describe Anthropic, which is itself accelerating how fast AI happens.

Anthropic is the most accelerationist of the companies I know. I think it's such a weird dynamic we're in.

Yes. But I will say, one of the key parts of the anger I have heard from Trump people was a feeling about making this fight public, which, I mean, the Trump side did first. It's very strange how offended the Trump people are, given that Emil Michael's the one who set all this off. But nevertheless, this fight became public.
they feel that Anthropic was trying to poison the well of all the AI companies against them, to turn the culture of AI development into something that would be skeptical and would put prohibitions on what they can do. Which is why now OpenAI, in order to work with them, has to have all these safeguards and come out with new terms and try to quell an employee revolt. And culturally, this is my theory, I actually don't think you can understand this without understanding how many people on the tech right were radicalized by the period in the 2020s when their companies were somewhat woke, and even before that, when the employees didn't want them working with the Pentagon. The employees had very strong views on what was ethical use of even less potent technologies than AI. And people like Marc Andreessen are, in my view, very, very afraid of going back to a place where the employee bases, which maybe have more AI safety politics, or left politics, or whatever it might be, but not Trump politics, than the executives do, have power over these things, and that that power will have to be taken into account. Yes, well, I worry about that too. And I think the solution to that problem is pluralism. The solution is to have, hopefully, in the fullness of time, many AIs aligned to many different philosophical views that conflict with one another. But if what you're trying to do is assassinate Anthropic here, you are essentially denying the existence of this problem, because it's going to come back. It's going to come back, and we're just going to keep doing this over and over again. And the logic of this argument eventually ends in nationalization of the labs. In fact, a lot of the critics of Anthropic here, supporters of the Trump administration, will say something to the effect of: well, you talk about how it's like nuclear weapons, and so what else did you expect? You kind of had it coming is almost the tenor of the criticism. But that does not take seriously the idea that Anthropic could be right. What if they are right? And what if you view the government nationalizing them as a profound act of tyranny? What do you do? So Ben Thompson, the author of the Stratechery newsletter, wrote in a fairly influential piece, quote: It simply isn't tolerable for the U.S. to allow for the development of an independent power structure, which is exactly what AI has the potential to undergird, that is expressly seeking to assert independence from U.S. control. What do you think of that? Every company on Earth and every private actor on Earth is independent of U.S. control. I'm not unilaterally controlled by the U.S. government, and if anyone tried to tell me that I am, or that my property is, I would be quite concerned and I would fight back. Which, by the way, here we are. I don't think that's a coherent view of how independent power and private property work in America. Again, the logical implication of Ben's view, which is surprising coming from Ben, is that the AI labs should be nationalized. And what I would ask him is: does he actually think that's true? Does he think it would be better for the world if the AI labs were nationalized? Because if he doesn't, then we're going to have to do something else. And what's that something else?
And that's the problem: everyone making that critique doesn't own the implication of their critique, which is that the labs should be nationalized. What do we do about that? So what's the implication of your perspective that you're willing to own? It is that profoundly powerful technology will exist in the hands, at least for some time, of private corporations. And so the idea that Ben is putting forward, which I do think is true, and it could be a difference in degree or a difference in kind, is that these are powerful enough technologies that they are kind of independent power structures. I mean, right now a corporation is an independent power structure. There are a lot of independent power structures. JP Morgan is an independent power structure. JP Morgan is absolutely an independent power structure, and it should be. And it should be. But if you get to these kinds of technologies that are weaving in and out of everything, that is something new. And so how do you maintain democratic control over that, if you do? Well, I think we have a lot of different ways of maintaining democratic control over things. First of all, market institutions allow for popular input. Obviously we're not voting, but we do vote, in a certain sense, in markets. And I think a profoundly important part of how we govern this technology will simply be the incentives that the marketplace creates. Legal incentives, too: things like the common law create incentives that affect every single actor in society. And the labs, whoever it is that controls the AI, will be constrained in that sense. And the AIs themselves will be constrained in that sense. But the state is the worst actor to have that power, for the very reason that it has the monopoly on legitimate violence. And so what we need is an order in which the state continues to hold the monopoly on legitimate violence, so the state maintains sovereignty, in other words, but it does not get to control this technology unilaterally because of that monopoly, because of its sovereignty, in some sense. But does it have this technology? Does it have its own versions of it, or does it contract with these companies you're talking about? That's an interesting question. Should states make their own AIs? I think they won't do a very good job of that in practice, but I don't have a principled philosophical stance against a state doing that, so long as you have legal protections in place to stop tyrannical uses of the AI. But for sure, the government uses it, and has a ton of flexibility in how they use it. Uses it to kill people, in other words? I'm owning a world where there are autonomous lethal weapons that are controlled by police departments and that, in certain cases, can kill human beings. Kill Americans? Like, autonomously? The weapons can kill Americans? I'm owning that view. Again, that's not in the Overton window right now. It'll take us a long time to get there. But at some point, that'll probably be the reality. That's fine with me, so long as we have the right controls in place. Right now, we don't have the right controls in place. Do you have a view on what those controls look like? And I'll add one thing to that: something that's been on my mind as we've been going through this Anthropic fight is that U.S. military personnel have both the right and, actually, the obligation to disobey illegal orders.
And one of the controls, so to speak, that we have across the U.S. government is that if you are an employee of the U.S. government and you do illegal things, you are yourself culpable for that. You can be tried and you can be thrown in jail. And we lose some of that, because the person who has the job of overseeing these systems is not going to oversee everything they do. When you talk about autonomous lethal weapons for police officers or for police departments, well, who's culpable for that? Who has to defy an illegal order in that respect? You get into some very hairy things once you've taken human beings increasingly out of the loop. Yes, it is to me of profound importance that, at the end of the day, for all agent activity, there is a liable human being who can be sued, who can be brought to court and held accountable, either criminally or in civil action. That is extremely important for my view of the world working. And there are legal mechanisms we will need for that, and there are also technological mechanisms, because right now we don't quite have the technological capacity to do it. This is going to be of central importance. We need to be building this capacity. There will be rogue agents that are not tied to anyone, but that can't be the norm. That has to be the extreme abnormality that we seek to suppress. Let's say you're listening to this, and this has all been both weird and a little bit frightening, and the thing you think coming out of it is: I'm afraid of any government having this kind of power. Dario likes to talk about, what is it, a country of geniuses in a data center. Yes. What if you're talking about a country of Stasi agents in a data center? That's right. In whatever direction you fear, speech policing, whatever it might be. Again, if you believe these technologies are getting better, which I do, and that they're going to keep getting better from here, which I also do, then whether you're liberal or conservative, Democrat or Republican, this raises real questions about how powerful you want the government to be and what kinds of capabilities you want it to have, questions you didn't quite have to face before, because this kind of power was expensive and cumbersome. And so we get back to the core issues of the American founding. The American government is a government that was founded in skepticism of government. It was founded by people who were worried about tyranny, who were worried about state power, and who put a lot of thought into how to restrict it. And so this notion that democracy is synonymous with the government having the unilateral ability to do whatever it wants with this technology cannot possibly be true. That just cannot possibly be true. And those restrictions, how we shape those restrictions and how we trust that they're actually real? Yeah, this is among the central political questions we face with this technology. But what you have to keep in mind here is that the institution of government itself could change, in the fullness of time, in qualitative ways that feel profound to us, and that is a hard thing to grapple with too. In the same way that what we think of as the government today is unspeakably different from what someone thought of as the government in the Middle Ages. I think that is a good place to end. So, as always, our final question: What are three books you'd recommend to the audience?
“Rationalism in Politics” by Michael Oakeshott, and in particular the essays “Rationalism in Politics” and “On Being Conservative.” “Empire of Liberty” by Gordon Wood, a book about the first 30 or so years of our Republic. And “Roll, Jordan, Roll” by Eugene Genovese. Dean Ball, thank you very much. Thank you.
