Will the Pentagon’s Anthropic controversy scare startups away from defense work?


4:52 AM IST · March 9, 2026


In just over a week, negotiations over the Pentagon’s use of Anthropic’s Claude technology fell through, the Trump administration designated Anthropic a supply-chain risk, and the AI company said it would fight that designation in court. OpenAI, meanwhile, quickly announced a deal of its own, prompting backlash that saw users uninstalling ChatGPT and pushing Anthropic’s Claude to the top of the App Store charts. And at least one OpenAI executive has quit over concerns that the announcement was rushed without appropriate guardrails in place.

On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed what this means for other startups seeking to work with the federal government, especially the Pentagon, as Kirsten wondered, “Are we going to see a changing of the tune a little bit?”

Sean pointed out that this is an unusual situation in a number of ways, in part because OpenAI and Anthropic make products that “no one can shut up about.” And crucially, this is a dispute over “how their technologies are being used or not being used to kill people,” so it’s naturally going to draw more scrutiny. Still, Kirsten argued, this is a situation that should “give any startup pause.”

Read a preview of our conversation, edited for length and clarity, below.

Kirsten: I’m wondering if other startups are starting to look at what’s happened with the federal government, specifically the Pentagon and Anthropic, that debate and wrestling match, and [take] pause about whether they want to be going after federal dollars. Are we going to see a changing of the tune a little bit?

Sean: I wonder about that, too. I think no, to some extent, in the near term, if only because when you really try to think about all the different companies, whether they’re startups or even more established Fortune 500s that do work with the government and in particular with the Department of Defense or the Pentagon, [for] a lot of them, that work flies under the radar.
General Motors makes defense vehicles for the Army and has done [that] for a very long time, and has worked on all-electric versions of those vehicles and autonomous versions. There’s stuff like that that goes on all the time, and it just never really hits the zeitgeist. I think the problem that OpenAI and Anthropic ran into within the last week is, like, these are companies that make products that a ton of people use — and also more importantly, [that] no one can shut up about. So there’s just such a spotlight on them that naturally highlights their involvement to a level that I think most of the other companies that are contracting with the federal government — and, in particular, any of the war-fighting elements of the federal government — don’t necessarily have to deal with.

The only caveat I’ll add to that is a lot of the heat around this discussion between Anthropic and OpenAI and the Pentagon is very specifically about how their technologies are being used or not being used to kill people, or in parts of the missions that are killing people. It’s not just the attention that’s on them and the familiarity we have with their brands; there is an extra element there that I feel is more abstract when you’re thinking about General Motors as a defense contractor or whatever. I don’t think we’re going to see, like, Applied Intuition or any of these other companies that have been framing themselves as dual use back off much, just because I don’t see the spotlight on it and there’s just not the sort of shared understanding of what that impact might be.

Anthony: This story is so unique and specific to these companies and personalities in a lot of ways. I mean, there have been a lot of really interesting thought pieces about: What is the role of technology in government? [Of] AI in government? And I think those are all good and worthwhile questions to ask and explore.
I think also, though, that this is a very curious lens through which to examine some of those things, because Anthropic and OpenAI are not actually that different in a lot of ways or in the stances they’re taking. It’s not like one company is saying, “Hey, I don’t want to work with the government,” and one is saying, “Yes, I do.” Or one is saying, “You can do whatever you want,” and [the other is] saying, “No, I want to have restrictions.” Both of them, at least publicly, are saying, “We want restrictions on how our AI gets used.” It just seems like Anthropic is digging in their heels a lot more about: You cannot change the terms in this way.

And then on top of that, there also just seems to be a personality layer between the CEO of Anthropic and Emil Michael — who a lot of TechCrunch readers might remember from his Uber days, and who is now [chief technology officer for the Department of Defense]. Apparently, they just really don’t like each other. Reportedly.

Sean: Yes, there’s a very big “girls are fighting” element here that we should not overlook.

Kirsten: Yeah, a little bit. There is, but the implications are a little bit stronger than that. Again, to pull back a little bit, what we’re talking about here is the Pentagon and Anthropic coming into a dispute in which Anthropic appears to have lost, although I should say its technology is still very much being used by the military. It is considered a crucial technology, but OpenAI has kind of stepped in, and this is evolving and will likely change by the time this episode comes out. The blowback has been interesting for OpenAI, where we’ve seen a lot of uninstalls of ChatGPT, which I think surged 295%, after OpenAI locked in the deal with the Department of Defense. To me, all of this is noise compared to the really critical and dangerous thing, which is that the Pentagon was seeking to change existing terms on an existing contract.
And that is really important and should give any startup pause, because the political machine that’s happening right now, particularly with the DoD, appears to be different. This isn’t normal. Contracts take forever to get baked in at the government level, and the fact that they’re seeking to change those terms is a problem.


Latest AI News

The Android Show I/O Edition: Google Showcases Gemini Intelligence on Android With New AI-Backed Widget Creation Tool


Google is bringing Gemini Intelligence, its new suite of AI-powered tools for its operating system, to Android, the Mountain View-based tech giant announced during the Android Show I/O Edition event. The company hosted the event as part of Google I/O, which is scheduled to take place from May 19 to May 20. Slated to roll out to select Android devices soon, Gemini Intelligence will expand Google's multistep task automation feature beyond the Samsung Galaxy S26 series and Pixel 10 lineup. Moreover, the company has announced that it is also integrating Gemini into Chrome on Android, similar to the browser's desktop version.


Threads tests a Meta AI integration that works similarly to Grok


Threads is testing a Meta AI integration that works similarly to X’s Grok. Users with a public account will be able to mention Meta AI in a post or a reply to get more context. The feature is currently in beta testing in Malaysia, Saudi Arabia, Mexico, Argentina, and Singapore. Meta told TechCrunch in an email that the feature is designed to help people get real-time context about trends and breaking stories, as well as receive recommendations, all within conversations. Users can mention Meta AI to ask questions like, “why are people talking about the World Cup this month?,” “whose Met Gala looks are trending right now?” or “how are the Knicks doing in the playoffs?” Meta AI will then process the invocation and respond as a public reply authored by the @meta.ai account, in the language used in the post where it was mentioned. By integrating Meta AI into its platform, Threads is positioning itself as not just a destination for chatting about news and trends, but also a place where you can get information and recommendations without having to leave the app. The idea is similar to Grok’s role on X, which is filled with posts of users asking the AI chatbot questions like “is this real?” or “explain this.” Of course, giving an AI chatbot this level of visibility carries risks, as seen on X when Grok generated posts praising Hitler. Still, Meta AI notably has stronger safeguards in place than Grok, though it remains to be seen whether it will be prone to similar issues. Meta notes that if you want to see fewer Meta AI replies in your feed, you can mute @meta.ai, use the “Not interested” option on any Meta AI post, or hide a Meta AI reply that appears directly on your post. The company says it plans to learn from early feedback and will continue improving the experience before expanding it to more people.


Google’s ‘Create My Widget’ feature will let you vibe code your own widgets


Google on Tuesday unveiled a new “Create My Widget” feature for Android that allows users to vibe code their own custom widgets. The feature will first launch on the latest Samsung Galaxy and Google Pixel phones this summer. To create a widget, users will be able to describe what they want using natural language. For example, you could ask the feature to “suggest three high-protein meal prep recipes every week” in order to get a custom dashboard that you can add and resize on your home screen. Or, if you’re a cyclist who only cares about wind speed and rain, you can create a weather widget that just surfaces those exact stats on your home screen. Gemini can also pull information from the web and connect with Google apps like Gmail and Calendar to build a single, personalized dashboard. For instance, if you’re planning a family reunion in Berlin, it can gather your flight and hotel details, surface restaurant reservations, and even add a countdown. The feature signals Google’s latest push to bring generative AI deeper into the Android experience, as tech companies race to make customization tools more accessible to everyday users. “This is like you asking your personal assistant a question, and having them just bring you the answer on repeat,” said Ben Greenwood, Director, PM, Android Core Experiences, during a briefing with reporters. “So think of it as asking Gemini things about the world, things about its knowledge of what’s going on and events, as well as things about your personal data. Those are sort of the two areas that unlock an enormous number of use cases that we’re super excited about.” The company announced the new feature alongside the unveiling of Gemini Intelligence, which will bring additional features like advanced autofill, an AI-powered voice dictation feature for Gboard, and more.


The AI legal services industry is heating up. Anthropic is getting in on the action.


Anthropic announced Tuesday that it is launching a host of new chatbot features designed to provide automated assistance to law firms. The new features expand Claude for Legal — the law-focused offering that launched earlier this year — offering users a new set of legal plugins and MCP connectors designed for specific areas of law. The new tools come amid hot competition in the legal AI space. In March, the AI law startup Harvey, which uses agentic AI to automate legal workflows, raised $200 million at a valuation of $11 billion. Last month, a rival startup, Legora, raised a $600 million Series D and launched a high-profile ad campaign featuring Jude Law. Legora offers similar services to Harvey — automated solutions built to simplify the often byzantine law processes that have traditionally involved entire teams of humans. Anthropic’s new tools are designed to help law firms automate specific clerical functions — things like document search and review, case law resources, deposition prep, document drafting, and other related areas. The plugins — which represent a bundle of functions and automated tools — are designed to work across legal fields like commercial, privacy, corporate, employment, product, and AI governance, Anthropic says. Anthropic is also offering a number of model context protocol connectors. MCPs connect specific data sources and third-party systems to AI models, allowing the models to interact with them directly. In this case, the new MCP connectors integrate Claude into a variety of software applications that are already routinely used by law firms — applications for document management like DocuSign and file search platforms like Box. Legal research sites like Thomson Reuters (which operates Westlaw) can also be connected. The new connectors and plugins are being made available to all paying Claude customers, the company said. The new features also build upon other plugins designed for the legal industry that the company launched in February.
“The legal sector is facing mounting pressure to adopt AI, and the firms and in-house teams that move are pulling ahead fast,” a spokesperson for the company said. “Claude is making a deeper push into knowledge work, with the legal sector emerging as one of its most significant and fastest-growing industries.” As AI companies have sought to court law firms, AI-related failures have caused real problems in court. Dozens of lawyers have been caught using AI to generate error-ridden legal documents, as has at least one major law firm. Last year, California issued a first-of-its-kind fine against an attorney who had used ChatGPT to draft an appeal riddled with fake quotes. Federal judges have also been caught using it to draft rulings, a trend that drew the scrutiny of Congressional leaders last year. Meanwhile, AI-generated lawsuits are said to be clogging the arteries of justice — overwhelming courts with stacks of bizarrely argued legal “slop.”
