Anthropic sues Defense Department over supply chain risk designation

12:54 AM IST · March 10, 2026

Anthropic has made good on its promise to challenge the Department of Defense in court after the agency labeled it a supply chain risk late last week. The Claude maker filed two complaints against the department on Monday, in California and Washington, D.C., following a weeks-long conflict between Anthropic and the DOD over whether the military should have unrestricted access to Anthropic’s AI systems.

Anthropic had two firm red lines: it didn’t want its technology to be used for mass surveillance of Americans, and it didn’t believe the technology was ready to power fully autonomous weapons with no humans making targeting and firing decisions. Defense Secretary Pete Hegseth argued that the Pentagon should have access to AI systems for “any lawful purpose” and that it shouldn’t be limited by a private contractor.

A supply chain risk label is usually reserved for foreign adversaries, and it requires any company or agency that does work with the Pentagon to certify that it doesn’t use Anthropic’s models. While several private companies are still working with Anthropic, the firm is poised to lose much of its business within the government.

Anthropic called the DOD’s actions “unprecedented and unlawful” and accused the administration of retaliation in a complaint filed in San Francisco federal court. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the lawsuit reads. The protected speech Anthropic refers to is its belief about the “limitations of its own AI services and important issues of AI safety,” per the lawsuit.

The administration, including Defense Secretary Hegseth and President Trump, has criticized Anthropic and its CEO Dario Amodei as “woke” and “radical” over the company’s calls for stronger AI safety and transparency measures. In the lawsuit, Anthropic argued the government doesn’t have to agree with its views or use its products, but it cannot employ the power of the state to punish or suppress Anthropic’s expression.
Anthropic also argued that “no federal statute authorizes the actions taken here,” claiming the Defense Department’s supply chain risk designation was issued “without observance of the procedures Congress required.” The law generally requires agencies to conduct a risk assessment, notify the targeted company and allow it to respond, make a written national-security determination, and notify Congress before excluding a vendor from federal supply chains.

The firm also accused the president of operating outside the bounds of the authority granted by Congress when he directed every federal agency to immediately stop using Anthropic’s technology, following Amodei’s statement that he would not budge on his hard lines. As a result of the statements made by both President Trump and Secretary Hegseth, the General Services Administration – the federal agency that manages government contracts and purchasing – terminated Anthropic’s “OneGov” contract, ending the availability of Anthropic services to all three branches of the federal government.

“Defendants are seeking to destroy the economic value created by one of the world’s fastest-growing private companies,” the lawsuit reads. “The Challenged Actions inflict immediate and irreparable harm on Anthropic; on others whose speech will be chilled; on those benefiting from the economic value the company can continue to create; and on a global public that deserves robust dialogue and debate on what AI means for warfare and surveillance.”

As part of its complaint, Anthropic asked the court to immediately pause the Defense Department’s designation while the case proceeds and ultimately to invalidate and permanently block the government from enforcing it. “Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners,” an Anthropic spokesperson said in a statement.
“We will continue to pursue every path toward resolution, including dialogue with the government.”

Anthropic filed a separate complaint in the D.C. Circuit Court of Appeals because federal procurement law allows companies to appeal supply chain risk designations. The petition asks the court to review and overturn the Defense Department’s decision to designate the company a national security supply chain risk. In the complaint, Anthropic argued the move was unlawful, retaliatory, and improperly executed under federal procurement law.

This story has been updated with more details and news that Anthropic has filed a separate lawsuit in the D.C. Circuit Court of Appeals. It was originally published March 9, 2026, at 8:39 a.m. PT.

Latest AI News

The Android Show I/O Edition: Google Showcases Gemini Intelligence on Android With New AI-Backed Widget Creation Tool

Google is bringing Gemini Intelligence, a new suite of AI-powered tools for its operating system, to Android, the Mountain View-based tech giant announced during the Android Show I/O Edition event. The company hosted the event as part of Google I/O, which is scheduled to take place from May 19 to May 20. Slated to roll out to select Android devices soon, Gemini Intelligence will expand Google's multistep task automation feature beyond the Samsung Galaxy S26 series and Pixel 10 lineup. The company also announced that it is integrating Gemini into Chrome on Android, similar to the browser's desktop version.

1 hour ago


Threads tests a Meta AI integration that works similarly to Grok

Threads is testing a Meta AI integration that works similarly to X’s Grok. Users with a public account will be able to mention Meta AI in a post or a reply to get more context. The feature is currently in beta testing in Malaysia, Saudi Arabia, Mexico, Argentina, and Singapore.

Meta told TechCrunch in an email that the feature is designed to help people get real-time context about trends and breaking stories, as well as receive recommendations, all within conversations. Users can mention Meta AI to ask questions like, “why are people talking about the World Cup this month?”, “whose Met Gala looks are trending right now?” or “how are the Knicks doing in the playoffs?” Meta AI will then process the invocation and respond as a public reply authored by the @meta.ai account, in the language used in the post it was mentioned in.

By integrating Meta AI into its platform, Threads is positioning itself not just as a destination for chatting about news and trends, but also as a place where you can get information and recommendations without having to leave the app. The idea is similar to Grok’s role on X, which is filled with posts of users asking the AI chatbot questions like “is this real?” or “explain this.” Of course, giving an AI chatbot this level of visibility carries risks, as seen on X when Grok generated posts praising Hitler. Meta AI notably has stronger safeguards in place than Grok, though it remains to be seen whether it will be prone to similar issues.

Meta notes that if you want to see fewer Meta AI replies in your feed, you can mute @meta.ai, use the “Not interested” option on any Meta AI post, or hide a Meta AI reply that appears directly on your post. The company says it plans to learn from early feedback and will continue improving the experience before expanding it to more people.

1 hour ago


Google’s ‘Create My Widget’ feature will let you vibe code your own widgets

Google on Tuesday unveiled a new “Create My Widget” feature for Android that allows users to vibe code their own custom widgets. The feature will first launch on the latest Samsung Galaxy and Google Pixel phones this summer.

To create a widget, users will describe what they want using natural language. For example, you could ask the feature to “suggest three high-protein meal prep recipes every week” to get a custom dashboard that you can add and resize on your home screen. Or, if you’re a cyclist who only cares about wind speed and rain, you can create a weather widget that surfaces just those stats on your home screen. Gemini can also pull information from the web and connect with Google apps like Gmail and Calendar to build a single, personalized dashboard. For instance, if you’re planning a family reunion in Berlin, it can gather your flight and hotel details, surface restaurant reservations, and even add a countdown.

The feature signals Google’s latest push to bring generative AI deeper into the Android experience, as tech companies race to make customization tools more accessible to everyday users. “This is like you asking your personal assistant a question, and having them just bring you the answer on repeat,” said Ben Greenwood, director of product management for Android Core Experiences, during a briefing with reporters. “So think of it as asking Gemini things about the world, things about its knowledge of what’s going on and events, as well as things about your personal data. Those are sort of the two areas that unlock an enormous number of use cases that we’re super excited about.”

The company announced the new feature alongside the unveiling of Gemini Intelligence, which will bring additional features like advanced autofill, an AI-powered voice dictation feature for Gboard, and more.

1 hour ago


The AI legal services industry is heating up. Anthropic is getting in on the action.

Anthropic announced Tuesday that it is launching a host of new chatbot features designed to provide automated assistance to law firms. The new features expand Claude for Legal – the law-focused offering that launched earlier this year – offering users a new set of legal plugins and MCP connectors designed for specific areas of law.

The new tools come amid hot competition in the legal AI space. In March, the AI law startup Harvey, which uses agentic AI to automate legal workflows, raised $200 million at a valuation of $11 billion. Last month, a rival startup, Legora, raised a $600 million Series D and launched a high-profile ad campaign featuring Jude Law. Legora offers similar services to Harvey: automated solutions built to simplify the often byzantine legal processes that have traditionally involved entire teams of humans.

Anthropic’s new tools are designed to help law firms automate specific clerical functions – things like document search and review, case law resources, deposition prep, document drafting, and other related areas. The plugins – each a bundle of functions and automated tools – are designed to work across legal fields like commercial, privacy, corporate, employment, product, and AI governance, Anthropic says.

Anthropic is also offering a number of Model Context Protocol (MCP) connectors. MCPs connect specific data sources and third-party systems to AI models, allowing the models to interact with them directly. In this case, the new MCP connectors integrate Claude into a variety of software applications already routinely used by law firms: document management applications like DocuSign and file search platforms like Box. Legal research sites like Thomson Reuters (which operates Westlaw) can also be connected.

The new connectors and plugins are being made available to all paying Claude customers, the company said. The new features also build upon other plugins designed for the legal industry that the company launched in February.
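For readers unfamiliar with the pattern, the connector idea described above can be sketched in a few lines of Python. This is a toy illustration of the concept only, not Anthropic’s actual API: all names here (DocumentConnector, search_documents, the sample documents) are invented for the example. A real MCP server advertises its tools to the model, which then invokes them by name with structured arguments.

```python
# Toy sketch of an MCP-style connector: it advertises named tools and
# executes tool calls a model sends, returning structured results.
# All names and data here are illustrative, not any real product's API.

class DocumentConnector:
    """Minimal stand-in for a connector over a law firm's document store."""

    def __init__(self, documents):
        self.documents = documents  # {doc_id: text}

    def list_tools(self):
        # In a real protocol, the server advertises its tools during a
        # handshake so the model knows what it can call.
        return [{"name": "search_documents",
                 "description": "Find documents containing a query string"}]

    def call_tool(self, name, arguments):
        # The model sends a tool call; the connector executes it and
        # returns results the model can fold into its reply.
        if name == "search_documents":
            query = arguments["query"].lower()
            return [doc_id for doc_id, text in self.documents.items()
                    if query in text.lower()]
        raise ValueError(f"unknown tool: {name}")


connector = DocumentConnector({
    "dep-001": "Deposition transcript, Smith v. Jones, March 2026",
    "nda-014": "Mutual non-disclosure agreement draft",
})
hits = connector.call_tool("search_documents", {"query": "deposition"})
print(hits)  # → ['dep-001']
```

The point of the pattern is that the model never touches the document store directly; it only sees the tool descriptions and the structured results, which is what lets one model plug into many third-party systems.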
“The legal sector is facing mounting pressure to adopt AI, and the firms and in-house teams that move are pulling ahead fast,” a spokesperson for the company said. “Claude is making a deeper push into knowledge work, with the legal sector emerging as one of its most significant and fastest-growing industries.”

As AI companies have sought to court law firms, AI-related failures have caused real problems in court. Dozens of lawyers have been caught using AI to generate error-ridden legal documents, as has at least one major law firm. Last year, California issued a first-of-its-kind fine against an attorney who had used ChatGPT to draft an appeal riddled with fake quotes. Federal judges have also been caught using it to draft rulings, a trend that drew the scrutiny of congressional leaders last year. Meanwhile, AI-generated lawsuits are said to be clogging the arteries of justice, overwhelming courts with stacks of bizarrely argued legal “slop.”

1 hour ago
