Latest AI News

Who Will Win in the OpenAI vs Anthropic Battle? Hint: Neither
The rivalry between OpenAI and Anthropic now reflects deeper tensions about scale, safety, and the commercial future of AI.

NEURA Robotics, Qualcomm Partner to Develop Platforms for Humanoid Robots
Qualcomm’s Dragonwing IQ10 Series robotics processors and software stack will be integrated with NEURA’s hardware platforms and embodied AI software.

Anthropic launches code review tool to check flood of AI-generated code
When it comes to coding, peer feedback is crucial for catching bugs early, maintaining consistency across a codebase, and improving overall software quality. The rise of “vibe coding” — using AI tools that take instructions given in plain language and quickly generate large amounts of code — has changed how developers work. While these tools have sped up development, they have also introduced new bugs, security risks, and poorly understood code. Anthropic’s solution is an AI reviewer designed to catch bugs before they make it into the software’s codebase. The new product, called Code Review, launched Monday in Claude Code.

“We’ve seen a lot of growth in Claude Code, especially within the enterprise, and one of the questions that we keep getting from enterprise leaders is: Now that Claude Code is putting up a bunch of pull requests, how do I make sure that those get reviewed in an efficient manner?” Cat Wu, Anthropic’s head of product, told TechCrunch. Pull requests are a mechanism that developers use to submit code changes for review before those changes make it into the software. Wu said Claude Code has dramatically increased code output, and the resulting surge in pull request reviews has become a bottleneck to shipping code. “Code Review is our answer to that,” Wu said.

Anthropic’s launch of Code Review — arriving first to Claude for Teams and Claude for Enterprise customers in research preview — comes at a pivotal moment for the company. On Monday, Anthropic filed two lawsuits against the Department of Defense in response to the agency’s designation of Anthropic as a supply chain risk. The dispute will likely see Anthropic leaning more heavily on its booming enterprise business, which has seen subscriptions quadruple since the start of the year. Claude Code’s run-rate revenue has surpassed $2.5 billion since launch, according to the company.
“This product is very much targeted towards our larger scale enterprise users, so companies like Uber, Salesforce, Accenture, who already use Claude Code and now want help with the sheer amount of [pull requests] that it’s helping produce,” Wu said. She added that developer leads can turn on Code Review to run by default for every engineer on the team. Once enabled, it integrates with GitHub and automatically analyzes pull requests, leaving comments directly on the code explaining potential issues and suggested fixes. The focus is on fixing logical errors over style, Wu said.

“This is really important because a lot of developers have seen AI automated feedback before, and they get annoyed when it’s not immediately actionable,” Wu said. “We decided we’re going to focus purely on logic errors. This way we’re catching the highest priority things to fix.” The AI explains its reasoning step by step, outlining what it thinks the issue is, why it might be problematic, and how it can potentially be fixed. The system labels the severity of issues using colors: red for highest severity, yellow for potential problems worth reviewing, and purple for issues tied to preexisting code or historical bugs.

Wu said it does this quickly and efficiently by relying on multiple agents working in parallel, with each agent examining the codebase from a different perspective or dimension. A final agent aggregates and ranks the findings, removing duplicates and prioritizing what’s most important. The tool provides a light security analysis, and engineering leads can customize additional checks based on internal best practices. Wu said Anthropic’s more recently launched Claude Code Security provides a deeper security analysis. The multi-agent architecture means this can be a resource-intensive product, Wu said. Similar to other AI services, pricing is token-based, and the cost varies depending on code complexity — though Wu estimated each review would cost $15 to $25 on average.
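The architecture Wu describes — parallel perspective agents feeding a final agent that deduplicates and ranks findings by severity — follows a common fan-out/aggregate pattern. The sketch below is illustrative only, not Anthropic’s implementation: the function names, the `Finding` type, and the sample findings are all invented, and the color tiers simply mirror the article’s description.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass


@dataclass(frozen=True)
class Finding:
    """One issue reported by a review agent (hypothetical schema)."""
    file: str
    line: int
    severity: str  # "red" (highest), "yellow" (worth reviewing), "purple" (preexisting code)
    message: str


# Rank order mirroring the article's color scheme.
SEVERITY_RANK = {"red": 0, "yellow": 1, "purple": 2}


def review_logic(diff: str) -> list[Finding]:
    # Hypothetical perspective agent; a real one would call an LLM.
    return [Finding("app.py", 10, "red", "missing null check")]


def review_history(diff: str) -> list[Finding]:
    # A second perspective; deliberately overlaps with review_logic on one finding.
    return [Finding("app.py", 10, "red", "missing null check"),
            Finding("app.py", 42, "purple", "touches a historically buggy path")]


def aggregate(diff: str, agents) -> list[Finding]:
    """Run perspective agents in parallel, then dedupe and rank their findings."""
    with ThreadPoolExecutor() as pool:
        batches = list(pool.map(lambda agent: agent(diff), agents))
    seen: set[Finding] = set()
    merged: list[Finding] = []
    for finding in (f for batch in batches for f in batch):
        if finding not in seen:  # drop duplicate reports from different agents
            seen.add(finding)
            merged.append(finding)
    # Highest-severity issues first, then stable ordering by location.
    return sorted(merged, key=lambda f: (SEVERITY_RANK[f.severity], f.file, f.line))
```

Calling `aggregate(diff, [review_logic, review_history])` on this toy input yields two findings, with the duplicate removed and the red-severity issue surfaced first — the behavior the article attributes to the final aggregator agent.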
She added that it’s a premium experience, and a necessary one as AI tools generate more and more code. “[Code Review] is something that’s coming from an insane amount of market pull,” Wu said. “As engineers develop with Claude Code, they’re seeing the friction to creating a new feature [decrease], and they’re seeing a much higher demand for code review. So we’re hopeful that with this, we’ll enable enterprises to build faster than they ever could before, and with much fewer bugs than they ever had before.”

OpenAI and Google employees rush to Anthropic’s defense in DOD lawsuit
More than 30 OpenAI and Google DeepMind employees filed a statement Monday supporting Anthropic’s lawsuit against the U.S. Defense Department after the federal agency labeled the AI firm a supply-chain risk, according to court filings. “The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry,” reads the brief, whose signatories include Google DeepMind chief scientist Jeff Dean. Late last week, the Pentagon labeled Anthropic a supply-chain risk — usually reserved for foreign adversaries — after the AI firm refused to allow the Department of Defense (DOD) to use its technology for mass surveillance of Americans or autonomously firing weapons. The DOD had argued that it should be able to use AI for any “lawful” purpose and not be constrained by a private contractor. The amicus brief in support of Anthropic showed up on the docket a few hours after the Claude maker filed two lawsuits against the DOD and other federal agencies. Wired was first to report the news. In the court filing, the Google and OpenAI employees make the point that if the Pentagon was “no longer satisfied with the agreed-upon terms of its contract with Anthropic,” the agency could have “simply canceled the contract and purchased the services of another leading AI company.” The DOD did, in fact, sign a deal with OpenAI within moments of designating Anthropic a supply-chain risk — a move many of the ChatGPT maker’s employees protested. “If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the brief reads. “And it will chill open deliberation in our field about the risks and benefits of today’s AI systems.” The filing also affirms that Anthropic’s stated red lines are legitimate concerns warranting strong guardrails.
Without public law to govern AI use, it argues, the contractual and technical restrictions developers impose on their systems are a critical safeguard against catastrophic misuse. Many of the employees who signed the statement also signed open letters over the last couple of weeks urging the DOD to withdraw the label and calling on the leaders of their companies to support Anthropic and refuse unilateral use of their AI systems.

Qualcomm’s partnership with Neura Robotics is just the beginning
German robotics startup Neura Robotics has inked a partnership with semiconductor giant Qualcomm to build the next generation of robots and physical AI. The deal is the latest coupling in the emerging physical AI industry between robotics startups and larger tech hardware and software companies. While no specific products were mentioned in the Monday announcements, the companies will work together to build the “brain and nervous system” of robots in a quest to advance the deployment of humanoid and general-purpose robots in the real world in both domestic and industrial settings. More specifically, Neura will use Qualcomm’s Dragonwing Robotics IQ10 processors as reference designs in its robots. The IQ10 series was announced at CES earlier this year, and these chips are designed to work with autonomous mobile robots (AMRs) and humanoids. Neura also plans to use its Neuraverse robotic simulation and training platform, which was released in June 2025, to test and fine-tune the robots running on Qualcomm’s IQ10 processors. “This collaboration marks a major step toward making physical AI real: open, scalable, and trusted,” David Reger, CEO and founder of Neura Robotics, said in a press release. “By bringing together our cognitive robotics platforms and the Neuraverse ecosystem with Qualcomm Technologies’ leadership in edge AI and connectivity, we’re aiming to accelerate a future where cognitive robots operate safely alongside humans across industries and throughout everyday life.” This deal makes a lot of sense for both sides. And it’s a formula that will likely become a popular strategy for robotics companies trying to bring their products into the real world. For instance, Boston Dynamics announced a strategic partnership with Google DeepMind in January to speed up the development of the robotics company’s Atlas humanoid robot by using Google’s AI foundation models.
While Boston Dynamics and Neura’s respective partnerships deal with different technologies — AI models versus chips — the same conclusion can be drawn: rather than remaining mere customers of tech vendors, partnering lets these robotics companies better use and embed those technologies. A robotics company with technical prowess in software will have a much easier — and likely cheaper — path to market and scale by partnering with hardware companies that have already solved tough technical challenges, such as building dexterous robotic hands. In Neura’s case, the company gets to build and test robots designed for the chips they run on, while Qualcomm gets an intimate look at how robotics companies can use its processors. As more AI companies like Nvidia look to physical AI as the next major market for their technology, they are going to want a seat at the table when it comes to how their tech is being used. The upshot: expect more partnerships.

Anthropic sues Defense Department over supply chain risk designation
Anthropic has made good on its promise to challenge the Department of Defense in court after the agency labeled it a supply chain risk late last week. The Claude maker filed two complaints against the department on Monday in California and Washington, D.C., after a weeks-long conflict between Anthropic and the DOD over whether the military should have unrestricted access to Anthropic’s AI systems. Anthropic had two firm red lines: it didn’t want its technology used for mass surveillance of Americans, and it didn’t believe the technology was ready to power fully autonomous weapons with no humans making targeting and firing decisions. Defense Secretary Pete Hegseth argued that the Pentagon should have access to AI systems for “any lawful purpose” and that it shouldn’t be limited by a private contractor. A supply chain risk label is usually reserved for foreign adversaries, and requires any company or agency that does work with the Pentagon to certify that it doesn’t use Anthropic’s models. While several private companies are still working with Anthropic, the firm is poised to lose much of its business within the government. Anthropic called the DOD’s actions “unprecedented and unlawful” and accused the administration of retaliation in a complaint filed in San Francisco federal court. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the lawsuit reads. The protected speech Anthropic refers to is its belief about the “limitations of its own AI services and important issues of AI safety,” per the lawsuit. The administration, including Defense Secretary Hegseth and President Trump, has criticized Anthropic and its CEO Dario Amodei as “woke” and “radical” over the company’s calls for stronger AI safety and transparency measures. In the lawsuit, Anthropic argued the government doesn’t have to agree with its views or use its products, but it cannot employ the power of the state to punish or suppress Anthropic’s expression.
Anthropic also argued that “no federal statute authorizes the actions taken here,” claiming the Defense Department’s supply chain risk designation was issued “without observance of the procedures Congress required.” The law generally requires agencies to conduct a risk assessment, notify the targeted company and allow it to respond, make a written national-security determination, and notify Congress before excluding a vendor from federal supply chains. The firm also accuses the president of operating outside the bounds of the authority granted by Congress when he directed every federal agency to immediately stop using Anthropic’s technology, following Amodei’s statement that he would not budge on his hard lines. As a result of the statements made by both President Trump and Secretary Hegseth, the General Services Administration – the federal agency that manages government contracts and purchasing – terminated Anthropic’s “OneGov” contract, ending the availability of Anthropic services to all three branches of the federal government. “Defendants are seeking to destroy the economic value created by one of the world’s fastest-growing private companies,” the lawsuit reads. “The Challenged Actions inflict immediate and irreparable harm on Anthropic; on others whose speech will be chilled; on those benefiting from the economic value the company can continue to create; and on a global public that deserves robust dialogue and debate on what AI means for warfare and surveillance.” As part of its complaint, Anthropic asked the court to immediately pause the Defense Department’s designation while the case proceeds and ultimately invalidate and permanently block the government from enforcing it. “Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners,” an Anthropic spokesperson said in a statement.
“We will continue to pursue every path toward resolution, including dialogue with the government.” Anthropic filed a separate complaint in the D.C. Circuit Court of Appeals because federal procurement law allows companies to appeal supply chain risk designations. The petition asks the court to review and overturn the Defense Department’s decision to designate the company a national security supply chain risk. In the complaint, Anthropic argued the move was unlawful, retaliatory, and improperly executed under federal procurement law. This story has been updated with more details and news that Anthropic has filed a separate lawsuit in the D.C. Circuit Court of Appeals. It was originally published March 9, 2026, at 8:39 a.m. PT.

OpenAI acquires Promptfoo to secure its AI agents
OpenAI announced Monday it has acquired Promptfoo, an AI security startup founded in 2024 to protect LLMs from online adversaries. The frontier lab said in a blog post that once the deal closes, Promptfoo’s technology will be integrated into OpenAI Frontier, its enterprise platform for AI agents. The development of independent AI agents that perform digital tasks has generated excitement about productivity gains. But it’s also given bad actors fresh opportunities to access sensitive data or manipulate automated systems. This deal underscores how frontier labs are scrambling to prove their technology can be used safely in critical business operations. Promptfoo was founded by Ian Webster and Michael D’Angelo to develop tools that companies can use to test security vulnerabilities in LLMs, including an open-source interface and library. The company reports that its products are used by more than 25% of Fortune 500 companies. Promptfoo has raised just $23 million since its founding, and was valued at $86 million after its most recent round in July 2025, according to Pitchbook. OpenAI did not disclose the value of the transaction. OpenAI’s post said Promptfoo’s technology will allow its agent platform to perform automated red-teaming, evaluate agentic workflows for security concerns, and monitor activities for risks and compliance needs. The company also said it expects to continue building out Promptfoo’s open-source offering.

Anthropic’s Claude Finds 22 Vulnerabilities in Mozilla Firefox in Just Two Weeks
Anthropic's latest frontier artificial intelligence (AI) model, Claude Opus 4.6, has successfully identified several vulnerabilities in the Mozilla Firefox browser. The San Francisco-based AI firm announced that it had partnered with Mozilla to test the model's ability to find bugs in real-world scenarios. The researchers claimed that in just two weeks, Claude found 22 distinct vulnerabilities, 14 of which were classified as high severity. Mozilla has reportedly patched the flaws in a recent update.

ChatGPT Adult Mode Delayed Again as OpenAI's 'Code Red' Reportedly Ends
OpenAI's “code red” has reportedly come to an end, nearly four months after the company's CEO declared it. In December 2025, reports claimed that Sam Altman had declared a code red, putting every non-ChatGPT project on hold. Since then, the San Francisco-based artificial intelligence (AI) giant has released three new AI models and multiple new features for the popular chatbot. At the same time, the company has reportedly delayed its “Adult Mode” feature for ChatGPT once again due to other higher-priority projects.

Sandberg, Clegg join Nscale board as this ‘Stargate Norway’ startup hits $14.6B valuation
Amid growing demand for data centers that can deliver AI compute at scale, Nvidia-backed British AI infrastructure company Nscale is now valued at $14.6 billion. This makes it one of Europe’s latest decacorns alongside Helsing and Mistral AI. Nscale has bet on vertical integration, from energy and data centers to compute and orchestration software. Its new valuation stems from a $2 billion Series C, which it calls “the largest in European history,” though the figure includes a $433 million pre-Series C SAFE backed by Blue Owl, Dell, Nvidia and Nokia in October. The raise was supported by Goldman Sachs and JPMorgan, whose involvement has been interpreted as IPO preparation — and not without reason: Nscale CEO Josh Payne told the New York Times his company might seek to go public “as early as this year” to generate more capital. Alongside its funding and plans, the company also announced that former Meta COO Sheryl Sandberg, former Yahoo president Susan Decker, and former UK deputy prime minister Nick Clegg are joining its board. Nscale is no newcomer to big rounds and announcements. In September, it announced a $1.1 billion Series B led by Aker. Aker is a public Norwegian company with interests in energy and is also co-leading the Series C alongside New York-based investment firm 8090 Industries. The companies also agreed that Aker’s joint venture with Nscale will now be fully managed by the startup. Dubbed “Stargate Norway,” this Norway-based AI infrastructure project has the ambition to run on 100,000 Nvidia GPUs by the end of 2026, with OpenAI as an initial customer.
According to Aker president and CEO Øyvind Eriksen, who sits on Nscale’s board, “this step strengthens execution by putting delivery and governance under one roof, while keeping continuity for the people and projects already underway.” Last October, Nscale had also signed an expanded deal with Microsoft to bring approximately 200,000 Nvidia GPUs to three data centers in Europe and one in the U.S., in collaboration with Dell. Dell and Nvidia both participated in the Series C, as did Astra Capital, Citadel, Jane Street, Lenovo, Linden Advisors, Nokia, and Point72. Nscale expects the new funding to accelerate the development of its AI infrastructure across Europe, North America, and Asia, while helping the company expand its engineering and operations teams, and strengthen its platform. Equity aside, the company also raised debt last month, with a $1.4 billion delayed draw term loan backed by GPUs to finance some of its clusters across Europe. It aims to harness rising enterprise demand and low-cost renewable energy while renewing its pledge to reuse waste heat, develop local skills, and invest in regional infrastructure as part of Stargate Norway.

Is Gap Between Large, Mid-Tier IT Firms Narrowing? Answer Lies in AI Deals
Modular AI-led projects are pushing big providers downmarket while smaller rivals are moving up.

MKS Inc Inaugurates Vacuum, Photonics Engineering Labs at Bengaluru GCC
MKS’ photonics engineering lab will focus on the design and development of next-generation optical and photonic solutions.
