Latest AI News

SaaS in, SaaS out: Here’s what’s driving the SaaSpocalypse
One day not long ago, a founder texted his investor with an update: he was replacing his entire customer service team with Claude Code, an AI tool that can write and deploy software on its own. To Lex Zhao, an investor at One Way Ventures, the message signaled something bigger — the moment when companies like Salesforce stopped being the automatic default. “The barriers to entry for creating software are so low now, thanks to coding agents, that the build versus buy decision is shifting toward build in so many cases,” Zhao told TechCrunch.

The build versus buy shift is only part of the problem. The whole idea of using AI agents instead of people to perform work calls into question the SaaS business model itself. SaaS companies currently price their software per seat — that is, by how many employees log in to use it. “SaaS has long been regarded as one of the most attractive business models due to its highly predictable recurring revenue, immense scalability, and 70-90% gross margins,” Abdul Abdirahman, an investor at the venture firm F-Prime, told TechCrunch. When one AI agent, or a handful of them, can do that work — when employees simply ask their AI of choice to pull the data from the system — that per-seat model starts to break down.

The rapid pace of AI development also means that new tools, like Claude Code or OpenAI’s Codex, can replicate not just the core functions of SaaS products but also the add-on tools a SaaS vendor would sell to grow revenue from existing customers. On top of that, customers now have the ultimate contract negotiation tool in their pockets: if they don’t like a SaaS vendor’s prices, they can, more easily than ever before, build their own alternative. “Even if they do not take the build route, this creates downward pressure on contracts that SaaS vendors can secure during renewals,” Abdirahman continued.
We saw this as early as late 2024, when Klarna announced that it had ditched Salesforce’s flagship CRM product in favor of its own homegrown AI system. The realization that a growing number of other companies can do the same is spooking public markets, where the stock prices of SaaS giants like Salesforce and Workday have been sliding. In early February, an investor sell-off wiped nearly $1 trillion in market value from software and services stocks, followed by another billion later in the month. Experts are calling it the SaaSpocalypse, with one analyst dubbing it FOBO investing — or fear of becoming obsolete.

Yet the venture investors TechCrunch spoke with believe such fears are only temporary. “This isn’t the death of SaaS,” Aaron Holiday, a managing partner at 645 Ventures, told TechCrunch. Rather, it’s the beginning of an old snake shedding its skin, he said.

The public market pattern is best illustrated through Anthropic’s recent product launches. The company released Claude Code for cybersecurity, and related stocks dropped. It released legal tools in Claude Cowork AI, and the stock price of the iShares Expanded Tech-Software Sector ETF — a basket of publicly traded software companies that includes firms like LegalZoom and RELX — dropped as well. In some ways, this was expected; SaaS companies had long been overvalued, investors said. It also doesn’t help that these companies did the bulk of their growing during the zero-interest-rate era, which has since ended: the cost of doing business rises when the cost of borrowing money increases. Public market investors typically price SaaS companies by estimating future revenue, but there is no telling whether in one year or five years anyone will be using SaaS products to the extent they once did. That’s why every time a new advanced AI tool launches, SaaS stocks feel a tremor.
“This may be the first time in history that the terminal value of software is being fundamentally questioned, materially reshaping how SaaS companies are underwritten going forward,” Abdirahman said. That’s because slapping AI features on top of existing SaaS products may not be enough. A horde of AI-native startups is rising at a record pace, having completely redefined what it means to be a software company. Software is now easier and cheaper to build, meaning it’s easier to replicate, Yoni Rechtman, a partner at Slow Ventures, told TechCrunch. That’s good news for the next generation of startups, but bad news for the incumbents that spent years building their tech stacks.

On the other hand, the market also lacks the time and evidence to show that whatever new business model emerges in SaaS’s wake will be worthwhile. AI companies are sometimes pricing their models based on consumption, meaning customers pay based on how much AI they use, measured in tokens (which each model provider defines slightly differently). Others are working on “outcome-based pricing,” where fees are charged based on how well the AI actually works. This, ironically, is the current approach of former Salesforce CEO Bret Taylor’s AI startup, Sierra, a quasi-Salesforce competitor that offers customer service agents. The approach, so far, appears to be working: in November, Sierra hit $100 million in annual recurring revenue in less than two years.

There was once also the idea that cloud-based software of the kind SaaS vendors sell would never depreciate and could last for decades. This is still true in some ways compared to what came before — on-premises software, which companies had to install and maintain on their own servers. But being in the cloud doesn’t protect SaaS vendors from an entirely new technology rising to compete: AI. Investors are rightfully nervous as AI-native companies pop up, adapt, adopt, and build technology much faster than a traditional SaaS company can move.
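The economics behind the per-seat breakdown described above can be sketched with rough arithmetic. All of the numbers below (the per-seat price, token rate, and usage volumes) are invented for illustration, not actual vendor pricing:

```python
# Illustrative comparison of per-seat vs. consumption (token-based) pricing.
# Every figure here is a hypothetical assumption, not real vendor pricing.

def per_seat_annual_cost(seats: int, price_per_seat_month: float) -> float:
    """Traditional SaaS: revenue scales with the number of human logins."""
    return seats * price_per_seat_month * 12

def consumption_annual_cost(tokens_per_month: int,
                            price_per_million_tokens: float) -> float:
    """Consumption pricing: revenue scales with how much AI work is done."""
    return (tokens_per_month / 1_000_000) * price_per_million_tokens * 12

# Scenario: a 200-person customer service team shrinks to 5 people
# who supervise AI agents doing the same work.
before = per_seat_annual_cost(seats=200, price_per_seat_month=150.0)
after_seats = per_seat_annual_cost(seats=5, price_per_seat_month=150.0)
after_usage = consumption_annual_cost(tokens_per_month=500_000_000,
                                      price_per_million_tokens=3.0)

print(f"Per-seat, 200 seats: ${before:,.0f}/yr")      # $360,000/yr
print(f"Per-seat, 5 seats:   ${after_seats:,.0f}/yr")  # $9,000/yr
print(f"Consumption-based:   ${after_usage:,.0f}/yr")  # $18,000/yr
```

Under these assumed numbers, the vendor's per-seat revenue collapses by 97% even though the amount of work being done is unchanged, which is why the industry is experimenting with consumption and outcome-based models that bill the agents' activity rather than the humans' logins.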
SaaS companies are, after all, themselves the incumbents, having replaced old-school on-premises vendors in the last era of disruption. This SaaSpocalypse calls to mind that Taylor Swift lyric about what happens when “someone else lights up the room” because “people love an ingénue.”

“The most important thing to understand about the SaaS pullback is that it is simultaneously a real structural shift and potentially a market overreaction,” Abdirahman said, adding that investors typically “sell first and ask questions later.”

Public-market SaaS companies aren’t the only ones feeling a chill from investors. A Crunchbase report released Wednesday showed that, though the IPO market seems to be thawing for some sectors, there haven’t been — and aren’t expected to be — any venture-backed SaaS filings on the horizon. Holiday said this may be because there is a lot of pressure on large, private, late-stage SaaS companies like Canva and Rippling, given the persnickety IPO window, high expectations driven by AI advancements, and the unsteady stock prices of already public SaaS companies. Some of these companies, including mid-size SaaS companies, have even struggled to raise extension rounds in the private market, Holiday said, over the same fears public investors have. “Nobody wants to be subjected to the volatility of public markets when sentiment can send companies into downward tailspins,” Rechtman said, adding that he expects companies like these to stay private for much longer.

Meanwhile, the public market waits to get a good look at the finances of the first AI-native companies hoping to IPO. The scuttlebutt says that both OpenAI and Anthropic are contemplating IPOs, maybe even later this year. The most likely outcome is something that weaves the old and the new together, as tech disruptions always have.
Holiday said most of the new features companies are toying with these days “won’t stick” and that enterprises will always need software that meets compliance regulations, supports audits, manages workflow, and offers durability. “Durable shareholder value isn’t built on hype,” he continued. “It’s built on fundamentals, retention, margins, real budgets, and defensibility.”

Anthropic’s Claude rises to No. 1 in the App Store following Pentagon dispute
Anthropic’s chatbot Claude seems to have benefited from the attention around the company’s fraught negotiations with the Pentagon. As first reported by CNBC, Claude has been rising to the top of the free app rankings in Apple’s US App Store. On Saturday evening, it overtook OpenAI’s ChatGPT to claim the number one spot, a position that it still held on Sunday morning. According to data from SensorTower, Claude was just outside the top 100 at the end of January, and has spent most of February somewhere in the top 20. It has climbed rapidly in the past few days, from sixth on Wednesday, to fourth on Thursday, to first on Saturday. A company spokesperson said that daily signups have broken the all-time record every day this week, free users have increased more than 60% since January, and paid subscribers have more than doubled this year.

After Anthropic attempted to negotiate for safeguards preventing the Department of Defense from using its AI models for mass domestic surveillance or fully autonomous weapons, President Donald Trump directed federal agencies to stop using all Anthropic products, and Secretary of Defense Pete Hegseth said he’s designating the company a supply-chain threat. OpenAI subsequently announced its own agreement with the Pentagon, which CEO Sam Altman claimed includes safeguards related to domestic surveillance and autonomous weapons.

This post was first published on February 28, 2026. It has been updated to reflect Anthropic reaching No. 1 and to include growth numbers from the company.

Instagram to Notify Parents if Teens Search for Self-Harm or Suicide-Related Content
Meta said that the goal is to help parents step in when necessary, without overwhelming them with unnecessary alerts.

OpenAI Safety Crackdown Spotlights Philosophical Rift With Anthropic
As AI misuse escalates from fraud to fatal violence, OpenAI is strengthening bans, detection systems, and direct police coordination.

Amid SaaSpocalypse Fears, This Startup Wants to Improve Salesforce With RevOps
Y Combinator-backed Ressl AI looks to close the gap between Salesforce processes and revenue reality.

The trap Anthropic built for itself
Friday afternoon, just as this interview was getting underway, a news alert flashed across my computer screen: the Trump administration was severing ties with Anthropic, the San Francisco AI company founded in 2021 by Dario Amodei. Defense Secretary Pete Hegseth had invoked a national security law to blacklist the company from doing business with the Pentagon after Amodei refused to allow Anthropic’s tech to be used for mass surveillance of U.S. citizens or for autonomous armed drones that could select and kill targets without human input. It was a jaw-dropping sequence. Anthropic stands to lose a contract worth up to $200 million and will be barred from working with other defense contractors after President Trump posted on Truth Social directing every federal agency to “immediately cease all use of Anthropic technology.” (Anthropic has since said it will challenge the Pentagon in court.)

Max Tegmark has spent the better part of a decade warning that the race to build ever-more-powerful AI systems is outpacing the world’s ability to govern them. The MIT physicist founded the Future of Life Institute in 2014 and helped organize an open letter — ultimately signed by more than 33,000 people, including Elon Musk — calling for a pause in advanced AI development. His view of the Anthropic crisis is unsparing: the company, like its rivals, has sown the seeds of its own predicament.

Tegmark’s argument doesn’t begin with the Pentagon but with a decision made years earlier — a choice, shared across the industry, to resist binding regulation. Anthropic, OpenAI, Google DeepMind and others have long promised to govern themselves responsibly. Anthropic this week even dropped the central tenet of its own safety pledge — its promise not to release increasingly powerful AI systems until the company was confident they wouldn’t cause harm. Now, in the absence of rules, there’s not a lot to protect these players, says Tegmark. Here’s more from that interview, edited for length and clarity.
You can hear the full conversation this coming week on TechCrunch’s StrictlyVC Download podcast.

When you saw this news just now about Anthropic, what was your first reaction?

The road to hell is paved with good intentions. It’s so interesting to think back a decade ago, when people were so excited about how we were going to make artificial intelligence to cure cancer, to grow the prosperity in America and make America strong. And here we are now where the U.S. government is pissed off at this company for not wanting AI to be used for domestic mass surveillance of Americans, and also not wanting to have killer robots that can autonomously — without any human input at all — decide who gets killed.

Anthropic has staked its entire identity on being a safety-first AI company, and yet it was collaborating with defense and intelligence agencies [dating back to at least 2024]. Do you think that’s at all contradictory?

It is contradictory. If I can give a little cynical take on this — yes, Anthropic has been very good at marketing themselves as all about safety. But if you actually look at the facts rather than the claims, what you see is that Anthropic, OpenAI, Google DeepMind and xAI have all talked a lot about how they care about safety. None of them has come out supporting binding safety regulation the way we have in other industries. And all four of these companies have now broken their own promises. First we had Google — this big slogan, ‘Don’t be evil.’ Then they dropped that. Then they dropped another, longer commitment that basically said they promised not to do harm with AI. They dropped that so they could sell AI for surveillance and weapons. OpenAI just dropped the word safety from their mission statement. xAI shut down their whole safety team. And now Anthropic, earlier in the week, dropped their most important safety commitment — the promise not to release powerful AI systems until they were sure they weren’t going to cause harm.
How did companies that made such prominent safety commitments end up in this position?

All of these companies, especially OpenAI and Google DeepMind but to some extent also Anthropic, have persistently lobbied against regulation of AI, saying, ‘Just trust us, we’re going to regulate ourselves.’ And they’ve successfully lobbied. So we right now have less regulation on AI systems in America than on sandwiches. You know, if you want to open a sandwich shop and the health inspector finds 15 rats in the kitchen, he won’t let you sell any sandwiches until you fix it. But if you say, ‘Don’t worry, I’m not going to sell sandwiches, I’m going to sell AI girlfriends for 11-year-olds, and they’ve been linked to suicides in the past, and then I’m going to release something called superintelligence which might overthrow the U.S. government, but I have a good feeling about mine’ — the inspector has to say, ‘Fine, go ahead, just don’t sell sandwiches.’ There’s food safety regulation and no AI regulation. And this, I feel, all of these companies really share the blame for. Because if they had taken all these promises that they made back in the day for how they were going to be so safe and goody-goody, and gotten together, and then gone to the government and said, ‘Please take our voluntary commitments and turn them into U.S. law that binds even our most sloppy competitors’ — this would have happened instead. We’re in a complete regulatory vacuum. And we know what happens when there’s a complete corporate amnesty: you get thalidomide, you get tobacco companies pushing cigarettes on kids, you get asbestos causing lung cancer. So it’s sort of ironic that their own resistance to having laws saying what’s okay and not okay to do with AI is now coming back and biting them. There is no law right now against building AI to kill Americans, so the government can just suddenly ask for it.
If the companies themselves had earlier come out and said, ‘We want this law,’ they wouldn’t be in this pickle. They really shot themselves in the foot.

The companies’ counter-argument is always the race with China — if American companies don’t do this, Beijing will. Does that argument hold?

Let’s analyze that. The most common talking point from the lobbyists for the AI companies — they’re now better funded and more numerous than the lobbyists from the fossil fuel industry, the pharma industry and the military-industrial complex combined — is that whenever anyone proposes any kind of regulation, they say, ‘But China.’ So let’s look at that. China is in the process of banning AI girlfriends outright. Not just age limits — they’re looking at banning all anthropomorphic AI. Why? Not because they want to please America but because they feel this is screwing up Chinese youth and making China weak. Obviously, it’s making American youth weak, too. And when people say we have to race to build superintelligence so we can win against China — when we don’t actually know how to control superintelligence, so that the default outcome is that humanity loses control of Earth to alien machines — guess what? The Chinese Communist Party really likes control. Who in their right mind thinks that Xi Jinping is going to tolerate some Chinese AI company building something that overthrows the Chinese government? No way. It’s clearly really bad for the American government too if it gets overthrown in a coup by the first American company to build superintelligence. This is a national security threat.

That’s compelling framing — superintelligence as a national security threat, not an asset. Do you see that view gaining traction in Washington?
I think if people in the national security community listen to Dario Amodei describe his vision — he’s given a famous speech where he says we’ll soon have a country of geniuses in a data center — they might start thinking: wait, did Dario just use the word ‘country’? Maybe I should put that country of geniuses in a data center on the same threat list I’m keeping tabs on, because that sounds threatening to the U.S. government. And I think fairly soon, enough people in the U.S. national security community are going to realize that uncontrollable superintelligence is a threat, not a tool. This is totally analogous to the Cold War. There was a race for dominance — economic and military — against the Soviet Union. We Americans won that one without ever engaging in the second race, which was to see who could put the most nuclear craters in the other superpower. People realized that was just suicide. No one wins. The same logic applies here.

What does all of this mean for the pace of AI development more broadly? How close do you think we are to the systems you’re describing?

Six years ago, almost every expert in AI I knew predicted we were decades away from having AI that could master language and knowledge at human level — maybe 2040, maybe 2050. They were all wrong, because we already have that now. We’ve seen AI progress quite rapidly from high school level to college level to PhD level to university professor level in some areas. Last year, AI won the gold medal at the International Mathematical Olympiad, which is about as difficult as human tasks get. I wrote a paper together with Yoshua Bengio, Dan Hendrycks, and other top AI researchers just a few months ago giving a rigorous definition of AGI. According to this, GPT-4 was 27% of the way there. GPT-5 was 57% of the way there. So we’re not there yet, but going from 27% to 57% that quickly suggests it might not be that long.
When I lectured to my students yesterday at MIT, I told them that even if it takes four years, that means when they graduate, they might not be able to get any jobs anymore. It’s certainly not too soon to start preparing for it.

Anthropic is now blacklisted. I’m curious to see what happens next — will the other AI giants stand with them and say, we won’t do this either? Or does someone like xAI raise their hand and say, Anthropic didn’t want that contract, we’ll take it? [Editor’s note: Hours after the interview, OpenAI announced its own deal with the Pentagon.]

Last night, Sam Altman came out and said he stands with Anthropic and has the same red lines. I admire him for the courage of saying that. Google, as of when we started this interview, had said nothing. If they just stay quiet, I think that’s incredibly embarrassing for them as a company, and a lot of their staff will feel the same. We haven’t heard anything from xAI yet either. So it’ll be interesting to see. Basically, there’s this moment where everybody has to show their true colors.

Is there a version of this where the outcome is actually good?

Yes, and this is why I’m actually optimistic in a strange way. There’s such an obvious alternative here. If we just start treating AI companies like any other companies — drop the corporate amnesty — they would clearly have to do something like a clinical trial before they released something this powerful, and demonstrate to independent experts that they know how to control it. Then we get a golden age with all the good stuff from AI, without the existential angst. That’s not the path we’re on right now. But it could be.

The billion-dollar infrastructure deals powering the AI boom
It takes a lot of computing power to run an AI product — and as the tech industry races to tap the power of AI models, there’s a parallel race underway to build the infrastructure that will power them. On a recent earnings call, Nvidia CEO Jensen Huang estimated that between $3 trillion and $4 trillion will be spent on AI infrastructure by the end of the decade — with much of that money coming from AI companies. Along the way, they’re placing immense strain on power grids and pushing the industry’s building capacity to its limit. Below, we’ve laid out everything we know about the biggest AI infrastructure projects, including major spending from Meta, Oracle, Microsoft, Google, and OpenAI. We’ll keep it updated as the boom continues and the numbers climb even higher.

This is arguably the deal that kicked off the whole contemporary AI boom: in 2019, Microsoft made a $1 billion investment in a buzzy non-profit called OpenAI, known mostly for its association with Elon Musk. Crucially, the deal made Microsoft the exclusive cloud provider for OpenAI — and as the demands of model training became more intense, more of Microsoft’s investment started to come in the form of Azure cloud credit rather than cash. It was a great deal for both sides: Microsoft was able to claim more Azure sales, and OpenAI got more money for its biggest single expense. In the years that followed, Microsoft would build its investment up to nearly $14 billion — a move that is set to pay off enormously when OpenAI converts into a for-profit company.

The partnership between the two companies has unwound more recently. Last year, OpenAI announced it would no longer be using Microsoft’s cloud exclusively, instead giving the company a right of first refusal on future infrastructure demands but pursuing others if Azure couldn’t meet its needs. Microsoft has also begun exploring other foundation models to power its AI products, establishing even more independence from the AI giant.
OpenAI’s arrangement with Microsoft was so successful that it’s become common practice for AI services to sign on with a particular cloud provider. Anthropic has received $8 billion in investment from Amazon, while making kernel-level modifications on the company’s hardware to make it better suited for AI training. Google Cloud has also signed on smaller AI companies like Lovable and Windsurf as “primary computing partners,” although those deals did not involve any investment. And even OpenAI has gone back to the well, receiving a $100 billion investment from Nvidia in September, giving it capacity to buy even more of the company’s GPUs.

On June 30, 2025, Oracle revealed in an SEC filing that it had signed a $30 billion cloud services deal with an unnamed partner; that figure is more than the company’s cloud revenue for all of the previous fiscal year. OpenAI was eventually revealed as the partner, securing Oracle a spot alongside Google as one of OpenAI’s string of post-Microsoft hosting partners. Unsurprisingly, the company’s stock went shooting up. A few months later, it happened again. On September 10, Oracle revealed a five-year, $300 billion deal for compute power, set to begin in 2027. Oracle’s stock climbed even higher, briefly making founder Larry Ellison the richest man in the world. The sheer scale of the deal is stunning: OpenAI does not have $300 billion to spend, so the figure presumes immense growth for both companies, and more than a little faith. But before a single dollar is spent, the deal has already cemented Oracle as one of the leading AI infrastructure providers — and a financial force to be reckoned with.

As AI labs scramble to build infrastructure, they’re mostly buying GPUs from one company: Nvidia. That trade has made Nvidia flush with cash — and it’s been investing that cash back into the industry in increasingly unconventional ways.
In September 2025, Nvidia bought a 4% stake in rival Intel for $5 billion — but even more surprising have been the deals with its own customers. One week after the Intel deal was revealed, the company announced a $100 billion investment in OpenAI, paid for with GPUs that would be used in OpenAI’s ongoing data center projects. Nvidia has since announced a similar deal with Elon Musk’s xAI, and OpenAI launched a separate GPU-for-stock arrangement with AMD. If that seems circular, it’s because it is. Nvidia’s GPUs are valuable because they’re so scarce — and by trading them directly into an ever-inflating data center scheme, Nvidia is making sure they stay that way. You could say the same thing about OpenAI’s privately held stock, which is all the more valuable because it can’t be obtained through public markets. For now, OpenAI and Nvidia are riding high and nobody seems too worried — but if the momentum starts to flag, this sort of arrangement will get a lot more scrutiny.

For companies like Meta that already have significant legacy infrastructure, the story is more complicated — although equally expensive. Meta CEO Mark Zuckerberg has said that the company plans to spend $600 billion on U.S. infrastructure through the end of 2028. In the first half of 2025, the company spent $30 billion more than it had the previous year, driven largely by its growing AI ambitions. Some of that spending goes toward big-ticket cloud contracts, like a recent $10 billion deal with Google Cloud, but even more resources are being poured into two massive new data centers. A new 2,250-acre site in Louisiana, dubbed Hyperion, will cost an estimated $10 billion to build out and provide an estimated 5 gigawatts of compute power. Notably, the site includes an arrangement with a local nuclear power plant to handle the increased energy load. A smaller site in Ohio, called Prometheus, is expected to come online in 2026, powered by natural gas. That kind of buildout comes with real environmental costs.
Elon Musk’s xAI built its own hybrid data center and power-generation plant in South Memphis, Tennessee. The plant has quickly become one of the county’s largest emitters of smog-producing chemicals, thanks to a string of natural gas turbines that experts say violate the Clean Air Act.

Just two days after his second inauguration last January, President Trump announced a joint venture between SoftBank, OpenAI, and Oracle, meant to spend $500 billion building AI infrastructure in the United States. Named “Stargate” after the 1994 film, the project arrived with incredible amounts of hype, with Trump calling it “the largest AI infrastructure project in history.” OpenAI’s Sam Altman seemed to agree, saying, “I think this will be the most important project of this era.” In broad strokes, the plan was for SoftBank to provide the funding, with Oracle handling the buildout with input from OpenAI. Overseeing it all was Trump, who promised to clear away any regulatory hurdles that might slow down the build. But there were doubts from the beginning, including from Elon Musk, Altman’s business rival, who claimed the project did not have the available funds. As the hype has died down, the project has lost some momentum. In August, Bloomberg reported that the partners were failing to reach consensus. Nonetheless, the project has moved forward with the construction of eight data centers in Abilene, Texas, with construction on the final building set to be finished by the end of 2026.

“Capital expenditures” are usually a pretty dry metric, referring to a company’s spending on physical assets. But as tech companies lined up to report their capex plans for 2026, the rush of data center spending made the figures a lot more interesting — and a lot bigger. Amazon was the capex leader, projecting $200 billion in 2026 spending (up from $131 billion in 2025), while Google was a close second with an estimate between $175 billion and $185 billion (up from $91 billion in 2025).
Meta estimated $115 billion to $135 billion (up from $71 billion the previous year), although that figure is a little deceptive because a lot of the company’s data center projects have been kept off its books entirely. All told, hyperscalers are planning to spend nearly $700 billion on data center projects in 2026 alone. It was enough money to spook some investors. The companies were mostly undeterred, however, explaining that AI infrastructure was vital to their futures. It’s set up a strange dynamic: as you might expect, tech executives are more bullish on AI than their Wall Street counterparts — and the more tech companies spend, the more nervous their bankers get. Add in the huge amounts of debt many companies are taking on to fund those buildouts, and you start to hear CFOs across the valley grinding their teeth. That hasn’t put a damper on AI spending yet, but it will soon — unless, of course, hyperscalers show they can make those investments pay off.

This article was first published on September 22.

OpenAI’s Sam Altman announces Pentagon deal with ‘technical safeguards’
OpenAI CEO Sam Altman announced late on Friday that his company has reached an agreement allowing the Department of Defense to use its AI models in the department’s classified network. This follows a high-profile standoff between the DoD — also known under the Trump administration as the Department of War — and OpenAI’s rival Anthropic. The Pentagon pushed AI companies, including Anthropic, to allow their models to be used for “all lawful purposes,” while Anthropic sought to draw a red line around mass domestic surveillance and fully autonomous weapons. In a lengthy statement released Thursday, Anthropic CEO Dario Amodei said the company “never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” but he argued that “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”

More than 60 OpenAI employees and 300 Google employees signed an open letter this week asking their employers to support Anthropic’s position. After Anthropic and the Pentagon failed to reach an agreement, President Donald Trump criticized the “Leftwing nut jobs at Anthropic” in a social media post that also directed federal agencies to stop using the company’s products after a six-month phase-out period.
In a separate post, Secretary of Defense Pete Hegseth claimed Anthropic was trying to “seize veto power over the operational decisions of the United States military.” Hegseth also said he is designating Anthropic as a supply-chain risk: “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” On Friday, Anthropic said it had “not yet received direct communication from the Department of War or the White House on the status of our negotiations,” but insisted it would “challenge any supply chain risk designation in court.”

Surprisingly, Altman claimed in a post on X that OpenAI’s new defense contract includes protections addressing the same issues that became a flashpoint for Anthropic. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman said. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.” Altman said OpenAI “will build technical safeguards to ensure our models behave as they should, which the DoW also wanted,” and it will deploy engineers with the Pentagon “to help with our models and to ensure their safety.” “We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept,” Altman added. “We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.”

Fortune’s Sharon Goldman reports that Altman told OpenAI employees at an all-hands meeting that the government will allow the company to build its own “safety stack” to prevent misuse and that “if the model refuses to do a task, then the government would not force OpenAI to make it do that task.” Altman’s post came shortly before news broke that the U.S. and Israeli governments have begun bombing Iran, with Trump calling for the overthrow of the Iranian government.

Trump Orders Federal Agencies to Cease Use of Anthropic
Trump said the company’s actions were “putting American lives at risk, our Troops in danger, and our National Security in jeopardy.”

OpenAI Reaches Agreement With Department of War to Deploy Models on Classified Network
OpenAI said its models will operate under specific safety principles, including prohibitions on domestic mass surveillance.

Open Compute Might Be AMD’s Biggest Moat Yet
AMD’s Archana Vemulapalli says Helios emerged from the company’s work within the Open Compute Project, where it worked closely with Meta to advance rack-scale AI.
