
The trap Anthropic built for itself

8:36 AM IST · March 1, 2026


Friday afternoon, just as this interview was getting underway, a news alert flashed across my computer screen: the Trump administration was severing ties with Anthropic, the San Francisco AI company founded in 2021 by Dario Amodei. Defense Secretary Pete Hegseth had invoked a national security law to blacklist the company from doing business with the Pentagon after Amodei refused to allow Anthropic’s tech to be used for mass surveillance of U.S. citizens or for autonomous armed drones that could select and kill targets without human input.

It was a jaw-dropping sequence. Anthropic stands to lose a contract worth up to $200 million and will be barred from working with other defense contractors after President Trump posted on Truth Social directing every federal agency to “immediately cease all use of Anthropic technology.” (Anthropic has since said it will challenge the Pentagon in court.)

Max Tegmark has spent the better part of a decade warning that the race to build ever-more-powerful AI systems is outpacing the world’s ability to govern them. The MIT physicist founded the Future of Life Institute in 2014 and helped organize an open letter — ultimately signed by more than 33,000 people, including Elon Musk — calling for a pause in advanced AI development. His view of the Anthropic crisis is unsparing: the company, like its rivals, has sown the seeds of its own predicament.

Tegmark’s argument doesn’t begin with the Pentagon but with a decision made years earlier — a choice, shared across the industry, to resist binding regulation. Anthropic, OpenAI, Google DeepMind, and others have long promised to govern themselves responsibly. Anthropic this week even dropped the central tenet of its own safety pledge — its promise not to release increasingly powerful AI systems until the company was confident they wouldn’t cause harm. Now, in the absence of rules, there’s not a lot to protect these players, says Tegmark.

Here’s more from that interview, edited for length and clarity. You can hear the full conversation this coming week on TechCrunch’s StrictlyVC Download podcast.

When you saw this news just now about Anthropic, what was your first reaction?

The road to hell is paved with good intentions. It’s so interesting to think back a decade ago, when people were so excited about how we were going to make artificial intelligence to cure cancer, to grow prosperity in America and make America strong. And here we are now, where the U.S. government is pissed off at this company for not wanting AI to be used for domestic mass surveillance of Americans, and also not wanting to have killer robots that can autonomously — without any human input at all — decide who gets killed.

Anthropic has staked its entire identity on being a safety-first AI company, and yet it was collaborating with defense and intelligence agencies [dating back to at least 2024]. Do you think that’s at all contradictory?

It is contradictory. If I can give a little cynical take on this — yes, Anthropic has been very good at marketing themselves as all about safety. But if you actually look at the facts rather than the claims, what you see is that Anthropic, OpenAI, Google DeepMind, and xAI have all talked a lot about how they care about safety. None of them has come out supporting binding safety regulation the way we have in other industries. And all four of these companies have now broken their own promises. First we had Google — this big slogan, ‘Don’t be evil.’ Then they dropped that.
Then they dropped another, longer commitment that basically said they promised not to do harm with AI. They dropped that so they could sell AI for surveillance and weapons. OpenAI just dropped the word safety from their mission statement. xAI shut down their whole safety team. And now Anthropic, earlier in the week, dropped their most important safety commitment — the promise not to release powerful AI systems until they were sure they weren’t going to cause harm.

How did companies that made such prominent safety commitments end up in this position?

All of these companies, especially OpenAI and Google DeepMind but to some extent also Anthropic, have persistently lobbied against regulation of AI, saying, ‘Just trust us, we’re going to regulate ourselves.’ And they’ve lobbied successfully. So right now we have less regulation on AI systems in America than on sandwiches. You know, if you want to open a sandwich shop and the health inspector finds 15 rats in the kitchen, he won’t let you sell any sandwiches until you fix it. But if you say, ‘Don’t worry, I’m not going to sell sandwiches, I’m going to sell AI girlfriends for 11-year-olds, and they’ve been linked to suicides in the past, and then I’m going to release something called superintelligence which might overthrow the U.S. government, but I have a good feeling about mine’ — the inspector has to say, ‘Fine, go ahead, just don’t sell sandwiches.’ There’s food safety regulation and no AI regulation.

And this, I feel, all of these companies really share the blame for. Because if they had taken all these promises that they made back in the day about how they were going to be so safe and goody-goody, and gotten together, and then gone to the government and said, ‘Please take our voluntary commitments and turn them into U.S. law that binds even our sloppiest competitors’ — this would have happened instead. We’re in a complete regulatory vacuum. And we know what happens when there’s a complete corporate amnesty: you get thalidomide, you get tobacco companies pushing cigarettes on kids, you get asbestos causing lung cancer.

So it’s sort of ironic that their own resistance to having laws saying what’s okay and not okay to do with AI is now coming back and biting them. There is no law right now against building AI to kill Americans, so the government can just suddenly ask for it. If the companies themselves had earlier come out and said, ‘We want this law,’ they wouldn’t be in this pickle. They really shot themselves in the foot.

The companies’ counter-argument is always the race with China — if American companies don’t do this, Beijing will. Does that argument hold?

Let’s analyze that. The most common talking point from the lobbyists for the AI companies — they’re now better funded and more numerous than the lobbyists from the fossil fuel industry, the pharma industry, and the military-industrial complex combined — is that whenever anyone proposes any kind of regulation, they say, ‘But China.’ So let’s look at that. China is in the process of banning AI girlfriends outright. Not just age limits — they’re looking at banning all anthropomorphic AI. Why? Not because they want to please America but because they feel this is screwing up Chinese youth and making China weak. Obviously, it’s making American youth weak, too.
And when people say we have to race to build superintelligence so we can win against China — when we don’t actually know how to control superintelligence, so the default outcome is that humanity loses control of Earth to alien machines — guess what? The Chinese Communist Party really likes control. Who in their right mind thinks that Xi Jinping is going to tolerate some Chinese AI company building something that overthrows the Chinese government? No way. It’s clearly really bad for the American government, too, if it gets overthrown in a coup by the first American company to build superintelligence. This is a national security threat.

That’s a compelling framing — superintelligence as a national security threat, not an asset. Do you see that view gaining traction in Washington?

I think if people in the national security community listen to Dario Amodei describe his vision — he’s given a famous speech where he says we’ll soon have a country of geniuses in a data center — they might start thinking: wait, did Dario just use the word ‘country’? Maybe I should put that country of geniuses in a data center on the same threat list I’m keeping tabs on, because that sounds threatening to the U.S. government. And I think fairly soon, enough people in the U.S. national security community are going to realize that uncontrollable superintelligence is a threat, not a tool.

This is totally analogous to the Cold War. There was a race for dominance — economic and military — against the Soviet Union. We Americans won that one without ever engaging in the second race, which was to see who could put the most nuclear craters in the other superpower. People realized that was just suicide. No one wins. The same logic applies here.

What does all of this mean for the pace of AI development more broadly? How close do you think we are to the systems you’re describing?

Six years ago, almost every AI expert I knew predicted we were decades away from having AI that could master language and knowledge at a human level — maybe 2040, maybe 2050. They were all wrong, because we already have that now. We’ve seen AI progress quite rapidly from high school level to college level to PhD level to university professor level in some areas. Last year, AI won a gold medal at the International Mathematical Olympiad, which is about as difficult as human tasks get.

I wrote a paper together with Yoshua Bengio, Dan Hendrycks, and other top AI researchers just a few months ago giving a rigorous definition of AGI. According to this, GPT-4 was 27% of the way there. GPT-5 was 57% of the way there. So we’re not there yet, but going from 27% to 57% that quickly suggests it might not be that long. When I lectured to my students at MIT yesterday, I told them that even if it takes four years, that means when they graduate, they might not be able to get any jobs anymore. It’s certainly not too soon to start preparing for it.

Anthropic is now blacklisted. I’m curious to see what happens next — will the other AI giants stand with them and say, we won’t do this either? Or does someone like xAI raise their hand and say, Anthropic didn’t want that contract, we’ll take it? [Editor’s note: Hours after the interview, OpenAI announced its own deal with the Pentagon.]

Last night, Sam Altman came out and said he stands with Anthropic and has the same red lines. I admire him for the courage of saying that. Google, as of when we started this interview, had said nothing.
If they just stay quiet, I think that’s incredibly embarrassing for them as a company, and a lot of their staff will feel the same. We haven’t heard anything from xAI yet, either. So it’ll be interesting to see. Basically, this is a moment where everybody has to show their true colors.

Is there a version of this where the outcome is actually good?

Yes, and this is why I’m actually optimistic, in a strange way. There’s such an obvious alternative here. If we just start treating AI companies like any other companies — drop the corporate amnesty — they would clearly have to do something like a clinical trial before they released something this powerful, and demonstrate to independent experts that they know how to control it. Then we get a golden age with all the good stuff from AI, without the existential angst. That’s not the path we’re on right now. But it could be.


Latest AI News

Medicare’s new payment model is built for AI, and most of the tech world has no idea


Neil Batlivala has spent seven years building a healthcare company that most of the tech industry has never heard of and that serves a patient population most of Silicon Valley ignores. But last month, that work put him at the center of something much bigger. His company, Pair Team, announced on April 30 that it had been accepted into ACCESS, a Medicare program, as one of 150 participants chosen by the Centers for Medicare & Medicaid Services to test what AI-driven medical care could look like at federal scale. The program goes live July 5.

“The government is creating swim lanes for AI innovation in traditionally regulated industries,” he told me over a Zoom call a few days later. “The best solution wins, which, in regulated industries like healthcare — that’s not been the case.”

ACCESS — Advancing Chronic Care with Effective, Scalable Solutions — is a 10-year CMS program testing a payment model that rewards health outcomes rather than required activities (like a certain number of check-ins). Participating organizations like Pair Team receive predictable payments for managing qualifying conditions and earn the full amount only when patients meet measurable health goals, like lower blood pressure or reduced pain. It covers diabetes, hypertension, chronic kidney disease, obesity, depression, and anxiety.

That payment structure is the real news. Traditional Medicare reimburses based on time spent with a clinician. There’s no mechanism to pay for an AI agent that monitors a patient between visits, calls to check in, coordinates a housing referral, or makes sure someone picks up their medication. ACCESS creates that mechanism for the first time. “It’s a payment model transformation,” Batlivala said. “You just couldn’t do this before.”

The first cohort spans a wide range of participants — AI doctor startups, virtual nutrition therapy providers, connected device companies, and wearable makers like Whoop. Batlivala is skeptical of some of them. “I’m a big fan of wearables, but for a senior who’s struggling with food insecurity, I don’t know how much Whoop is going to be able to do,” he said, adding of his own company, “We’ve been building toward this for five-plus years now.”

Pair Team launched in 2019 with a specific kind of patient in mind: people managing chronic conditions who were also dealing with unstable housing, too little food, or a lack of transportation. About a third of Americans fall somewhere in that category. The company’s premise was that you can’t improve health outcomes without addressing the full context of someone’s life. It now employs roughly 850 clinical professionals, runs what it describes as the largest community health workforce in California, and, per Batlivala, generates revenue above nine figures. It has raised about $30 million, backed by Kleiner Perkins, Kraft Ventures, and Next Ventures.

The model has peer-reviewed evidence behind it. A study co-authored by Pair Team researchers and published in the peer-reviewed Journal of General Internal Medicine evaluated the company’s community-integrated model, which blends medical, behavioral, and social care for Medicaid members with high rates of homelessness, serious mental illness, and chronic disease. It showed strong patient engagement and significant reductions in avoidable emergency and inpatient utilization. Batlivala says one in four hospital visits and one in two ER visits don’t happen when a patient is in his company’s care.
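To make the outcome-based structure described above concrete, here is a minimal sketch of how a payment that is partly contingent on measurable goals might be computed. The function, the rates, and the idea of a fixed at-risk fraction are hypothetical illustrations for this article, not CMS’s actual payment terms.

```python
# A toy illustration of outcome-based payment, as described above.
# All names, rates, and thresholds are hypothetical, not CMS's actual terms.

def access_style_payment(base_pmpm: float,
                         outcome_share: float,
                         goals_met: int,
                         goals_total: int) -> float:
    """Return one month's payment for one patient.

    base_pmpm     -- per-member-per-month amount for a qualifying condition
    outcome_share -- fraction of the payment withheld until goals are met
    goals_met     -- measurable goals achieved (e.g., blood pressure at target)
    goals_total   -- goals the patient is being managed against
    """
    guaranteed = base_pmpm * (1 - outcome_share)
    # The withheld portion pays out in proportion to goals achieved, so the
    # full amount arrives only when every goal is met.
    earned = base_pmpm * outcome_share * (goals_met / goals_total)
    return guaranteed + earned

# Example: $100 PMPM with 40% at risk; meeting 1 of 2 goals pays $80.
print(access_style_payment(100.0, 0.4, 1, 2))  # 80.0
```

The point of the sketch is the incentive: revenue scales with outcomes rather than activities, which is also why low per-patient rates push participants toward automation.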
But for years, delivering that level of care required human teams, which limited how fast and cheaply it could scale. Then, about nine months ago, Pair Team deployed a voice AI agent called Flora as its primary patient-facing interface. Flora is available 24 hours a day, handles intake, coordinates referrals, and does the check-ins that keep patients engaged between clinical visits.

The first call that shifted his thinking was with a 67-year-old woman living out of her car, managing PTSD and congestive heart failure. She spoke with Flora for over an hour. “It was both incredible and depressing,” Batlivala told me. “Flora was probably the only ‘person’ she’d talked to in weeks about her situation.” Now, hour-long conversations with Flora are routine. “That’s the companionship piece,” he said. “And it turns out that is truly an intervention.”

The architects of ACCESS are themselves former startup operators. The program was designed by Abe Sutton, director of the CMS Innovation Center, and Jacob Shiff, the center’s chief AI and technology officer. Sutton was previously a venture capitalist at a healthcare fund called Rubicon Founders; Shiff is a former healthcare founder. Both joined CMS under the Trump administration, and their startup backgrounds are reflected in the program’s design: outcome-based payments, direct-to-consumer enrollment, and a deliberate push for competition.

There are real risks. Participants are feeding extraordinarily sensitive patient data — intimate conversations about housing and diseases and mental illness — into a federal infrastructure with a documented history of breaches, including exposed Social Security numbers. For the vulnerable populations ACCESS is designed to serve, that’s not an abstract concern.

There are financial risks, too. The track record of CMS innovation programs is mixed. A 2023 Congressional Budget Office analysis found that the CMS Innovation Center increased federal spending by $5.4 billion during its first decade rather than producing the projected savings. CMS is also paying less per patient per month than many participants anticipated, which means the math only works for organizations that have automated most of their patient interactions.

Batlivala’s answer to the reimbursement concern is that it’s a feature, not a bug. “If you want to build a model that truly incentivizes the use of AI, the reimbursement rates have to be low,” he told me. “The economics only work if you’re running a lean, AI-first operation.”

Pair Team says it currently has partnerships in place that give it access to roughly 500,000 potential patients, and that it wants to reach a million within three years. Healthcare investors have been watching this closely. Digital health funding hit its highest Q1 total since the pandemic this year, with AI companies capturing the bulk of it. But ACCESS has barely registered outside the health tech trade press.

3 hours ago


Google adds Gemini-powered dictation to Gboard, which could be bad news for dictation startups


Google announced Rambler, a new AI-powered voice dictation feature for Gboard — its widely used Android keyboard app — at its Android Show: I/O Edition 2026 event on Tuesday morning. The launch puts Google in direct competition with the likes of Wispr Flow and Typeless, a growing crop of AI-powered dictation apps that have built audiences on desktop and mobile in recent years — most of which have yet to establish a strong foothold on Android.

Like other dictation apps, Rambler removes filler words like “ums” and “ahs.” It also understands midsentence corrections like, “I am going to meet you on Wednesday at our usual coffee shop at 3 p.m. … um, 2 p.m.” Google said it is using Gemini-based multilingual models that also support code switching, meaning users can move between languages midsentence — say, from English to Hindi — and Rambler will follow along without losing context. It’s a capability that reflects how many multilingual speakers actually communicate, and one that most Western dictation apps have been slow to support.

The company said that Gboard will clearly indicate to users when Rambler is in use, and that it doesn’t store any voice recordings and uses the audio only to transcribe what users speak. Google said during the briefing that because Rambler works across all apps, it is like “reinventing the keyboard.”

On privacy, Ben Greenwood, director of Android Core Experiences, said Google uses a combination of on-device and cloud-based processing and has “invested significantly over many years” to ensure features are “safe and private” — a calculated message to users weighing Rambler against third-party dictation apps that may handle data differently.

In the past few years, a host of dictation apps — Wispr Flow, Willow, Superwhisper, Monologue, Handy, and Typeless — have cropped up. But until now, most of that activity has been on desktop and iOS, leaving Android relatively underserved. Google itself released AI Edge Eloquent, an offline-first dictation app powered by its on-device Gemma AI models, on iOS last month. Rambler is Google’s clearest move yet to close that gap.

These new features will be limited to Samsung Galaxy and Google Pixel phones for an initial summer rollout but will eventually reach other Android devices. The core advantage here is distribution: Gboard is the default keyboard for the vast majority of Android users worldwide, meaning Rambler arrives pre-installed for hundreds of millions of people. When a platform player enters a market at the operating-system level, stand-alone apps need a compelling reason — better accuracy, deeper features, or stronger privacy guarantees — to justify a separate download. For dictation startups, the question is no longer whether they can build something good — it’s whether they can build something good enough that users actively go looking for it.
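To see what filler removal and midsentence correction mean mechanically, here is a toy Python sketch. It is purely illustrative: Rambler presumably does this with Gemini models rather than pattern matching, and every name and pattern below is made up for this example.

```python
import re

# Toy versions of the two cleanup behaviors described above. Real dictation
# features use language models; these regexes only illustrate the transformation.

FILLERS = {"um", "uh", "ah"}  # hypothetical filler-word list

def strip_fillers(text: str) -> str:
    """Drop standalone filler words like 'um' and 'ah'."""
    return " ".join(w for w in text.split()
                    if w.strip(",.").lower() not in FILLERS)

def apply_time_correction(text: str) -> str:
    """Resolve '<old time> ... um, <new time>' by keeping the correction."""
    return re.sub(r"\d+ [ap]\.m\. \.\.\. um, (\d+ [ap]\.m\.)", r"\1", text)

raw = "I am going to meet you on Wednesday at 3 p.m. ... um, 2 p.m."
print(strip_fillers(apply_time_correction(raw)))
# -> "I am going to meet you on Wednesday at 2 p.m."
```

The hard part in practice, and the reason model-based approaches win, is that corrections rarely follow a fixed pattern; the system has to infer from context which earlier span the speaker is revising.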

7 hours ago


Report: Google and SpaceX in talks to put data centers into orbit


Google and SpaceX are in talks to launch orbital data centers in space, reports The Wall Street Journal, citing sources familiar with the matter. The potential deal comes as SpaceX gears up for its $1.75 trillion IPO later this year, selling investors on the idea that space will be the cheapest place to put AI compute within the next few years.

It also follows SpaceX’s deal with Anthropic last week to use computing resources from xAI’s data center in Memphis, Tennessee, with the potential to work together on orbital ones in the future. (SpaceX acquired xAI in February.) Google is reportedly talking to other rocket-launch companies as well. The company also plans to launch prototype satellites by 2027 as part of an initiative called Project Suncatcher, announced late last year.

Elon Musk has created hype for orbital data centers, claiming they are cheaper to operate. Advocates also point out that they are free from the local backlash that U.S. ground-based buildouts attract. However, as TechCrunch recently reported, today’s terrestrial data centers are much cheaper than those in orbit once satellite construction and launch costs are factored in.

Google invested $900 million in SpaceX in 2015, according to regulatory filings. TechCrunch has reached out to Google and SpaceX for comment.

7 hours ago


Anthropic warns investors against secondary platforms offering access to its shares


As investors scramble to get their hands on shares of AI companies of all stripes, Anthropic this week updated its website to warn investors that a slew of private and secondary investment platforms offering access to shares in the AI company are not, in fact, allowed to do so. The company named Open Doors Partners, Unicorns Exchange, Pachamama Capital, Lionheart Ventures, Hiive (new offerings), Forge Global (new offerings), Sydecar, and Upmarket as companies that are not authorized to provide access to buy or sell its shares. “Any sale or transfer of Anthropic stock, or any interest in Anthropic stock, offered by these firms is void and will not be recognized on our books and records,” the company’s support page reads.

Reached for comment, Forge Global claimed to have been included erroneously. “We are working with Anthropic to remove Forge’s name from this alert,” the platform told TechCrunch. “Forge does not facilitate transactions in any private company’s shares without the explicit approval of the company.” Sydecar, meanwhile, said it acts only in an administrative capacity. “The company does not buy or sell securities or solicit transactions in any private companies. Further, Sydecar requires sponsors to attest that they have reviewed relevant documents relating to the transferability of shares and that they have the required approvals and consents from the company,” the company said in an emailed statement.

Anthropic’s update comes alongside a rise in the number of investment platforms offering exposure to AI companies’ shares (and thus their growth) via secondary markets where existing shareholders sell their shares, “tokenized” securities, special purpose vehicles (SPVs), or secondary market holdings. Anthropic, rumored to be raising fresh funding at a $900 billion valuation, has been in especially high demand, with some secondary market brokers telling TechCrunch last month that it’s one of the “hardest” stocks to source.

“Anthropic is right to take seriously concerns around unauthorized share sales and investment scams,” Hiive spokesperson Dakota Betts said in an emailed statement. “We share those concerns. They are a major reason why Hiive invested heavily in legal, compliance, and diligence infrastructure from the beginning, and all share transfers facilitated by Hiive are approved by the issuer.”

Over the past year, some crypto companies, like crypto exchange OKX, have spun up investment products selling exposure to AI companies. These often take the form of pre-IPO perpetual futures contracts — derivative instruments that track the value of private companies on secondary markets but don’t offer ownership of actual shares. SPVs are different from those derivatives, offering investors a chance to buy into an entity that holds at least some stake in Anthropic. That equity could come from an official investor, or have been acquired when an investor was forced to liquidate its holdings, as happened during the bankruptcy of FTX. In other cases, the equity claim may be entirely fraudulent.

Anthropic says both its preferred and common stock are subject to transfer restrictions, which means any share sale or transfer not approved by its board of directors will be considered invalid. According to Anthropic, any third-party platforms (specifically SPVs and retail investment firms) that claim to sell its shares directly or via forward contracts are unauthorized to do so.
"We do not permit special purpose vehicles (SPVs) to acquire Anthropic stock and any transfer of shares to an SPV are void under our transfer restrictions," the company's blog reads. "Offers to invest in Anthropic’s past or future financing rounds through an SPV are prohibited." Note: This story was updated to include comments from Hiive and Sydecar.

7 hours ago

