No one has a good plan for how AI companies should work with the government

4:38 AM IST · March 3, 2026

As Sam Altman discovered Saturday night, it’s a fraught time to do work for the U.S. government. Around 7 p.m., the OpenAI CEO announced he would be fielding questions publicly on X, as a way of demystifying his company’s decision to pick up the Pentagon contract that Anthropic had just walked away from. Most of the questions boiled down to OpenAI’s willingness to participate in mass surveillance and automated killing – the exact activities Anthropic had ruled out in its negotiations with the Pentagon.

Altman typically punted to the public sector, saying it wasn’t his role to set national policy. “I very deeply believe in the democratic process,” he wrote in one response, “and that our elected leaders have the power, and that we all have to uphold the constitution.” An hour later, he confessed surprise that so many people seemed to disagree. “There is more open debate than I thought there would be,” Altman said, “about whether we should prefer a democratically elected government or unelected private companies to have more power. I guess this is something people disagree on.”

It’s a telling moment for both OpenAI and the tech industry at large. In his Q&A, Altman employed a stance that’s standard in the defense industry, where military leaders and industry partners are expected to defer to civilian leadership. But what’s more telling is that, as OpenAI transitions from a wildly successful consumer startup into a piece of national security infrastructure, the company appears unequipped to manage its new responsibilities.

Altman’s public town hall came at a heightened time for his company. The Pentagon had just blacklisted OpenAI rival Anthropic for insisting on contractual limitations for surveillance and automated weaponry. Days later, OpenAI announced it had won the same contract Anthropic had given up. Altman portrayed the deal as a quick way to deescalate the conflict – and it was surely a lucrative one.
But he seemed unprepared for how much blowback it generated from both the company’s users and its employees.

OpenAI has been engaging with the U.S. government for years — but not like this. When Altman was making his case to the Congressional committees in 2023, for instance, he was still mostly following the social media playbook. He was bombastic about the company’s world-changing potential while acknowledging the risks and enthusiastically engaging with lawmakers — a perfect combination for stirring up investors while heading off regulation. Less than three years later, that approach is no longer tenable. AI is so obviously powerful and the capital needs are so intense that it’s impossible to avoid a more serious engagement with the government. The surprise is how unprepared both sides seem to be for it.

The biggest immediate conflict is Anthropic itself, and U.S. Defense Secretary Pete Hegseth’s stated plan Friday to designate the lab as a supply chain risk. That threat looms over the whole conversation like an unfired gun. As former Trump official Dean Ball wrote over the weekend, the designation would cut Anthropic off from hardware and hosting partners, effectively destroying the company. It would be an unprecedented move against an American company, and while it might ultimately be reversed in court, it would cause damage in the interim and send shockwaves through the industry.

As Ball describes the process, Anthropic was carrying out an existing contract under terms that had been established years earlier – only to have the administration insist on changing them. That’s far beyond anything that would fly between private companies, and it sends a chilling message to other vendors. “Even if Secretary Hegseth backs down and narrows his extremely broad threat against Anthropic, great damage has been done,” Ball wrote. “Most corporations, political actors, and others will have to operate under the assumption that the logic of the tribe will now reign.”

It’s a direct threat to Anthropic, but also a serious problem for OpenAI. The company is already under intense pressure from employees to maintain some semblance of a red line. At the same time, right-wing media will be on alert for any sign of OpenAI being a less-than-staunch political ally. In the middle of everything is the Trump administration, doing its best to make the situation as difficult as possible.

It can be argued that OpenAI didn’t set out to become a defense contractor, but by virtue of its massive ambitions, it’s been forced to play the same game as Palantir and Anduril. Making inroads during the Trump administration means picking sides. There are no apolitical actors here, and winning some friends will mean alienating others. It remains to be seen how high a price OpenAI will pay, either in lost business or lost employees, but it’s unlikely to emerge unscathed.

It might seem strange that this crackdown is coming at a time when there are more prominent tech investors holding influential positions in Washington than ever, but most of them seem entirely happy with tribal logic. Among Trump-aligned venture capitalists, Anthropic has long been perceived as currying favor with the Biden administration in ways that would damage the larger industry – a perception underscored by Trump advisor David Sacks’ reaction to the ongoing conflict. Now that the reverse has happened, few seem willing to stand up for the broader principle of free enterprise.

This is a difficult position for any company to be in – and while politically aligned players may benefit in the short term, they’ll be just as exposed when political winds inevitably shift. There’s a reason why, for decades, the defense sector was dominated by slow-moving, heavily regulated conglomerates like Raytheon and Lockheed Martin. Operating as an industrial wing of the Pentagon gave them the political cover to stay out of politics, keeping them focused on the technology without having to press reset every time the White House changed hands. Today’s startup competitors might move faster than their predecessors – but they’re much less prepared for the long term.

Latest AI News

Medicare’s new payment model is built for AI, and most of the tech world has no idea

Neil Batlivala has spent seven years building a healthcare company that most of the tech industry has never heard of and that serves a patient population most of Silicon Valley ignores. But last month, that work put him at the center of something much bigger. His company, Pair Team, announced on April 30 it had been accepted into ACCESS, a Medicare program, as one of 150 participants chosen by the Centers for Medicare & Medicaid Services to test what AI-driven medical care could look like at federal scale. The program goes live July 5.

“The government is creating swim lanes for AI innovation in traditionally regulated industries,” he told me over a Zoom call a few days later. “The best solution wins, which, in regulated industries like healthcare — that’s not been the case.”

ACCESS — Advancing Chronic Care with Effective, Scalable Solutions — is a 10-year CMS program testing a payment model that rewards health outcomes rather than required activities (like a certain number of check-ins). Participating organizations like Pair Team receive predictable payments for managing qualifying conditions and earn the full amount only when patients meet measurable health goals, like lower blood pressure or reduced pain. It covers diabetes, hypertension, chronic kidney disease, obesity, depression, and anxiety.

That payment structure is the real news. Traditional Medicare reimburses based on time spent with a clinician. There’s no mechanism to pay for an AI agent that monitors a patient between visits, calls to check in, coordinates a housing referral, or makes sure someone picks up their medication. ACCESS creates that mechanism for the first time. “It’s a payment model transformation,” Batlivala said. “You just couldn’t do this before.”

The first cohort spans a wide range of participants — AI doctor startups, virtual nutrition therapy providers, connected device companies, and wearable makers like Whoop. Batlivala is skeptical of some of them. “I’m a big fan of wearables, but for a senior who’s struggling with food insecurity, I don’t know how much Whoop is going to be able to do,” he said, adding of his own company, “We’ve been building toward this for five-plus years now.”

Pair Team launched in 2019 with a specific kind of patient in mind: people managing chronic conditions who were also dealing with unstable housing, too little food, or lack of transportation. About a third of Americans fall somewhere in that category. The company’s premise was that you can’t improve health outcomes without addressing the full context of someone’s life. It now employs roughly 850 clinical professionals, runs what it describes as the largest community health workforce in California, and, per Batlivala, generates revenue above nine figures. It has raised about $30 million, backed by Kleiner Perkins, Kraft Ventures, and Next Ventures.

The model has peer-reviewed evidence behind it. A study co-authored by Pair Team researchers and published in the peer-reviewed Journal of General Internal Medicine evaluated Pair Team’s community-integrated model, which blends medical, behavioral, and social care for Medicaid members with high rates of homelessness, serious mental illness, and chronic disease. It showed strong patient engagement and significant reductions in avoidable emergency and inpatient utilization. Batlivala says one in four hospital visits and one in two ER visits don’t happen when a patient is in his company’s care.

But for years, delivering that level of care required human teams, which limited how fast and cheaply it could scale. Then, about nine months ago, Pair Team deployed a voice AI agent called Flora as its primary patient-facing interface. Flora is available 24 hours a day, handles intake, coordinates referrals, and does the check-ins that keep patients engaged between clinical visits.

The first call that shifted his thinking was with a 67-year-old woman living out of her car, managing PTSD and congestive heart failure. She spoke with Flora for over an hour. “It was both incredible and depressing,” Batlivala told me. “Flora was probably the only ‘person’ she’d talked to in weeks about her situation.” Now, hour-long conversations with Flora are routine. “That’s the companionship piece,” he said. “And it turns out that is truly an intervention.”

The architects of ACCESS are themselves former startup operators. The program was designed by Abe Sutton, director of the CMS Innovation Center, and Jacob Shiff, the center’s chief AI and technology officer. Sutton was previously a venture capitalist at a healthcare fund called Rubicon Founders; Shiff is a former healthcare founder. Both joined CMS under the Trump administration, and their startup backgrounds are reflected in the program’s design: outcome-based payments, direct-to-consumer enrollment, and a deliberate push for competition.

There are real risks. Participants are feeding extraordinarily sensitive patient data — intimate conversations about housing, disease, and mental illness — into a federal infrastructure with a documented history of breaches, including exposed Social Security numbers. For the vulnerable populations ACCESS is designed to serve, that’s not an abstract concern.

There are financial risks, too. The track record of CMS innovation programs is mixed. A 2023 Congressional Budget Office analysis found that the CMS Innovation Center increased federal spending by $5.4 billion during its first decade rather than producing the projected savings. CMS is also paying less per patient per month than many participants anticipated, which means the math only works for organizations that have fully automated most of their patient interactions.

Batlivala’s answer to the reimbursement concern is that it’s a feature, not a bug. “If you want to build a model that truly incentivizes the use of AI, the reimbursement rates have to be low,” he told me. “The economics only work if you’re running a lean, AI-first operation.” Pair Team says it currently has partnerships in place that give it access to roughly 500,000 potential patients, and that it wants to reach a million within three years.

Healthcare investors have been watching this closely. Digital health funding hit its highest Q1 total since the pandemic this year, with AI companies capturing the bulk of it. But ACCESS has barely registered outside the health tech trade press.

2 hours ago

Google adds Gemini-powered dictation to Gboard, which could be bad news for dictation startups

Google announced Rambler, a new AI-powered voice dictation feature for Gboard — its widely used Android keyboard app — at its Android Show: I/O Edition 2026 event on Tuesday morning. The launch puts Google in direct competition with the likes of Wispr Flow and Typeless, a growing crop of AI-powered dictation apps that have built audiences on desktop and mobile in recent years — most of which have yet to establish a strong foothold on Android.

Just like other dictation apps, Rambler removes filler words like “ums” and “ahs.” It also understands midsentence corrections like, “I am going to meet you on Wednesday at our usual coffee shop at 3 p.m. … um, 2 p.m.” Google said it is using Gemini-based multilingual models that also support code switching, meaning users can move between languages midsentence — say, from English to Hindi — and Rambler will follow along without losing context. It’s a capability that reflects how many multilingual speakers actually communicate, and one that most Western dictation apps have been slow to support.

The company said that Gboard will clearly indicate to users when Rambler is in use, that it doesn’t store any voice recordings, and that it uses the audio only to transcribe what users speak. Google said during the briefing that, because Rambler works across all apps, it is like “reinventing the keyboard.”

On privacy, Ben Greenwood, director of Android Core Experiences, said Google uses a combination of on-device and cloud-based processing and has “invested significantly over many years” to ensure features are “safe and private” — a calculated message to users weighing Rambler against third-party dictation apps that may handle data differently.

In the past few years, a host of dictation apps — Wispr Flow, Willow, Superwhisper, Monologue, Handy, and Typeless — have cropped up. But until now, most of that activity has been on desktop and iOS, leaving Android relatively underserved. Google itself released AI Edge Eloquent, an offline-first dictation app powered by its on-device Gemma AI models, on iOS last month. Rambler is Google’s clearest move yet to close that gap.

These new features will be limited to Samsung Galaxy and Google Pixel phones for an initial summer rollout but will eventually reach other Android devices. The core advantage here is distribution: Gboard is the default keyboard for the vast majority of Android users worldwide, meaning Rambler arrives pre-installed for hundreds of millions of people. When a platform player enters a market at the operating-system level, stand-alone apps need a compelling reason — better accuracy, deeper features, or stronger privacy guarantees — to justify a separate download. For dictation startups, the question is no longer whether they can build something good — it’s whether they can build something good enough that users actively go looking for it.

6 hours ago

Report: Google and SpaceX in talks to put data centers into orbit

Google and SpaceX are in talks to launch orbital data centers, reports The Wall Street Journal, citing sources familiar with the matter. The potential deal comes as SpaceX gears up for its $1.75 trillion IPO later this year, selling investors on the idea that data centers in space will be the cheapest place to put AI compute within the next few years. It also follows SpaceX’s deal with Anthropic last week to use computing resources from xAI’s data center in Memphis, Tennessee, with the potential to work together on orbital ones in the future. (SpaceX acquired xAI in February.)

Google is reportedly talking to other rocket-launch companies as well. The company also plans to launch prototype satellites by 2027 as part of an initiative called Project Suncatcher, announced late last year.

Elon Musk has created hype for orbital data centers, claiming they are cheaper to operate. Advocates also point out they are free from the local backlash that U.S. ground-based buildouts attract. However, as TechCrunch recently reported, today’s terrestrial data centers are much cheaper than those in orbit once satellite construction and launch costs are factored in.

Google invested $900 million in SpaceX in 2015, according to regulatory filings. TechCrunch has reached out to Google and SpaceX for comment.

6 hours ago

Anthropic warns investors against secondary platforms offering access to its shares

As investors scramble to get their hands on shares of AI companies of all stripes, Anthropic this week updated its website to warn investors that a slew of private and secondary investment platforms offering access to shares in the AI company are not, in fact, allowed to do so. The company named Open Doors Partners, Unicorns Exchange, Pachamama Capital, Lionheart Ventures, Hiive (new offerings), Forge Global (new offerings), Sydecar, and Upmarket as companies that are not authorized to provide access to buy or sell its shares. “Any sale or transfer of Anthropic stock, or any interest in Anthropic stock, offered by these firms is void and will not be recognized on our books and records,” the company’s support page reads.

Reached for comment, Forge Global claimed to have been included erroneously. “We are working with Anthropic to remove Forge’s name from this alert,” the platform told TechCrunch. “Forge does not facilitate transactions in any private company’s shares without the explicit approval of the company.” Sydecar, meanwhile, said it only acts in an administrative capacity. “The company does not buy or sell securities or solicit transactions in any private companies. Further, Sydecar requires sponsors to attest that they have reviewed relevant documents relating to the transferability of shares and that they have the required approvals and consents from the company,” the company said in an emailed statement.

Anthropic’s update comes alongside a rise in the number of investment platforms offering exposure to AI companies’ shares (and thus their growth) via secondary markets where existing shareholders sell their shares, “tokenized” securities, special purpose vehicles (SPVs), or secondary market holdings. Anthropic, rumored to be raising fresh funding at a $900 billion valuation, has been especially in demand, with some secondary market brokers telling TechCrunch last month that it’s one of the “hardest” stocks to source.

“Anthropic is right to take seriously concerns around unauthorized share sales and investment scams,” Hiive spokesperson Dakota Betts said in an emailed statement. “We share those concerns. They are a major reason why Hiive invested heavily in legal, compliance, and diligence infrastructure from the beginning, and all share transfers facilitated by Hiive are approved by the issuer.”

Over the past year, some crypto companies, like crypto exchange OKX, have spun up investment products selling exposure to AI companies. These often take the form of pre-IPO perpetual futures contracts — derivative instruments that track the value of private companies on secondary markets but don’t offer ownership of actual shares. SPVs are different from those derivative instruments, offering investors a chance to buy shares of an entity that holds at least some stake in Anthropic. That equity could be from an official investor, or it could have been acquired when an investor was forced to liquidate its holdings, as happened during the bankruptcy of FTX. In other cases, the equity claim may be entirely fraudulent.

Anthropic says both its preferred and common stock are subject to transfer restrictions, which means any share sale or transfer not approved by its board of directors will be considered invalid. According to Anthropic, any third-party platform (specifically SPVs and retail investment firms) that claims to sell its shares directly or via forward contracts is unauthorized to do so. “We do not permit special purpose vehicles (SPVs) to acquire Anthropic stock and any transfer of shares to an SPV are void under our transfer restrictions,” the company’s blog reads. “Offers to invest in Anthropic’s past or future financing rounds through an SPV are prohibited.”

Note: This story was updated to include comments from Hiive and Sydecar.

6 hours ago
