Latest AI News

Lovable’s Internal LLM Routing Handles 1 Bn Tokens/Min While Preserving Prompt Caching

Lovable’s AI infrastructure dynamically distributes traffic across multiple model providers.

2 months ago

Emergent Runs Over 30,000 AI Coding Environments Using Kubernetes Pods

“AI coding agents need real infrastructure, not lightweight sandboxes dressed up as infrastructure,” Emergent wrote in a blog.

2 months ago

In 630 Lines of Code, Andrej Karpathy Builds AI Research System Running on a Single GPU

The autonomous system allows roughly 12 experiments per hour and about 100 experiments overnight.

2 months ago

Will the Pentagon’s Anthropic controversy scare startups away from defense work?

In just over a week, negotiations over the Pentagon’s use of Anthropic’s Claude technology fell through, the Trump administration designated Anthropic a supply-chain risk, and the AI company said it would fight that designation in court. OpenAI, meanwhile, quickly announced a deal of its own, prompting backlash that saw users uninstalling ChatGPT and pushing Anthropic’s Claude to the top of the App Store charts. And at least one OpenAI executive has quit over concerns that the announcement was rushed without appropriate guardrails in place.

On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed what this means for other startups seeking to work with the federal government, especially the Pentagon, as Kirsten wondered, “Are we going to see a changing of the tune a little bit?” Sean pointed out that this is an unusual situation in a number of ways, in part because OpenAI and Anthropic make products that “no one can shut up about.” And crucially, this is a dispute over “how their technologies are being used or not being used to kill people,” so it’s naturally going to draw more scrutiny. Still, Kirsten argued, this is a situation that should “give any startup pause.” Read a preview of our conversation, edited for length and clarity, below.

Kirsten: I’m wondering if other startups are starting to look at what’s happened with the federal government, specifically the Pentagon and Anthropic, that debate and wrestling match, and [take] pause about whether they want to be going after federal dollars. Are we going to see a changing of the tune a little bit?

Sean: I wonder about that, too. I think no, to some extent, in the near term, if only because when you really try to think about all the different companies, whether they’re startups or even more established Fortune 500s that do work with the government and in particular with the Department of Defense or the Pentagon, [for] a lot of them, that work flies under the radar.
General Motors makes defense vehicles for the Army and has done [that] for a very long time and has worked on all-electric versions of those vehicles and autonomous versions. There’s stuff like that that goes on all the time and it just never really hits the zeitgeist. I think the problem that OpenAI and Anthropic ran into within the last week is like, these are companies that make products that a ton of people use — and also more importantly, [that] no one can shut up about. So there’s just such a spotlight on them, that naturally highlights their involvement to a level that I think most of the other companies that are contracting with the federal government — and, in particular, any of the war-fighting elements of the federal government — don’t necessarily have to deal with. The only caveat I’ll add to that is a lot of the heat around this discussion between Anthropic and OpenAI and the Pentagon is very specifically about how their technologies are being used or not being used to kill people, or in parts of the missions that are killing people. It’s not just the attention that’s on them and the familiarity we have with their brands; there is an extra element there that I feel is more abstract when you’re thinking about General Motors as a defense contractor or whatever. I don’t think we’re going to see, like, Applied Intuition or any of these other companies that have been framing themselves as dual use back off much, just because I don’t see the spotlight on it and there’s just not the sort of shared understanding of what that impact might be.

Anthony: This story is so unique and specific to these companies and personalities in a lot of ways. I mean, there have been a lot of really interesting thought pieces about: What is the role of technology in government? [Of] AI in government? And I think those are all good and worthwhile questions to ask and explore.
I think also, though, that this is a very curious lens through which to examine some of those things because Anthropic and OpenAI are not actually that different in a lot of ways or the stances they’re taking. It’s not like one company is saying, “Hey, I don’t want to work with the government” and one is saying, “Yes, I do.” Or one is saying, “You can do whatever you want,” and [the other is] saying, “No, I want to have restrictions.” Both of them, at least publicly, are saying, “We want restrictions on how our AI gets used.” It just seems like Anthropic is digging in their heels a lot more about: You cannot change the terms in this way. And then on top of that, there also just seems to be a personality layer between the CEO of Anthropic and Emil Michael — who a lot of TechCrunch readers might remember from his Uber days, and who is now [chief technology officer for the Department of Defense]. Apparently, they just really don’t like each other. Reportedly.

Sean: Yes, there’s a very big “girls are fighting” element here that we should not overlook.

Kirsten: Yeah, a little bit. There is, but the implications are a little bit stronger than that. Again, to pull back a little bit, what we’re talking about here is the Pentagon and Anthropic coming into a dispute in which Anthropic appears to have lost, although I should say they are still very much being used by the military. They are considered a crucial technology, but OpenAI has kind of stepped in, and this is evolving and will likely change by the time this episode comes out. The blowback has been interesting for OpenAI, where we’ve seen a lot of uninstalls of ChatGPT — I think they surged 295% — after OpenAI locked in the deal with the Department of Defense. To me, all of this is noise next to the really critical and dangerous thing, which is that the Pentagon was seeking to change existing terms on an existing contract.
And that is really important and should give any startup pause because the political machine that’s happening right now, particularly with the DoD, appears to be different. This isn’t normal. Contracts take forever to get baked in at the government level, and the fact that they’re seeking to change those terms is a problem.

2 months ago

Owner of ICE detention facility sees big opportunity in AI man camps

To house the hundreds or thousands of temporary workers needed to build an AI data center, developers are increasingly relying on temporary villages known as man camps. This style of camp was popularized as housing for men working in remote oil fields. For example, as a Bitcoin mining facility in rural Dickens County, Texas, is converted into a 1.6-gigawatt data center, Bloomberg reports its workers are living in gray housing units with access to a gym, a laundromat, game rooms, and a cafeteria that grills steaks on demand. A company called Target Hospitality has signed multiple contracts worth a total of $132 million to build and operate the Dickens County camp, which could eventually house more than 1,000 workers. Target apparently sees the U.S. data center construction boom as its most lucrative growth opportunity, with chief commercial officer Troy Schrenk describing it as “the largest, most actionable pipeline I’ve ever seen.” Target also owns the Dilley Immigration Processing Center in Texas, which holds families detained by Immigration and Customs Enforcement. Court filings have alleged that the center’s food has had worms and mold, and that children have suffered without accommodation for allergies and special diets.

2 months ago

A roadmap for AI, if anyone will listen

While Washington’s breakup with Anthropic exposed the complete lack of any coherent rules governing artificial intelligence, a bipartisan coalition of thinkers has assembled something the government has so far declined to produce: a framework for what responsible AI development should actually look like. The Pro-Human Declaration was finalized before last week’s Pentagon-Anthropic standoff, but the collision of the two events wasn’t lost on anyone involved. “There’s something quite remarkable that has happened in America just in the last four months,” said Max Tegmark, the MIT physicist and AI researcher who helped organize the effort, in conversation with this editor. “Polling suddenly [is showing] that 95% of all Americans oppose an unregulated race to superintelligence.” The newly published document, signed by hundreds of experts, former officials, and public figures, opens with the no-nonsense observation that humanity is at a fork in the road. One path, which the declaration calls “the race to replace,” leads to humans being supplanted first as workers, then as decision-makers, as power accrues to unaccountable institutions and their machines. The other leads to AI that massively expands human potential. The latter scenario depends on five key pillars: keeping humans in charge, avoiding the concentration of power, protecting the human experience, preserving individual liberty, and holding AI companies legally accountable. Among its more muscular provisions are an outright prohibition on superintelligence development until there’s scientific consensus it can be done safely and genuine democratic buy-in; mandatory off-switches on powerful systems; and a ban on architectures that are capable of self-replication, autonomous self-improvement, or resistance to shutdown. The declaration’s release coincides with a period that makes its urgency far easier to appreciate.
On the last Friday in February, Defense Secretary Pete Hegseth designated Anthropic — whose AI already runs on classified military platforms — a “supply chain risk” after the company refused to grant the Pentagon unlimited use of its technology, a label ordinarily reserved for firms with ties to China. Hours later, OpenAI cut its own deal with the Defense Department, one that legal experts say will be difficult to enforce in any meaningful way. What it all laid bare is how costly Congressional inaction on AI has become. As Dean Ball, a senior fellow at the Foundation for American Innovation, told The New York Times afterward, “This is not just some dispute over a contract. This is the first conversation we have had as a country about control over AI systems.” Tegmark reached for an analogy that most people can understand when we spoke. “You never have to worry that some drug company is going to release some other drug that causes massive harm before people have figured out how to make it safe,” he said, “because the FDA won’t allow them to release anything until it’s safe enough.” Washington turf wars rarely generate the kind of public pressure that changes laws. Instead, Tegmark sees child safety as the pressure point most likely to crack the current impasse. Indeed, the declaration calls for mandatory pre-deployment testing of AI products — particularly chatbots and companion apps aimed at younger users — covering risks including increased suicidal ideation, exacerbation of mental health conditions, and emotional manipulation. “If some creepy old man is texting an 11-year-old pretending to be a young girl and trying to persuade this boy to commit suicide, the guy can go to jail for that,” Tegmark said. “We already have laws. It’s illegal. So why is it different if a machine does it?” He believes that once the principle of pre-release testing is established for children’s products, the scope will widen almost inevitably.
“People will come along and be like — let’s add a few other requirements. Maybe we should also test that this can’t help terrorists make bioweapons. Maybe we should test to make sure that superintelligence doesn’t have the ability to overthrow the U.S. government.” It is no small thing that former Trump advisor Steve Bannon and Susan Rice, President Obama’s national security advisor, have signed the same document — along with former Joint Chiefs Chairman Mike Mullen and progressive faith leaders. “What they agree on, of course, is that they’re all human,” said Tegmark. “If it’s going to come down to whether we want a future for humans or a future for machines, of course they’re going to be on the same side.”

2 months ago

In 2026, Leadership Still Evades Women in Tech. But GCCs May Be Changing That

Global capability centres, leaders argue, are giving women something that traditional technology structures rarely did.

2 months ago

Google just gave Sundar Pichai a $692M pay package

Sundar Pichai’s new pay package could be worth $692 million. Per a filing first spied by the FT, Alphabet has structured a three-year deal for its Google CEO that could make him one of the highest-paid executives on the planet — but most of it is tied to performance, including new stock incentives linked to Waymo and Wing, its drone delivery venture. What’s striking is how little public fascination Pichai attracts compared to Google’s founders. Larry Page and Sergey Brin — the second- and fourth-richest people in the world — have lately captured headlines for a different reason entirely; both have been snapping up lavish Miami properties, widely seen as a response to California’s proposed Billionaire Tax Act — a ballot initiative targeting the state’s roughly 200 billionaires with a one-time 5% levy on net worth exceeding $1 billion. Page reportedly spent over $173 million on two mansions in Coconut Grove, Florida, recently, while Brin was just linked to a $51 million megamansion 14 miles away, atop two earlier purchases totaling $92 million. Pichai, by contrast, remains quietly rooted in Los Altos, California, as far as the public knows. He’s a billionaire, too — the nearly sevenfold growth in Google’s market cap since he took the helm in 2015 has made the stock he’s accumulated along the way hugely valuable. He and his wife currently hold shares worth nearly $500 million, with another estimated $650 million sold as of last summer, per Bloomberg’s calculations.

2 months ago

OpenAI robotics lead Caitlin Kalinowski quits in response to Pentagon deal

Hardware executive Caitlin Kalinowski announced today that in response to OpenAI’s controversial agreement with the Department of Defense, she’s resigned from her role leading the company’s robotics team. “This wasn’t an easy call,” Kalinowski said in a social media post. “AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” Kalinowski, who previously led the team building augmented reality glasses at Meta, joined OpenAI in November 2024. In her announcement today, she emphasized that the decision was “about principle, not people” and said she has “deep respect” for CEO Sam Altman and the OpenAI team. In a follow-up post on X, Kalinowski added, “To be clear, my issue is that the announcement was rushed without the guardrails defined. It’s a governance concern first and foremost. These are too important for deals or announcements to be rushed.” An OpenAI spokesperson confirmed Kalinowski’s departure to TechCrunch. “We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons,” the company said in a statement. “We recognize that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society and communities around the world.” OpenAI’s agreement with the Pentagon was announced just over a week ago, after discussions between the Pentagon and Anthropic fell through as the AI company tried to negotiate for safeguards preventing its technology from being used in mass domestic surveillance or fully autonomous weapons. The Pentagon subsequently designated Anthropic a supply-chain risk.
(Anthropic said it will fight the designation in court; in the meantime, Microsoft, Google, and Amazon said they will continue to make Anthropic’s Claude available to non-defense customers.) Then, OpenAI quickly announced an agreement of its own allowing its technology to be used in classified environments. As executives attempted to explain the deal on social media, the company described it as taking “a more expansive, multi-layered approach” that relies not just on contract language, but also technical safeguards, to protect red lines similar to Anthropic’s. Nonetheless, the controversy appears to have damaged OpenAI’s reputation among some consumers, with ChatGPT uninstalls surging 295% and Claude climbing to the top of the App Store charts. As of Saturday afternoon, Claude and ChatGPT remain the U.S. App Store’s number one and number two free apps, respectively.

2 months ago

Grammarly’s ‘expert review’ is just missing the actual experts

A recently added feature in Grammarly purports to improve users’ writing with help from the world’s great writers and thinkers — and some tech journalists, too. Launched in August 2025 as part of a broader set of AI-powered features, Expert Review appears in the sidebar of Grammarly’s main writing assistant, allowing users to bring up revision suggestions “from the perspective” of subject matter experts. Wired noted that Grammarly frames this feedback as if it were coming from well-known authors, whether they’re living or dead. In some cases, according to The Verge, it can even appear to come from tech journalists at The Verge, Wired, Bloomberg, The New York Times, and other publications. Of course, I couldn’t help but wonder: What about TechCrunch? I copy-pasted an early draft of this post into Grammarly in the hopes that I might see some tips from my TC colleagues, but I was instead told to add ethical context like Casey Newton, “leverage the anecdote for reader alignment” like Kara Swisher, and “pose the bigger accountability question” like Timnit Gebru. Which was all rather disappointing: Yes, the feature seems a bit thoughtless and ill-advised, but if all those other pubs are going to get mentioned, then what are we doing wrong? Anyway, to state the obvious, none of these figures appear to be involved in Expert Review or to have given Grammarly permission to use their names. Alex Gay, vice president of product and corporate marketing at Grammarly’s parent company Superhuman, told The Verge that these experts are mentioned “because their published works are publicly available and widely cited.” And in its user guide to the feature, Grammarly says, “References to experts in Expert Review are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities.” Which is reasonably clear, I guess. But it raises the question: In what sense is Grammarly actually providing an “expert review”?
Perhaps none at all, as historian C.E. Aubin told Wired: “These are not expert reviews, because there are no ‘experts’ involved in producing them.”

2 months ago

OpenAI delays ChatGPT’s ‘adult mode’ again

OpenAI has delayed the launch of “adult mode,” a ChatGPT feature that will give verified adult users access to erotica and other adult content. OpenAI CEO Sam Altman first announced the feature in October, writing, “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.” The launch had already been delayed once, from December — when Altman reportedly sent an internal memo declaring a “code red” and calling for teams to focus on the core ChatGPT experience — until the first quarter of this year. Now an OpenAI spokesperson has told Axios that the company is “pushing out the launch of adult mode” in order to “focus on work that is a higher priority for more users right now,” such as work on aspects like intelligence, personality, and making the chatbot “more proactive.” “We still believe in the principle of treating adults like adults, but getting the experience right will take more time,” the spokesperson said. It’s not clear how long the delay is expected to last. The news was first reported by Sources.

2 months ago

How a Portable AI Device is Helping Women in Rural India Detect Breast Cancer Early

Over 1.83 lakh (183,000) women were screened for cancers across India using a portable, radiation-free AI breast screening device.

2 months ago
