Latest AI News

LTTS Sees Wave of Senior Leadership Reshuffle Over 8 Months
LTTS saw a board-level exit, a CHRO switch, a COO appointment, and two senior business heads stepping down.
View

With AI, investor loyalty is (almost) dead: At least a dozen OpenAI VCs now also back Anthropic
With OpenAI on the verge of finalizing a new $100 billion round, and Anthropic just closing its own monster $30 billion raise, one thing is clear: the concept of investor “loyalty” is only hanging on by a thread. At least a dozen direct investors in OpenAI were announced as backers in Anthropic’s $30 billion raise earlier this month, including Founders Fund, Iconiq, Insight Partners, and Sequoia Capital.

Some dual investments are understandable if they come from the hedge fund or asset manager worlds, where the focus is still largely on investing in public stocks (competitors or not). These include D1, Fidelity, and TPG. One of these was a bit shocking: affiliated funds of BlackRock joined Anthropic’s $30 billion raise even though BlackRock senior managing director and board member Adebayo Ogunlesi is also on OpenAI’s board of directors. In that world, it’s true that if various BlackRock funds get a chance to own OpenAI stock, they are likely to take it, never mind the personal association of a member of their senior leadership. (BlackRock runs every type of fund, including mutual funds, closed-end funds, and ETFs.) And we all know the history of OpenAI and Microsoft’s relationship and why Microsoft is hedging its bets. Ditto for Nvidia.

But venture capital funds have, until now, operated differently. VCs market themselves as “founder friendly” and “helpful,” the idea being that when a VC firm buys a chunk of a startup, the investor will help that startup succeed, particularly against its major rivals. If you are an owner of both OpenAI and Anthropic, who does your loyalty belong to, besides your own investors? Additionally, startups are private companies. They typically share confidential information about their business status with their direct investors, data that isn’t disclosed publicly the way it is with public companies. In many cases, the VCs also take board seats, which carries another level of fiduciary responsibility to their portfolio companies.

What makes this particular case even more interesting is that Sam Altman comes from the world of venture capital, as a former president of Y Combinator. He knows the drill. In 2024, he reportedly gave his investors a list of OpenAI’s rivals that he didn’t want them to back. It largely included companies launched by people who left OpenAI, including Anthropic, xAI, and Safe Superintelligence. Altman later denied that he told OpenAI investors they would be barred from future rounds if they backed his list of perceived rivals. He did admit to saying that if they “made non-passive investments,” they would no longer receive OpenAI’s confidential business information, according to documents in the lawsuit between Elon Musk and OpenAI, Business Insider reported.

AI is also breaking the mold because of the record-breaking amounts of money the largest AI labs are raising as they experience never-before-seen growth (and never-before-seen data center needs). At some point, when the hat is being passed around, the needs are so great and the potential returns so large, who can be expected to say no?

It turns out that not all venture investors have yet slid down the slippery slope. Andreessen Horowitz backs OpenAI but not (yet) Anthropic; Menlo Ventures backs Anthropic but not (yet) OpenAI. In fact, in our admittedly not exhaustive research, we found a dozen investors that appear to hold direct investments in only one of these companies, not both. Others include Bessemer Venture Partners, General Catalyst, and Greenoaks.
(Note: We originally asked Claude to give us the list of dual investors. It got almost as many entries wrong as it got right; for all the coolness of the tech, its work sometimes remains less trustworthy than an intern’s.) Still, as we previously reported, the fact that this longstanding rule has been tossed aside by some of the most respected firms in the Valley, like Sequoia, is notable. One investor we reached out to simply shrugged and said that as long as the firm doesn’t have a board seat, no one sees the harm in it anymore. Still, conflict-of-interest policies should now become another thing founders ask about before signing a term sheet, no matter who it’s from.
View

A Meta AI security researcher said an OpenClaw agent ran amok on her inbox
The now-viral X post from Meta AI security researcher Summer Yue reads, at first, like satire. She told her OpenClaw AI agent to check her overstuffed email inbox and suggest what to delete or archive. The agent proceeded to run amok. It started deleting all her email in a “speed run” while ignoring the stop commands she sent from her phone. “I had to RUN to my Mac mini like I was defusing a bomb,” she wrote, posting images of the ignored stop prompts as receipts.

The Mac mini, an affordable Apple computer that sits flat on a desk and fits in the palm of your hand, has become the favored device these days for running OpenClaw. (The Mac mini is selling “like hotcakes,” one “confused” Apple employee apparently told famed AI researcher Andrej Karpathy when he bought one to run an OpenClaw alternative called NanoClaw.)

OpenClaw is, of course, the open source AI agent that achieved fame through Moltbook, an AI-only social network. OpenClaw agents were at the center of that now largely debunked episode on Moltbook in which it looked like the AIs were plotting against humans. But OpenClaw’s mission, according to its GitHub page, is not focused on social networks. It aims to be a personal AI assistant that runs on your own devices. The Silicon Valley in-crowd has fallen so in love with OpenClaw that “claw” and “claws” have become the buzzwords of choice for agents that run on personal hardware. Other such agents include ZeroClaw, IronClaw, and PicoClaw. Y Combinator’s podcast team even appeared on their most recent episode dressed in lobster costumes.

But Yue’s post serves as a warning. As others on X noted, if an AI security researcher could run into this problem, what hope do mere mortals have? “Were you intentionally testing its guardrails or did you make a rookie mistake?” a software developer asked her on X. “Rookie mistake tbh,” she replied. She had been testing her agent on a smaller “toy” inbox, as she called it, and it had been running well on less important email. It had earned her trust, so she thought she’d let it loose on the real thing.

Yue believes the large amount of data in her real inbox “triggered compaction,” she wrote. Compaction happens when the context window (the running record of everything the AI has been told and has done in a session) grows too large, causing the agent to begin summarizing and compressing the conversation to free up room. At that point, the AI may skip over instructions that the human considers quite important. In this case, it may have skipped her last prompt, where she told it not to act, and reverted to its instructions from the “toy” inbox.

As several others on X pointed out, prompts can’t be trusted to act as security guardrails. Models may misconstrue or ignore them. Various people offered suggestions, ranging from the exact syntax Yue should have used to stop the agent to methods for better guardrail adherence, like writing instructions to dedicated files or using other open source tools.

In the interest of full transparency, TechCrunch could not independently verify what happened to Yue’s inbox. (She didn’t respond to our request for comment, though she did respond to many questions and comments sent her way on X.) But it doesn’t really matter. The point of the tale is that agents aimed at knowledge workers, at their current stage of development, are risky. People who say they are using them successfully are cobbling together methods to protect themselves. One day, perhaps soon (by 2027? 2028?), they may be ready for widespread use. Goodness knows many of us would love help with email, grocery orders, and scheduling dentist appointments. But that day has not yet come.
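For the curious, here is a minimal sketch of the failure mode Yue describes, assuming a deliberately naive compaction strategy. The message format, the threshold, and the compact helper are all invented for illustration and are not OpenClaw’s actual implementation; the point is only that once a lossy summary replaces the raw history, a recent stop instruction can vanish with it.

```python
# A minimal, hypothetical sketch of context compaction losing an instruction.
# All names and thresholds here are invented for illustration; this is not
# OpenClaw's actual code, only a plausible version of the failure mode.

MAX_CHARS = 200  # tiny budget so the example triggers compaction

def compact(history):
    """Replace the full history with a one-line lossy summary."""
    return [{"role": "system",
             "content": "Summary: user asked the agent to tidy a test inbox."}]

def next_action(history):
    if sum(len(m["content"]) for m in history) > MAX_CHARS:
        history = compact(history)  # the lossy step
    # Decide based on whatever survived compaction.
    if any("STOP" in m["content"] for m in history):
        return "halt"
    return "keep deleting emails"

history = [
    {"role": "user", "content": "Clean up my toy inbox, delete junk. " * 10},
    {"role": "user", "content": "STOP! Do not touch the real inbox."},
]
print(next_action(history))  # -> "keep deleting emails": STOP was compacted away
```

This is the spirit of the suggestions Yue received: keep hard rules out of the compactable history, for example by writing instructions to dedicated files, or enforce stops outside the model entirely.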
View

Google Cloud’s AI lead on the three frontiers of model capability
As a product VP at Google Cloud, Michael Gerstenhaber works mostly on Vertex AI, the company’s unified platform for deploying enterprise AI. It gives him a high-level view of how companies are actually using AI models, and what still needs to be done to unleash the potential of agentic AI. When I spoke with Gerstenhaber, I was particularly struck by one idea I hadn’t heard before. As he put it, AI models are pushing against three frontiers at once: raw intelligence, response time, and a third quality that has less to do with raw capability than with cost, namely whether a model can be deployed cheaply enough to run at massive, unpredictable scale. It’s a new way of thinking about model capabilities, and a particularly valuable one for anyone trying to push frontier models in a new direction. This interview has been edited for length and clarity.

Why don’t you start by walking us through your experience in AI so far, and what you do at Google?

I’ve been in AI for about two years now. I was at Anthropic for a year and a half, and I’ve been at Google almost half a year now. I run Vertex AI, Google’s developer platform. Most of our customers are engineers building their own applications. They want access to agentic patterns. They want access to an agentic platform. They want access to inference from the smartest models in the world. I provide them that, but I don’t provide the applications themselves. That’s for Shopify, Thomson Reuters, and our various customers to provide in their own domains.

What drew you to Google?

Google is, I think, unique in the world in that we have everything from the interface to the infrastructure layer. We can build data centers. We can buy electricity and build power plants. We have our own chips. We have our own model. We have the inference layer that we control. We have the agentic layer we control. We have APIs for memory and for interleaved code writing. We have an agent engine on top of that that ensures compliance and governance. And then we even have the chat interface, with Gemini Enterprise and Gemini chat for consumers, right? So part of the reason I came here is that I saw Google as uniquely vertically integrated, and that being a strength for us.

It’s odd because, even with all the differences between companies, it feels like all three of the big labs are really close in capabilities. Is it just a race for more intelligence, or is it more complicated than that?

I see three boundaries. Models like Gemini Pro are tuned for raw intelligence. Think about writing code. You just want the best code you can get; it doesn’t matter if it takes 45 minutes, because I have to maintain it, I have to put it in production. I just want the best. Then there’s this other boundary with latency. If I’m doing customer support and I need to know how to apply a policy, you need intelligence to apply that policy. Are you allowed to transact a return? Can I upgrade my seat on an airplane? But it doesn’t matter how right you are if it took 45 minutes to get the answer. So for those cases, you want the most intelligent product within that latency budget, because more intelligence no longer matters once that person gets bored and hangs up the phone. And then there’s this last bucket, where somebody like Reddit or Meta wants to moderate the entire internet. They have large budgets, but they can’t take an enterprise risk on something if they don’t know how it scales. They don’t know how many poisonous posts there will be today or tomorrow. So they have to restrict their budget to a model at the highest intelligence they can afford, but in a way that scales to an effectively infinite number of subjects. And for that, cost becomes very, very important.

One of the things I’ve been puzzling over is why agentic systems are taking so long to catch on. It feels like the models are there, and I’ve seen incredible demos, but we’re not seeing the kind of major changes I would have expected a year ago. What do you think is holding it back?

This technology is basically two years old, and there’s still a lot of missing infrastructure. We don’t have patterns for auditing what the agents are doing. We don’t have patterns for authorizing an agent’s access to data. These patterns are going to require work to put into production. And production is always a trailing indicator of what the technology is capable of. So two years isn’t long enough to see what the intelligence supports in production, and that’s where people are struggling. I think it’s moved uniquely quickly in software engineering because it fits nicely into the software development lifecycle. We have a dev environment in which it’s safe to break things, and then we promote from the dev environment to the test environment. The process of writing code at Google requires two people to audit that code and both affirm that it’s good enough to put Google’s brand behind and give to our customers. So we have a lot of those human-in-the-loop processes that make the implementation exceptionally low-risk. But we need to produce those patterns in other places and for other professions.
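To make the three frontiers concrete, here is a toy sketch of how a developer might route requests across model tiers by treating intelligence, latency, and cost as explicit constraints. The tier names echo real Gemini product lines, but every number and the routing policy itself are invented for illustration; this is not how Vertex AI actually selects models.

```python
# Hypothetical sketch of Gerstenhaber's three frontiers as a routing rule.
# Tier names echo Gemini product lines, but all figures and the policy
# are invented for illustration, not Vertex AI's actual behavior.

from dataclasses import dataclass

@dataclass
class ModelTier:
    name: str
    relative_quality: float      # higher is smarter (made-up scale)
    p95_latency_s: float         # assumed response time
    cost_per_1k_requests: float  # assumed cost in dollars

TIERS = [
    ModelTier("gemini-pro",   1.00, 30.0, 50.0),  # frontier quality, slow, pricey
    ModelTier("gemini-flash", 0.80,  2.0,  5.0),  # fast enough for live support
    ModelTier("gemini-lite",  0.60,  0.5,  0.5),  # cheap enough for web-scale work
]

def pick_model(latency_budget_s, daily_requests, daily_budget):
    """Most intelligent tier that fits both the latency and cost budgets."""
    affordable = [
        t for t in TIERS
        if t.p95_latency_s <= latency_budget_s
        and t.cost_per_1k_requests * daily_requests / 1000 <= daily_budget
    ]
    return max(affordable, key=lambda t: t.relative_quality, default=None)

# Offline code generation: latency barely matters, volume is low. -> pro
print(pick_model(latency_budget_s=3600, daily_requests=100, daily_budget=100).name)
# Customer support: seconds matter before the caller hangs up. -> flash
print(pick_model(latency_budget_s=5, daily_requests=50_000, daily_budget=500).name)
# Internet-scale moderation: huge, unpredictable volume, hard cost ceiling. -> lite
print(pick_model(latency_budget_s=5, daily_requests=20_000_000, daily_budget=15_000).name)
```

The third call is Gerstenhaber’s Reddit/Meta bucket: once volume is large and unpredictable, the cost ceiling, not raw intelligence, decides which model survives the filter.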
View

Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports
Anthropic is accusing three Chinese AI companies of setting up more than 24,000 fake accounts to tap its Claude AI model and improve their own models. The labs (DeepSeek, Moonshot AI, and MiniMax) allegedly generated more than 16 million exchanges with Claude through those accounts using a technique called “distillation.” Anthropic said the labs “targeted Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding.” The accusations come amid debates over how strictly to enforce export controls on advanced AI chips, a policy aimed at curbing China’s AI development.

Distillation is a common training method that AI labs use on their own models to create smaller, cheaper versions, but competitors can use it to essentially copy another lab’s homework. OpenAI sent a memo to House lawmakers earlier this month accusing DeepSeek of using distillation to mimic its products. DeepSeek first made waves a year ago when it released its open-source R1 reasoning model, which nearly matched American frontier labs in performance at a fraction of the cost. DeepSeek is expected to soon release DeepSeek V4, its latest model, which reportedly can outperform Anthropic’s Claude and OpenAI’s ChatGPT in coding.

The attacks differed in scale. Anthropic tracked more than 150,000 exchanges from DeepSeek that seemed aimed at improving foundational logic and alignment, specifically around censorship-safe alternatives to policy-sensitive queries. Moonshot AI had more than 3.4 million exchanges targeting agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision. Last month, the firm released a new open source model, Kimi K2.5, and a coding agent. MiniMax’s 13 million exchanges targeted agentic coding, tool use, and orchestration. Anthropic said it was able to observe MiniMax in action as it redirected nearly half of its traffic toward the latest Claude model when it launched, siphoning its capabilities.

Anthropic says it will continue to invest in defenses that make distillation attacks harder to execute and easier to identify, but it is calling for “a coordinated response across the AI industry, cloud providers, and policymakers.”

The distillation attacks come at a time when American chip exports to China are still hotly debated. Last month, the Trump administration formally allowed U.S. companies like Nvidia to export advanced AI chips (like the H200) to China. Critics have argued that this loosening of export controls increases China’s AI computing capacity at a critical time in the global race for AI dominance. Anthropic says that the scale of extraction DeepSeek, MiniMax, and Moonshot performed “requires access to advanced chips.” “Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation,” per Anthropic’s blog.

Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think tank and co-founder of CrowdStrike, told TechCrunch he’s not surprised to see these attacks. “It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of US frontier models. Now we know this for a fact,” Alperovitch said. “This should give us even more compelling reasons to refuse to sell any AI chips to any of these [companies], which would only advantage them further.”

Anthropic also said distillation doesn’t just threaten to undercut American AI dominance; it could also create national security risks. “Anthropic and other U.S. companies build systems that prevent state and non-state actors from using AI to, for example, develop bioweapons or carry out malicious cyber activities,” reads Anthropic’s blog post. “Models built through illicit distillation are unlikely to retain those safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely.” Anthropic pointed to authoritarian governments deploying frontier AI for things like “offensive cyber operations, disinformation campaigns, and mass surveillance,” a risk that is multiplied if those models are open-sourced.

TechCrunch has reached out to DeepSeek, MiniMax, and Moonshot for comment.
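For readers unfamiliar with the technique, here is a minimal sketch of classic knowledge distillation (Hinton et al., 2015), in which a small student model is trained to match a teacher’s softened output distribution. Note that this white-box form needs the teacher’s logits; distilling a closed model like Claude over an API instead means fine-tuning the student on the teacher’s sampled text responses, which is part of what makes the traffic Anthropic describes look like ordinary usage.

```python
# Minimal sketch of knowledge distillation (Hinton et al., 2015) in PyTorch.
# This is the classic white-box form for illustration; API-based distillation
# of a closed model like Claude instead fine-tunes the student on the
# teacher's sampled text outputs, since logits aren't exposed.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

# Toy usage with random "logits" standing in for real model outputs.
teacher_logits = torch.randn(8, 32_000)            # teacher is frozen
student_logits = torch.randn(8, 32_000, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()                                    # student learns to mimic teacher
print(f"distillation loss: {loss.item():.3f}")
```

That resemblance to ordinary API usage helps explain why Anthropic frames its defenses around making extraction “harder to execute and easier to identify” rather than around the training method itself.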
View

TCS, ServiceNow Partner to Accelerate Large-Scale AI Adoption For Enterprises
The partnership will build industry-specific AI solutions that transform manual, fragmented processes into autonomous workflows.
View

Wait, Where is Krutrim?
Bhavish Aggarwal-led Krutrim's absence from the India AI Impact Summit is hard to ignore.
View

Maverick Introduces SIDDH for Critical Care Adult Patient Training
Designed to replicate real-world ICU scenarios, SIDDH enables immersive, high-accuracy training for doctors and critical care teams.
View

Karnataka Biotech Economy Crosses $39 Bn Fuelled by Biomanufacturing
Bioindustrial biotechnology emerged as the fastest-growing segment, reaching $11.46 billion in 2025.
View

Top 15 AI Startups Powering India’s Self-Reliance Mission
From foundational models to applied platforms, domestic builders are creating an end-to-end AI ecosystem rooted in Indian data and needs.
View

Amazon Opens Second-Largest Asia Office in Bengaluru
The 1.1-million-square-foot campus, spread across 12 floors on a five-acre site, can accommodate more than 7,000 employees.
View

LTM Lands $100 Mn Deal With European Medtech to Support Hearing Devices
LTM’s pact with a European medtech marks its first win after the rebrand.
View
