Latest AI News

India’s IT Layoff Wave Revives Debate Over Taxing Severance Pay
Layoffs and AI shocks revive calls to exempt severance pay from tax as worker protection gaps widen.
View

The 2.4% Revenue Fall at TCS Signals Indian IT's AI Nightmare
Strong deal wins mask a deeper shift as AI-led efficiency begins to compress traditional IT services revenue.
View

Intel and Google Bet on CPUs Again for AI Infrastructure
The companies said Intel Xeon CPUs will continue to power Google Cloud systems across AI, inference, and general-purpose workloads.
View

Alibaba Quietly Tops Global AI Video Charts as OpenAI's Sora Fades
Alibaba has launched its new model, HappyHorse-1.0, quietly topping global video charts just as OpenAI’s Sora project shuts down.
View

Meet Harshita Arora, Y Combinator’s Youngest General Partner at 25
A self-taught dropout from Saharanpur, Arora built, scaled, and now backs startups, rewriting Silicon Valley’s playbook.
View

TCS Eyes Deeper Ties with OpenAI, Anthropic, Mistral Beyond Traditional GTM
COO Aarthi Subramanian said TCS is looking to embed AI across infrastructure, platforms and industry solutions.
View

ChatGPT finally offers $100/month Pro plan
OpenAI announced on Thursday something that power users have been asking for forever: a $100/month plan. Until now, the lineup ran from free (which now includes ads) and an $8/month Go plan (which also includes ads) to a $20/month Plus plan (ad free) and all the way up to a $200/month Pro plan (also ad free). OpenAI’s pricing plan page currently does not list a $200/month plan at all. However, that highest tier is still available, OpenAI confirmed to TechCrunch.

The model maker says Plus (which remains at $20/month) and the new $100 Pro tier are geared to support daily usage of ChatGPT’s coding tool Codex. The $100 Pro plan will offer 5x more Codex capacity than the Plus plan. OpenAI makes no bones about the fact that this new pricing tier is meant to challenge Anthropic, which has long had a $100/month option for Claude.

“The new $100 Pro Tier is designed to give developers more practical coding capacity for the money, especially during high-intensity work sessions where limits matter most. Compared with Claude Code, Codex delivers more coding capacity per dollar across paid tiers, with the difference showing up most clearly during active coding use,” an OpenAI spokesperson tells TechCrunch.

One thing to know: OpenAI is offering even higher Codex limits on the $100 plan through May 31. So anyone who tries the new tier, goes relatively mad with coding, and never gets a rate warning: be advised that such a situation likely won’t last.

None of the plans offer unlimited usage. The $200 plan, however, offers 20x higher limits than Plus. The model maker promises in its FAQ that this is enough to support “your most demanding workflows continuously, even across parallel projects.” Both Pro plans offer the same core features; the main difference is the rate limits, the company says.

The spokesperson also says that more than 3 million people globally are using Codex every week, “up 5x in the past three months, with usage growing more than 70% month over month.”
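The tier arithmetic is worth a quick sanity check. Here is a short, purely illustrative Python script using only the multipliers OpenAI has stated, with Plus treated as a 1x baseline (absolute limits aren’t public, so only the ratios are meaningful):

```python
# Illustrative arithmetic only: compares OpenAI's stated Codex rate-limit
# multipliers against monthly price. Plus is treated as a 1x baseline;
# absolute usage limits are not public, so only the ratios matter.
tiers = {
    "Plus ($20)": {"price": 20, "multiplier": 1},
    "Pro ($100)": {"price": 100, "multiplier": 5},   # 5x the Plus limits
    "Pro ($200)": {"price": 200, "multiplier": 20},  # 20x the Plus limits
}

for name, tier in tiers.items():
    per_dollar = tier["multiplier"] / tier["price"]
    print(f"{name}: {per_dollar:.3f} Plus-units of capacity per dollar")
```

Run it and the $20 and $100 tiers come out identical at 0.050 Plus-units per dollar, while the $200 tier doubles that at 0.100: under the stated multipliers, the $100 plan buys headroom, not a discount.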
View

Sierra’s Bret Taylor says the era of clicking buttons is over
Bret Taylor, co-founder and CEO of Sierra, a startup that builds customer service AI agents for enterprises, is convinced that the way humans interact with software will change in the near future.

Last month, Sierra launched Ghostwriter, an agent designed to build other agents. With this “agent as a service” tool, the startup intends to replace traditional click-based web applications with natural language. Users simply describe what they need, prompting Ghostwriter to autonomously create and deploy a specialized agent to execute the task.

The idea of replacing software with language-driven prompts is intriguing in large part because many of the tools enterprises pay for are not used regularly, contends Taylor, who was formerly co-CEO of Salesforce. “You sign into Workday when you onboard as a new employee, and maybe for open enrollment,” Taylor told the audience at the HumanX conference taking place this week in San Francisco.

Instead of learning to navigate complex systems, he argued, users will soon use natural language to complete tasks without ever interacting with the software interface. “I truly think that’s where the world is going,” Taylor said. He added that Sierra is already leveraging Ghostwriter to deploy agents at “unparalleled speeds.” As an example, Taylor noted that his startup implemented an agent for Nordstrom in just four weeks.

Sierra announced last fall that it reached $100 million in annual revenue run rate (ARR), less than 21 months after its founding. The company was last valued at $10 billion when it raised a $350 million round led by Greenoaks Capital in September.

“Most companies don’t want to make software,” Taylor said. “They want solutions to their problems.”

While a fundamental shift in software may be coming as Taylor predicts, several technologists and investors tell TechCrunch that, for now, AI agent implementation is far from autonomous. Many companies claiming to offer AI agents, including Sierra and legal AI startup Harvey, employ “forward-deployed” engineers who must constantly update and fine-tune customer agents to ensure they work as intended.
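Sierra hasn’t published Ghostwriter’s interface, but the “agent as a service” loop Taylor describes, turning a natural-language request into a deployed specialized agent, can be caricatured in a few lines. This is a conceptual sketch only; every name here (AgentSpec, build_agent, deploy) is invented for illustration:

```python
# Conceptual sketch of the "agent as a service" pattern described above.
# Ghostwriter's actual API is not public; all names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """What a builder agent might derive from a natural-language request."""
    goal: str
    tools: list[str] = field(default_factory=list)

def build_agent(request: str) -> AgentSpec:
    # A real builder agent would have an LLM plan this step; the toy
    # version just keyword-matches the request against a tool catalog.
    catalog = {"refund": "payments_api", "order": "order_lookup", "email": "email_sender"}
    tools = [tool for keyword, tool in catalog.items() if keyword in request.lower()]
    return AgentSpec(goal=request, tools=tools)

def deploy(spec: AgentSpec) -> str:
    # Stand-in for deployment; returns a made-up endpoint for the new agent.
    return f"https://agents.example.com/{abs(hash(spec.goal)) % 10_000}"

spec = build_agent("Handle refund requests for damaged orders")
print(spec)           # AgentSpec(goal=..., tools=['payments_api', 'order_lookup'])
print(deploy(spec))
```

A real builder agent would have an LLM do the planning step; the point of the shape is that the user never touches a form or a button.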
View

Google and Intel deepen AI infrastructure partnership
Google and Intel announced an expanded multiyear partnership on Thursday under which Google Cloud will continue using Intel AI infrastructure and the two companies will keep developing processors together. Google Cloud will use Intel’s Xeon processors, including the latest Xeon 6 chips, for AI, cloud, and inference tasks. Google has used Intel’s various Xeon processors for decades.

The companies will also expand their co-development of custom infrastructure processing units (IPUs), which help accelerate and manage data center tasks by offloading them from CPUs. This chip development partnership, which started in 2021, will focus on custom ASIC-based IPUs. Intel declined to share any information regarding pricing for the deal.

The expansion comes as the industry is hungry for CPUs. While GPUs are used for developing and training AI models, CPUs are crucial for running AI models and for general AI infrastructure.

“AI is reshaping how infrastructure is built and scaled,” Intel chief executive Lip-Bu Tan said in a company press release. “Scaling AI requires more than accelerators — it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand.”

More companies have been turning their focus to CPUs in recent months as a shortage of the chips grows. SoftBank-owned Arm Holdings recently announced the Arm AGI CPU, the first chip the semiconductor giant has produced itself, amid a worldwide crunch for CPUs.
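The training-versus-serving split the article describes is easy to demonstrate with standard tooling. Here is a minimal PyTorch sketch, with a toy model standing in (not anything Google or Intel have described), that prepares a network for CPU inference using dynamic int8 quantization:

```python
import torch
import torch.nn as nn

# Toy stand-in for a model that was trained elsewhere (e.g., on GPUs).
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
model.eval()

# Dynamic quantization rewrites the Linear layers to int8 kernels,
# cutting memory traffic, often the real bottleneck for CPU inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.inference_mode():
    out = quantized(torch.randn(1, 512))  # runs entirely on CPU
print(out.shape)  # torch.Size([1, 10])
```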
View

Is Anthropic limiting the release of Mythos to protect the internet — or Anthropic?
Anthropic said this week that it limited the release of its newest model, dubbed Mythos, because it is too capable of finding security exploits in software relied upon by users around the world. Instead of unleashing Mythos on the public, the frontier lab will share it with a group of large companies and organizations that operate critical online infrastructure, from Amazon Web Services to JPMorgan Chase. OpenAI is reportedly considering a similar plan for its next cybersecurity tool.

The ostensible idea is to let these big enterprises get ahead of bad actors who could leverage advanced LLMs to penetrate secure software. But the “e-word” in the sentence above is a hint that there might be more to this release strategy than cybersecurity, or the hyping of model capabilities.

Dan Lahav, the CEO of the AI cybersecurity lab Irregular, told TechCrunch in March, before the release of Mythos, that while the discovery of vulnerabilities by AI tools matters, the specific value of any weakness to an attacker depends on many factors, including how it can be combined with others. “The question I always have in my mind,” Lahav said, “is did they find something that is exploitable in a very meaningful way, whether individually or as part of a chain?”

Anthropic says Mythos is able to exploit vulnerabilities far more effectively than its previous model, Opus. But it’s not clear that Mythos is actually the be-all and end-all of cybersecurity models. Aisle, an AI cybersecurity startup, said it was able to replicate much of what Anthropic says Mythos accomplished using smaller, open-weight models. Aisle’s team argues that these results show there is no single best deep learning model for cybersecurity; the right model instead depends on the task at hand.

Given that Opus was already seen as a game changer for cybersecurity, there’s another reason that frontier labs may want to limit their releases to big organizations: it creates a flywheel for big enterprise contracts, while making it harder for competitors to copy their models using distillation, a technique that leverages frontier models to train new LLMs on the cheap.

“This is marketing cover for fact that top-end models are now gated by enterprise agreements and no longer available to small labs to distill,” David Crawshaw, a software engineer and CEO of the startup exe.dev, suggested in a social media post. “By the time you and I can use Mythos, there will be a new top-end rev that is enterprise only. That treadmill helps keep the enterprise dollars flowing (which is most of the dollars) by relegating distillation companies to second rank,” said Crawshaw.

That analysis jibes with what we’re seeing in the AI ecosystem: a race between frontier labs developing the largest, most capable models, and companies like Aisle that rely on multiple models and see open source LLMs, often from China and often allegedly developed through distillation, as a path to economic advantage.

The frontier labs have been taking a harder line on distillation this year, with Anthropic publicly revealing what it says are attempts by Chinese firms to copy its models, and three leading labs (Anthropic, Google, and OpenAI) teaming up to identify distillers and block them, according to a Bloomberg report. Distillation is a threat to the business model of frontier labs because it eliminates the advantages conferred by spending huge amounts of capital to scale.

Blocking distillation, then, is already a worthwhile endeavor, but selective release also gives the labs a way to differentiate their enterprise offerings just as enterprise becomes the key to profitable deployment. Whether Mythos or any new model truly threatens the security of the internet remains to be seen, and a careful rollout of the technology is a responsible way forward. Anthropic didn’t respond by press time to our questions about whether the decision also relates to distillation concerns, but the company may have found a clever approach to protecting both the internet and its bottom line.
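Distillation itself is a standard, openly published technique, which is part of why it is hard to police. Here is a minimal sketch of the classic recipe, with toy linear models and random inputs standing in for prompts sent to a frontier model’s API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(128, 10)   # stand-in for a large frontier model
student = nn.Linear(128, 10)   # the much cheaper model being trained
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature; softens the teacher's distribution

for step in range(100):
    x = torch.randn(32, 128)   # in practice: prompts sent to the teacher's API
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # KL divergence between softened distributions (the classic Hinton loss),
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The economics follow from the shape: the expensive part (the teacher) is only ever queried, never trained, so a distiller pays inference prices for training-grade signal.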
View

Meta AI app climbs to No. 5 on the App Store after Muse Spark launch
Meta’s AI app has seen a sizable jump in installs following Wednesday’s launch of the company’s newest AI model, Muse Spark, its first model release under Alexandr Wang, the head of Meta’s Superintelligence Labs who was recruited from Scale AI last year to overhaul the social giant’s AI efforts.

According to new data from market intelligence provider Appfigures, consumer demand for the Meta AI app has pushed it from No. 57 right before Wednesday’s launch of Muse Spark to No. 5 on the U.S. App Store on Thursday, a move suggesting a flood of new installs.

Meta says its new AI model, which is available on both the web and mobile, is a significant upgrade over its earlier Llama 4 models. It’s also the company’s latest attempt to catch up to rivals like OpenAI and Anthropic, an effort that has already cost Meta billions in AI talent recruiting, in addition to its $14.3 billion investment in Scale AI.

Currently, Muse Spark accepts multimodal input, including voice, text, and images, and has been designed to perform well on a number of tasks, like helping people learn about their health and reasoning through complex questions in areas like science and math. It can also aid in visual coding, letting users create websites and mini-games from prompts. Plus, Meta AI is able to launch multiple subagents to handle users’ questions, the company said.

The model will roll out to other platforms, including WhatsApp, Instagram, Facebook, Messenger, and Meta’s AI glasses, in the weeks ahead. Alongside the model’s launch, the Meta AI mobile app and website were upgraded with a new look and feel and now allow users to switch between modes depending on the task.

Despite the recent growth, Meta AI’s app still lags behind the AI chatbots from other top model makers, including OpenAI’s ChatGPT (No. 1), Anthropic’s Claude (No. 2), and Google’s Gemini (No. 3). Wang pointed to the new high rank in a post on X earlier Thursday: “Meta AI is up to #6 in the App Store overnight, and still growing :) Also who knew the 7-Eleven app was so popular.”

Appfigures data indicates that Meta AI’s app has been installed a total of 60.5 million times worldwide across both the App Store and Google Play, with 25 million of those downloads occurring just this year. Over the past five months, Meta AI app downloads have increased by 138% compared with the first five months of the app’s availability. India is now Meta AI’s top market by downloads, followed by the U.S., Brazil, Pakistan, and Mexico, according to Appfigures.
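Meta hasn’t published Muse Spark’s internals, but the subagent behavior it describes is a familiar fan-out pattern: a coordinator delegates a question to specialized workers concurrently and merges their answers. A toy sketch with invented names:

```python
import asyncio

async def subagent(role: str, question: str) -> str:
    """Stand-in for a specialized model call."""
    await asyncio.sleep(0.1)
    return f"[{role}] partial answer to: {question!r}"

async def answer(question: str) -> str:
    roles = ["research", "math", "safety-check"]
    # Fan out to all subagents concurrently, then merge their outputs.
    parts = await asyncio.gather(*(subagent(r, question) for r in roles))
    return "\n".join(parts)  # a real system would synthesize, not concatenate

if __name__ == "__main__":
    print(asyncio.run(answer("How far away is the Moon?")))
```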
View

After data breach, $10B valued startup Mercor is having a month
Six months ago, Mercor was flying high after raising a massive $350 million Series C that valued the AI data training startup at $10 billion. But after admitting on March 31 that it was the target of a data breach, the company has been facing a world of trouble.

Since then, a hacker group has claimed to have obtained 4TB of stolen data from Mercor’s systems, including candidate profiles, personally identifiable information, employer data, source code, and API keys. Mercor has not commented on the authenticity of the data, reiterating only that it is investigating and “will continue to communicate with our customers and contractors directly as appropriate and devote the resources necessary to resolving the matter as soon as possible.”

Mercor said its data breach was the result of a hack of the open source tool LiteLLM, which is so popular that it’s downloaded millions of times a day. For 40 minutes, the tool harbored credential-harvesting malware, rogue software that steals login credentials. Those stolen credentials were used to gain access to more software and accounts, which in turn yielded more credentials, and so on.

While there has been no formal accounting of how much data was scooped up from Mercor, there have been repercussions all the same. Meta has paused its contracts with Mercor indefinitely, sources told Wired. (Mercor declined to comment to TechCrunch about this.) Like other contract AI data training companies, Mercor handles some of the model makers’ biggest trade secrets: the custom data sets and processes they use to teach their models. This work is so important to them that even after Meta spent $14.3 billion on Mercor’s competitor Scale AI, it continued working with Mercor.

In a spot of good news for Mercor (maybe… we’ll see): OpenAI also confirmed to Wired that it was investigating its exposure in Mercor’s breach but said it had not paused or ended its contracts at the time. However, TechCrunch has heard from multiple sources that other large model makers may also be weighing their relationships with Mercor after the breach, although we have not confirmed enough details to name names as of yet.

In the meantime, five of Mercor’s contractors have filed lawsuits over their alleged personal data exposure, Business Insider reports. Whether these suits represent a serious threat or are merely opportunistic remains to be seen. (Mercor declined to comment.)

One lawsuit, reviewed by TechCrunch, even named LiteLLM and Delve as defendants. This is wild, and perhaps a stretch, but here’s the connection: LiteLLM used AI compliance startup Delve to obtain its security certifications. Delve has been accused by an anonymous whistleblower of allegedly faking data for security certifications and using rubber-stamping auditors. A security certification does not directly prevent hackers from launching successful attacks, but it is intended to ensure that companies have processes in place to minimize such threats.

Although Delve has denied those allegations while simultaneously instituting operational changes, it has been dealing with a world of hurt of its own, to the point where Y Combinator severed ties with the company. LiteLLM ditched Delve and is now working with another AI compliance startup to obtain its security certifications again. LiteLLM also published a complete report on the security incident. But Mercor itself was not a Delve customer, the company confirmed to TechCrunch.

If, however, the fallout for Mercor continues, a lot of revenue could be at stake. The company was reportedly on pace to hit over $1 billion in annualized revenue earlier this year, before the data leak, an anonymous source told The Information.
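The LiteLLM compromise described above is a textbook software supply-chain attack, and the standard defense is hash pinning: refusing to install any dependency artifact whose digest doesn’t match a pinned value (pip supports this natively via --require-hashes). Here is a standalone sketch of the same check; the filename and digest are placeholders, and nothing here is something the article says Mercor or LiteLLM actually used:

```python
import hashlib
import sys

# Pinned artifact digests; the value below is a placeholder, not a real hash.
PINNED = {
    "example-package-1.0.0-py3-none-any.whl": "0" * 64,
}

def verify(path: str) -> bool:
    """Return True only if the file's SHA-256 matches its pinned digest."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    expected = PINNED.get(path.rsplit("/", 1)[-1])
    return expected is not None and digest == expected

if __name__ == "__main__":
    artifact = sys.argv[1]
    if not verify(artifact):
        sys.exit(f"hash mismatch for {artifact}: refusing to install")
    print("hash ok")
```

A briefly poisoned release, like the 40-minute window described above, would then fail to install rather than quietly execute.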
View
