Latest AI News

India’s AI Dream Falls Short of a Million GPUs
Without scaling GPUs and investment, India risks trailing in the frontier AI race.

Claude Mythos Explained: Everything You Need to Know About Anthropic’s Cybersecurity AI Model
Within just 48 hours of its announcement, Anthropic’s new cybersecurity-focused artificial intelligence (AI) model, Claude Mythos Preview, raised alarms across the global tech space. The San Francisco-based AI startup called it its most powerful model for cybersecurity tasks, especially finding undiscovered vulnerabilities in codebases. The company also warned that the model found thousands of high-severity vulnerabilities in “every major operating system and web browser,” which, if true, is a major concern. Anthropic has also limited the model’s release, citing its ability to hack into any system.

Anthropic temporarily banned OpenClaw’s creator from accessing Claude
“Yeah folks, it’s gonna be harder in the future to ensure OpenClaw still works with Anthropic models,” OpenClaw creator Peter Steinberger posted on X early Friday morning, along with a photo of a message from Anthropic saying his account had been suspended over “suspicious” activity.

The ban didn’t last long. A few hours later, after the post went viral, Steinberger said his account had been reinstated. Among hundreds of comments — many of them in conspiracy theory land, given that Steinberger is now employed by Anthropic rival OpenAI — was one by an Anthropic engineer. The engineer told the famed developer that Anthropic has never banned anyone for using OpenClaw and offered to help.

It’s not clear if that was the key that restored the account. (We’ve asked Anthropic about it.) But the whole message string was enlightening on many levels. To recap the recent history: This ban followed news last week that subscriptions to Anthropic’s Claude would no longer cover “third-party harnesses including OpenClaw,” the AI model company said. OpenClaw users now have to pay for that usage separately, based on consumption, through Claude’s API. In essence, Anthropic, which offers its own agent, Cowork, is now charging a “claw tax.” Steinberger said he was following this new rule and using his API but was banned anyway.

Anthropic said it instituted the pricing change because subscriptions weren’t built to handle the “usage patterns” of claws. Claws can be more compute-intensive than prompts or simple scripts because they may run continuous reasoning loops, automatically repeat or retry tasks, and tie into a lot of other third-party tools. Steinberger, however, wasn’t buying that excuse.
After Anthropic changed the pricing, he posted, “Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source.” Though he didn’t specify, he may have been referring to features added to Claude’s Cowork agent, such as Claude Dispatch, which lets users remotely control agents and assign tasks. Dispatch rolled out a couple of weeks before Anthropic changed its OpenClaw pricing policy.

Steinberger’s frustration with Anthropic was again on display Friday. One person implied that some of this is on him for taking a job at OpenAI instead of Anthropic, posting, “You had the choice, but you went to the wrong one.” To which Steinberger replied: “One welcomed me, one sent legal threats.” Ouch.

When multiple people asked him why he’s using Claude instead of his employer’s models at all, he explained that he only uses it for testing, to ensure updates to OpenClaw won’t break things for Claude users. As he put it: “You need to separate two things. My work at the OpenClaw Foundation where we wanna make OpenClaw work great for *any* model provider, and my job at OpenAI to help them with future product strategy.” Multiple people also pointed out that he needs to test against Claude because that model remains a popular choice for OpenClaw users over ChatGPT. He also heard complaints about Anthropic’s pricing change, to which he replied: “Working on that.” (So, that’s a clue about what his job at OpenAI entails.) Steinberger did not respond to a request for comment.

TechCrunch is heading to Tokyo — and bringing the Startup Battlefield with it
TechCrunch is partnering with SusHi Tech Tokyo 2026, Asia’s largest global innovation conference, taking place April 27–29 at Tokyo Big Sight. And we’re not just showing up to cover it — our very own Startup Battlefield program manager, Isabelle Johannessen, will be on the ground as a judge for the SusHi Tech Challenge, the conference’s flagship global pitch competition. For the winner, the stakes couldn’t be higher: The SusHi Tech Challenge Grand Prix recipient will be automatically entered into the TechCrunch Disrupt Startup Battlefield Top 200 — making them eligible to pitch on one of the most coveted stages in the startup world.

Now in its fourth year, SusHi Tech Tokyo — short for Sustainable High City Tech Tokyo — has grown into the largest innovation conference in Asia, drawing startups, investors, corporate partners, and city leaders from around the world. This year’s edition is the biggest yet: 750 startup exhibitors from 60 countries, more than 10,000 facilitated business meetings, and an expected 60,000 attendees across three days. The conference is organized by the Tokyo Metropolitan Government with a clear mission: bring together the world’s best innovators to build the sustainable cities of the future. On the expo floor, 62 corporate partners — including Sony, Google, Microsoft, and Mizuho — are hosting reverse pitches and actively seeking startup collaborators, making it as much a live dealmaking marketplace as a conference.

And the programming reflects that ambition. SusHi Tech 2026 is zeroing in on four technology domains reshaping society: AI, Robotics, Resilience, and Entertainment. Expect live demos of humanoid robots, sessions on autonomous driving’s software revolution, deep dives into cyber defense and climate tech, and candid conversations about how AI is rewriting the global music and anime industries.
Speakers include Howard Wright (Nvidia), Rob Chu (AWS), Eva Chen (Trend Micro), Qasar Younis (Applied Intuition), Christine Tsai (500 Global), Kathy Matsui (MPower Partners), and Tokyo governor Yuriko Koike, among many others. Roughly 60% of speakers come from outside Japan, and approximately half are women. Going to be in Tokyo? Don’t miss it. Get your tickets here.

The pitch competition drew 820 applications from 60 countries and regions — 437 international, 383 Japanese. Twenty semifinalists compete on April 27, seven finalists advance to the final on April 28, and one Grand Prix winner takes home ¥10,000,000 and automatic entry into the TechCrunch Disrupt Startup Battlefield Top 200.

The conference extends well beyond the convention floor. City leaders from 49 cities across five continents — from Los Angeles to Nairobi to Singapore — are convening for the G-NETS Leaders Summit to forge concrete commitments on climate resilience and urban sustainability. And because this is Tokyo, the experience doesn’t stop at 6 p.m.: Classical music performances from La Folle Journée, waterfront cruises along Tokyo Bay, and the Tokyo Innovation NIGHTs networking series round out the program.

The official SusHi Tech Tokyo 2026 app is your command center on the ground. Before you even arrive, AI-powered matching recommends the right startups, investors, and partners for you to connect with — and lets you book meeting rooms in advance. On-site, a GPS floor map, QR business card exchange, and real-time push notifications keep you oriented across the sprawling Tokyo Big Sight venue. Download for iOS or Android.

SusHi Tech Tokyo 2026 runs April 27–29 at Tokyo Big Sight. Business days are April 27–28; Public Day (free admission) is April 29.

Stalking victim sues OpenAI, claims ChatGPT fueled her abuser’s delusions and ignored her warnings
After months of conversations with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced he’d discovered a cure for sleep apnea and that powerful people were coming after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the tool to stalk and harass his ex-girlfriend.

Now the ex-girlfriend is suing OpenAI, alleging the company’s technology enabled the acceleration of her harassment, TechCrunch has exclusively learned. She claims OpenAI ignored three separate warnings that the user posed a threat to others, including an internal flag classifying his account activity as involving mass-casualty weapons. The plaintiff, referred to as Jane Doe to protect her identity, is suing for punitive damages. She also filed a temporary restraining order Friday asking the court to force OpenAI to block the user’s account, prevent him from creating new ones, notify her if he attempts to access ChatGPT, and preserve his complete chat logs for discovery. OpenAI has agreed to suspend the user’s account but has refused the rest, according to Doe’s lawyers. They say the company is withholding information about specific plans for harming Doe and other potential victims that the user may have discussed with ChatGPT.

The lawsuit lands amid growing concern over the real-world risks of sycophantic AI systems. GPT-4o, the model cited in this and many other cases, was retired from ChatGPT in February. The case is brought by Edelson PC, the firm behind the wrongful death suits involving teenager Adam Raine, who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas, whose family alleges Google’s Gemini fueled his delusions and a potential mass-casualty event before his death. Lead attorney Jay Edelson has warned that AI-induced psychosis is escalating from individual harm toward mass-casualty events.
That legal pressure is now colliding directly with OpenAI’s legislative strategy: The company is backing an Illinois bill that would shield AI labs from liability even in cases involving mass deaths or catastrophic financial harm. OpenAI did not respond in time for comment. TechCrunch will update the article if the company responds.

The Jane Doe lawsuit lays out in detail how that liability played out for one woman over several months. Last year, the ChatGPT user in the lawsuit (whose name is withheld to protect his identity) became convinced that he had invented a cure for sleep apnea after months of “high volume, sustained use of GPT-4o.” When no one took his work seriously, ChatGPT told him that “powerful forces” were watching him, including using helicopters to surveil his activities, according to the complaint. In July 2025, Jane Doe urged him to stop using ChatGPT and to seek help from a mental health professional. He instead turned back to ChatGPT, which assured him he was “a level 10 in sanity” and helped him double down on his delusions, per the lawsuit.

Doe had broken up with the user in 2024, and he used ChatGPT to process the split, according to emails and communications cited in the lawsuit. Rather than push back on his one-sided account, it repeatedly cast him as rational and wronged, and her as manipulative and unstable. He then took these AI-generated conclusions off the screen and into the real world, using them to stalk and harass her. This manifested in several AI-generated, clinical-looking psychological reports that he distributed to her family, friends, and employer. Meanwhile, the user continued to spiral. In August 2025, OpenAI’s automated safety system flagged him for “Mass Casualty Weapons” activity and deactivated his account.
A human safety team member reviewed the account the next day and restored it, even though his account may have contained evidence that he was targeting and stalking individuals, including Doe, in real life. For example, a September screenshot the user sent to Doe showed a list of conversation titles including “violence list expansion” and “fetal suffocation calculation.”

The decision to reinstate is notable following two recent school shootings in Tumbler Ridge, Canada, and at Florida State University (FSU). OpenAI’s safety team had flagged the Tumbler Ridge shooter as a potential threat, but higher-ups reportedly decided not to alert authorities. Florida’s attorney general this week opened an investigation into OpenAI’s possible link with the FSU shooter.

According to the Jane Doe lawsuit, when OpenAI restored her stalker’s account, his Pro subscription wasn’t reinstated alongside it. He emailed the trust and safety team to sort it out, copying Doe on the message. In his emails, he wrote things like: “I NEED HELP VERY FAST, PLEASE. PLEASE CALL ME!” and “this is a matter of life or death.” He claimed he was “in the process of writing 215 scientific papers,” which he was producing so fast he didn’t “even have time to read” them. Included in those emails was a list of dozens of AI-generated “scientific papers” with titles like: “Deconstructing Race as a Biological Category_ Legal, Scientific, and Horn of Africa Perspectives.pdf.txt.”

“The user’s communications provided unmistakable notice that he was mentally unstable and that ChatGPT was the engine of his delusional thinking and escalating conduct,” the lawsuit states. “The user’s stream of urgent, disorganized, and grandiose claims, along with a concrete ChatGPT-generated report targeting Plaintiff by name and a sprawling body of purported ‘scientific’ materials, was unmistakable evidence of that reality. OpenAI did not intervene, restrict his access, or implement any safeguards.
Instead, it enabled him to continue using the account and restored his full Pro access.”

Doe, who claims in the lawsuit that she was living in fear and could not sleep in her own home, submitted a Notice of Abuse to OpenAI in November. “For the last seven months, he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise,” Doe wrote in her letter to OpenAI requesting the company permanently ban the user’s account. OpenAI responded, acknowledging the report was “extremely serious and troubling” and that it was carefully reviewing the information. Doe never heard back.

Over the next couple of months, the user continued to harass Doe, sending her a series of threatening voicemails. In January, he was arrested and charged with four felony counts of communicating bomb threats and assault with a deadly weapon. Doe’s lawyers allege this validates warnings both she and OpenAI’s own safety systems had raised months earlier, warnings the company allegedly chose to ignore. The user was found incompetent to stand trial and committed to a mental health facility, but a “procedural failure by the State” means he will soon be released to the public, according to Doe’s lawyers.

Edelson called on OpenAI to cooperate. “In every case, OpenAI has chosen to hide critical safety information — from the public, from victims, from people its product is actively putting in danger,” he said. “We’re calling on them, for once, to do the right thing. Human lives must mean more than OpenAI’s race to an IPO.”

Last 24 hours: Save up to $500 on your TechCrunch Disrupt 2026 pass
This is it. The clock is running out. Tonight is your last chance to lock in savings of up to $500 on your TechCrunch Disrupt 2026 pass. These discounts end at 11:59 p.m. PT. Register here to secure yours with the limited-time offer.

This year, Disrupt takes over San Francisco’s Moscone West from October 13–15, bringing together 10,000 founders, VCs, operators, and tech leaders for a tightly curated, three-day experience focused on real outcomes. With 300+ startups expected to showcase their innovations across the venue, the intensity of the live pitch competition Startup Battlefield 200, and curated networking designed to drive results, Disrupt isn’t just another conference. It’s where momentum is built.

Disrupt isn’t about wandering between sessions. It’s about intentional connections and curated experiences designed for how people actually grow in tech. If you’re hands-on in tech, Disrupt was built for you. Founders meet investors actively backing breakthrough ideas. VCs cut through the noise to discover startups aligned with their investment focus. Operators exchange real-world lessons on building, scaling, and shipping what’s next. Aspiring innovators get a front-row seat to tomorrow’s tech.

Each Disrupt brings together 250+ of the most influential names in tech, leaders who have shaped the industry and continue to define what’s next. Keep an eye on the Disrupt 2026 event page as the agenda goes live to see who will take the stage this year.

At 11:59 p.m. PT tonight, prices go up and this opportunity will be gone. Disrupt will still be filled with the same founders, investors, and operators you’ll meet. The only difference is what you paid to be there. If Disrupt is part of your 2026 strategy, make the move now. Secure your pass, lock in the savings, and step into the conversations that move your business forward. Register before today ends.

India's Data Centre Boom Outpaces Power Planning
As data centres expand rapidly, rising energy demand strains grids and threatens India’s clean-power ambitions, warns a parliamentary panel.

Google Rolls Out AI Mode Agentic Features in India, Enables Restaurant Booking via Search
Google on Friday announced the rollout of agentic capabilities in AI Mode in India. The update introduces the ability to discover and book restaurant reservations directly through Search. According to the Mountain View-based tech giant, the feature targets a practical use case: helping users complete real-world tasks more efficiently. Google claims the new agentic capabilities can handle multi-step queries and reduce the effort of searching across platforms manually.

Y Combinator Partners with Emergent and Polaris School of Technology to Foster Early-Stage Talent in India
This collaboration introduces the Vibecon Student Track, a hackathon where participants can pitch their innovative ideas and connect with YC.

Amazon Ramps Up AI Investments with $200 Bn Capex Plan, Says Andy Jassy
The chips division has reached an annual revenue run rate of over $20 billion, growing at triple-digit rates.

OpenAI’s New $100 Plan Brings 5x More Codex Usage
OpenAI also reset rate limits for current $200 tier members to ensure uninterrupted service during the transition.

Govt Introduces New Portal for Tech MSMEs to Access US Market
The India–US Trade Facilitation Portal aims to simplify compliance, link exporters to US buyers, boost services and SME access.
