Latest AI News

Anthropic Launches Feature to Bring Memory From ChatGPT, Gemini to Claude

The company said users can copy and paste a provided prompt into a chat with another AI service to retrieve their stored context.

2 months ago

Huawei’s AI Infrastructure Play is Hard to Ignore at MWC 2026

Huawei provides AI infrastructure, industry models, agent platforms, and applications to address sector-specific challenges.

2 months ago

Accenture to Acquire Ookla to Strengthen Network Intelligence and Experience for Enterprises

With this acquisition, Accenture seeks to optimise Wi-Fi and 5G networks, improve fraud prevention, and enable better analytics across sectors.

2 months ago

Qwen’s Core Team Shaken as Technical Lead, Researchers Exit

Developers and users on social media expressed surprise at the departures from the Qwen project.

2 months ago

SK Telecom, Supermicro & Schneider Electric Partner to Build Solutions for AI Data Centres

SK Telecom will provide operational expertise, Supermicro will supply GPU servers, while Schneider Electric will handle infrastructure design and construction.

2 months ago

Anthropic’s Annual Revenue Run Rate Climbs Towards $20 Billion, Bloomberg Reports

Anthropic’s revenue surge coincides with its growing dispute with the US Department of Defense.

2 months ago

Alibaba’s Qwen tech lead steps down after major AI push

Alibaba’s Qwen AI project has lost one of its most visible technical leaders just a day after the Chinese tech giant unveiled its new Qwen 3.5 open-weight small models. Junyang Lin, a central technical leader on Alibaba’s Qwen team, said in a post on X on Tuesday that he was “stepping down” from the project, without elaborating. He joined Alibaba in July 2019 and became part of the Qwen team in April 2023, according to his LinkedIn profile.

The abrupt departure, which drew strong reactions from colleagues and industry partners, comes as global competition among AI developers intensifies and companies race to build models rivaling those from OpenAI, Google, and Anthropic. Alibaba’s Qwen family of models has emerged as one of China’s most prominent open-weight AI efforts, with recent releases posting benchmark results that often rival systems from leading U.S. developers. The company introduced the model in April 2023 and opened it to public use that September after receiving regulatory clearance.

Alibaba introduced its Qwen 3.5 Small Model series on Monday, with four models spanning 0.8B, 2B, 4B, and 9B parameters. The systems, the company said, are native multimodal models designed for uses ranging from on-device AI deployment to lightweight agents. The launch drew attention from figures in the AI community, including Elon Musk, who wrote on X that the models showed “impressive intelligence density.”

Lin’s departure came just as the Qwen team was pushing ahead with new releases, prompting unusually strong reactions from colleagues and partners who described his role in the project as central. Wenting Zhao, a research scientist on the Qwen team, described Lin’s departure as “the end of an era,” thanking him in a post on X for helping drive the project’s advances in open source AI and engineering. Yuchen Jin, chief technology officer of AI infrastructure startup Hyperbolic, said Lin helped connect Qwen with the global developer community, recalling late-night collaboration with the team during model launches. Tiezhen Wang, head of APAC ecosystem at Hugging Face, also described Lin’s departure as “an immense loss” for the Qwen project.

The circumstances surrounding Lin’s departure remain unclear, and Lin did not respond to a request for comment. Chen Cheng, a contributor to the Qwen project, wrote that he was “heartbroken” by the news. In his post on X, Cheng appeared to address Lin directly, writing, “I know leaving wasn’t your choice,” and saying the team had been working together on model launches only hours earlier. Binyuan Hui, another member of the Qwen team, has updated his X profile to describe himself as “formerly MTS @Alibaba_Qwen,” though it is not immediately clear whether he has left the company or when the change was made. Alibaba did not respond to a request for comment on the reasons for the move or on the leadership structure of the Qwen team.

2 months ago

Why AI startups are selling the same equity at two different prices

As competition among AI startups heats up, founders and VCs are turning to novel valuation mechanisms to manufacture a perception of market dominance. Until recently, the most sought-after companies raised multiple rounds of funding in quick succession at escalating valuations. But because constant fundraising distracts founders from building their products, lead VCs have devised a new pricing structure that effectively consolidates what would have been two separate funding cycles into one.

Recent rounds employing this scheme include Aaru’s Series A. The synthetic-customer research startup raised a round led by Redpoint, which invested a large portion of its check at a $450 million valuation, The Wall Street Journal reported. Redpoint then invested a smaller portion at a $1 billion valuation, and other VCs joined at that same $1 billion price point, according to our reporting. TechCrunch was the first to report Aaru’s financing, including its multi-tiered valuation.

The approach allows desirable startups like Aaru to call themselves a unicorn (valued at more than $1 billion) even though a significant portion of the equity was acquired at a lower price. “It is a sign that the market is incredibly competitive for venture capital firms to win deals,” said Jason Shuman, a general partner at Primary Ventures. “If the headline number is huge, it’s also an incredible strategy to scare away other VCs from backing the number two and number three players.” The massive “headline” valuation creates the aura of a market winner, even though the lead VC’s average price was significantly lower. Multiple investors told TechCrunch that until recently, they had never encountered a deal in which a lead investor split its capital between two different valuation tiers in a single round.

Wesley Chan, co-founder and managing partner at FPV Ventures, views the tactic as a symptom of bubble-like behavior. “You can’t sell the same product at two different prices. Only airlines can get away with this,” he said.

In most cases, founders offer a discount to top-tier VCs because their involvement serves as a powerful market signal that helps attract talent and future capital. But since these rounds are frequently oversubscribed, startups have found a way to accommodate the excess interest: rather than turning away eager investors, they allow them to participate immediately, but at a significantly higher price. These investors are willing to pay the premium because it is the only way to secure a spot on a high-demand cap table. Another startup that gave preferential pricing to its lead investor is Serval, an AI-powered IT help desk startup, according to The Wall Street Journal. While Sequoia’s lowest entry price was at a $400 million valuation, Serval announced in December that its $75 million Series B valued the company at $1 billion.

While the high “headline” valuation can help recruit talent and attract corporate customers who may view the company as having a stronger market position than its competitors, the strategy is not without risks. Even though the true, blended valuation for these startups is lower than $1 billion, they are expected to raise their next round at a valuation higher than the headline price; otherwise it becomes a punitive down round, Shuman said. These companies are in high demand now, but they may face unexpected challenges that make it very hard to justify their high valuations. In a down round, employees and founders end up with a smaller ownership percentage of the company, and down rounds can also erode the confidence of partners, customers, future investors, and potential new hires.

Jack Selby, managing director at Thiel Capital and founder of Copper Sky Capital, warns founders that chasing extreme valuations is a dangerous game, pointing to the painful market reset of 2022 as a cautionary tale. “If you put yourself on this high-wire act, it’s very easy to fall off,” he said.
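The “blended” price a lead investor actually pays in a split-tranche round is just a weighted average: each tranche buys (amount ÷ valuation) of the company, and the blended valuation is total capital divided by total ownership sold. The tranche sizes below are hypothetical, since the actual split of Redpoint’s check was not disclosed; this is only an illustrative sketch of the arithmetic:

```python
def blended_valuation(tranches):
    """Blended (weighted-average) valuation across funding tranches.

    tranches: list of (amount_invested, valuation) pairs.
    Each tranche purchases amount / valuation of the company, so the
    blended valuation is total capital / total ownership sold.
    """
    total_amount = sum(amount for amount, _ in tranches)
    total_ownership = sum(amount / valuation for amount, valuation in tranches)
    return total_amount / total_ownership

# Hypothetical split mirroring the structure described: most of the
# check at a $450M valuation, a smaller slice at $1B.
tranches = [(40e6, 450e6), (10e6, 1e9)]
print(round(blended_valuation(tranches) / 1e6))  # ≈ 506, i.e. ~$506M blended
```

Under these illustrative numbers, the lead investor’s blended price is roughly half the $1 billion “headline” valuation the later tranche supports.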

2 months ago

Claude Code rolls out a voice mode capability

Anthropic is bringing voice mode to Claude Code, the company’s AI coding assistant for developers. The launch marks a significant step toward more hands-free, conversational coding workflows. Thariq Shihipar, an engineer at Anthropic, announced the feature’s gradual release on X on Tuesday. According to Shihipar, voice mode is now live for about 5% of users, with a broader rollout planned in the coming weeks.

Voice mode is designed to streamline the coding experience by letting users interact with Claude Code through spoken commands. To enable it, type /voice to toggle it on, then speak your request, such as “refactor the authentication middleware,” and Claude Code will execute it.

“Voice mode is rolling out now in Claude Code. It’s live for ~5% of users today, and will be ramping through the coming weeks. You’ll see a note on the welcome screen once you have access. /voice to toggle it on! pic.twitter.com/P7GQ6pEANy” — Thariq (@trq212), March 3, 2026

It remains unclear what the limitations of the new capability are, including whether there are caps on voice interactions or specific technical constraints. It is also unknown whether the feature was built in collaboration with a third-party AI voice provider such as ElevenLabs, with whom Anthropic was reportedly in talks. The company has not yet responded to requests for comment from TechCrunch. Anthropic launched Voice Mode for its standard Claude chatbot last May, allowing users to interact with the model via voice for a variety of general-purpose tasks.

The competition in AI coding assistants is fierce, with Microsoft’s GitHub Copilot, Cursor, Google, and OpenAI all vying for developers’ attention. Yet Claude Code stands out as one of the most widely adopted tools in the market today. In February, Anthropic reported that Claude Code’s run-rate revenue surpassed $2.5 billion, more than doubling since the beginning of 2026, and weekly active users have doubled since January. Meanwhile, Claude’s mobile app has seen a dramatic jump in user growth after the company refused to allow the Department of Defense to use its AI for domestic surveillance or autonomous weapons. In the aftermath, the app soared to the top of the U.S. App Store charts, overtaking ChatGPT.

2 months ago

ChatGPT’s new GPT-5.3 Instant model will stop telling you to calm down

Take a breath, stop spiraling. You’re not crazy, you’re just stressed. And honestly, that’s okay. If you felt immediately triggered reading those words, you’re probably also sick of ChatGPT constantly talking to you as if you’re in some sort of crisis and need delicate handling. Now, things may be improving. OpenAI says its new model, GPT-5.3 Instant, will reduce the “cringe” and other “preachy disclaimers.”

According to the model’s release notes, the GPT-5.3 update will focus on the user experience, including things like tone, relevance, and conversational flow: areas that may not show up in benchmarks but can make ChatGPT feel frustrating, the company said. Or, as OpenAI put it on X, “We heard your feedback loud and clear, and 5.3 Instant reduces the cringe.”

In the company’s example, the same query drew responses from the GPT-5.2 Instant model and the GPT-5.3 Instant model. In the former, the chatbot’s response starts, “First of all — you’re not broken,” a common phrase that has been getting under everyone’s skin lately. In the updated model, the chatbot instead acknowledges the difficulty of the situation without trying to directly reassure the user.

The insufferable tone of ChatGPT’s 5.2 model has been annoying users to the point that some have even canceled their subscriptions, according to numerous posts on social media. (It was a huge point of discussion on the ChatGPT Reddit, for instance, before the Pentagon deal stole the focus.) People complained that this type of language, where the bot talks to you as if it assumes you’re panicking or stressed when you were just seeking information, comes across as condescending. Often, ChatGPT replied to users with reminders to breathe and other attempts at reassurance, even when the situation didn’t warrant it. This made users feel infantilized in some cases, or as if the bot was making assumptions about the user’s mental state that just weren’t true. As one Reddit user recently pointed out, “no one has ever calmed down in all the history of telling someone to calm down.”

It’s understandable that OpenAI would attempt to implement guardrails of some kind, especially as it faces multiple lawsuits accusing the chatbot of leading people to experience negative mental health effects, in some cases including suicide. But there’s a delicate balance between responding with empathy and providing quick, factual answers. After all, Google never asks you about your feelings when you’re searching for information.

2 months ago

AI companies are spending millions to thwart this former tech exec’s congressional bid

If you’ve seen the recent ads attacking New York assembly member Alex Bores, you’ll know he used to work for Palantir, the AI company that’s powering the controversial raids and high-volume deportation efforts from U.S. Immigration and Customs Enforcement. The ads even accuse Bores of having made hundreds of thousands of dollars building the tech for ICE and “powering their deportations.” But that’s not quite the whole story. “I quit Palantir specifically over its work with ICE in 2019,” Bores told TechCrunch on last week’s episode of Equity.

Now he’s running for New York’s 12th congressional district, with Big Tech billionaires funding outside groups targeting his campaign. The ads are funded by a super PAC called Leading the Future, which, ironically, has the backing of Palantir co-founder Joe Lonsdale, as well as OpenAI President Greg Brockman, VC firm Andreessen Horowitz, AI search startup Perplexity, and other Silicon Valley heavy-hitters. The PAC has raised $125 million to go after candidates in state elections who are introducing AI legislation and to support candidates with a light-to-no-touch approach to regulating AI.

“They have committed to spending at least $10 million against me…because they know I am their biggest threat in their quest for unbridled control over the American worker, over our kids’ minds, climate, and our utility bills,” Bores said. “They’re targeting me to make an example of me.”

He said his background working in tech, including at Palantir and several startups, is exactly why Leading the Future made him its first target. “I actually deeply understand the technology and I can’t be dismissed as ‘this person just doesn’t understand it,’” Bores said, adding that if elected, he would be only the second Democrat in Congress with a computer science degree.

Bores incurred the ire of Silicon Valley after sponsoring the RAISE Act, an AI transparency bill that was signed into law in December. The law requires large AI labs (specifically, those making more than $500 million in revenue) to have a publicly available safety plan in place, to stick to it, and to report when a catastrophic safety incident has occurred. It’s the sort of light-touch law that other industries might kill for: more disclosure and planning than proactive oversight.

Bores says he doesn’t believe Leading the Future wants to see any AI regulation unless, as the PAC has said, it’s at the federal level. Over the last year, states have been fighting against industry to protect their rights to regulate AI in the absence of a federal standard. In December, President Trump signed an executive order directing federal agencies to challenge “onerous” state AI laws, like Bores’ RAISE Act. Bores pointed to his campaign’s proposed national AI governance blueprint, spanning eight issue areas and 43 policy recommendations, adding that anyone serious about federal AI regulation should be supporting him. He has also introduced legislation that would force companies to disclose what goes into their training data and to embed metadata standards that would make synthetic content easier to trace.

Leading the Future isn’t the only Silicon Valley-backed PAC getting involved in the midterms. Meta has put $65 million into two super PACs, American Technology Excellence Project and Mobilizing Economic Transformation Across (Meta) California, to elect state-level candidates who are friendly to the AI and tech industry. And AI companies, industry groups, and top executives donated at least $83 million in 2025 to federal campaigns and committees.

“This is not a ‘We want to have a piece of the conversation,’” Bores said. “This is: ‘We want to intimidate elected officials and browbeat anyone who doesn’t agree with us.’”

“The average assembly race in New York raises maybe $100,000 total, maybe less,” Bores continued. “For one company (Meta) to be spending $65 million on state races, let alone everything they’re doing in Congress — I think it’s tough for people to understand how much that is above the norm.”

For his part, Bores has garnered the support of a separate Anthropic-backed PAC called Public First Action, which is spending $450,000 on the New Yorker. Public First Action also describes itself as pro-AI, but with a focus on transparency, safety, and public oversight. Leading the Future, he says, represents “an extremely small minority of voices” who see any regulation as a threat to AI progress and who just “want to let it rip.”

Among Bores’ base of supporters are tech workers at the very firms whose leaders want to thwart his campaign, part of a broader pattern of grassroots organizing inside tech companies over how AI is deployed and who it serves. On the other end of the spectrum are the minority of people who “want to pretend AI never existed and put the genie back in the bottle and burn all the data centers,” Bores said. He thinks most Americans are somewhere in the middle: they use AI and see its potential but are concerned at how fast it’s moving. “[They] wonder if the government is up to the task of ensuring we have a future that benefits the many instead of the few,” Bores said.

2 months ago

X says it will suspend creators from revenue-sharing program for unlabeled AI posts of ‘armed conflict’

X says it’s going to take action against creators who post AI videos of armed conflict without disclosing that the content is AI-generated. On Tuesday, X’s head of product, Nikita Bier, announced that people who use AI technology to mislead others in this way will be suspended from the company’s Creator Revenue Sharing Program for 90 days. If they continue to post misleading AI content after the suspension lifts, they’ll be permanently removed from the program.

“During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people,” Bier wrote on X. “Starting now, users who post AI-generated videos of an armed conflict — without adding a disclosure that it was made with AI — will be suspended from Creator Revenue Sharing for 90 days.”

X says it will identify the misleading posts through a combination of tools used to detect generative AI content, as well as through its crowdsourced fact-checking system, Community Notes. X’s Creator Revenue Sharing Program lets creators generate income by posting on the platform and sharing in advertising revenue if their posts are popular. While designed to boost the amount of engaging content found on X, critics of the program say it incentivizes creators to post sensationalized content, like clickbait or other posts designed to spark outrage. Some have also criticized its lax content controls and its requirement that creators be paid X subscribers to participate.

Given how easy it is to use AI to make misleading photos and videos, X’s ban on financially rewarding creators for this type of content is only a limited fix. Outside of war, AI media is often used to create political misinformation or push deceptive products in the influencer economy, all of which will still be allowed under the new policy.

2 months ago
