Latest AI News

Why Adobe’s Future Depends on Dismantling Its Past

The company faces its toughest transition yet as leadership change collides with AI disruption, reshaping how creativity is produced, priced, and controlled.

1 month ago

Kyndryl, Gloplax Partner to Build & Scale GCCs for Enterprises

The companies’ joint capabilities will help enterprises establish and modernise GCCs with integrated advisory, technology and operational excellence.

1 month ago

Anthropic wins injunction against Trump administration over Defense Department saga

A federal judge has sided with Anthropic in its twisty legal battle with the Trump administration, awarding the tech company an injunction against the government’s recent order that labeled it a “supply chain risk,” the Wall Street Journal reports. On Thursday, Judge Rita F. Lin of the Northern District of California ordered the Trump administration to rescind its recent designation of Anthropic as a security risk, as well as to back off its order that federal agencies cut ties with the company. “It looks like an attempt to cripple Anthropic,” Lin reportedly said during the court proceedings. Lin ultimately argued that the government’s orders had flouted free speech protections for the company. The drama between the Pentagon and Anthropic erupted last month over a dispute concerning guidelines for the government’s usage of the AI company’s software. Anthropic had reportedly sought to enforce certain limits on how the government could use its AI models, such as banning their use in autonomous weapons systems or mass surveillance. The government disagreed with those limitations, ultimately labeling the company a supply chain risk, a designation typically reserved for foreign actors. President Trump further ordered federal agencies to cut ties with the company. Not long afterward, Anthropic sued the agency, along with Defense Secretary Pete Hegseth. The White House has spent recent weeks attacking the company, characterizing it as “a radical-left, woke company” that is jeopardizing America’s “national security.” Anthropic CEO Dario Amodei, meanwhile, has called the Defense Department’s actions “retaliatory and punitive.” On the heels of Judge Lin’s ruling, Anthropic sent TechCrunch the following statement: “We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.” TechCrunch has separately reached out to the White House for comment.

1 month ago

Sora is Gone. Here’s What Creators are Using Instead in 2026

With Sora no longer available, a new wave of AI video generators, from Kling to Veo 3.0, is redefining how creators produce cinematic content.

1 month ago

Wikipedia cracks down on the use of AI in article writing

As AI makes inroads into the worlds of editorial and media, websites are scrambling to establish ground rules for its usage. This week, Wikipedia banned the use of AI-generated text by its editors, although it stopped short of banning AI outright from the site’s editorial processes. In a recent policy change, the site now states that “the use of LLMs to generate or rewrite article content is prohibited.” This new language updates and clarifies previous, vaguer language that stated that LLMs “should not be used to generate new Wikipedia articles from scratch.” AI in Wikipedia articles has become a contentious issue among the site’s sprawling, volunteer-driven community of editors. 404 Media reports that the new policy, which was put to a vote by the site’s editors, garnered majority support, 40 to 2. That said, the new policy still makes room for continued AI use in some editorial processes. “Editors are permitted to use LLMs to suggest basic copyedits to their own writing, and to incorporate some of them after human review, provided the LLM does not introduce content of its own,” the new policy states. “Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”

1 month ago

You can now transfer your chats and personal information from other chatbots directly into Gemini

When it comes to AI chatbots, there’s currently a war on for consumer attention. All the big chatbot providers are looking to increase their user counts, and, in a minor coup for itself, Google just made it significantly easier for users of those other chatbots to defect to Gemini. On Thursday, the company announced what it calls “switching tools,” new widgets designed to let users transfer “memories” (basically chunks of personal information) and even entire chat histories from other chatbots directly into Gemini. Users can easily share “key preferences, relationships, and personal context” in this way, the company says. The idea is to make it significantly easier to adopt Google’s AI assistant, as users won’t have to spend large amounts of time re-training Gemini on who they are and what they want. The memory feature works like this: Gemini will suggest a prompt that the user can enter into their current chatbot, which will then generate a response that can be copied and pasted back into Gemini. In this fashion, Gemini coaches the user on what kinds of information it would be helpful to know about them, while also helping facilitate the transmission of that information back into its own archive. “Once you import these memories, Gemini will understand the same key facts you’ve shared with other apps, like your interests, your sibling’s name, or where you grew up,” the company says. “Instead of starting over from scratch, you can quickly get Gemini up to speed on what matters most to you.” When it comes to importing chat histories, Google says all you need to do is upload them in a zip file. It’s relatively easy to export chat logs as zips from most chatbots, including from ChatGPT and Claude. This allows users to “seamlessly pick up right where you left off,” the company says. Google says users can also search through those old chats.
ChatGPT remains the big kahuna in the consumer chatbot market, with OpenAI announcing last month that it has reached 900 million weekly active users. Gemini, despite Google’s vast distribution advantages, including its default placement across Android devices and the Chrome browser, has lagged in consumer mindshare. Last month, Google shared its own numbers during Alphabet’s fourth-quarter earnings call, saying Gemini had surpassed 750 million monthly active users. This move is clearly aimed at helping Google catch up.
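The zip-based import described above can be sketched programmatically. Below is a minimal example of inspecting such an export before handing it to Gemini; it assumes the archive contains a top-level `conversations.json` holding a list of conversation objects with a `title` field, which is the shape of ChatGPT’s data export at the time of writing (other chatbots may lay their exports out differently):

```python
import json
import zipfile


def list_conversation_titles(export_zip_path: str) -> list[str]:
    """Return the conversation titles found in a chatbot export zip.

    Assumes a top-level conversations.json containing a list of
    conversation objects with a "title" field (ChatGPT-style export).
    """
    with zipfile.ZipFile(export_zip_path) as zf:
        with zf.open("conversations.json") as f:
            conversations = json.load(f)
    return [c.get("title", "(untitled)") for c in conversations]
```

A quick sanity check like this can confirm an export is well-formed before uploading it, since a malformed archive would presumably fail Gemini’s import.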

1 month ago

Data centers get ready — the Senate wants to see your power bills

Two U.S. senators on Thursday fired the latest salvo in an increasingly active front against data centers and their energy use. Senators Josh Hawley and Elizabeth Warren sent a letter to the U.S. Energy Information Administration (EIA) asking it to collect details on energy use from data centers, and on how that use is affecting the grid. The senators urged the EIA “to establish a mandatory annual reporting requirement for data centers and other large loads,” they wrote in the letter, which TechCrunch has viewed. “As electricity demand growth continues to accelerate after years of relative stagnation, the lack of reliable, standardized data on large load energy consumption poses significant risks to effective grid planning and oversight.” Wired was first to report on the letter. The letter isn’t the first move by politicians to try to place new regulatory requirements on data centers. Sen. Bernie Sanders and Rep. Alexandria Ocasio-Cortez said Wednesday they would introduce legislation that would halt new data center construction until Congress could come to an agreement on how to regulate AI. Energy use by data centers has exploded in recent years. Google’s data centers, for example, doubled their consumption between 2020 and 2024. The trend isn’t likely to change in the near future: by 2035, planned new data centers will nearly triple the sector’s energy demand. The EIA is a government agency tasked with collecting and analyzing data related to the energy system, sort of like a Census Bureau for the grid. It was established in 1977 under the Department of Energy in the wake of the oil shocks of the early 1970s. For decades, the EIA has gathered a wealth of information about energy use in the U.S., including costs, generating sources, and energy-efficiency programs. It also tracks how different sectors use energy, though it focuses on only four very broad categories: residential, commercial, industrial, and transportation.
Hawley and Warren are also asking the EIA to collect more granular information on data centers, including how energy consumption differs between AI computing tasks and general cloud services. The senators have very specific requests regarding what that data should look like, including hourly, annual, and peak energy loads and the rates companies pay. They also want to know about any grid upgrades required by the addition of new large loads, how those upgrades are paid for, and whether data center customers participate in demand response programs, in which utilities pay heavy users to reduce their use for a period of time. The letter calls out EIA administrator Tristan Abbey, who in December said the agency would be an “essential player” in collecting data on energy demand from data centers. Hawley and Warren requested the agency reply to their letter by April 9. It’s possible the process is already underway, though the EIA hasn’t publicly said so. Changes to EIA surveys must go through the Office of Management and Budget process, which requires a public comment period. “We get requests for analysis very often. We get requests for an actual new product less frequently,” Abbey said at the public event in December. “It takes probably about two years to launch a new survey from scratch. But there are authorities that exist where you can avoid the two-year process by conducting surveys of smaller scope, but potentially a sharper signal.”

1 month ago

OpenAI abandons yet another side quest: ChatGPT’s erotic mode

OpenAI has put the kibosh on yet another project, at least for the time being. On Thursday, the Financial Times reported that the AI company would be “indefinitely” pausing plans to develop an “erotic” mode for ChatGPT. The proposed “adult mode,” which CEO Sam Altman first floated in October, had inspired considerable controversy from tech watchdog groups as well as from OpenAI’s own staff. In January, a meeting between company executives and its council of advisers got heated, with one of the advisers cautioning that OpenAI could be in the process of developing a “sexy suicide coach,” The Wall Street Journal previously reported. Amid all of the criticism, the release of the feature was delayed multiple times. The FT notes that the erotic feature now has no timeline for release. When reached for comment by TechCrunch, an OpenAI spokesperson said the company had “nothing further to add.” Adult mode is only the latest side quest that OpenAI has abandoned over the past week as the AI giant consolidates its focus. On Tuesday, the company quietly announced that it would be deprioritizing Instant Checkout, a feature within ChatGPT that had sought to make the chatbot a purchase portal where users could buy items from e-commerce websites. Then, on Wednesday, the company surprisingly announced that it would be shutting down Sora, its AI video generator. Sora had been criticized for inspiring the deluge of AI “slop” that has flooded the internet since its launch in 2024. All of the changes come approximately a week after The Wall Street Journal reported that OpenAI would be engaging in a “major strategy shift” to pivot the company away from distractions so that it could zero in on its primary focuses: business users and coders. Why has OpenAI chosen this particular moment to do away with the distractions and lock in?
Perhaps it’s because it has been feeling the heat from Anthropic, which has been tenaciously releasing a series of coding and business tools over the past few months, and has seen substantial success in wooing customers as a result. The two companies have also been openly feuding over Pentagon contracts, a battle OpenAI appears to have won. Three weeks ago, it announced a $200 million agreement with the Department of Defense, while Anthropic is now locked in a legal battle with the agency. In short, if recent developments tell us anything, the future of AI is probably less about porn and memes and more about business and war.

1 month ago

A ‘pound of flesh’ from data centers: one senator’s answer to AI job losses

The signs that AI could lead to mass job displacement are already piling up: entry-level job postings in the U.S. have sunk 35% since 2023, mass layoffs have swept across Big Tech, and even AI leaders themselves are warning about what’s coming. Backstage at the Axios AI Summit in Washington on Wednesday, Sen. Mark Warner (D-VA) said a venture capitalist recently told him he’s writing software investments down to zero in large part due to the strides of Anthropic’s Claude, and a major law firm told him it’s not hiring first-year associates because AI can now handle much of the work once assigned to junior lawyers. Warner says the fear of AI-related job loss is “palpable,” even as data from one AI company suggests AI hasn’t yet started taking jobs. As those fears grow, they’re bleeding over into a different fight: who should foot the bill. Warner has a proposal: tax the data centers powering the AI boom and use that revenue to help workers through the transition. He hasn’t introduced legislation yet, but the idea is gaining urgency as public anger toward AI and data centers grows. Across the U.S., there’s been pushback on data centers, including a bill introduced on Wednesday by Sen. Bernie Sanders (D-VT) and Rep. Alexandria Ocasio-Cortez (D-NY) calling for a data center moratorium. The loudest concerns are about noise, pollution, and rising electricity costs. But there’s a bubbling resentment underneath those concerns: a resistance to suffering the potential ill effects of having a data center in your backyard that powers the technology some fear will replace workers. Warner doesn’t plan to support his colleagues’ bill. On stage at the event, he said: “A data center moratorium simply means China is gonna move quicker, and this is one where we can’t lose.” There’s no stuffing the genie back into the bottle when it comes to AI and data centers, he added.
And while Warner believes in strict requirements that ensure data centers don’t pass their water and power costs to residents, he told TechCrunch he thinks there’s another way for communities to extract their “pound of flesh” in a way that addresses the underlying job loss fears. “I’ve thought for a long time there’s an obligation from the industry to help figure this out and help pay for it, but one of the questions I was asking was, Who should pay?” Warner told TechCrunch. “Should it be the chip makers, Jensen [Huang, Nvidia’s CEO]? Should it be the large language model companies? Should it be the Goldman Sachs of the world who are using these tools to cut back on a number of first-year associates?” Ultimately, he said, he thinks the “easiest place to extract the pound of flesh is probably going to be from the data centers.” That could look like putting data center tax revenue toward training for new nurses or funding AI upskilling programs, so long as there’s a “tangible benefit to communities” as they navigate this economic transition AI companies have foisted on them. Warner sees it as a way to balance the need to build data centers with some obligation to the communities bearing their costs. The idea is not without precedent. Warner pointed to Henrico County, Virginia, which used the tax revenue from a local data center to kickstart a new affordable housing project. Finding a way to connect data centers to a tangible benefit to the community will be essential, he says, because otherwise, “the pitchforks are coming out.” The public mood suggests he could be on to something. According to a recent NBC News poll, AI has a lower public approval rating than Immigration and Customs Enforcement (ICE), with 46% of registered voters viewing AI negatively compared to only 26% viewing it positively.
In Virginia, that resentment is playing out in a proposal to repeal the state’s tax breaks for data center buildouts, which cost the state and localities nearly $2 billion a year in lost tax revenue in one of the world’s largest data center markets. Warner says other states might follow suit. AI and data centers, he said, are “easy to demonize.”

1 month ago

Cohere launches an open-source voice model specifically for transcription

Enterprise AI company Cohere on Thursday launched its first voice model: Transcribe is an open-source automatic speech recognition model that can be used for tasks like note-taking and speech analysis. Relatively light at just 2 billion parameters, the model is meant for use with consumer-grade GPUs for those who want to self-host it. It currently supports 14 languages: English, French, German, Italian, Spanish, Portuguese, Greek, Dutch, Polish, Chinese, Japanese, Korean, Vietnamese, and Arabic. Cohere says Transcribe beats models such as Zoom Scribe v1, IBM Granite 4.0 1B, ElevenLabs Scribe v2, and Qwen3-ASR-1.7B Speech on the Hugging Face Open ASR leaderboard, achieving an average word error rate (WER) of 5.42, lower than any other model on the benchmark. The company claims Transcribe had an average win rate of 61% over other models when human evaluators assessed its transcriptions for accuracy, coherence, and usability. However, the model fell behind its rivals when it had to transcribe Portuguese, German, and Spanish. Cohere says Transcribe can process 525 minutes of audio in a minute, which is high for its class of model. The company is planning to integrate Transcribe into its enterprise agent orchestration platform, North, and is making the model available through its API for free. The model will also be available on Model Vault, Cohere’s managed inference platform. Speech recognition models are growing increasingly popular as demand rises for note-taking and dictation apps like Granola and Wispr Flow. Earlier this year, Cohere reportedly told investors that it was generating annual recurring revenue of $240 million in 2025, and its CEO, Aidan Gomez, was cited as saying that the startup may go public “soon.”
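For context on the benchmark numbers above: word error rate is the standard ASR accuracy metric, computed as the word-level edit distance between a reference transcript and the model’s output, divided by the number of reference words (so 5.42 means roughly 5.4 errors per 100 words). A minimal sketch of the metric follows; note that this is an illustration, not Cohere’s or the leaderboard’s implementation, which apply their own text normalization (casing, punctuation) before scoring:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)


# One dropped word out of six reference words: 1/6 ≈ 0.167, i.e. a WER of ~16.7%.
score = wer("the cat sat on the mat", "the cat sat on mat")
```

Production evaluations typically use a library such as jiwer rather than hand-rolled code, but the underlying computation is the same.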

1 month ago

Conntour raises $7M from General Catalyst, YC to build an AI search engine for security video systems

The surveillance tech industry today is in the spotlight, but not for the best reasons. With controversy around U.S. Immigration and Customs Enforcement tapping into Flock’s camera network to surveil people, and home camera maker Ring drawing criticism for building new features that would enable law enforcement to ask homeowners for footage of their neighborhoods, there’s currently a broad debate around safety, privacy, and who gets to watch whom. But controversy doesn’t erase markets, and the continued improvement of vision-language models has only put more wind in the sails of companies building new ways to help businesses monitor what goes on on their premises. According to Matan Goldner, co-founder and CEO of video surveillance startup Conntour, the ethics around this topic are important enough that his company is quite picky about which clients it sells to. That may not come off as sound business sense for a startup barely two years in, but Goldner says he can afford to do this because Conntour already has several large government and publicly listed customers, one of which is Singapore’s Central Narcotics Bureau. “The fact that we have such big customers allows us to select them and to stay in control […] We’re really in control of who is using it, what is the use case, and we can select what we think is moral and, of course, legal. We use all our judgment, and we make decisions based on specific customers that we’re okay [to work with] because we know how they will use it,” Goldner told TechCrunch in an exclusive interview. That traction has helped Conntour with more than being selective. Investors have taken note: the startup recently raised a $7 million seed round from General Catalyst, Y Combinator, SV Angel, and Liquid 2 Ventures. Goldner said the round closed within 72 hours. “I think I scheduled around 90 meetings in like eight days, and just after three days — we started on Monday and by Wednesday afternoon, we were done,” he said.
Regardless, Conntour may be right to be picky, especially given how powerful AI tools in this space have become. The company’s own video platform uses AI models to let security personnel query camera feeds using natural language to find any object, person, or situation in the footage, in real time: a Google-like search engine made specifically for security video feeds. It can also monitor and detect threats on its own based on preset rules, and surface alerts automatically. Unlike legacy systems that depend on preset definitions or parameters to detect specific objects, motion patterns, or behaviors, Conntour claims its system uses natural language and vision-language models, which lends it a high degree of flexibility and usability. A user may ask, “Find instances of someone in sneakers passing a bag in the lobby,” and Conntour’s system will quickly search all the recorded footage or live video feeds to return relevant results. And because the platform bakes in AI models, users can simply ask questions about the footage and get answers in text, accompanied by the relevant video feeds, as well as generate incident reports. The company’s real selling point, however, is scalability. Goldner explained that the platform mainly differs from other AI video search services because it is designed to efficiently scale to systems comprising thousands of camera feeds. In fact, he said, Conntour’s system can monitor up to 50 camera feeds off a single consumer GPU like Nvidia’s RTX 4090. The company does this by using multiple models and logic systems, then identifying which of them the algorithm should use for each query so that it consumes the least computing power while giving users the best results. Conntour says its system can be deployed fully on premises, completely in the cloud, or a mix of both. It can plug into most security systems already in use, or serve as a full surveillance platform on its own.
But there’s a long-running problem in the video surveillance industry: the quality of surveillance is only as good as the footage captured. It’s hard to make out details from the footage of a poorly lit parking lot recorded by a low-resolution camera with a dirty lens, for example. Goldner says Conntour hedges against this by providing a confidence score along with its search results: if a camera feed’s quality isn’t good enough, the system will return results with low confidence levels. Going forward, Goldner says the biggest technical problem to solve is bringing the full level of LLM capability to its system while maintaining its efficiency. “We have two things that we want to do at the same time, and they contradict each other. On one hand, we want to provide full natural language flexibility, LLM-style, to let you ask anything. And on the other hand there’s efficiency, so we want to make it use very few resources, because again, processing [thousands] of feeds is just insane. This contradiction is the biggest technical barrier and technical problem in our space, and what we’re working really, really hard to solve.”

1 month ago

ByteDance’s new AI video generation model, Dreamina Seedance 2.0, comes to CapCut

OpenAI may be dialing back its efforts in the video generation market with the shutdown of its Sora app, but ByteDance on Thursday confirmed that its new audio and video model, Dreamina Seedance 2.0, is now rolling out in its editing platform, CapCut. ByteDance says the model allows creators to draft, edit, and sync video and audio content using prompts, images, or reference videos. The phased rollout will begin with CapCut users in Brazil, Indonesia, Malaysia, Mexico, the Philippines, Thailand, and Vietnam, with more markets added over time. The news of the launch in CapCut follows a recent report that said the model’s global rollout would be paused while ByteDance worked to address intellectual property issues that drew criticism from Hollywood over alleged copyright infringement. That likely explains the limited number of markets where the model is currently available within CapCut. In China, the model is available to users of ByteDance’s Jianying app. The video generation model works without reference images, even if the creator only uses a few words to describe the scene they have in mind, ByteDance says in its announcement. The model is also good at rendering realistic textures, movement, and lighting across a range of visual perspectives and angles, which the company notes could be used to edit, enhance, or correct creators’ own footage. Another use case would be allowing creators to test potential ideas based on early concepts or sketches before filming the real video. In addition, Dreamina Seedance 2.0 can be used for a wide range of content, including cooking recipes, fitness tutorials, business or product overviews, and videos with motion- or action-focused content, an area where AI video models have historically faced challenges, the company explains. At launch, the model supports clips of up to 15 seconds across six aspect ratios. In CapCut, the model will roll out across different areas, including editing features such as AI Video and generation tools like Video Studio.
It will also come to ByteDance’s AI generation platform, Dreamina, and its marketing platform, Pippit. Given its ability to create realistic content, ByteDance says it has added safety restrictions, so the model won’t be able to make videos from images or videos that contain real faces. CapCut will also block unauthorized generation of intellectual property. (Presumably, if those restrictions were working reliably, the model would already be available in the United States; the limited rollout suggests more tweaks are still being made.) Content produced by Dreamina Seedance 2.0 will also include an invisible watermark, which will help identify content made with the model when it’s shared off-platform, ByteDance added. This could aid in things like takedown requests from rights holders in the event that the model lets copyrighted content through. ByteDance says it will partner with experts and creative communities as the model rolls out to iterate on and improve its capabilities.

1 month ago
