Latest AI News

Runway CEO says AI could help Hollywood make 50 films instead of one $100M blockbuster
Cristóbal Valenzuela, the co-founder and CEO of AI video-generation startup Runway, now valued at north of $5 billion, may not be winning over more hearts and minds in the anti-AI creative crowd with his recent comments about AI's potential in Hollywood. At the Semafor World Economy Summit this week, the AI executive suggested that studios should take the $100 million they spend on a single film and put it toward 50 films, in order to increase their output and their chances of getting a hit.

"If you're spending a hundred million dollars on making one feature film, which is 90 minutes, imagine taking a hundred million dollars and spending it on, like, 50 movies," Valenzuela said. "Same quality. Same amount of output, visually. But you make way more content. So you have way better chances of hitting something. It's a quantity problem."

That bumps up against the notion that a film represents a studio's investment in a piece of art, and that the movie business is one where studios win if they back the right creative team. With AI, Valenzuela is suggesting the whole industry can be boiled down to a numbers game — and if you produce enough content, you'll eventually succeed.

In his interview, the founder acknowledged there has been controversy about bringing AI into a creative market like film and TV production, but stated that "things are changing fast." He said he believes much of the early skepticism around AI came more from a place of fear and misunderstanding, but that now most people understand what these powerful AI tools can do. The company has been developing its AI world models to help the creative class do "more work better and faster," he said. Runway works with a large number of studios and creators, and the technology is already helping to bring production costs down, the founder claimed.

This is already happening. Take, for example, the soon-to-arrive $70 million "Bitcoin: Killing Satoshi" movie, which will be the first studio-quality AI feature film on the market. Its use of AI brought down production costs from an estimated $300 million, TheWrap reported. Amazon has also turned to AI to cut production costs for film and TV, as have studios in India. Sony Pictures said it's planning to use the technology. Even James Cameron has come out in support of AI as a way to keep blockbuster movies in production without layoffs.

Asked which side of the business is seeing costs decline because of AI, Valenzuela said, "It's everywhere. It's in the pre-production side, it's in scripting, it's in planning, it's in execution, visual effects — this is already beginning to be deployed at scale."

AI may make it easier to produce more content. But critics dispute the tech industry's belief that scaling creativity with AI will automatically result in more great art. Runway, however, believes this to be true. "There's a crisis of creativity in the industry because of the economic incentives of how the content is made," Valenzuela said. He compared the production of video to something like books, where now, he said, there are some 25 million books produced yearly — more than anyone could read. "Of course, I don't read 25 million books…but the world is in a much better place because there's more people who manage to tell a story or say something [to] the world," he said.

(For what it's worth, Valenzuela's figure appears to be wrong. Data from UNESCO [the UN's Educational, Scientific and Cultural Organization] indicates that 2.2 million new titles are published every year. But he could be counting self-published e-books and things like Wattpad stories, many of which are now also produced with AI and are often left out of traditional estimates.)

In any event, the idea is to flood the market with content, even if only some will become hits. That's what he hopes the movie industry will now do, thanks to AI.
“We have this internal saying at Runway that the best movies are yet to be made because we haven’t heard from probably, like, the billions of people who haven’t had access to this…technology,” Valenzuela said.

Google is now targeting bad ads over bad actors
Google said Thursday it blocked a record 8.3 billion ads globally in 2025 — up from 5.1 billion the year before. But the company suspended far fewer advertiser accounts than that surge might suggest, raising questions about how it polices its platform.

The search giant attributed the disparity to its growing use of AI, particularly its Gemini models — Google's family of AI systems — which Google says allow it to detect and block policy-violating ads earlier and with greater precision. Its AI-driven systems caught more than 99% of such ads last year before they were shown to users, the company said.

Both findings come from Google's 2025 Ads Safety Report, and together they reflect a broader change in enforcement. While more problematic ads are being stopped, fewer advertiser accounts are being suspended — suggesting a growing emphasis on blocking individual ads alongside broader account-level enforcement. Google said the rise in blocked ads also reflects the growing use of generative AI by scammers to produce deceptive content at scale, with its Gemini models helping detect patterns across large campaigns and block them earlier.

The shift also mirrors a wider push by Google to integrate its Gemini models more deeply into its core products and infrastructure, including advertising, where the company is increasingly using AI to automate campaign creation, detect policy violations, and respond to emerging threats in real time.

Among the blocked ads and suspended accounts, 602 million ads and 4 million advertiser accounts were linked to scams, the company said. Google removed over 1.7 billion ads and suspended 3.3 million advertiser accounts in the U.S. in 2025, with ad network abuse, misrepresentation, and sexual content among the most common violations. In India, Google's largest market by users, it blocked 483.7 million ads — nearly double the previous year — even as account suspensions fell to 1.7 million from 2.9 million, with trademarks, financial services, and copyright issues among the top violations.

At a virtual briefing, Keerat Sharma, VP and general manager of ads privacy and safety at Google, told reporters the company has shifted toward more targeted, AI-driven enforcement "at a much more granular level, on a creative level, as opposed to using a much more blunt instrument, like advertiser suspensions." He added that the approach has helped reduce incorrect suspensions by 80% year over year.

Google's layered defenses, including advertiser verification (a process that requires businesses to confirm their identity before running ads), are designed to prevent bad actors from creating accounts in the first place, Sharma said, adding that this has contributed to the decline in suspensions. The numbers, Sharma said, are likely to fluctuate over time as Google rolls out new defenses and bad actors adapt, with the company aiming to stop harmful ads as early in the pipeline as possible.

Roblox’s AI assistant gets new agentic tools to plan, build, and test games
Roblox is introducing new agentic features to help developers plan, build, and test games on its platform, the company told TechCrunch exclusively. Roblox is revamping Roblox Assistant, its plain-language AI tool for game development, to help creators throughout the entire development process.

The company says that AI tools that take in a prompt and output a solution in one step can often fail to truly capture a creator's original intent. That's why it's introducing an enhanced "Planning Mode" that transforms Assistant into a collaborative partner that can analyze a game's code and data model, ask clarifying questions, and translate prompts into editable action plans. Planning Mode helps developers create a plan for their game, get feedback to refine details, finalize the approach, and then implement that plan. Creators can tweak the plan and add context to ensure their intent is clearly reflected before any changes are made.

For example, if a creator tells Assistant to "create a park mini game with a fountain and foliage where characters have to collect coins," the Assistant may ask what visual style they want the park to have, with options like cartoony, realistic, and fantasy. Or the Assistant may ask how they want the park's assets, like a fountain and foliage, to be created, offering options such as building from scratch, using models from the Creator Store, or a mix of both.

Once there is a plan in place, Planning Mode will leverage Roblox's other AI tools while it creates the game. These include two new tools announced today, Mesh Generation and Procedural Model Generation, which are designed to speed up development. Mesh Generation makes it easy to add fully textured meshes, or 3D objects, directly into the game world. Roblox says that during the early stages of development, developers often create placeholder assets to understand how the player will interact with the world. With Mesh Generation, creators can quickly create 3D models instead of having to rely on low-quality placeholders. For example, creators can ask Assistant to generate a campfire, then add light to make it more realistic, and then set the scene at night.

Roblox will also soon introduce "Procedural Models" to allow developers to create editable 3D models with code and Assistant. Since Assistant understands 3D space and physical relationships, creators can use prompts to place and scale objects based on other objects in the scene. Attributes like the number of shelves in a bookcase or the height of a staircase can be adjusted dynamically, creating editable building blocks that can be refined and reused elsewhere.

"The launch of our agentic features in Roblox Studio reduces barriers between creative vision and execution," said Nick Tornow, Senior Vice President of Engineering, in a statement to TechCrunch. "Creating with Planning Mode and our Procedural Generation tools is a powerful new method for creators to turn their concepts into gameplay. Assistant works as a multi-step, collaborative development partner — accelerating the process of planning, building, and testing, so creators can get from idea to reality faster."

As Planning Mode executes against the plan, it will use playtesting tools to read output logs, capture screenshots, use inputs like a keyboard and mouse to check design and gameplay, and identify bugs and provide feedback to the Assistant so it can fix them automatically. "With the new capabilities across planning, building, and testing, Assistant is better at using agentic loops to test different aspects of the game, surface suggested solutions, and then incorporate the results into future planning loops, creating a self-correcting system that becomes more accurate over time," Roblox explained in a blog post.
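The self-correcting loop Roblox describes — build a step, playtest it, feed bugs back, retry — can be sketched in a few lines. This is a hypothetical illustration of the pattern only; none of the names (`Plan`, `build`, `playtest`, `run_agent`) are Roblox APIs.

```python
# Minimal sketch of a plan -> build -> test agentic loop.
# All names here are invented for illustration, not Roblox APIs.
from dataclasses import dataclass, field

@dataclass
class Plan:
    steps: list
    done: list = field(default_factory=list)

def build(step):
    # Stand-in for asset/code generation; returns an artifact description.
    return f"built:{step}"

def playtest(artifact):
    # Stand-in for automated playtesting (reading logs, screenshots,
    # simulated input). Here we only flag artifacts marked "broken".
    return ["crash on spawn"] if "broken" in artifact else []

def run_agent(plan: Plan, max_retries: int = 2):
    for step in plan.steps:
        artifact = build(step)
        for _ in range(max_retries):
            bugs = playtest(artifact)
            if not bugs:
                break
            # Feed the bug report back into the next build attempt.
            artifact = build(f"{step} (fixing: {bugs[0]})")
        plan.done.append(artifact)
    return plan

result = run_agent(Plan(steps=["fountain", "foliage", "coin pickup"]))
print(result.done)
```

The key property is the inner retry loop: test output is routed back into the builder, which is what makes the system "self-correcting" rather than one-shot.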
Roblox also announced that it’s working on enabling multiple AI agents to work together in parallel, run long, complex workflows in the cloud, and handle tasks like coding, testing, and creating more realistic game characters. It also wants to ensure creators can seamlessly use Claude, Cursor, Codex, and other third-party tools with Roblox Studio.

AI traffic to US retailers rose 393% in Q1, and it’s boosting their revenue too
As of March, AI traffic to U.S. retailers' websites rose by 269% over the previous 12 months, continuing the momentum of the holiday shopping season, when AI traffic was up by 693%, according to new data released on Thursday by Adobe. And in the first three months of 2026, AI traffic had risen 393% compared to a year earlier, as more consumers used AI assistants for online shopping.

The change in traffic sources isn't the only impact. AI visitors are converting better, engaging at higher rates, spending more time on sites, and driving higher revenue per visit, the data shows, often reversing trends from only a year ago, when regular customers were worth more to retailers.

Adobe's insights are based on its analysis of online transactions, via its Adobe Analytics division, which covers over 1 trillion visits to U.S. retail sites. The analysis also relied on a survey of over 5,000 U.S. respondents about their use of AI when shopping, as well as the company's new AI Content Visibility Checker tool, designed to test retail websites for accessibility by LLMs.

In Adobe's survey, 39% of people said they used AI for online shopping, and 85% said it improved their experience. These findings are likely due to how AI helps people narrow down products to find what they need and tap into discounts. In addition, 66% of those surveyed said they now believe AI tools provide accurate results when shopping. Unlike publishers, where AI is causing referral traffic to decline, retailers are incentivized to make their sites AI-friendly.

Adobe's data found that AI traffic converted 42% better than living, breathing customers in March 2026, setting a new record. Notably, it's a reversal of a trend that told a different story only a year ago: in March 2025, AI traffic converted 38% worse than regular people. In addition, Adobe found that when a consumer lands on a retail site via an AI source, their engagement rate tends to be 12% higher than for those who arrived from non-AI sources. Shoppers also spend more time on the website (48% longer) and browse more pages (13% more pages per visit), the data shows. In terms of the top line, AI-driven revenue per visit (RPV) was 37% higher than that of non-AI traffic as of March. Just 12 months ago, regular human traffic was worth 128% more than AI traffic.

However, not all sites are ready for AI, Adobe warned. It found that roughly a quarter of the content on retailers' homepages has not been optimized for LLMs, nor has the content on category pages. Individual product pages fare even worse: around 34% of pages can't be properly accessed by AI. The company suggests that retailers work to make their sites more accessible to LLMs if they want to stay top-of-mind with online shoppers going forward.
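To make the relative figures concrete: "converted 42% better" and "converted 38% worse" are multipliers on whatever the non-AI baseline rate is. Adobe reports only the relative percentages, so the 5% baseline below is invented purely for illustration.

```python
# Illustrating what the relative conversion figures above mean numerically.
# The 5% baseline conversion rate is a made-up example value; Adobe
# publishes only the relative comparisons.
baseline = 0.05  # hypothetical conversion rate for non-AI traffic

# March 2025: AI traffic converted 38% worse than the baseline.
ai_2025 = baseline * (1 - 0.38)   # 0.031, i.e. 3.1%
# March 2026: AI traffic converted 42% better than the baseline.
ai_2026 = baseline * (1 + 0.42)   # 0.071, i.e. 7.1%

print(f"2025: {ai_2025:.3f}, 2026: {ai_2026:.3f}")
```

Whatever the true baseline, the swing from −38% to +42% means AI visitors' conversion rate more than doubled relative to regular traffic in a year.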

InsightFinder raises $15M to help companies figure out where AI agents go wrong
The role of observability tools has evolved once again. While the market for solutions to ensure tech systems' reliability has grown over the years, the center of gravity has steadily shifted from "track everything" to "control complexity and costs." Meanwhile, the rapid influx and adoption of AI agents within enterprises have added a brand-new category of workload that needs to be observed.

InsightFinder AI, a startup based on 15 years of academic research, is no stranger to this problem. The company has been using machine learning to monitor, identify, and proactively fix IT infrastructure issues since 2016, and is now attacking today's AI model reliability issue with an AI agent solution that can do everything from detection and diagnosis to remediation and prevention. The company, founded by CEO Helen Gu, a computer science professor at North Carolina State University who previously worked at IBM and Google, recently raised $15 million in a Series B round led by Yu Galaxy, TechCrunch has exclusively learned.

According to Gu, the biggest problem facing the industry today is not just monitoring and diagnosing where AI models go wrong; it's diagnosing how the entire tech stack operates now that AI is a part of it. "In order to diagnose these AI model problems, you need to actually monitor and analyze the data, the model, and the infrastructure together," Gu told TechCrunch. "It's not always a model problem or a data problem; it's a combination. Sometimes, it's simply your infrastructure."

Gu explained how that looks in real life with an anecdote: One of its customers, a major U.S. credit card company, saw that one of its fraud detection models was drifting. Because InsightFinder was monitoring all of the company's infrastructure, it was able to identify that the model drift was caused by outdated cache in some server nodes.

"The biggest misconception is that AI observability is limited to LLM evaluation during the development and testing phases. On the contrary, a sound AI observability platform should provide end-to-end feedback loop support covering the development, evaluation, and production stages," she said.

InsightFinder's newest product, dubbed Autonomous Reliability Insights, can do all this by using a combination of unsupervised machine learning, proprietary large and small language models, predictive AI, and causal inference. This base layer is data agnostic, per Gu, which lets the system ingest and analyze entire data streams to gather signals that can then be correlated and cross-validated to arrive at a root cause.

The observability space is now crowded with contenders for a share of the new market opened up by the influx of AI tools. Nearly a decade into its journey, InsightFinder has been going up against the likes of Grafana Labs, Fiddler, Datadog, Dynatrace, New Relic, and BigPanda, all of which are building capabilities to deal with the new problems presented by AI tools. But Gu isn't fazed. On the contrary, she claims InsightFinder's expertise, experience, and customizability act as a sufficient moat. "We actually rarely lose [customers] to anybody so far […] This is about the insights, right? The problem is that a lot of data scientists understand AI, but they don't understand the system. And a lot of SRE [site reliability engineering] developers understand the system, but not the AI […] They don't look at it, and they don't understand the intrinsic relationships."

Today, InsightFinder's roster of customers includes UBS, NBCUniversal, Lenovo, Dell, Google Cloud, and Comcast, and Gu attributes the success to 10 years of working to understand what large enterprise customers need. "It has come down to working with our Fortune 50 customers to polish and understand the enterprise environment requirements to deploy these kinds of models," she said. "We have been working with Dell to deploy our AI systems across the world at some of the largest customers we have. This is not something that you can take a foundational AI and just slap on the machine data to do."

Gu said the company's revenue stream is "strong," having grown "over threefold" in the past year. In fact, she says the company wasn't looking to raise this Series B at all; investors approached after the company won a seven-figure deal with a Fortune 50 company within three months. InsightFinder will use the fresh capital to make its first sales and marketing hires to expand its team of fewer than 30 people, and to invest in its go-to-market motion. The company has so far raised a total of $35 million.

Canva Wants to Scale AI Design. But Will It Lose Its Simplicity?
With Canva AI 2.0, the platform is adding memory and agentic orchestration features.

Anthropic Releases Claude Opus 4.7 as It Holds Back Mythos
Opus 4.7 can handle complex, long-running tasks, follow instructions closely, and verify its own outputs.

Microsoft's Recall Feature Faces Criticism After TotalRecall Reloaded Tool Regains Access to Data
TotalRecall Reloaded, a tool developed by cybersecurity researcher Alexander Hagenah, has raised fresh concerns about Microsoft's Windows Recall feature and how it handles sensitive user data. The research points to potential issues in how information is accessed after authentication, even though Microsoft had redesigned Recall with stronger protections. The company reportedly views the behaviour as part of its existing system design, but the findings highlight ongoing concerns about whether features that record user activity can remain both useful and secure.

Adobe’s New Firefly AI Assistant Can Perform Complex Design Tasks With Text Prompts
Adobe introduced the Firefly AI Assistant on Wednesday, an agentic conversational chatbot that can perform complex design tasks. Available inside the Firefly platform and connected to the Creative Cloud apps, the assistant can both analyse and act on natural language prompts. The company said the tool will help creators automate creative workflows while maintaining context. It is scheduled to be released in beta soon, and will be showcased by the company at its upcoming Adobe Summit 2026.

Amazon Launches AI Store to Help Users Discover and Shop AI-Powered Devices
Amazon launched an AI Store microsite within its e-commerce website and app on Thursday. The new space is dedicated to consumer tech devices that come equipped with artificial intelligence (AI) features and tools. The Seattle-based tech giant said that the AI Store is aimed at helping users discover and make informed decisions when shopping for AI-powered smartphones, laptops, smartwatches, smart TVs, and more. Apart from listing devices, the microsite also lets users browse products based on use cases.

India’s vibe-coding startup Emergent enters OpenClaw-like AI agent space
Emergent, an Indian startup known for its vibe-coding platform, has launched Wingman, a messaging-first autonomous AI agent, as it expands into a growing category of software that runs in the background to complete tasks — popularized by tools like OpenClaw and Claude from Anthropic.

The Bengaluru-based startup initially gained attention for its vibe-coding platform, which competes with tools like Cursor and Replit and lets users without technical backgrounds build full-stack applications via natural-language prompts. With Wingman, Emergent is now pushing beyond creation into execution, aiming to let AI agents handle routine tasks across tools and workflows. "The obvious next step for us was, can we help them not just build the software, but actually operate more autonomously through it?" said Mukund Jha, co-founder and CEO of Emergent. "You move from software that supports the business to software that can actively help run it."

Emergent said more than 8 million builders have used its vibe-coding platform to create and deploy software, with over 1.5 million monthly active users. Founded in 2025, the startup raised $70 million in January at a valuation of $300 million, with backing from investors including SoftBank, Khosla Ventures, and Lightspeed Venture Partners.

Wingman is designed to operate through messaging platforms such as WhatsApp, Telegram, and Apple's iMessage, allowing users to assign and monitor tasks through chat rather than adopting a new interface. At the same time, the agent runs in the background across connected tools such as email, calendars, and workplace software. Through what the startup calls "trust boundaries," it can carry out routine actions autonomously but seeks user approval for more consequential steps, an attempt to address concerns around fully autonomous systems.

The launch comes as autonomous AI agents emerge as a key battleground in the industry, with a growing number of companies racing to build tools that can complete tasks on behalf of users. Projects like OpenClaw — previously known as Clawdbot and Moltbot — have gained traction among early adopters, while players including Anthropic and Microsoft are working toward addressing this space with their own agent-based systems.

Jha told TechCrunch the decision to build Wingman inside messaging platforms was driven by how people already work. "A lot of real work already happens through chat, voice, and email — asking for something, following up, sharing context, making a decision," Jha said. "Increasingly, they'll be the main ways we work with agents too."

Like many emerging AI agents, Wingman still faces limitations. Jha said the system struggles "around consistency in really ambiguous situations, messy edge cases, unclear goals, or workflows where a lot of human judgment is needed." Wingman is being rolled out with a limited free trial, after which access will be paid; existing Emergent users can use the agent through their accounts.
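The "trust boundaries" idea — act autonomously on routine tasks, pause for approval on consequential ones — is, at its core, a policy gate in front of the agent's actions. A minimal sketch, with all action names and the routine/consequential split invented for illustration (this is not Emergent's API):

```python
# Minimal sketch of an approval gate in the spirit of "trust boundaries":
# routine actions run autonomously, consequential ones wait for the user.
# Action names and the risk policy are hypothetical.

ROUTINE = {"send_reminder", "summarize_inbox", "schedule_meeting"}

def execute(action, params, approve):
    """Run routine actions directly; ask the user before anything else.

    `approve` stands in for a yes/no prompt delivered over chat
    (e.g. a WhatsApp message asking the user to confirm).
    """
    if action in ROUTINE:
        return f"done:{action}"
    if approve(action, params):
        return f"done:{action}"
    return f"blocked:{action}"

# A consequential action is gated on the approval callback:
print(execute("send_payment", {"amount": 120}, approve=lambda a, p: False))
# A routine one runs without asking:
print(execute("summarize_inbox", {}, approve=lambda a, p: False))
```

Real systems refine this with per-tool permissions and risk scoring rather than a fixed allowlist, but the core design choice is the same: the agent's autonomy is bounded by a policy the user can inspect.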

The musician-turned-biotech-founder waiting to fundraise
When Grammy-nominated singer-songwriter Aloe Blacc got COVID despite being vaccinated and boosted, he tried to fund research for a better solution. What he quickly found out? You can't just write a check in biotech. Regulators require a commercialization plan, and philanthropy doesn't move science through clinical trials or get you a license on university IP.

Now, he's bootstrapping a cancer drug platform targeting pancreatic cancer, a disease that kills 90% of its patients, and intentionally waiting to raise from his network until peer-reviewed papers can make his case.

On this episode of TechCrunch's Equity podcast, Rebecca Bellan sits down with Aloe Blacc to talk about what happens when a creator decides to build instead of just invest, how Aloe is watching AI reshape both the biotech and music industries in real time, and his thoughts on who actually wins.

Subscribe to Equity on YouTube, Apple Podcasts, Overcast, Spotify, and all the casts. You can also follow Equity on X and Threads, at @EquityPod.
