Latest AI News

Karnataka to Provide ₹2 Lakh Loans for AI Training to Backward Class Engineering Graduates
As many as 250 students can avail this facility, CM Siddaramaiah said in the Karnataka Budget 2026.

Anthropic to challenge DOD’s supply-chain label in court
Dario Amodei said Thursday that Anthropic plans to challenge in court the Department of Defense's decision to label the AI firm a supply-chain risk, a designation he has called "legally unsound." The statement comes a few hours after the DOD officially designated Anthropic a supply-chain risk following a weeks-long dispute over how much control the military should have over AI systems. A supply-chain risk designation can bar a company from working with the Pentagon and its contractors. Amodei drew a firm line that Anthropic's AI will not be used for mass surveillance of Americans or for fully autonomous weapons, while the Pentagon believed it should have unrestricted access for "all lawful purposes."

In his statement, Amodei said the vast majority of Anthropic's customers are unaffected by the supply-chain risk designation. "With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts," he said. As a preview of what Anthropic will likely argue in court, Amodei said the Department's letter labeling the firm a supply-chain risk is narrow in scope. "It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain," Amodei said. "Even for Department of War contractors, the supply chain risk designation doesn't (and can't) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts."

Amodei reiterated that Anthropic had been having productive conversations with the DOD over the last several days, conversations that some suspect got derailed when an internal memo he sent to staff was leaked. In it, Amodei characterized rival OpenAI's dealings with the Department of Defense as "safety theater." OpenAI has signed a deal to work with the DOD in Anthropic's place, a move that has sparked backlash among OpenAI staff. Amodei apologized for the leak in his Thursday statement, saying the company did not intentionally share the memo or direct anyone else to do so. "It is not in our interest to escalate the situation," he said. Amodei said the memo was written within "a few hours" of a series of announcements, including a presidential Truth Social post saying Anthropic would be removed from federal systems, then Defense Secretary Pete Hegseth's supply-chain risk designation, and finally the Pentagon's deal announcement with OpenAI. He apologized for the tone, calling it "a difficult day for the company," and said the memo didn't reflect his "careful or considered views." Written six days ago, he added, it's now an "out-of-date assessment."

He finished by saying Anthropic's top priority is to ensure American soldiers and national security experts maintain access to important tools in the middle of ongoing major combat operations. Anthropic is currently supporting some of the U.S.'s operations in Iran, and Amodei said the company would continue to provide its models to the DOD at "nominal cost" for "as long as necessary to make that transition." Anthropic could challenge the designation in federal court, likely in Washington, but the law behind the decision makes it harder to contest: it limits the usual ways companies can challenge government procurement decisions and gives the Pentagon broad discretion on national security matters. Or as Dean Ball, a former Trump-era White House adviser on AI who has spoken out against Hegseth's treatment of Anthropic, put it: "Courts are pretty reluctant to second-guess the government on what is and is not a national security issue … There's a very high bar that one needs to clear in order to do that. But it's not impossible."

It’s official: The Pentagon has labeled Anthropic a supply-chain risk
The Department of Defense (DOD) has officially notified Anthropic leadership that the company and its products have been designated a supply-chain risk, Bloomberg reports, citing a senior department official. The designation comes after weeks of conflict between the AI lab and the DOD. Anthropic CEO Dario Amodei has refused to allow the military to use its AI systems for mass surveillance of Americans or to power fully autonomous weapons with no humans assisting in the targeting or firing decisions. The Department has argued that its use of AI should not be limited by a private contractor.

Supply-chain-risk designations are typically reserved for foreign adversaries. The label requires any company or agency that does work with the Pentagon to certify that it doesn't use Anthropic's models. The Pentagon's finding threatens to disrupt both the company and its own operations. Anthropic has been the only frontier AI lab with classified-ready systems. The U.S. military currently relies on Claude in its Iran campaign, where American forces use AI tools to quickly manage the data for their operations. Claude is one of the main tools installed in Palantir's Maven Smart System, which military operators in the Middle East rely on, according to Bloomberg.

Labeling Anthropic a supply-chain risk over this disagreement is an unprecedented move by the Department, several critics say. Dean Ball, a former Trump White House AI adviser, has referred to the designation as a "death rattle" of the American republic, arguing that the government has abandoned strategic clarity and respect in favor of "thuggish" tribalism that treats domestic innovators worse than foreign adversaries. Hundreds of employees from OpenAI and Google have urged the DOD to withdraw its designation and called on Congress to push back on what could be perceived as an inappropriate use of authority against an American technology company. They have also urged their leaders to stand together and continue to refuse the DOD's demands to use their AI models for domestic mass surveillance and for "autonomously killing people without human oversight."

TechCrunch has reached out to Anthropic for comment. In the midst of the dispute, OpenAI forged its own deal with the Department to allow the military to use its AI systems for "all lawful purposes." Some of the company's employees have expressed concern about the ambiguous phrasing of the deal, which could lead to exactly the type of uses Anthropic was trying to avoid. Amodei has called the actions of the DOD "retaliatory and punitive," and reportedly said his refusal to praise or donate to President Trump contributed to the dispute with the Pentagon. OpenAI president Greg Brockman has been a staunch backer of Trump, recently donating $25 million to the MAGA Inc. Super PAC.

US reportedly considering sweeping new chip export controls
How, and whether, the Trump administration plans to regulate the export of semiconductors has remained unclear since Donald Trump took office last year. Now, we have an idea of what the administration is thinking. U.S. regulators have reportedly drafted rules that would require U.S. government approval to ship AI chips anywhere outside the U.S., according to Bloomberg, citing sources. This would give the U.S. significantly more control over companies like AMD and Nvidia. TechCrunch reached out to AMD and Nvidia for comment. A spokesperson for the U.S. Department of Commerce provided the following: "The Commerce Department is committed to promoting secure exports of the American tech stack. We successfully advanced exports through our historic Middle East agreements, and there are ongoing internal government discussions about formalizing that approach. Today there was reporting that we were returning to the AI diffusion rule. We will not. It was burdensome, overreaching, and disastrous."

Under the drafted rules, companies and governments outside the U.S. would have to be granted approval by the U.S. Department of Commerce to purchase these chips. The review process would vary based on the size and scale of the potential purchase, Bloomberg reported. For example, a small order by a company outside the U.S. may warrant a basic review, while a sizable order could require the company's corresponding government to get involved. This could, of course, all change before a final announcement or ruling, but the proposal would represent significantly more government involvement than the AI Diffusion rule instituted under President Joe Biden. The Trump administration formally rescinded Biden's diffusion regulation last May, less than a week before it was set to go into effect.

While this is the first inkling of what broad export restrictions would look like, it isn't fully surprising that the Trump administration is looking for more government involvement rather than less, based on how it has handled Nvidia's potential exports to China. The Trump administration has flip-flopped multiple times on whether the company could send its advanced AI chips to the Chinese market before deciding to allow exports if the U.S. Department of Commerce approved the customers. However, this oversight approach may end up hurting U.S. chip companies and the U.S.'s current dominance in the global AI market. If it becomes harder to source chips from the U.S., companies may increasingly turn to other sources, especially as chipmakers outside the U.S. continue to develop more advanced chips. In Nvidia's case, the export regulations are already hurting it: the semiconductor giant has not seen the return of its customers in China after nearly a year of uncertainty over whether they would keep access to the AI technology.

AWS launches a new AI agent platform specifically for healthcare
Amazon Web Services announced Thursday the launch of Amazon Connect Health, an AI agent-powered platform meant to help healthcare organizations automate repetitive administrative tasks, including appointment scheduling, documentation, and patient verification, among other things. Amazon Connect Health is HIPAA-eligible and connects with electronic health record (EHR) software. The platform has already partnered with EHR software providers, data integrators, and patient engagement companies, the company said.

This move is not the cloud giant's first in the healthcare space, and it comes at a time when AWS is increasingly looking to grow its footprint in the $5 trillion U.S. healthcare industry. The company launched Amazon Comprehend Medical, a HIPAA-eligible natural language processor for unstructured medical data, in 2018, and Amazon HealthLake, a HIPAA-eligible Fast Healthcare Interoperability Resources (FHIR) service used to organize health data, in 2021. It also launched HealthOmics, a bioinformatics workflow service, in 2022. Still, Amazon Connect Health is its first major product offering AI agents — software that completes complex tasks on behalf of a human — within a regulatory-compliant platform.

Amazon Connect Health works with existing clinician software to manage the administrative workflow of providers, like medical history reviews, medical coding, and clinical documentation, the company said. It currently offers patient verification and ambient documentation. Appointment scheduling and patient insights are in preview, and medical coding and other features are set to roll out to customers later. The software costs $99 a month per user for up to 600 encounters a month; AWS said most primary care physicians have up to 300 encounters a month. An Amazon Web Services spokesperson did not immediately respond to TechCrunch's requests for additional information regarding testing and timeline.

Outside of its cloud business, Amazon has made several large moves into the healthcare space in recent years. The retail giant purchased online pharmacy PillPack in 2018 for around $1 billion and primary care company One Medical in 2022 for $3.9 billion. The company has since integrated parts of those businesses into its larger retail and brick-and-mortar operations, including same-day prescription delivery and same-day virtual doctor visits for kids. Using AI to reduce administrative burden in the healthcare industry — where Amazon Connect Health is focusing — has been a popular target for startups even before the current AI wave. For example, Regard, founded in 2017, uses AI to take notes for doctors during sessions and combs through patient data to help reduce administrative burnout. Notable is another startup founded in 2017 that uses AI to reduce burnout by automating intake and scheduling.

Larger AI companies have recently moved quickly into the space. In January, OpenAI released ChatGPT Health, a version of its chatbot tailored to answer health questions. Anthropic announced its own healthcare-focused product, Claude for Healthcare, just one week later. Like OpenAI's product, Claude for Healthcare gives medical advice to consumers, but, more like Amazon Connect Health, it also includes tools for medical professionals. Claude for Healthcare and OpenAI's enterprise healthcare services are built to work with HIPAA-compliant products, while ChatGPT Health is consumer-facing and not HIPAA-compliant, according to the companies.

DiligenceSquared uses AI, voice agents to make M&A research affordable
A typical merger-and-acquisition process is time-consuming and expensive, even for the largest, best-staffed private equity firms. In addition to spending countless hours meeting with senior executives of potential targets and modeling financial outcomes, these firms spend millions of dollars on external advisers: accountants, lawyers, and management consultants. Since expenses for external advisers are not reimbursed if a deal falls through, PE firms wait until they are certain of their interest before engaging costly specialists such as consultants from McKinsey, BCG, or Bain to perform extensive commercial research on the market and the target company.

DiligenceSquared, a startup that was part of YC's fall 2025 cohort, says that with the help of AI, it can provide top-tier consultancy-quality commercial research at a fraction of the traditional cost. The startup's co-founders, Frederik Hansen and Søren Biltoft, have deep expertise in private equity due diligence. Hansen was formerly a principal at Blackstone, where he commissioned these reports for multiple billion-dollar buyouts, while Biltoft spent seven years in BCG's private equity practice leading these types of diligence efforts. Since launching in October, Hansen's and Biltoft's industry experience has helped DiligenceSquared complete multiple projects for several of the world's largest PE firms and mid-market funds, Hansen tells TechCrunch. That early traction convinced Damir Becirovic, a former Index Ventures partner, to lead DiligenceSquared's $5 million seed round out of his new VC firm, Relentless.

Instead of relying on expensive management consultants, the startup uses AI voice agents to conduct interviews with customers of the companies the PE firms are considering buying. DiligenceSquared is applying the same AI-interview model seen in consumer research startups like Keplar, Outset, and ListenLabs, which in January raised $69 million at a $500 million valuation. But Hansen and Biltoft argue that their due diligence process and final outputs are fundamentally different from the consumer research produced by these startups. PE firms can pay $500,000 to $1 million for McKinsey, Bain, or BCG to interview dozens of corporate customers, including C-suite executives, and produce 200-page reports synthesizing those insights with proprietary market data, Hansen said. To ensure the quality of the analysis, DiligenceSquared involves senior human consultants who verify the accuracy and commercial insights of the final output. Since AI does much of the groundwork, the startup claims it can provide the analysis for just $50,000. "We are taking these great insights that were previously reserved for the very big decisions, and now we make them more accessible," Hansen said. Because of the lower price point, PE firms are now far more willing to engage DiligenceSquared earlier in the process, well before they have high conviction in a deal.

DiligenceSquared isn't the only company trying to disrupt the diligence market. Its main competitor, Bridgetown Research, raised a $19 million Series A co-led by Accel and Lightspeed in February 2026. In addition to Hansen and Biltoft, DiligenceSquared was co-founded by Harshil Rastogi, a former Google engineer.

Netflix buys Ben Affleck’s AI filmmaking company InterPositive
Netflix on Thursday morning said it is acquiring InterPositive, a filmmaking technology company founded in 2022 by actor Ben Affleck. The acquisition aligns with Netflix's approach to the use of generative AI in filmmaking: The company has already used generative AI for special effects in some original content and has assured investors that it is "very well positioned to effectively leverage ongoing advances in AI."

Affleck wrote in a statement that he began thinking about how AI would impact the future of filmmaking in 2022. He says he wanted to "preserve what makes human storytelling human, which is judgement," and sought to "protect the power of human creativity." InterPositive isn't trying to make AI actors or synthetic performances. Rather, the company has created a model that helps production teams work with footage from their own productions to make edits in post-production, like addressing continuity issues or making lighting adjustments or enhancements to the environment.

"Intensive research and development led to our first model, trained to understand visual logic and editorial consistency, while preserving cinematic rules under real-world production challenges such as missing shots, background replacements or incorrect lighting," Affleck wrote. "We also built in restraints to protect creative intent, so the tools are designed for responsible exploration while keeping creative decisions in the hands of artists — and ensuring that the benefits of this technology flow directly back to the story they're trying to tell."

Affleck is joining Netflix as a senior adviser as part of the deal. Financial terms were not disclosed. "Our approach to AI has always been focused on meaningfully serving the needs of the creative community and our members," Elizabeth Stone, Netflix's chief product and technology officer, said in a statement.
“The InterPositive team is joining Netflix because of our shared belief that innovation should empower storytellers, not replace them.”

Anthropic CEO Dario Amodei could still be trying to make a deal with the Pentagon
Anthropic’s $200 million contract with the Department of Defense (DOD) broke down last week after the two parties failed to come to an agreement over the degree to which the military could obtain unrestricted access to Anthropic’s AI. When the DOD made a deal with OpenAI instead, it seemed that the military’s relationship with Anthropic would come to a close — but new reporting from the Financial Times and Bloomberg says that Amodei has resumed negotiations with Pentagon official Emil Michael. These talks are reportedly part of an attempt to compromise on a contract that outlines how the Pentagon can continue to access Anthropic’s AI models. It would be a surprise to see Anthropic eke out a new deal, given how much vitriol has been exchanged among the parties involved. But a compromise could still hold appeal for both sides: the Pentagon already relies on Anthropic’s technology, and an abrupt switch to OpenAI’s systems would be disruptive.

The dispute began when Anthropic CEO Dario Amodei voiced concern over a clause that allowed the military to use Anthropic’s AI for “any lawful use.” Amodei asserted that the company would not allow its technology to be used for domestic mass surveillance or autonomous weaponry and wanted the contract to more clearly prohibit those uses. When Anthropic refused to comply, the DOD turned around and struck a deal with OpenAI instead. Since then, figures on both sides have been open about their frustrations. Michael called Amodei a “liar” with a “God complex.” Amodei threw some jabs of his own at the DOD and OpenAI CEO Sam Altman in a message reportedly sent to Anthropic staff this week, calling the OpenAI deal “safety theater” and the messaging around it “straight up lies.” “The main reason [OpenAI] accepted [the DOD’s deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses,” Amodei wrote in the memo.
Defense Secretary Pete Hegseth has pledged to declare Anthropic a “supply-chain risk,” essentially blacklisting the company from working with any other company that has business with the U.S. military — although he has yet to take any legal action to that effect. This sort of designation is typically reserved for foreign adversaries, and it’s unclear whether it would survive a court challenge.

Meta sued over AI smart glasses’ privacy practices after workers reviewed footage including nudity and sex
Meta is facing a new lawsuit over its AI smart glasses and their lack of privacy, after an investigation by Swedish newspapers found that workers at a Kenya-based subcontractor are reviewing footage from customers’ glasses, including sensitive content like nudity, people having sex, and people using the toilet. Meta claimed it was blurring faces in images, but sources disputed that this blurring consistently worked, reports noted. The news prompted the U.K. regulator, the Information Commissioner’s Office, to investigate the matter. Now, the tech giant is facing a lawsuit in the United States, as well.

In the newly filed complaint, plaintiffs Gina Bartone of New Jersey and Mateo Canu of California, represented by the public interest-focused Clarkson Law Firm, allege that Meta violated privacy laws and engaged in false advertising. The complaint alleges that the Meta AI smart glasses are advertised using promises like “designed for privacy, controlled by you” and “built for your privacy,” promises that would hardly lead customers to assume that their glasses’ footage, including intimate moments, was being watched by overseas workers. The plaintiffs believed Meta’s marketing and said they saw no disclaimer or information that contradicted the advertised privacy protections. The suit charges Meta and its glasses manufacturing partner Luxottica of America with conduct that violates consumer protection laws. Clarkson Law Firm, which over the years has filed other major lawsuits against tech giants, including Apple, Google, and OpenAI, points to the scale of the issues at hand: in 2025, over seven million people bought Meta’s smart glasses, meaning their footage can be fed into a data pipeline for review with no way to opt out.

Meta told the BBC that when people share content with Meta AI, it uses contractors to review the information to improve people’s experience with the glasses, which is explained in its privacy policy, and pointed to the Supplemental Meta Platforms Terms of Service, without specifying where this was noted. The news outlet, however, found that a mention of human review appears in Meta’s U.K. AI terms of service. A version of that policy that applies to the U.S. states: “In some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human).”

The complaint mainly points to how the glasses were marketed, showing examples of ads that touted their privacy benefits, described their privacy settings, and promised an “added layer of security.” “You’re in control of your data and content,” one ad read, explaining that smart glasses owners got to choose which content was shared with others. The rise of smart glasses and other “luxury surveillance” tech, like always-listening AI pendants, has prompted a broad backlash. One developer published an app capable of detecting when smart glasses are nearby.

Meta did not have a comment on the litigation itself, as it was just filed. However, spokesperson Christopher Sgro offered the following statement on the overall issue: “Ray-Ban Meta glasses help you use AI, hands-free, to answer questions about the world around you. Unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device. When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do. We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed.”

Updated after publication with Meta’s statement.

Cursor is rolling out a new kind of agentic coding tool
As agentic coding spreads, the working life of a software engineer has become dazzlingly complex. A single engineer might oversee dozens of coding agents at once, launching and guiding different processes as necessary. It’s a lot to keep track of, and human engineers’ attention has quickly become the limiting resource.

Cursor launched a new tool Thursday aimed at keeping that chaos in check. Called Automations, the new system gives users a way to automatically launch agents within their coding environment, triggered by a new addition to the codebase, a Slack message, or a simple timer. As Cursor describes it, it’s a way to review and maintain all the new code created by agentic tools — without tracking dozens of agents at once. At the most basic level, Automations are a way for engineers to break out of the “prompt-and-monitor” dynamic that defines most agent-based engineering. Instead of launching agents with a human prompt, Cursor’s Automation framework lets you launch agents automatically — and loop humans in whenever they’re needed. “It’s not that humans are completely out of the picture,” Jonas Nelle, Cursor’s engineering chief for asynchronous agents, told TechCrunch in an interview. “It’s that they aren’t always initiating. They’re called in at the right points in this conveyor belt.”

One early example is Bugbot, a long-standing Cursor feature that the team sees as a predecessor to the broader Automation system. Bugbot is triggered every time an engineer makes an addition to the codebase and reviews the new code for bugs and other issues. Using Automations, Cursor has been able to expand that system to more involved security audits and more thorough reviews. “This idea of thinking harder, spending more tokens to find harder issues, has been really valuable,” said engineering lead Josh Ma. Cursor estimates that it runs hundreds of automations per hour, reaching far beyond simple code review. The system is also used for incident response, with PagerDuty incidents initiating an agent that can immediately query server logs through an MCP connection. A separate automation posts weekly summaries of changes to the codebase on Cursor’s company Slack. “In the abstract, anything that an automation kicks off, a human could have also kicked off,” said Nelle. “But by making it automatic, you change the types of tasks that models can usefully do in a codebase.”

The new system comes amid intense competition in the agentic coding space, with both OpenAI and Anthropic having made significant updates to their agentic coding tools in the past month. Ramp data shows Cursor’s market share holding steady since May, with roughly 25% of generative AI clients subscribing to Cursor in some capacity. Still, the overall growth of the agentic coding space has kept the company’s revenue increasing at a stunning pace. Earlier this week, Bloomberg reported that Cursor’s annual revenue had grown to more than $2 billion, doubling over the past three months.
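The trigger-to-agent flow described above, where an event such as a code push, a Slack message, or a timer launches an agent without a human prompt, can be sketched as a tiny dispatcher. This is a hypothetical illustration, not Cursor's actual API; the `Automation` and `AutomationRegistry` names and the trigger strings are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Automation:
    """One rule: when `trigger` fires, run `action` with the event payload."""
    name: str
    trigger: str  # e.g. "push", "slack_message", "timer"
    action: Callable[[dict], str]

@dataclass
class AutomationRegistry:
    automations: list = field(default_factory=list)

    def register(self, automation: Automation) -> None:
        self.automations.append(automation)

    def dispatch(self, event: str, payload: dict) -> list:
        # Every automation whose trigger matches the event runs; a human is
        # looped in only if an action's result asks for review.
        return [a.action(payload) for a in self.automations if a.trigger == event]

registry = AutomationRegistry()
registry.register(Automation(
    name="bug-review",
    trigger="push",
    action=lambda p: f"reviewed {p['commit']} for bugs",
))
registry.register(Automation(
    name="weekly-summary",
    trigger="timer",
    action=lambda p: "posted codebase summary to Slack",
))

results = registry.dispatch("push", {"commit": "abc123"})
print(results)  # only the push-triggered automation runs
```

The point of the design is that the event, not a human prompt, is the entry point; a real system would swap the lambdas for agent launches and route flagged results back to an engineer.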

OpenAI launches GPT-5.4 with Pro and Thinking versions
On Thursday, OpenAI released GPT-5.4, a new foundation model billed as “our most capable and efficient frontier model for professional work.” In addition to the standard version, GPT-5.4 is also available as a reasoning model (GPT-5.4 Thinking) or optimized for high performance (GPT-5.4 Pro). The API version of the model will be available with context windows as large as 1 million tokens, by far the largest context window available from OpenAI. OpenAI also emphasized improved token efficiency, saying GPT-5.4 was able to solve the same problems with significantly fewer tokens than its predecessor.

The new model comes with significantly improved benchmark results, including record scores on the computer-use benchmarks OSWorld-Verified and WebArena Verified. It also scored a record 83% on OpenAI’s GDPval test for knowledge work tasks. GPT-5.4 also took the lead on Mercor’s APEX-Agents benchmark, designed to test professional skills in law and finance, according to a statement from Mercor CEO Brendan Foody. “[GPT-5.4] excels at creating long-horizon deliverables such as slide decks, financial models, and legal analysis,” Foody said in the statement, “delivering top performance while running faster and at a lower cost than competitive frontier models.” GPT-5.4 continues the company’s efforts to limit hallucinations and factual errors. OpenAI said the new model was 33% less likely to make errors in individual claims than GPT-5.2, and overall responses were 18% less likely to contain errors.

As part of the launch, OpenAI has reworked how the API version of GPT-5.4 manages tool calling, introducing a new system called Tool Search. Previously, system prompts would lay out definitions for all available tools when calling the model — a process that could consume a lot of tokens as the number of available tools grew. The new system allows models to look up tool definitions as needed, resulting in faster and cheaper requests in systems with many available tools.

OpenAI has also included a new safety evaluation to test its models’ chain-of-thought, the running commentary given by the models to show their thought process through multi-step tasks. AI safety researchers have long worried that reasoning models could misrepresent their chain-of-thought, and testing shows it can happen under the right circumstances. OpenAI’s new evaluation shows that deception is less likely to happen in the Thinking version of GPT-5.4, “suggesting that the model lacks the ability to hide its reasoning and that CoT monitoring remains an effective safety tool.”
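The token savings behind a tool-search style of calling are easy to see in a toy model: instead of serializing every tool schema into each request, the prompt carries only tool names, and a full definition is fetched when the model asks for one. The sketch below is a hypothetical illustration of that idea, not OpenAI's actual Tool Search API; all names (`TOOLS`, `inline_prompt`, `search_prompt`, `lookup`) are invented.

```python
import json

# A pretend catalog of 200 tools, each with a verbose JSON-schema definition.
TOOLS = {
    f"tool_{i}": {
        "name": f"tool_{i}",
        "description": f"Performs task {i}; has several parameters.",
        "parameters": {"type": "object", "properties": {"arg": {"type": "string"}}},
    }
    for i in range(200)
}

def inline_prompt(tools: dict) -> str:
    # Old style: every full definition is serialized into the system prompt,
    # so prompt size grows linearly with the number of tools.
    return json.dumps(list(tools.values()))

def search_prompt(tools: dict) -> str:
    # Tool-search style: only the tool names go into the prompt.
    return json.dumps(sorted(tools))

def lookup(tools: dict, name: str) -> dict:
    # Called on demand when the model decides it needs one tool's schema.
    return tools[name]

inlined = len(inline_prompt(TOOLS))
searched = len(search_prompt(TOOLS))
print(inlined, searched)  # the names-only prompt is far smaller
print(lookup(TOOLS, "tool_7")["description"])
```

The trade-off is one extra round trip per definition fetched, paid only for tools the model actually decides to use.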

EXCLUSIVE: Luma launches creative AI agents powered by its new ‘Unified Intelligence’ models
AI video-generation startup Luma on Thursday launched Luma Agents, designed to handle end-to-end creative work across text, image, video, and audio. Luma Agents are powered by the startup’s Unified Intelligence family of models, whose architecture is trained as a single multimodal reasoning system. Luma Agents are being pitched as a new way of doing work for ad agencies, marketing teams, design studios, and enterprises. Luma says its agents are capable of planning and generating text, image, video, and audio while coordinating with other AI models, including Luma’s Ray 3.14, Google’s Veo 3 and Nano Banana Pro, ByteDance’s Seedream, and ElevenLabs’ voice models.

Luma’s agents are built on the startup’s Uni-1 model, the first of its Unified Intelligence family of AI models. It has been trained on audio, video, image, language, and spatial reasoning, according to Amit Jain, chief executive officer and co-founder of Luma. Jain told TechCrunch that the Uni-1 model can “think in language and imagine and render in pixels or images … we call it ‘intelligence in pixels.’” Other output capabilities like audio and video will come in subsequent model releases, he added. “Our customers aren’t buying the tool; they’re redoing how business is done,” Jain said. Luma has already started rolling out its new agentic platform with existing customers, including global ad agencies Publicis Groupe and Serviceplan, as well as brands like Adidas, Mazda, and Saudi AI company Humain.

Jain said Luma Agents are a game changer because they can maintain persistent context across assets, collaborators, and creative iterations. They can also evaluate and refine outputs, improving their own results through iterative self-critique, according to Jain. This sort of check-your-work capability is what has made coding agents so useful, Jain said. “You need that ability to evaluate your work, fix it, and do that loop until the solution is good and accurate.”

Jain said the current workflow for using AI tools in creative environments doesn’t deliver the acceleration that people in the creative industry expect from AI. Instead, it’s more like: “Here are 100 models. Learn how to prompt them,” he said. What makes Luma Agents different, he said, is that you don’t need to prompt back and forth for each iteration on an image or idea; the system instead generates large sets of variations and lets users steer the direction through conversation. “With Unified Intelligence, because these models understand in addition to being able to generate, we are able to build a system that is able to do this sort of end-to-end work,” Jain said. Take, for instance, a human architect designing a building. As they draw the lines, they are creating an internal mental representation of the structure, light, spatial dynamics, and lived experience. This, Jain says, is the same principle upon which Unified Intelligence is built.

Jain said the system could significantly speed up creative workflows. In a demonstration, he showed how a 200-word brief and an image of a product (a tube of lipstick) led the system to generate various ideas for locations, models, and color schemes for an ad campaign. In another example, Luma Agents turned a brand’s $15 million, year-long ad campaign into multiple localized ads for different countries in 40 hours for under $20,000, passing the brand’s internal quality controls and accuracy checks, Jain said. While Luma Agents is now publicly available via API, Jain said the startup plans to roll out access gradually to ensure users maintain reliable access and avoid workflow disruptions.
