Latest AI News

Luma launches AI-powered production studio with faith-focused Wonder Project
AI video generation startup Luma has launched Innovative Dreams, a production company built in partnership with Wonder Project, a studio that produces religious films and TV for Amazon Prime. The tie-up's first show will be called "The Old Stories: Moses," starring British actor Ben Kingsley and set to launch this spring on Prime Video. "Innovative Dreams is a production services company where seasoned filmmakers from director Jon Erwin's team and Luma's creative technologists work with great studios and filmmakers to help them realize ambitious ideas," Luma said Thursday in a social media post. The company envisages creative teams collaborating in real time with Luma Agents to make changes to sets, props, and lighting, as well as bring in footage of human actors. Luma Agents are the company's recently launched tools designed to handle end-to-end creative work across text, image, video, and audio. "This is a significant improvement over the current virtual production and performance capture processes where things come together only in post," Luma's post said. "This is the leverage of AI — not just faster or cheaper, but better than what came before." Luma isn't the only startup to move from tooling to production. AI startup Higgsfield last week launched an original series, starting with a 10-minute sci-fi episode, and London-based creative studio Wonder Studios is working on a documentary with Campfire Studios. The launch comes the same week that competitor Runway's co-founder and co-CEO Cristóbal Valenzuela said film studios should take the $100 million they spend on a single film and instead use AI to produce 50 films in order to increase their chances of making a blockbuster. Luma founder and CEO Amit Jain has made a similar case, telling TechCrunch that Hollywood's soaring production costs have made filmmaking increasingly constrained. Generative AI, he argues, could make filmmaking faster, cheaper, and more efficient without sacrificing quality. That thinking underpins Luma's new partnership with Wonder Project. Wonder Project, launched in 2023, is run by director Jon Erwin and former Netflix executive Kelly Hoogstraten with the goal of serving the faith and values audience globally. Their first project, "House of David," a Biblical drama series about the life of King David, was released on Amazon Prime in 2025. It's unclear whether Innovative Dreams will focus solely on religious and faith-based content or expand beyond Wonder's remit. TechCrunch has reached out for clarification. In a video promoting the partnership, Erwin said Innovative Dreams will use a new "real-time hybrid filmmaking" process that combines performance capture (as in "Avatar") and virtual production (as in "The Mandalorian"), done live and more cheaply using Luma's tools. Performance capture is a technique where actors perform in a green-screen environment wearing suits and facial markers so their movements and expressions can be digitally captured and turned into animated characters. Virtual production involves actors performing on set, often in front of massive LED screens instead of a green screen, while real-time game-engine graphics create the environment around them, blending the physical and digital worlds during the shoot. Luma's tools, Erwin said, allow them to film a human actor anywhere and then transport that performance into a photorealistic scene, or go even further by generating a new face so it looks like a completely different person but still maps onto the actor's movements and facial expressions.

Factory hits $1.5B valuation to build AI coding for enterprises
More than three years after the emergence of generative AI, AI-assisted coding remains by far the most popular and lucrative use case for the technology. Although multiple companies — including Anthropic, maker of Claude Code, as well as Cursor and Cognition — are already vying for dominance, investors believe there is room for at least one more player. On Wednesday, Factory, a startup developing AI agents for enterprise engineering teams, announced it had raised $150 million at a $1.5 billion valuation. The round was led by Khosla Ventures, with participation from Sequoia Capital, Insight Partners, and Blackstone. Keith Rabois, a managing director at Khosla Ventures, joined the startup's board. Factory founder Matan Grinberg told the Wall Street Journal that the company's key differentiator is its ability to switch between different foundation models, such as Anthropic's Claude or models from Chinese AI startup DeepSeek. However, startups like Cursor also don't rely on a single model to generate code. Factory's customers include engineering teams at Morgan Stanley, Ernst & Young, and Palo Alto Networks. The startup was founded in 2023 after Grinberg, then a PhD student at UC Berkeley, cold-emailed Sequoia partner Shaun Maguire. The two bonded over a shared academic interest. (Maguire's PhD from Caltech is in the same area of physics Grinberg was studying.) Maguire convinced Grinberg to drop out and launch Factory, with Sequoia backing the startup at the seed stage.

Google now lets you explore the web side-by-side with AI Mode
Google announced on Thursday that it's rolling out a new way to explore the web with AI Mode, its conversational search experience. Now, when you're using AI Mode on Chrome desktop, clicking a link will open the web page side-by-side with AI Mode. The goal is to make it easier to explore relevant websites, compare details, and ask follow-up questions while preserving the context of your search, the tech giant says. For example, if you want to purchase a new coffee maker, you can describe what you're looking for in AI Mode and get a range of options. Once you click on one, you can open the retailer's website alongside AI Mode and ask specific questions, like "how easy is this to clean?" AI Mode will then use context from the page and from across the web to answer your questions. "Our early testers loved that they didn't have to constantly switch tabs to get help with a comprehensive article or a long video," Google explained in a blog post. "And they found that having both Search and the web side-by-side helped them stay focused on their tasks while exploring useful web pages." Google also announced a new way to search across the Chrome tabs you're already looking at. On Chrome desktop or mobile, you can tap the new "plus" menu in the search box on the "New Tab" page or in AI Mode, then select recent tabs to include them in your search. This means you can mix and match multiple tabs, images, or files and bring that context into your AI Mode searches. For example, if you're researching local hiking trails and already have a few tabs open, you can add them to your search and ask for similar trails in a different location. Or, if you're studying for a statistics exam, you can bring in context from open tabs, class notes, lecture slides, and more to ask for examples to illustrate a concept. The new updates to AI Mode are now available in the U.S. Google plans to expand them to additional regions in the future.

Anthropic CPO leaves Figma’s board after reports he will offer a competing product
Mike Krieger, Anthropic's chief product officer, resigned from the board of interface design company Figma on April 14. His departure was disclosed to the U.S. Securities and Exchange Commission by the publicly traded $10 billion company the same day that The Information reported Anthropic's next model, Opus 4.7, will include design tools that could compete with Figma's primary offering. Figma is the developer of a popular tool for user experience designers who build interfaces for websites and apps. The company has collaborated closely with Anthropic to integrate the frontier lab's AI models into its products as assistants for its users. Krieger, who previously co-founded Instagram and the AI-powered news app Artifact, became the top product executive at Anthropic in 2024 and joined the board of Figma less than a year ago. Krieger's departure and any forthcoming design tools will be another data point for investors who fear the SaaSpocalypse — the thesis that the largest AI labs will come to dominate software businesses, which has rocked public markets at times this year. For example, iShares' primary software ETF, IGV, is down nearly 18% this year. Anthropic, meanwhile, is turning down investors who want to buy into the company at an $800 billion valuation — more than double the valuation from its most recent round at the beginning of the year. But companies like Anthropic and OpenAI still have to prove their ultra-capable models can truly replicate the domain experience and relationships of established software brands. Figma's stock price is up 5% since Krieger's departure was disclosed, though we'll see what happens with the next Opus release.

OpenAI takes aim at Anthropic with beefed-up Codex that gives it more power over your desktop
There is currently a low-grade war between OpenAI and Anthropic over who can release the most convenient and powerful AI coding tools and, so far, Anthropic seems to be winning. Claude Code has been dubbed the tool of choice for many businesses, as TechCrunch reported last week, but OpenAI isn't giving up yet. This week, OpenAI announced a revamp of Codex, its own automated coding tool, with a variety of new updates designed to give it significantly expanded powers. On Thursday, the company detailed a plethora of new features and updates, perhaps the most notable of which is that Codex can now operate in the background on your computer — opening any app on your desktop and carrying out operations with a cursor that clicks and types. Functionally, this allows Codex to deploy multiple agents, all of which work on a user's Mac "in parallel, without interfering with your own work in other apps," the company said in a blog post. In other words, because Codex runs in the background, a user can keep using the machine while the agent goes about its own work. The agent will then function, according to the company, as a kind of coding buddy that handles auxiliary tasks while you work on topline projects. OpenAI lists "iterating on frontend changes, testing apps, or working in apps that don't expose an API" as potential use cases for this kind of agentic assistance. Overall, this agentic update and other new additions demonstrate OpenAI's desire to make Codex not only a competitive coding assistant but also a more multifaceted tool that can be integrated into a variety of corporate workflows. Watchers of the AI coding space will also note that some of the powers OpenAI is now adding to Codex resemble those previously released by Anthropic for Claude Code. Last month, Anthropic announced that Claude and Cowork could remotely control a user's Mac and desktop on their behalf while they were away from the keyboard. In addition to the agentic tools, OpenAI's Codex now has an in-app browser, which allows a user to issue commands to the agentic tool, which it will then ostensibly carry out on specific web applications. OpenAI says this function will be useful for frontend and game development, and that it plans to eventually expand the capability so that Codex can "fully command the browser beyond web applications on localhost." There are other updates. A new feature in preview called "memory" allows Codex to recall previous work sessions and build up important context about how a particular user works. The agent has also been given a new image-generation ability, which OpenAI says can be used to create product concepts, slide visuals, mockups, placeholder images, and other corporate paraphernalia. Finally, to expand Codex's ability to get things done, the company has announced 111 plugin integrations with apps like CodeRabbit and GitLab Issues, which allow Codex to carry out tasks involving those tools. The way OpenAI has framed it, these plugins give Codex the ability to carry out minor clerical work to organize your work life. For example, if you want Codex to look at your Slack channels and Google Calendar and give you a to-do list for a given day, OpenAI says it can now do that for you. A new pay-as-you-go Codex pricing option for ChatGPT Enterprise and Business customers has also been announced in an apparent effort to give users more flexibility when it comes to procuring the coding tool's services.
Once considered the undisputed leader of its industry, OpenAI has competed more fiercely with Anthropic in recent months, with a focus on enterprise capabilities and a retreat from consumer tools like its social video app Sora 2. The company has also battled various controversies, including lawsuits over ChatGPT's alleged mental health impact on some users.

Physical Intelligence, a hot robotics startup, says its new robot brain can figure out tasks it was never taught
Physical Intelligence, the two-year-old, San Francisco-based robotics startup that has quietly become one of the most closely watched AI companies in the Bay Area, published new research Thursday showing that its latest model can direct robots to perform tasks they were never explicitly trained on — a capability the company's own researchers say caught them off guard. The new model, called π0.7, represents what the company describes as an early but meaningful step toward the long-sought goal of a general-purpose robot brain: one that can be pointed at an unfamiliar task, coached through it in plain language, and actually pull it off. If the findings hold up to scrutiny, they suggest that robotic AI may be approaching an inflection point similar to what the field saw with large language models — where capabilities begin compounding in ways that outpace what the underlying data would seem to predict. The core claim in the paper is compositional generalization — the ability to combine skills learned in different contexts to solve problems the model has never encountered. Until now, the standard approach to robot training has been essentially rote memorization — collect data on a specific task, train a specialist model on that data, then repeat for every new task. π0.7, Physical Intelligence says, breaks that pattern. "Once it crosses that threshold where it goes from only doing exactly the stuff that you collect the data for to actually remixing things in new ways," says Sergey Levine, a co-founder of Physical Intelligence and a UC Berkeley professor focused on AI for robotics, "the capabilities are going up more than linearly with the amount of data. That much more favorable scaling property is something we've seen in other domains, like language and vision." The paper's most striking demonstration involves an air fryer the model had essentially never seen in training. When the research team investigated, they found only two relevant episodes in the entire training dataset: one where a different robot merely pushed the air fryer closed, and one from an open-source dataset where yet another robot placed a plastic bottle inside one on someone's instructions. The model had somehow synthesized those fragments, plus broader web-based pretraining data, into a functional understanding of how the appliance works. "It's very hard to track down where the knowledge is coming from, or where it will succeed or fail," says Ashwin Balakrishna, a research scientist at Physical Intelligence and a Stanford computer science PhD student. Still, with zero coaching, the model made a passable attempt at using the appliance to cook a sweet potato. With step-by-step verbal instructions — essentially, a human walking the robot through the task the way you might explain something to a new employee — it performed successfully. That coaching capability matters because it suggests robots could be deployed in new environments and improved in real time without additional data collection or model retraining. So what does it all mean? The researchers aren't shy about the model's limitations and are careful not to get ahead of themselves. In at least one case, they point the finger squarely at their own team. "Sometimes the failure mode is not on the robot or on the model," Balakrishna says. "It's on us. Not being good at prompt engineering." He describes an early air fryer experiment that produced a 5% success rate.
After the team spent about half an hour refining how the task was explained to the model, the success rate jumped to 95%, he says. The model also isn't yet capable of executing complex multi-step tasks autonomously from a single high-level command. "You can't tell it, 'Hey, go make me some toast,'" Levine says. "But if you walk it through — 'for the toaster, open this part, push that button, do this' — then it actually tends to work pretty well." The team also acknowledged that standardized benchmarks for robotics don't really exist, which makes external validation of their claims difficult. Instead, the company measured π0.7 against its own previous specialist models — purpose-built systems trained on individual tasks — and found that the generalist model matched their performance across a range of complex work, including making coffee, folding laundry, and assembling boxes. What may be most notable about the research — if you take the researchers at their word — is not any single demo but the degree to which the results surprised them, people whose job it is to know exactly what is in the training data and therefore what the model should and shouldn't be able to do. "My experience has always been that when I deeply know what's in the data, I can kind of just guess what the model will be able to do," Balakrishna says. "I'm rarely surprised. But the last few months have been the first time where I'm genuinely surprised. I just bought a gear set randomly and asked the robot, 'Hey, can you rotate this gear?' And it just worked." Levine recalled the moment researchers first encountered GPT-2 generating a story about unicorns in the Andes. "Where the heck did it learn about unicorns in Peru?" he says. "That's such a weird combination. And I think that seeing that in robotics is really special." Naturally, critics will point to an uncomfortable asymmetry here: language models had the entire internet to learn from. Robots don't, and no amount of clever prompting fully closes that gap. But when asked where he expects the skepticism, Levine points somewhere else entirely. "The criticism that can always be leveled at any robotic generalization demo is that the tasks are kind of boring," he says. "The robot is not doing a backflip." He pushes back on that framing, arguing that the distinction between an impressive robot demo and a robotic system that actually generalizes is precisely the point. Generalization, he suggests, will always look less dramatic than a carefully choreographed stunt — but it is considerably more useful. The paper itself uses careful hedging language throughout, describing π0.7 as showing "early signs" of generalization and "initial demonstrations" of new capabilities. These are research results, not a deployed product, and Physical Intelligence has been restrained from the start about commercial timelines. When asked directly when a system based on these findings might be ready for real-world deployment, Levine declines to speculate. "I think there's good reason to be optimistic, and certainly it's progressing faster than I expected a couple of years ago," he says. "But it's very hard for me to answer that question." Physical Intelligence has raised over $1 billion to date and was most recently valued at $5.6 billion.
A significant part of the investor enthusiasm around the company traces to Lachy Groom, a co-founder who spent years as one of Silicon Valley's most well-regarded angel investors — backing Figma, Notion, and Ramp, among others — before deciding that Physical Intelligence was the company he'd been looking for. That pedigree has helped the startup attract serious institutional money even as it has refused to offer investors a commercialization timeline. The company is now said to be in discussions for a new round that would nearly double that figure to $11 billion. The team declined to comment.

Meta's planned facial recognition feature for smart glasses faces opposition from privacy orgs
Meta's purported development of an artificial intelligence (AI)-powered facial recognition technology for its future smart glasses has raised concerns among privacy advocates. An open letter signed by 77 organisations working in the privacy and civil liberties space has been published, urging the Menlo Park-based tech giant to stop the development of such a feature. Notably, earlier this year, reports had claimed that Meta was developing a facial recognition feature that would allow its future smart glasses to detect and identify people around the wearer.

OpenAI updates its Agents SDK to help enterprises build safer, more capable agents
Agentic AI is the tech industry's newest success story, and companies like OpenAI and Anthropic are racing to give enterprises the tools they need to create these automated little helpers. To that end, OpenAI has now updated its Agents software development kit (SDK), introducing a number of new features designed to help businesses create their own agents that run on the backs of OpenAI's models. The SDK's new capabilities include sandboxing, which allows agents to operate in controlled computer environments. This is important because running agents in a totally unsupervised fashion can be risky due to their occasionally unpredictable nature. With the sandbox integration, agents can work in a siloed capacity within a particular workspace, accessing files and code only for particular operations while otherwise protecting the system's overall integrity. Relatedly, the new version of the SDK also provides developers with an in-distribution harness for frontier models that will allow those agents to work with files and approved tools within a workspace, the company said. (In agent development, the "harness" refers to the components of an agent other than the model it runs on. An in-distribution harness often allows companies to both deploy and test agents running on frontier models, which are considered to be the most advanced, general-purpose models available.) "This launch, at its core, is about taking our existing Agents SDK and making it so it's compatible with all of these sandbox providers," Karan Sharma, who works on OpenAI's product team, told TechCrunch. The hope is that this, paired with the new harness capabilities, will allow users "to go build these long-horizon agents using our harness and with whatever infrastructure they have," he said. Such "long-horizon" tasks are generally considered to be more complex, multi-step work. OpenAI said it will continue to expand the Agents SDK over time, but the new harness and sandbox capabilities are launching first in Python, with TypeScript support planned for a later release. The company said it's also working to bring more agent capabilities, like code mode and subagents, to both Python and TypeScript. The new Agents SDK capabilities are being offered to all customers via the API and will use standard pricing.
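For a sense of what building on the Agents SDK looks like today, here is a minimal Python sketch using the SDK's existing Agent, Runner, and function_tool primitives (pip install openai-agents). The new sandbox and harness options described above aren't shown, since their exact parameters aren't detailed here, and the workspace-listing tool is a hypothetical example.

import os
from agents import Agent, Runner, function_tool

@function_tool
def list_workspace_files(directory: str) -> str:
    """Return a newline-separated listing of files in the approved workspace."""
    return "\n".join(sorted(os.listdir(directory)))

agent = Agent(
    name="Workspace helper",
    instructions="Only answer questions about files inside the approved workspace.",
    tools=[list_workspace_files],
)

# Requires OPENAI_API_KEY to be set in the environment.
result = Runner.run_sync(agent, "Which files are in the ./project directory?")
print(result.final_output)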

DeepL, known for text translation, now wants to translate your voice
DeepL, a translation company best known for its text tools, released a voice-to-voice translation suite today that covers use cases like meetings, mobile and web conversations, and group conversations for frontline workers through custom apps. The company is also releasing an API that lets outside developers and businesses build on top of DeepL's tech for customized use cases, such as call centers. "After spending so many years in text translation, voice was a natural step for us," DeepL CEO Jarek Kutylowski told TechCrunch in an interview. "We have come a long way when it comes to text translation and document translation. But we thought there wasn't a great product for real-time voice translation." Kutylowski said that the challenges in creating a real-time translation product center on striking a balance between reducing latency — the delay between someone speaking and the translated audio playing back — and maintaining accurate results. DeepL is releasing add-ons for platforms like Zoom and Microsoft Teams, where listeners can either hear real-time translation while others are speaking in their native languages or follow real-time translated text on screen. The program is currently in early access, and the company is inviting organizations to join a waitlist. The company also has a product for mobile and web-based conversations that can take place in person or remotely. DeepL also lets users participate in group conversations in settings like training sessions or workshops, allowing participants to join through a QR code. DeepL said that its voice-to-voice tech can also learn and adapt to custom vocabulary, such as industry-specific terms and company and personal names. Kutylowski said that AI is reimagining what customer service will look like in the coming years. He noted that a translation layer helps companies provide support in languages where qualified staff are scarce and expensive to hire. The company said that it controls the entire voice-to-voice stack. However, the current system converts the speech to text, applies translation, then converts that back to speech. DeepL believes that since it has worked on text translation for years, it has an edge in translation quality. Going forward, the company wants to develop an end-to-end voice translation model that skips the text step entirely. DeepL faces competition from several well-funded startups working in adjacent corners of the space. Sanas, which last year raised $65 million from Quadrille Capital and Teleperformance, uses AI to modify a speaker's accent in real time — a tool aimed primarily at call center agents. Dubai-based Camb.AI focuses on speech synthesis and translation for media and entertainment companies and Amazon Web Services, helping them dub and localize video content at scale. Palabra, backed by Reddit co-founder Alexis Ohanian's firm Seven Seven Six, is building a real-time speech translation engine designed to preserve both the meaning and the speaker's original voice, putting it in more direct competition with what DeepL is now building.
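To make the cascaded design concrete, here is a minimal Python sketch of a speech-to-text, translate, text-to-speech pipeline. The translate_text call uses DeepL's existing text API (pip install deepl); the transcribe and synthesize functions are hypothetical placeholders, since DeepL's new voice API isn't detailed here.

import deepl

translator = deepl.Translator("YOUR_DEEPL_API_KEY")  # assumed credential

def transcribe(audio: bytes) -> str:
    """Hypothetical speech-to-text step; any STT engine could slot in here."""
    raise NotImplementedError

def synthesize(text: str) -> bytes:
    """Hypothetical text-to-speech step; any TTS engine could slot in here."""
    raise NotImplementedError

def translate_speech(audio: bytes, target_lang: str = "DE") -> bytes:
    source_text = transcribe(audio)  # speech -> text
    translated = translator.translate_text(source_text, target_lang=target_lang)  # text -> text
    return synthesize(translated.text)  # text -> speech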

This simulation startup wants to be the Cursor for physical AI
The promise of physical AI is that engineers will be able to program physical agents the same way they do digital ones. We're not there yet. Robotics is still held back by a paucity of data from physical spaces. To train and test their machines, companies need to build mock-up warehouses, while an entire industry is springing up around surveilling factory lines and gig workers to train deep learning models to operate robots. Another option is simulation: detailed virtual replicas of real-world environments could provide the data and workspaces that roboticists need to do this work in a scalable way. Antioch, a startup building simulation tools for robot developers, wants to close what the industry calls the sim-to-real gap — the challenge of making virtual environments realistic enough that robots trained inside them can operate reliably in the physical world. "How can we do the best possible job reducing that gap, to make simulation feel just like the real world from the perspective of your autonomous system?" Antioch cofounder Harry Mellsop said. To do that, the company told TechCrunch today that it has raised an $8.5 million seed round that values it at $60 million, led by venture firm A* and Category Ventures, with additional participation from MaC Venture Capital, Abstract, Box Group, and Icehouse Ventures. Mellsop started the New York-based company with four cofounders in May of last year. Two of the other founders, Alex Langshur and Michael Calvey, previously joined him to cofound Transpose, a security and intelligence startup, and sell it to Chainalysis for an undisclosed amount. The other two — Collin Schlager and Colton Swingle — previously worked at Meta Reality Labs and Google DeepMind, respectively. The need for better simulation is at the heart of what many major autonomy companies are doing. In the self-driving car space, for example, Waymo uses Google DeepMind's world model to test and evaluate its driving model. In theory, that technique will make deploying Waymo vehicles in new areas require less data collection, a key cost in scaling up autonomous vehicle technology. Building and using those models to test robots is arguably a different set of skills than creating a self-driving car, and Antioch wants to build the platform that solves that problem for newer companies without the capital to do it all themselves. Those smaller companies also don't have the capital to build physical testing arenas or drive sensor-studded cars for a few million miles. "The vast majority of the industry doesn't use simulation whatsoever, and I think we're now just really understanding clearly that we need to move faster," Mellsop said. Antioch executives compare their product to Cursor, the popular AI-powered software development tool. Antioch allows robot builders to spin up multiple digital instances of their hardware and connect them to simulated sensors that mimic the same data the robot's software would receive in the real world. These environments allow developers to test edge cases, perform reinforcement learning, or generate new training data — if, that is, the simulation is sufficiently high fidelity. The challenge is making sure the physics in the simulation matches reality so that when the model is put in charge of a real machine, nothing goes wrong. The company starts with models built by Nvidia, World Labs, and others, and builds domain-specific libraries to make them easy to use.
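As a rough illustration of the kind of simulated evaluation loop described above, here is a minimal Python sketch using the open-source Gymnasium API as a stand-in environment. It does not reflect Antioch's actual platform; it only shows how a policy can be scored in simulation before it ever touches real hardware.

import gymnasium as gym

def evaluate_in_sim(policy, episodes: int = 100) -> float:
    """Run a policy in a simulated environment and report its mean episode reward."""
    env = gym.make("CartPole-v1")  # stand-in for a high-fidelity robot simulator
    total_reward = 0.0
    for _ in range(episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            obs, reward, terminated, truncated, _ = env.step(policy(obs))
            total_reward += reward
            done = terminated or truncated
    env.close()
    return total_reward / episodes

# Example: a trivial hand-written policy standing in for a trained model.
print(evaluate_in_sim(lambda obs: 0 if obs[2] < 0 else 1))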
Working with multiple customers, executives say, gives Antioch a depth of context for refining its simulations that no single physical AI company could match on its own. "What happened with software engineering and LLMs is just starting to happen with physical AI," Çağla Kaymaz, a partner at Category Ventures, told TechCrunch. "We do a lot of work on dev tools, and we love that vertical, but the challenges are different. With software, you can have these bad coding tools, and the risk is generally pretty contained to the digital world. In the physical world, the stakes are much higher." Antioch's focus now is mainly on sensor and perception systems, which account for the bulk of the need in automated cars and trucks, farm and construction machinery, or aerial drones. Aspirations for physical AI to power generalized robots that replicate human tasks are further away. While Antioch's pitch is to startups, some of its earliest engagements have been with huge multinationals that are already investing heavily in robotics. Adrian Macneil has a solid understanding of this space. As an executive at the self-driving startup Cruise, he built the company's data infrastructure, and in 2021 he founded Foxglove, a company that offers the same kind of data pipelines to physical AI startups. Macneil is backing Antioch as an angel investor. "Simulation is really important when you're trying to build a safety case or dealing with very high-accuracy tasks," he said at the Ride.AI conference in San Francisco on Wednesday. "It's not possible to drive enough miles in the real world." Macneil would like to see the same kind of tools that drove the SaaS revolution — platforms like GitHub, Stripe, and Twilio — emerging to support physical AI. "We need a lot more of the entire toolchain to be available off the shelf," he told TechCrunch. "We genuinely all think that anyone building an autonomous system for the real world is going to do so in software primarily in two to three years," Mellsop said. "It's the first time you can have autonomous agents iterate on a physical autonomy system, and actually close the feedback loop." There are already experiments in this direction. David Mayo, a researcher at MIT's Computer Science and Artificial Intelligence Laboratory, is using Antioch's platform to evaluate LLMs. In one experiment, Mayo has AI models design robots, then uses Antioch's simulator to test them. He can even pit the models against each other in simulated contests, like pushing a rival bot off a platform. Giving the LLMs a realistic sandbox could help provide a new paradigm for benchmarking them. Before a world of AI engineers arrives, however, there is still more work ahead to close the gap between the digital models and the real world. If it can be done, developers will be able to create the kind of data flywheel that Macneil believes is the key to the success of category leaders like Waymo, where engineers are increasingly confident that next month's model will be more capable than the last. If other companies want to replicate that success, they'll need to build those tools themselves — or buy them.

Canva’s AI assistant can now call various tools to make designs for you
The core promise of new AI platforms is that you can describe your task to the AI assistant, let it plan the task and use the relevant tools for you, and have it keep your preferences in mind for future tasks. This is especially important for design professionals, who want a predictable, automated workflow for creating content and media assets. Canva is leaning into this paradigm in the latest version of its Canva AI assistant, which uses its AI model to let users create editable designs with text prompts. Users can describe what they want it to make, and the bot will call the required tools and come up with a few options. The assistant uses layers to make designs, which gives users the flexibility to tweak different aspects of the final product as they see fit. The update comes as Canva has been working on making its AI assistant central to users' workflows and adding more features such as image generation and website generation. Canva's competitors also seem to be working toward a similar goal. This week, Adobe launched a Firefly AI assistant that can use the company's various apps to do tasks, and Figma last month baked in support for AI agents in its platform with an MCP server. Canva's co-founder and COO, Cliff Obrecht, noted that while many companies are trying to merge workflows, businesses prefer to execute the final steps of editing and publishing on Canva. "I think a lot of small businesses start and end their day, and they'll do a lot of their workflows completely, in Canva," Obrecht said. "We also work incredibly well with Anthropic, Google, and OpenAI, so if someone is doing their agentic workflows in those products, they can call Canva, get content, and they can get it back into those LLMs. But they always need to end up doing the final mile of editing, collaboration, and deployment. That's where we really are strong," Obrecht added. While a large chunk of Canva's revenue comes from individuals and small teams, its enterprise business is showing promising growth of 100% year-on-year, Obrecht said. He added that the company, most recently valued at $42 billion, per PitchBook, will likely go public next year. As part of this update, Canva is also adding integrations with Slack, Gmail, Google Drive, Calendar, and Zoom, so users can choose to allow the AI bot to build context by reading email, conversations, files, and meeting data. The company is adding a web research skill, too, so the AI bot can browse the internet to do tasks for you. The update also adds scheduling as a feature, so you can tell the AI bot to schedule repeatable tasks to run in the background. This feature will only create a draft that you can review and post, though. Canva is refining its existing AI tools, too. Its AI code generator can now import HTML, and users can use text prompts to describe the kind of spreadsheets they want to generate. The company says that it has improved its AI models' efficiency, claiming that its Lucid Origin image-generation model is now 5x faster and 30x cheaper, and its I2V image-to-video model is 7x faster and 17x cheaper. Canva AI 2.0 is launching in research preview this week, and the company plans to make it available to all users in the coming weeks.
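As a generic illustration of the plan-then-call-tools pattern described above (not Canva's actual implementation), here is a minimal Python sketch in which a hypothetical planner maps a request to tool calls, and each call produces an editable layer.

from typing import Callable

# Hypothetical tool registry; a real assistant exposes many more tools.
TOOLS: dict[str, Callable[..., dict]] = {
    "generate_image": lambda prompt: {"type": "image", "prompt": prompt},
    "add_text_layer": lambda text: {"type": "text", "content": text},
}

def plan_task(request: str) -> list[tuple[str, dict]]:
    """Hypothetical planner; a production assistant would ask an LLM for this plan."""
    return [
        ("generate_image", {"prompt": request}),
        ("add_text_layer", {"text": "Headline goes here"}),
    ]

def run_assistant(request: str) -> list[dict]:
    layers = []
    for tool_name, kwargs in plan_task(request):
        layers.append(TOOLS[tool_name](**kwargs))  # each tool call becomes an editable layer
    return layers

print(run_assistant("A poster for a weekend farmers market"))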

Meta raises Quest 3 and Quest 3S prices due to RAM shortage
Meta is raising the prices of its virtual reality headsets due to the rising cost of memory chips, the company announced on Thursday. Starting April 19, the price of the Meta Quest 3S (128GB) and Meta Quest 3S (256GB) will go up by $50 to $349.99 and $449.99, respectively. The price of the Meta Quest 3 is going up by $100 to $599.99. "We're making this change because the cost of building high-performance VR hardware has risen significantly," Meta wrote in its blog post. "The global surge in the price of critical components — specifically memory chips — is impacting almost every category of consumer electronics, including VR. To keep delivering the quality of hardware, software, and support you expect from the Quest platform, we need to adjust our pricing." Updated pricing will also apply to refurbished Meta Quest units, the company says, but all Meta Quest accessories will stay at their current prices. Meta is the latest tech company to raise hardware prices in response to the RAM shortage, joining peers like Samsung, Microsoft, and Sony.
