Latest AI News

Mercor says it was hit by cyberattack tied to compromise of open-source LiteLLM project
Mercor, a popular AI recruiting startup, has confirmed a security incident linked to a supply chain attack involving the open-source project LiteLLM. The AI startup told TechCrunch on Tuesday that it was “one of thousands of companies” affected by a recent compromise of the LiteLLM project, which was linked to a hacking group called TeamPCP. Confirmation of the incident comes as extortion hacking group Lapsus$ claimed it had targeted Mercor and gained access to its data. It’s not immediately clear how the Lapsus$ gang obtained the Mercor data stolen as part of TeamPCP’s cyberattack. Founded in 2023, Mercor works with companies including OpenAI and Anthropic to train AI models by contracting specialized domain experts such as scientists, doctors, and lawyers from markets including India. The startup says it facilitates more than $2 million in daily payouts and was valued at $10 billion following a $350 million Series C round led by Felicis Ventures in October 2025. Mercor spokesperson Heidi Hagberg confirmed to TechCrunch that the company had “moved promptly” to contain and remediate the security incident. “We are conducting a thorough investigation supported by leading third-party forensics experts,” said Hagberg. “We will continue to communicate with our customers and contractors directly as appropriate and devote the resources necessary to resolving the matter as soon as possible.” Earlier, Lapsus$ claimed responsibility for the apparent data breach on its leak site and shared a sample of data allegedly taken from Mercor, which TechCrunch reviewed. The sample included material referencing Slack data and what appeared to be ticketing data, as well as two videos purportedly showing conversations between Mercor’s AI systems and contractors on its platform. Hagberg declined to answer follow-up questions on whether the incident was connected to claims by Lapsus$, or whether any customer or contractor data had been accessed, exfiltrated, or misused.
The compromise of LiteLLM originally surfaced last week after malicious code was discovered in a package associated with the Y Combinator-backed startup’s open-source project. While the malicious code was identified and removed within hours, the incident drew scrutiny due to LiteLLM’s widespread use around the internet, with the library downloaded millions of times per day, per security firm Snyk. The incident also prompted LiteLLM to make changes to its compliance processes, including shifting from controversial startup Delve to Vanta for compliance certifications. It remains unclear how many companies were affected by the LiteLLM-related incident or whether any data exposure occurred, as investigations continue.

OpenAI Raises $122 Bn at $852 Bn Valuation, Unveils AI Superapp
OpenAI said it is generating $2 billion in revenue per month.

With Sora Gone, Google Launches Cheaper Video Model
Google Veo 3.1 Lite costs less than half as much as Veo 3.1 Fast.

India’s National Informatics Centre Risks Missing the AI Moment
From shared AI infrastructure to cloud redesign, NIC must rethink systems, scale, and governance to stay central to India’s AI push.

OpenAI, not yet public, raises $3B from retail investors in monster $122B fund raise
OpenAI has closed a deal to raise $122 billion at an $852 billion valuation, its largest funding round to date as the company is expected to hit the public markets this year. The round will add to OpenAI’s war chest as it spends enormous amounts of money on AI chips, data center buildouts, and hiring top talent. SoftBank co-led the round alongside Andreessen Horowitz, D.E. Shaw Ventures, MGX, TPG, and T. Rowe Price Associates, with participation from Amazon, Nvidia, and Microsoft. About $3 billion came from individual investors via bank channels. OpenAI is also going to be included in several ETFs managed by ARK Invest, giving more people access to the private company’s stock to broaden its shareholder base in advance of its reportedly upcoming IPO. OpenAI also said it expanded its revolving credit facility to about $4.7 billion, supported by several of the top global banks. The facility remains undrawn, the company said, which suggests it’s bolstering its financial flexibility as it ramps spending on compute and infrastructure, rather than responding to near-term liquidity needs. The company’s press release on the raise reads less like a typical blog post than a draft of an S-1; it’s heavy on the flywheel metaphors, digs into revenue per compute unit, and offers the kind of TAM-justifying language that institutional investors drool over. OpenAI included updates on revenue and user numbers, claiming it’s generating $2 billion in revenue per month and taking a shot at competitors: “At this stage, we are growing revenue four times faster than the companies who defined the Internet and mobile eras, including Alphabet and Meta.” The company also said it has more than 900 million weekly active users in consumer AI and over 50 million subscribers, with search usage nearly tripling in the last year.
OpenAI said its ads pilot is bringing in more than $100 million in annual recurring revenue in under six weeks, opening up a serious potential revenue stream for the company that built its user base without ads. The AI giant claims momentum is mirrored on the business side, which now makes up 40% of its revenue (up from around 30% last year) and is “on track to reach parity with consumer by the end of 2026.” Its growth across agentic workflows, the company said, is driven by its newest model GPT-5.4. Finally, OpenAI also called itself an “AI superapp,” making it clear that it wants to own the primary interface for how people use AI. All of it adds up to a single message: OpenAI is building its public market narrative in real time, and this round is as much about anchoring IPO expectations as it is about the capital itself.

Salesforce announces an AI-heavy makeover for Slack, with 30 new features
Salesforce, the cloud software giant, has been remaking its business around AI, and at a small gathering in San Francisco on Tuesday, CEO Marc Benioff and his team unveiled the latest results of those efforts: an updated version of Slack, with a plethora of new AI features. The most significant of these is a serious glow-up for its AI agent, Slackbot. The 30 new features, which will be available in the coming months, follow a January update that gave Slackbot agentic capabilities — including the ability to draft emails, schedule meetings, and sift through your inbox for specific information. Perhaps the most notable feature announced Tuesday is what the company calls reusable AI-skills — which allow users to define specific tasks for Slackbot that, once created, can be applied in a variety of different scenarios and contexts. Slackbot comes with a built-in library of AI-skills, Salesforce says, but users can also create their own custom versions. Once these skills are set up, they significantly reduce the work an employee might need to do. For example, a user can trigger a skill using a simple command in Slack — say, “create a budget” for an upcoming event — prompting Slackbot to pull together all relevant information from a company’s Slack channels, as well as any connected apps or data sources, to create an actionable plan. The bot will then automatically set up a meeting to discuss the plan, inviting relevant employees based on their titles. Slackbot now also functions as an MCP (Model Context Protocol) client — meaning it can connect to and coordinate with outside services and tools. Among those is Agentforce, Salesforce’s AI agent development platform launched in 2024. Through that connection, it can “route work or prompt questions to Agentforce or any agent or app in your enterprise,” the company says, with the agent finding the most relevant and efficient path for the information, without human intervention.
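Conceptually, a reusable AI-skill is a named task template bound to a trigger phrase that can then run against whatever context the user supplies. A minimal sketch of the idea in Python — the names, decorator, and dispatch logic here are illustrative assumptions, not Slack’s actual API:

```python
# Illustrative sketch of a "reusable skill" registry: a trigger phrase maps
# to a task function that can be applied in any context. Hypothetical names.
from typing import Callable

SKILLS: dict[str, Callable[[str], str]] = {}

def skill(trigger: str):
    """Register a reusable skill under a chat trigger phrase."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        SKILLS[trigger] = fn
        return fn
    return register

@skill("create a budget")
def create_budget(context: str) -> str:
    # A real agent would pull data from channels and connected apps;
    # this stub just acknowledges the plan for the given context.
    return f"Budget plan drafted for: {context}"

def handle_command(message: str) -> str:
    """Route an incoming chat message to the first matching skill."""
    for trigger, fn in SKILLS.items():
        if message.lower().startswith(trigger):
            return fn(message[len(trigger):].strip(" :,"))
    return "No matching skill."

print(handle_command("create a budget: Q3 offsite"))
# → Budget plan drafted for: Q3 offsite
```

The point of the pattern is that the skill is defined once and the trigger carries it into arbitrary contexts, which is what distinguishes it from a one-off bot command.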
According to Rob Seaman, Slack’s interim CEO and former chief product officer, Slackbot can also now transcribe meetings and summarize them. If a meeting participant happens to zone out, thus missing critical details, they can just ask Slackbot to produce a recap of the meeting, including any action items assigned to them. The agent can also now operate outside of Slack and monitor your desktop activities — Salesforce lists “your deals, your conversations, your calendar, and your habits” as the kinds of data it draws on. Based on that context, the bot will make actionable suggestions or draft follow-ups for critical tasks. Seaman has said that privacy protections are built into this design and that users have the ability to adjust permissions as needed. In short: Salesforce is clearly trying to take Slack beyond its roots as an enterprise communication tool and position it as a more versatile platform that can handle a wider variety of business tasks. The hope seems to be that, by flooding it with AI, Slack can become an indispensable part of enterprise users’ core business processes. Benioff let his team walk through the major features on Tuesday but remarked, during his keynote, that the five years since Salesforce acquired Slack had been an “incredible journey,” one that had delivered “two and a half times revenue growth.” He added: “We have about a million businesses running on Slack. It’s been a huge growth story.”

Anthropic is having a month
Anthropic has built its public identity around the winning idea that it’s the careful AI company. It publishes detailed research on AI risk, employs some of the best researchers in the field, and has been vocal about the responsibilities that come with building such powerful technology — so vocal, of course, that it’s right now battling it out with the Department of Defense. On Tuesday, alas, someone there forgot to check a box. It is, notably, the second time in a week. Days earlier, Fortune reported that Anthropic had accidentally made nearly 3,000 internal files publicly available, including a draft blog post describing a powerful new model the company had not yet announced. Here’s what happened on Tuesday: When Anthropic pushed out version 2.1.88 of its Claude Code software package, it accidentally included a file that exposed nearly 2,000 source code files and more than 512,000 lines of code — essentially the full architectural blueprint for one of its most important products. A security researcher named Chaofan Shou noticed almost immediately and posted about it on X. Anthropic’s statement to multiple outlets was nonchalant as these things go: “This was a release packaging issue caused by human error, not a security breach.” (Internally, we’d guess things were less measured.) Claude Code isn’t a minor product. It’s a command-line tool that lets developers use Anthropic’s AI to write and edit code and has become formidable enough to unsettle rivals. According to the WSJ, OpenAI pulled the plug on its video generation product Sora just six months after launching it to the public to refocus its efforts on developers and enterprises — partly in response to Claude Code’s growing momentum. What leaked was not the AI model itself but the software scaffolding around it — the instructions that tell the model how to behave, what tools to use, and where its limits are.
Developers began publishing detailed analyses almost immediately, with one describing the product as “a production-grade developer experience, not just a wrapper around an API.” Whether this turns out to matter in any lasting way is a question best left to developers. Competitors may find the architecture instructive; at the same time, the field moves fast. Either way, somewhere at Anthropic, you can imagine that one very talented engineer has spent the rest of the day quietly wondering if they still have a job. One can only hope it’s not the same engineer, or engineering team, from earlier this week.
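The failure mode Anthropic described — a stray file shipped inside a release artifact — is the kind of thing a pre-publish manifest audit can catch mechanically. A minimal sketch of such a check (file names and the allowlist are hypothetical, not Anthropic’s tooling):

```python
# Hypothetical pre-release check: list every file inside a built package
# archive and flag anything outside an expected allowlist, so a stray
# source dump can't ship with a release by accident.
import io
import tarfile

ALLOWED_PREFIXES = ("pkg/cli/", "pkg/README.md", "pkg/LICENSE")

def unexpected_files(archive_bytes: bytes) -> list[str]:
    """Return archive members that fall outside the allowlist."""
    with tarfile.open(fileobj=io.BytesIO(archive_bytes)) as tf:
        return [m.name for m in tf.getmembers()
                if m.isfile() and not m.name.startswith(ALLOWED_PREFIXES)]

# Build a tiny in-memory archive to demonstrate the check.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tf:
    for name in ("pkg/cli/main.js", "pkg/internal/secrets.map"):
        data = b"stub"
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tf.addfile(info, io.BytesIO(data))

print(unexpected_files(buf.getvalue()))  # → ['pkg/internal/secrets.map']
```

Wiring a check like this into CI turns “human error” into a failed build rather than a public leak, which is presumably the sort of process change a post-incident review would recommend.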

Alexa+ gets new food ordering experiences with Uber Eats and Grubhub
Amazon’s latest upgrade to Alexa+, its next-generation AI assistant, allows you to order food from popular delivery services Uber Eats and Grubhub in a conversational manner, just as if you were chatting with a waiter at a restaurant or placing an order at a drive-thru, according to Amazon. The aim of this new Alexa+ feature is to create a more natural and interactive ordering experience. Starting today, users can request a cuisine, explore menu options, ask questions, and customize their meal all within a single conversation. If users change their mind halfway through, want to add a dessert, or adjust quantities, they can make those changes instantly. To activate the feature, users need to link their Grubhub or Uber Eats account through the Alexa app. Once linked, previous orders will sync automatically, making it easy to reorder favorite meals or discover new restaurants. Users can then say something like, “I want to order Italian for delivery,” and Alexa+ will guide them to various restaurant options. Once the order is set, users get a comprehensive summary of the items in their cart, including quantities and prices. This new food ordering experience is starting to roll out to Alexa+ customers with Echo Show 8 devices and larger. Amazon explains that this development is a significant advancement in the company’s goal to establish adaptive interaction models. This initiative lays the groundwork for expanding similar features into other areas, such as grocery shopping and travel arrangements. The Alexa+ upgrade comes amid the broader implications of AI in the food industry. Fast food chains have already begun using AI assistants at drive-thrus, but challenges remain, especially around order accuracy. In 2024, McDonald’s paused its initiative after a few missteps, such as an AI cashier accidentally adding nine sweet teas to a customer’s order. Additionally, Taco Bell faced its share of mishaps, with viral videos showcasing its AI making errors.
Since the rollout of Alexa+ in the U.S. and its recent expansion to the U.K., the features continue to grow, including new personality styles for Alexa, such as a “Sassy” option designed for adults, as well as other styles like Brief, Chill, and Sweet.

Yupp.ai shuts down after raising $33M from a16z crypto’s Chris Dixon
Sometimes an apparently good idea, a big raise from a big-name VC, and a sea of well-connected angel investors is not enough. Less than a year after launching, Yupp.ai is closing its business, co-founders Pankaj Gupta and Gilad Mishne announced on Tuesday. Yupp offered a crowdsourced AI model-picking service. It allowed consumers to test and compare results from 800 AI models for free, including the state-of-the-art ones from OpenAI, Google, and Anthropic. Yupp would return multiple replies to each prompt, including information or images, and users would offer feedback on which models worked best for them and why. The idea was to generate anonymized data on what people actually need from AI that the model makers would then pay for. Yupp said it signed up 1.3 million users and collected millions of preferences every month. It even had a leaderboard. The company said it also had a few AI labs as customers. But alas, “we didn’t reach a strong enough product-market fit” to survive, in part because AI models improved by such leaps and bounds these past few months, the founders said. While labs are paying big bucks for feedback, the current model — pioneered by companies like Scale AI and Mercor — is to hire specialty experts, like PhDs, and tuck them into the reinforcement learning loop. On top of that, Silicon Valley is already looking 10 miles down the road, when AI is built for, and being used by, other AIs. Model makers might want some consumer feedback now, but they are largely building for the day when agents, not humans, rule the online world. “The AI model capability landscape has changed dramatically in the last year alone and will continue to change quickly,” Gupta, Yupp.ai’s CEO, wrote in a post on X about the plans to shutter. “The future is not just models but agentic systems.” Yupp.ai raised a $33 million seed round in 2024 led by a16z crypto’s Chris Dixon, a giant seed round for its day.
In addition, Yupp.ai raised checks from more than 45 angels and small investors, it said. This included luminaries like Google DeepMind chief scientist Jeff Dean; Twitter co-founder Biz Stone; Pinterest co-founder Evan Sharp; and Perplexity CEO Aravind Srinivas. Gupta said some of Yupp’s employees are joining a “well known” AI company, and others are looking for their next gig. Yupp.ai did not immediately respond to TechCrunch’s request for comment.
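The article doesn’t say how Yupp computed its leaderboard, but arena-style rankings built from pairwise user preferences are commonly maintained with an Elo-style update, where each “model A beat model B” vote nudges the two ratings toward the observed outcome. A minimal sketch under that assumption:

```python
# Elo-style rating update from pairwise preference votes. This is a
# generic arena-leaderboard sketch, not Yupp's disclosed method.
def elo_update(r_win: float, r_lose: float, k: float = 32.0):
    """Return updated (winner, loser) ratings after one preference vote."""
    expected_win = 1.0 / (1.0 + 10 ** ((r_lose - r_win) / 400.0))
    delta = k * (1.0 - expected_win)  # smaller gain for expected wins
    return r_win + delta, r_lose - delta

ratings = {"model_a": 1000.0, "model_b": 1000.0}
# Three users prefer model_a's answer over model_b's.
for _ in range(3):
    ratings["model_a"], ratings["model_b"] = elo_update(
        ratings["model_a"], ratings["model_b"])

print({m: round(r, 1) for m, r in ratings.items()})
```

The appeal of this scheme for a crowdsourced service is that individual noisy votes aggregate into a stable ranking, and successive upsets move ratings more than expected wins do.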

With its new app store, Ring bets on AI to go beyond home security
With more than 100 million cameras now in the field, Amazon-owned Ring is ready to take advantage of its sizable footprint with the launch of a new app store that will expand its cameras’ capabilities. Focused initially on areas like elder care, workforce analytics, rental management, and more, the store will allow developers of all sizes to tap into Ring’s ecosystem to reach customers. First announced at the Consumer Electronics Show in January, the app store arrives alongside Ring’s expansion beyond smart doorbells and cameras for people’s homes to those aimed at businesses. But the new store is also enabled by the leaps being made in AI technology, which can take advantage of Ring’s ability to see and hear things in the real world and translate that for users in specific situations. For instance, one launch partner, the SoftBank-backed company Density, has an app called Routines focused on elder care, which can leverage Ring cameras to help families keep an eye on their loved ones, like their aging parents, and be alerted to concerns like falls or changes in routines. An app from QueueFlow can help businesses better understand what wait times and congestion are like at any place where people need to wait their turn, like events, restaurants, service desks, waiting rooms, and more. An app from Minut can help Airbnb hosts monitor their accommodations, which is tied to its other camera-less sensors that track things like excessive noise and temperature. The idea, explains Ring founder and CEO Jamie Siminoff, is to expand the capabilities of what Ring cameras can do beyond providing home security. “With AI, there’s just an incredible amount of long tail use cases,” he told TechCrunch. “We are unlocking value that our customers have invested in, in things that…all of us together never thought we could do.” However, there will be areas that are restricted, given the growing consumer backlash against surveillance technology, which has also impacted Ring.
After the company launched features that could find lost pets or watch for wildfires, customers became aware of how much these cameras could do — and how that could lead to a world where people couldn’t go anywhere without being tracked, recorded, and potentially even recognized by AI-powered camera systems. Aware of the potential for similar bad PR with its app store, Siminoff notes that the terms will not permit apps that offer certain types of privacy-invasive features, like facial recognition tools or license plate readers. “We’re trying to be careful to make sure that it is being used for…apps that deliver value to the customer,” he said of the Ring app store. “Certainly, we have to listen to what’s happening out in the market and the scrutiny.” Following the backlash from customers, Ring canceled its partnership with Flock Safety, a maker of AI-powered cameras that share footage with law enforcement. The partnership would have allowed agencies using Flock to request footage from Ring doorbell and camera owners. Ring itself has a long history of sharing data with police, and has received criticism from privacy advocates in recent months for new partnerships with law enforcement and companies like Axon. Ring’s new app store will be discoverable within the Ring app for iOS and Android devices, and will initially be limited to customers in the U.S. before rolling out more broadly. However, adding apps to your Ring set-up won’t involve using the platform’s in-app purchase payment systems. That means Ring won’t be paying Apple or Google commissions when customers decide to expand their Ring experience with a partner’s tools. Siminoff says this is because Ring isn’t the one actually distributing the apps — users will still likely need to download the partner’s app from the app store to access the new functionality. Meanwhile, the Ring app itself isn’t changing to incorporate the partners’ new features.
Still, this represents an interesting way to build an app ecosystem that’s outside the phone’s app stores, while still benefiting from Ring’s distribution on iOS and Android. “It’s not just that Ring is doing an app store. It’s that Ring has a lot of cameras out there, and so therefore it is a big enough surface area that if [developers] do write something, [they] can get a decent number of customers and have a hopefully successful business,” Siminoff said. In terms of monetization, when Ring directs a customer to one of its partners, it will be taking a commission on those sales. For now, that’s a 10% fee, but Ring says it’s open to apps offering other business models beyond subscriptions, like one-time fees or even free, ad-supported apps, if that’s something customers actually want. At launch, there are around 15 apps available, but many more are in the pipeline, the company said. Developers are able to submit their apps for consideration through Ring’s developer site. Other apps available now include a bird-identification app, WhatsThatBird.AI; a risk and security detection app (for fires, smoke, falls, leaks, etc.), memories.ai; an app for businesses offering alerts and people counting, Lumeo; a lawn health monitoring app, LawnWatch; loitering detection for businesses, ProxView; a traffic and line monitoring app, StoreTraffic; package delivery tracking from Package Protect; and Amazon’s own app, Cheer Chime, which chimes when a person tips at checkout. “I would say that the goal by the end of the year is that there’s hundreds of apps in tens of verticals,” Siminoff said.

Exclusive: Runway launches $10M fund, Builders program to support early stage AI startups
Runway is moving beyond building AI video models and into shaping what gets built on top of them. The AI video generation startup has launched a $10 million venture fund to invest in early-stage companies building across AI, media, and world simulation, the company’s founders told TechCrunch. It’s also rolling out a Builders program offering seed through Series C startups free API credits, a move that suggests Runway wants to create an ecosystem around what it calls “video intelligence.” Runway has become one of the leading players in AI video generation, with its tools used across film, advertising, and marketing. But with the launch of its “general world models” last December, the company is now pushing beyond creative tooling into broader applications. And it’s looking to tap startups as a way to explore use cases it can’t pursue alone. “We think that through video, we’re going to get to video intelligence, and it’s going to open a wider set of use cases in different industries that we can’t double down on today, but that maybe we can support with our research,” Alejandro Matamala-Ortiz, Runway’s co-founder and chief innovation officer, told TechCrunch. Runway’s thesis for the fund is divided into three buckets. For the past year and a half, Runway has quietly backed a handful of early-stage founders and companies, Matamala-Ortiz said. Those include LanceDB, which builds databases for AI applications, and Tamarind Bio, which uses AI to design new proteins for drug discovery. Some startups, like real-time audio generation firm Cartesia, are working on products that complement its own. “The next generation of AI models will be built on multimodal data – video, audio, images, text together,” Chang She, co-founder and CEO of LanceDB, told TechCrunch in a statement.
“LanceDB is building the infrastructure layer that makes that possible, and Runway is one of the few investors who understands why that matters.” Runway has raised close to $860 million to date from backers like Nvidia and Qatar Investment Authority, and is valued at around $5.3 billion post-money. It seeded the $10 million fund with existing investors and close partners, with plans to write checks of up to $500,000 for pre-seed and seed-stage startups. Runway isn’t the only AI startup that’s turning around to invest in companies just starting out on their journeys. OpenAI is the OG with its Startup Fund, and AI search startup Perplexity launched its own $50 million venture fund last year for seed-stage startups. CoreWeave also launched CoreWeave Ventures in September to back AI companies. “Many companies like ours are investing heavily on the primitives that will unlock a new set of applications or new types of companies,” Matamala-Ortiz said. “Companies like ours that are still fairly small with only 150 people can’t focus on everything. But we do see opportunities in partnering very early with new teams that can benefit from what we’re doing.” That same philosophy is what is driving Runway’s new program for builders. Eligible early-stage startups can start applying for the program to get 500,000 API credits and access to Characters, Runway’s recently released real-time video agent API that’s powered by its new family of general world models. Characters lets users interact with generative AI agents in real time, giving them a face and a voice that can range from cartoonish to photorealistic. The Builders program is designed, in part, to see what startups build with the technology. “Until [recently], we didn’t have the possibilities of talking to a real-time video agent, so we are really trying to see which teams see the potential and positive impacts of this technology,” Matamala-Ortiz said.
The program is already live, with a founding cohort that includes Cartesia, MSCHF, Oasys Health, Spara, Subject, and Supersonik. They’re using Characters to power things like AI customer support agents, interactive brand characters, personalized onboarding experiences, real-time sales assistants, and synthetic media tools. Matamala-Ortiz said he’s excited about the potential for telemedicine and education. And since entertainment is Runway’s bread and butter, Matamala-Ortiz said he expects Characters to be used in gaming and new kinds of entertainment experiences. “This is part of our general world models, which is what we’re pushing for next: a set of models that are interactive, real-time, and immersive,” Matamala-Ortiz said. “When you start combining all of these pieces, you can imagine that you will be able to generate and simulate entire environments, and participate and have conversations with the characters in these worlds.” Other startups like Inworld and Charisma are also building interactive AI characters for games and storytelling, while companies like StoReel are experimenting with AI-generated shows users can engage with directly. Some, like Character AI, are already popular for their AI characters you can talk to. “We do really believe that there’s a new kind of internet that’s going to be more personalized, more immersive, and in real-time,” Matamala-Ortiz said. Correction: An earlier version of this article misstated the title and surname of Alejandro Matamala-Ortiz. He is the Chief Innovation Officer, not the Chief Design Officer. Additionally, his last name is hyphenated; he should be referred to as Matamala-Ortiz, not Ortiz.

Nomadic raises $8.4 million to wrangle the data pouring off autonomous vehicles
To build the autonomous machines of the future, sometimes your model needs a model. Companies developing self-driving cars, robots manipulating the physical environment, or autonomous construction equipment collect thousands, if not millions, of hours of video data for evaluation and training. Organizing and cataloging that video is now a job for humans, who have to watch all of it. Even fast-forwarding, that doesn’t scale. NomadicML, a startup founded by CEO Mustafa Bal and CTO Varun Krishnan, wants to solve problems for customers who have 95% of their fleet data sitting in archives. The challenge becomes harder when looking for edge cases — the most valuable data depicts events that rarely occur and can befuddle inexperienced physical AI models. Nomadic is working to solve that problem with a platform that turns footage into a structured, searchable dataset through a collection of vision language models. That, in turn, allows for better fleet monitoring and the creation of unique datasets for reinforcement learning and faster iteration. The company announced an $8.4 million seed round Tuesday at a post-money valuation of $50 million. The round was led by TQ Ventures, with participation from Pear VC and Jeff Dean, and will allow the company to onboard more customers and continue refining its platform. Nomadic also won first prize at Nvidia GTC’s pitch contest last month. The two founders, who met as Harvard computer science undergrads, “kept running into the same technical challenges again and again at our jobs” at companies like Lyft and Snowflake, Bal told TechCrunch. “We are providing folks insight on their own footage, whatever drives their own AVs [and] robots,” he said. “That is what moves these autonomous systems builders forward, not random data.” Imagine, for example, trying to fine-tune an AV’s understanding that it can run a red light if a police officer is directing it to do so, or isolating every time that vehicles drive under a specific type of bridge.
Nomadic’s platform allows these incidents to be identified both for compliance purposes, and to be fed directly into training pipelines. Customers like Zoox, Mitsubishi Electric, Natix Network, and Zendar are already using the platform to develop intelligent machines. Antonio Puglielli, the VP of Engineering at Zendar, said that Nomadic’s tool allowed the company to scale up its work much faster than the alternative of outsourcing, and that its domain expertise set it apart from other competitors. This kind of model-based, auto-annotation tool is emerging as a key workflow for physical AI. Established data labeling firms like Scale, Kognic, and Encord are developing AI tools to do this work, while Nvidia has released a family of open source models, Alpamayo, that can be adapted to tackle the problem. Krishnan argues that his company’s tool is more than a labeler; it is an “agentic reasoning system: you describe what it needs and it figures out how to find it,” using multiple models to understand action taking place and put it in context. Nomadic’s backers expect the startup’s focus on this specific infrastructure to win out. “It’s the same reason Salesforce doesn’t build its own cloud and Netflix doesn’t build its own [content distribution facilities],” Schuster Tanger, a partner at TQ Ventures who led the round, told TechCrunch. “The second an autonomous vehicle company tries to build Nomadic internally, they’re distracted from what makes them win, which is the robot itself.” Tanger praises Nomadic’s talent, noting that Krishnan is an international chess master ranked as the world’s 1,549th-best player. Krishnan, meanwhile, brags that all of the company’s dozen or so engineers have published scientific papers. Now, they’re hard at work developing specific tools, like one that understands the physics of lane changes from camera footage, or another that derives more precise locations for a robot’s grippers in a video.
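The core data-structure idea behind turning footage into a searchable dataset is simple even if the models doing the tagging are not: once a vision language model has emitted event labels per clip, an inverted index makes rare edge cases queryable. A minimal sketch (clip IDs and tags are invented for illustration; this is not Nomadic’s actual pipeline):

```python
# Illustrative sketch: index model-generated event tags per video clip
# so rare edge cases can be retrieved by label combination.
from collections import defaultdict

def build_index(clips: dict[str, list[str]]) -> dict[str, list[str]]:
    """Invert {clip_id: [tags]} into {tag: [clip_ids]}."""
    index: dict[str, list[str]] = defaultdict(list)
    for clip_id, tags in clips.items():
        for tag in set(tags):  # dedupe repeated tags within a clip
            index[tag].append(clip_id)
    return index

clips = {
    "drive_001": ["red_light", "pedestrian"],
    "drive_002": ["officer_directing_traffic", "red_light"],
    "drive_003": ["bridge_underpass"],
}
index = build_index(clips)

# The edge case from the article: runs where an officer waved the
# vehicle through a red light.
matches = set(index["officer_directing_traffic"]) & set(index["red_light"])
print(sorted(matches))  # → ['drive_002']
```

At fleet scale the index would live in a database and the tags would come from VLM inference rather than a hand-written dict, but the retrieval pattern — intersecting label sets to isolate rare co-occurring events — is the same.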
The next challenge, from the point of view of Nomadic and its customers, is to develop similar tools for non-visual data like lidar sensor readings, or to integrate sensor data across multiple modes. “Juggling around terabytes of video, slamming that against hundreds of 100 billion-plus parameter models, and then extracting their accurate insights, is really insanely difficult,” Bal said.
