Nomadic raises $8.4 million to wrangle the data pouring off autonomous vehicles


9:33 PM IST · March 31, 2026


To build the autonomous machines of the future, sometimes your model needs a model. Companies developing self-driving cars, robots that manipulate the physical environment, or autonomous construction equipment collect thousands, if not millions, of hours of video data for evaluation and training. Organizing and cataloging that video is currently a job for humans, who have to watch all of it. Even fast-forwarding, that doesn’t scale.

NomadicML, a startup founded by CEO Mustafa Bal and CTO Varun Krishnan, wants to solve problems for customers who have 95% of their fleet data sitting in archives. The challenge becomes harder when looking for edge cases — the most valuable data depicts events that rarely occur and can befuddle inexperienced physical AI models. Nomadic is working to solve that problem with a platform that turns footage into a structured, searchable dataset using a collection of vision language models. That, in turn, allows for better fleet monitoring and the creation of unique datasets for reinforcement learning and faster iteration.

The company announced an $8.4 million seed round Tuesday at a post-money valuation of $50 million. The round was led by TQ Ventures, with participation from Pear VC and Jeff Dean, and will allow the company to onboard more customers and continue refining its platform. Nomadic also won first prize at Nvidia GTC’s pitch contest last month.

The two founders, who met as Harvard computer science undergrads, “kept running into the same technical challenges again and again at our jobs” at companies like Lyft and Snowflake, Bal told TechCrunch. “We are providing folks insight on their own footage, whatever drives their own AVs [and] robots,” he said. “That is what moves these autonomous systems builders forward, not random data.”

Imagine, for example, trying to fine-tune an AV’s understanding that it can run a red light if a police officer is directing it to do so, or isolating every time that vehicles drive under a specific type of bridge.
Nomadic’s platform allows these incidents to be identified both for compliance purposes and to be fed directly into training pipelines. Customers like Zoox, Mitsubishi Electric, Natix Network, and Zendar are already using the platform to develop intelligent machines. Antonio Puglielli, the VP of engineering at Zendar, said that Nomadic’s tool allowed the company to scale up its work much faster than the alternative of outsourcing, and that its domain expertise set it apart from competitors.

This kind of model-based auto-annotation tool is emerging as a key workflow for physical AI. Established data labeling firms like Scale, Kognic, and Encord are developing AI tools to do this work, while Nvidia has released a family of open source models, Alpamayo, that can be adapted to tackle the problem. Krishnan argues that his company’s tool is more than a labeler; it is an “agentic reasoning system: you describe what it needs and it figures out how to find it,” using multiple models to understand the action taking place and put it in context.

Nomadic’s backers expect the startup’s focus on this specific infrastructure to win out. “It’s the same reason Salesforce doesn’t build its own cloud and Netflix doesn’t build its own [content distribution facilities],” Schuster Tanger, a partner at TQ Ventures who led the round, told TechCrunch. “The second an autonomous vehicle company tries to build Nomadic internally, they’re distracted from what makes them win, which is the robot itself.”

Tanger praises Nomadic’s talent, noting that Krishnan is an international chess master ranked as the world’s 1,549th-best player. Krishnan, meanwhile, brags that all of the company’s dozen or so engineers have published scientific papers. Now, they’re hard at work developing specific tools, like one that understands the physics of lane changes from camera footage, or another that derives more precise locations for a robot’s grippers in a video.
The next challenge, from the point of view of Nomadic and its customers, is to develop similar tools for non-visual data like lidar sensor readings, or to integrate sensor data across multiple modalities. “Juggling around terabytes of video, slamming that against hundreds of 100 billion-plus parameter models, and then extracting their accurate insights, is really insanely difficult,” Bal said.


Latest AI News

We’re feeling cynical about xAI’s big deal with Anthropic


Anthropic and xAI announced a big partnership this week, with Anthropic buying all the compute capacity at xAI’s Colossus 1 data center in Tennessee. On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed what the deal might mean for xAI’s parent company SpaceX, as SpaceX prepares to go public and apparently plans to dissolve xAI as a separate organization.

Kirsten did her best to offer “a positive view” on the partnership — after all, it’s a new way for xAI to make money. But she also noted that this suggests xAI isn’t doing much when it comes to training its own frontier AI models, and it’s harder for the company to position itself as a “forward-looking, innovative” business when that’s the case. Then Sean asked: “Why be positive when you can be cynical?” In his view, this seems like “a major heat check before the IPO.” Yes, becoming a neocloud might be “a more believable business in the near term,” but it’s less likely to get outside investors excited in the long term. (And then there’s the environmental lawsuit that xAI is facing over Colossus 1.)

Keep reading for a preview of our conversation, edited for length and clarity.

Sean O’Kane: I always love a surprise, especially when everybody’s eyes [are] on another ball, a major trial that’s happening. Seemingly out of nowhere this week, SpaceX and therefore its AI subsidiary xAI — which apparently no longer exists now, or is imminently about to not exist, which we can get to — struck a deal with Anthropic. Basically, the real version of the deal is that Anthropic is essentially taking over all of the compute at the data center known as Colossus 1 in Memphis, Tennessee, to focus on Anthropic’s more enterprise-focused AI products. There’s been a lot of reporting about how [Anthropic’s] been looking for more compute […] and it seems like an escape valve for them to be able to strike this deal and get access to all this compute.
In the near term, for xAI and for SpaceX, yes, they are a neocloud now, in the sense that they had to do something with all this compute that they were building, because it certainly seems like they were not going to need it for Grok — which, outside of X, is not burning up the world as far as becoming the new hot consumer chatbot.

Kirsten Korosec: And we should say, in terms of what a neocloud is, for those who don’t know: this is the idea of buying GPUs from Nvidia and the like, and renting those out as opposed to using them for your own AI, training your own AI models. So this is a different kind of business, and the point that our AI editor, Russell Brandom, makes is that a lot of companies are building out data centers, but if given a choice between renting them out [or using them to train their own models], they are still prioritizing using this compute for their own internal AI model training. I think that’s an important point, and one that suggests that maybe xAI isn’t doing so much on the AI model training [side].

Anthony Ha: Right, and as Sean was alluding to, most people would not necessarily think of Grok as — not only is it known for some pretty unpleasant, if not downright illegal, content, but it’s also not necessarily super cutting edge. Especially if we start talking about enterprise AI, which I know we’re gonna be getting into later in this episode, you don’t hear a lot about people using Grok for work-critical tasks. And so the question becomes: How can xAI actually make money? And apparently just selling the infrastructure could be one of the main ways to do it.

Kirsten: And you could take a positive view on that, right? They figured out a way to make money. But I think that when you are positioning your company — in this case, SpaceX-slash-xAI — as a forward-looking, innovative company, that’s tougher to sell if you are simply just renting out your GPUs and not using them for that innovation.
Sean: But why be positive when you can be cynical? Which is to say that this seems like a major heat check before the IPO that we’re about to see get rammed into the markets with SpaceX. Anthony, you mentioned not only is Grok not being used for big enterprise tasks, there’s been reporting that xAI employees were using other models — they weren’t even using [Grok] internally — and that caused this big shakeup inside of xAI, post-acquisition from SpaceX, that involved essentially all the co-founders leaving other than Elon Musk, [and] him basically saying he’s starting from scratch on xAI, despite the fact that SpaceX paid $250 billion for it in the run-up to this mega-IPO. And now he’s saying that they’re going to dissolve xAI as a separate entity inside SpaceX altogether. He’s starting to call the whole thing SpaceXAI, because this man loves nothing but to ruin a brand that has some value to it — see Twitter.

This may be a more believable business in the near term, and so on some level, I could see this being maybe more attractive to investors come IPO time, because it’s a bit more reliable and certainly more real than them being a frontier lab developer. But it’s also not the kind of business that’s going to draw the same — at least, in a normal environment — outside investment that we’re seeing go into all the frontier labs. That’s maybe one of the biggest tension points we’ve seen develop during this IPO process.


‘We Have Swarms of Agents’: Yasmeen Ahmad on Google’s Future of Enterprise AI


Google has introduced Knowledge Catalog, a context engine to enhance data interpretation in multi-cloud environments.


How to Use Netflix's New AI Voice Search Feature: A Step-by-Step Guide


Netflix recently began rolling out a new way for viewers to search for shows and movies on its platform. While voice dictation already lets viewers search for content, it merely presents results based on keywords. The new native AI-based voice search tool, by contrast, will provide contextual search results, taking the intent of the user’s query into account. Currently available in beta to a small set of users, the tool is being tested as the streaming company asks users to provide feedback on how it can be refined, while also pointing out bugs and issues. The company has yet to announce when the stable version of the AI search tool will roll out to a wider global user base.


Voice AI in India is hard. Wispr Flow is betting on it anyway.


India’s internet users already rely heavily on voice notes, voice search, and multilingual messaging. Turning those habits into a scalable AI business, however, remains difficult because of the country’s linguistic complexity, mixed-language usage, and uneven monetization patterns. Wispr Flow is betting the opportunity is worth the challenge.

The Bay Area-headquartered startup, which builds AI-powered voice input software, says India is now its fastest-growing market, even though voice-based AI products remain early and fragmented in the South Asian nation. That growth has pushed Wispr Flow to expand more aggressively for Indian users, beginning with Hinglish — a hybrid mix of Hindi and English commonly spoken by locals. The startup is also planning broader multilingual voice support, a local hiring push, and, eventually, lower pricing as it looks to expand beyond white-collar users and into Indian households.

Earlier waves of voice technology in India — from digital assistants to WhatsApp voice notes — largely revolved around convenience. AI startups such as Wispr Flow are now betting that generative AI can turn those habits into a broader computing layer. To make the product more relevant for Indian users, Wispr Flow began beta testing a Hinglish voice model earlier this year and launched on Android — India’s dominant mobile operating system — after initially debuting on Mac and Windows before expanding to iOS in 2025.

Co-founder and CEO Tanay Kothari told TechCrunch that the startup initially saw adoption in India largely among white-collar professionals such as managers and engineers, but it’s increasingly seeing broader usage patterns emerge, including among students and older users being onboarded by younger family members. India has emerged as Wispr Flow’s second-largest market after the U.S. in terms of both users and revenue, Kothari said, with growth accelerating following the startup’s recent India-focused push.
The startup has seen faster growth since the rollout of Hinglish support, benefiting from the widespread habit among Indian users of mixing Hindi and English in everyday conversations, particularly as users began expanding beyond work-focused use cases into more personal communication. “The biggest thing is people are starting to use it more in personal apps,” Kothari said, pointing to messaging platforms such as WhatsApp and social media apps where users frequently switch between Hindi and English while speaking.

Wispr Flow, Kothari said, was growing about 60% month over month in India earlier this year, but growth accelerated to around 100% following its recent India launch campaign. The startup last month rolled out a broader marketing push in the country, including a launch video from Kothari and offline campaigns in Bengaluru aimed at introducing the product to more mainstream users.

Kothari told TechCrunch that Wispr Flow plans to expand its multilingual voice support over the next 12 months, allowing users to switch between English and other Indian languages beyond Hindi while speaking. In December, the startup introduced India-specific pricing at ₹320 (around $3.40) per month for annual plans, significantly lower than its standard $12 monthly pricing globally. The startup eventually wants to bring costs down even further — potentially to as low as ₹10–20 (around 10–20 cents) per month — as it looks to expand beyond white-collar and urban users. “I want every single person in the country to be able to use Wispr Flow, and that’s what we’re really building for,” Kothari said. “That’s going to happen slowly and steadily.”

Earlier this year, Wispr Flow hired Nimisha Mehta to lead its India operations as it looks to expand its local presence. Kothari told TechCrunch the startup plans to grow to around 30 employees in India over the next year, building out consumer growth, partnerships, and enterprise teams alongside existing engineering and support functions.
The startup currently has about 60 employees globally.

Wispr Flow is not alone in viewing India as a key market for voice-based AI products. Companies including ElevenLabs have highlighted India as an important growth market for some time. Similarly, local startups such as Gnani.ai, Smallest AI, and Bolna have continued attracting investor interest as voice-based AI tools gain wider adoption across consumer and business use cases. Nevertheless, turning voice AI into a mainstream consumer product in India remains challenging despite growing interest from startups and investors. “India is the ultimate stress test for voice AI,” Neil Shah, vice president of research at Counterpoint Research, told TechCrunch, adding that “linguistic, accent, and contextual friction” continue to slow wider adoption.

Data shared with TechCrunch by Sensor Tower shows Wispr Flow was downloaded more than 2.5 million times globally between October 2025 and April 2026, with India accounting for 14% of installs during the period, making India its second-largest market by downloads (after, as mentioned, the U.S.). India, however, contributed only around 2% of Wispr Flow’s in-app purchase revenue during the same period, according to Sensor Tower.

The startup also remains largely desktop-driven globally. Wispr Flow’s usage in India, Kothari said, is currently split roughly 50:50 between desktop and mobile, compared with an 80:20 desktop-heavy mix in the U.S. Kothari said Wispr Flow sees strong repeat usage, claiming roughly 70% retention after 12 months both globally and in India. The startup also employs two full-time linguistics PhDs as it continues refining multilingual voice models and expanding support for additional Indian language combinations.

