Latest AI News

The Death of OpenAI’s Sora Gives Birth to a New Hope
The shutdown makes it clear that even OpenAI can’t focus on everything. But does this ease the anxiety of startup founders?

As more Americans adopt AI tools, fewer say they can trust the results
Americans are increasingly turning to artificial intelligence to help with things like research, writing, school or work projects, and analyzing data — but they’re not exactly happy about it. Even as AI use and adoption rise, Americans continue to lack trust in the new tool, according to a Quinnipiac University poll published Monday. Of the nearly 1,400 Americans surveyed, more than three-quarters said they don’t trust AI — 76% say they trust it rarely or only sometimes, compared to just 21% who trust it most or almost all of the time. That comes even as an increasing number of Americans adopt AI in their daily lives; only 27% said they’ve never used AI tools, down from 33% in April 2025. “The contradiction between use and trust of AI is striking,” said Chetan Jaiswal, a computer science professor at Quinnipiac. “Fifty-one percent say they use AI for research, and many also use it for writing, work, and data analysis. But only 21 percent trust AI-generated information most or almost all of the time. Americans are clearly adopting AI, but they are doing so with deep hesitation, not deep trust.” Part of that lack of trust might come from a feeling of dread about the future AI will bring. The poll found only a paltry 6% were “very excited” about AI, while 62% were either not so excited or not at all excited. Those numbers are basically flipped when it comes to concern: 80% are either very concerned or somewhat concerned about AI, with millennials and baby boomers taking the mantle of most worried and Gen Z following not far behind. A solid majority (55%) say AI will do more harm than good in their day-to-day lives, while only a third say AI will do more good than harm, according to the poll. More people have negative views about AI compared to last year’s survey, according to the researchers — which may not be surprising after a year of Big Tech layoffs, life-ending AI psychosis cases, and energy-grid-straining data centers.
Americans across the board oppose building AI data centers in their communities, with 65% saying they wouldn’t want one built, primarily citing high electricity costs and water use. A majority (70%) think AI advancements will cut the number of job opportunities, whereas only 7% think AI will lead to more job opportunities. That’s a shift from last year, when 56% of Americans thought advancements in AI would lead to a decrease in jobs and 13% thought AI would increase job opportunities. Members of Gen Z, born between 1997 and 2008, are the most pessimistic, with 81% foreseeing a decrease in jobs. They’re not exactly imagining it, either. Entry-level job postings in the U.S. have sunk 35% since 2023, and AI leaders like Anthropic CEO Dario Amodei have warned that the tech will wipe out jobs. “Younger Americans report the highest familiarity with AI tools, but they are also the least optimistic about the labor market,” Tamilla Triantoro, a professor of business analytics and information systems at Quinnipiac, said in a statement. “AI fluency and optimism here are moving in opposite directions.” Interestingly, even though most Americans are worried about AI’s effect on the labor market as a whole, most don’t think it’s coming for their jobs specifically. Among employed Americans, 30% are concerned AI will make their jobs obsolete. Still, that’s up from 21% last year. “Americans are more worried about what AI may do to the labor market than about what it may do to their own jobs,” Triantoro said. “People seem more willing to predict a tougher market than to picture themselves on the losing end of that disruption — a pattern worth watching as the technology moves deeper into the workplace.” Perhaps a big reason Americans have trust issues with AI is that they don’t believe the companies behind the technology are telling the truth. Two-thirds of respondents said businesses aren’t doing enough to be transparent about their AI use.
That same percentage also says the government isn’t doing enough to regulate AI. The sentiment comes as states push to maintain their authority over AI rules, even as federal officials — including under Trump’s latest, largely light-touch AI framework — and industry leaders advocate for limiting state-level regulation. “Americans are not rejecting AI outright, but they are sending a warning,” Triantoro said. “Too much uncertainty, too little trust, too little regulation, and too much fear about jobs.”

Popular AI gateway startup LiteLLM ditches controversial startup Delve
LiteLLM, makers of a popular AI gateway used by millions of developers, has publicly announced that it is ditching compliance startup Delve and will redo its security certifications with another company and auditor. The announcement comes after LiteLLM’s open source version fell victim to some horrific credential-stealing malware last week. Prior to the incident, LiteLLM had obtained two security compliance certifications by hiring AI compliance startup Delve. Such certifications are intended to verify that a company has procedures in place to minimize potential incidents. Delve has been accused of misleading its customers about their true compliance by allegedly generating fake data and using auditors that rubber-stamped their reports. Delve’s founder has denied those allegations and offered free re-tests and audits to all of its customers. That denial encouraged the anonymous Delve whistleblower to double down, including releasing alleged receipts over the weekend. On Monday, LiteLLM CTO Ishaan Jaffer posted on X that his company will be using Delve competitor Vanta to re-certify and will find its own, independent third-party auditor to verify its compliance controls. After such a harsh week, LiteLLM is voting with its feet.

15% of Americans say they’d be willing to work for an AI boss, according to new poll
Would you trade your manager for a chatbot? A growing number of Americans are saying yes. According to a Quinnipiac University poll published Monday, 15% of Americans say they’d be willing to have a job where their direct supervisor was an AI program that assigned tasks and set schedules. Quinnipiac conducted the poll — which included questions about AI adoption, trust, and job fears — among 1,397 adults in the United States between March 19 and 23, 2026. Of course, the majority of respondents said they wouldn’t be willing to swap their human boss for an AI people manager. But the use of AI as a supervisor is gaining in popularity, even if AI isn’t yet directly steering entire teams of people. Companies like Workday have launched AI agents that can file and approve expense reports on employees’ behalf. Amazon has deployed new AI workflows to replace some of the responsibilities of middle management, laying off thousands of managers in the process. Engineers at Uber even built an AI model of CEO Dara Khosrowshahi to field pitches before meetings with their actual boss. Across organizations, AI is being used to replace layers of management in what some are calling “The Great Flattening.” Soon, we may start to see entire billion-dollar companies of one, with fully automated employees and executives. Americans are wary about what that means for their job prospects. The majority of respondents in Quinnipiac’s survey — 70% — said they believe advances in AI will lead to a decrease in the number of job opportunities for people. Among employed Americans, 30% were either very concerned or somewhat concerned that AI would make their job specifically obsolete.

Airtel’s Nxtra Raises $1 Bn to Build AI Data Centres
The company plans to scale capacity from about 300 MW to 1 GW in the coming years.

Qodo raises $70M for code verification as AI coding scales
As AI coding tools generate billions of lines of code each month, a new bottleneck is emerging: ensuring that software works as intended. Qodo, a startup building AI agents for code review, testing, and governance, is betting that verification will define the next phase of software development. The New York-headquartered startup has raised a $70 million Series B round led by Qumra Capital, bringing its total funding to $120 million. Maor Ventures, Phoenix Venture Partners, S Ventures, Square Peg, Susa Ventures, TLV Partners, Vine Ventures, Peter Welinder (OpenAI), and Clara Shih (Meta) also joined the round. Qodo is aiming to serve as a layer focused on improving trust in AI-generated code as enterprises accelerate adoption of tools like OpenClaw and Claude Code. Many are discovering that faster code output doesn’t necessarily translate into reliable or secure software. While most AI review tools focus on what changed, Qodo focuses on how code changes affect entire systems, factoring in organizational standards, historical context, and risk tolerance to help companies manage AI-generated code more confidently. Itamar Friedman, who previously co-founded Visualead and led the machine vision business at Alibaba (which acquired Visualead), founded Qodo in 2022. He told TechCrunch that two key moments in his career — his time at Mellanox, which was later acquired by Nvidia, and building Visualead — inspired him to start Qodo, just months before the launch of ChatGPT. At Mellanox, where he worked on automating hardware verification using machine learning, he realized that “generating systems and verifying systems require very different approaches (different tools, different thinking).” Later, at Alibaba’s Damo Academy, he saw AI evolve toward systems capable of reasoning over human language.
By 2021–2022, just ahead of GPT-3.5, it became clear to him that AI would generate a large share of the world’s content — especially code — reinforcing his view that code generation and verification would require fundamentally different systems. A recent survey shows that while 95% of developers don’t fully trust AI-generated code, only 48% consistently review it before committing, highlighting a gap between awareness and practice. “Code generation companies are largely built around LLMs. But for code quality and governance, LLMs alone aren’t enough,” Friedman said. “Quality is subjective. It depends on organizational standards, past decisions, and tribal knowledge. An LLM can’t fully understand that context. It’s like taking a great engineer from one company and asking them to review code at another — they lack the internal context.” Companies such as OpenAI and Anthropic are helping shape the broader AI narrative, including in adjacent areas like code review, but they are largely focused on building features rather than end-to-end solutions, Friedman explained. Although there are other startups in the space, many remain early stage and have yet to see widespread enterprise adoption, the CEO noted. Qodo is leaning into performance to stand out in a crowded market. The startup recently ranked No. 1 on Martian’s Code Review Bench, scoring 64.3% — more than 10 points ahead of the next competitor and 25 points ahead of Claude Code Review. The benchmark highlights its ability to catch tricky logic bugs and cross-file issues without overwhelming developers with noise. In the past month, it has launched Qodo 2.0, a multi-agent code review system now leading current benchmarks, and introduced tools that learn each organization’s definition of code quality. The company is already working with major enterprises such as Nvidia, Walmart, Red Hat, Intuit, and Texas Instruments, as well as high-growth firms like Monday.com and JFrog.
“Every year has had a defining moment — from Copilot to ChatGPT to full task automation,” Friedman said. “Now we’re entering a new phase: moving from stateless AI to stateful systems — from intelligence to ‘artificial wisdom.’ That’s what Qodo is built for.”

Mistral AI raises $830M in debt to set up a data center near Paris
French lab Mistral AI has raised $830 million in debt to build a new data center near Paris that will be powered by Nvidia chips, according to reports from Reuters and CNBC. Mistral first announced plans to build a data center last year; in February 2025, its CEO Arthur Mensch said it would explore different financing options. It plans to complete the data center in Bruyères-le-Châtel and make it operational in the second quarter of 2026, Reuters reported on Monday. Mistral did not immediately return a request seeking confirmation. Last month, the company said it would invest $1.4 billion in Sweden to build out AI infrastructure, including data centers. Mistral said it aims to deploy 200 megawatts of compute capacity across Europe by 2027. “Scaling our infrastructure in Europe is critical to empower our customers and to ensure AI innovation and autonomy remain at the heart of Europe. We will continue to invest in this area, given the surging and sustained demand from governments, enterprises, and research institutions seeking to build their own customized AI environment, rather than depend on third-party cloud providers,” Mensch said in a statement to CNBC. Mistral has raised over €2.8 billion ($3.1 billion) in funding to date from investors including General Catalyst, ASML, a16z, Lightspeed, and DST Global, according to data from Crunchbase.

AI chip startup Rebellions raises $400 million at $2.3B valuation in pre-IPO round
Fresh off a successful Series C funding round in November, the South Korean fabless AI chip startup Rebellions has raised an additional $400 million. The latest funding infusion, which comes before a planned IPO later this year, was led by Mirae Asset Financial Group and the Korea National Growth Fund. It also comes as the company engages in an aggressive expansion effort — with recently announced plans to grow its presence not only in Asia but also in the Middle East and the U.S. Founded in 2020, Rebellions develops and designs AI chips while outsourcing their fabrication. The startup’s chips are designed for inference — the compute necessary for AI models to respond to user queries. Inference has grown in importance as LLMs have matured and begun to see widespread commercial deployment. The company closed $124 million in a Series B in 2024. Then, in November, Rebellions raised an additional $250 million during its Series C. As of today, the company’s total fundraising haul stands at $850 million — $650 million of which was raised in the last six months. Meanwhile, the startup’s valuation sits at approximately $2.34 billion, the company said Monday. In addition to the funding round, Rebellions also announced the release of two new products: RebelRack and RebelPOD, which are described as AI infrastructure platforms. RebelRack represents a production-ready unit of inference compute, while RebelPOD “integrates multiple racks into a scalable cluster designed for large-scale AI deployment,” the company said. In a conversation with TechCrunch, Rebellions’ Chief Business Officer Marshall Choy — who is leading the company’s global expansion efforts — said it had recently established entities in the U.S., Japan, Saudi Arabia, and Taiwan. Choy said the company was building out its ecosystem of technology partners in the U.S., where it plans to court cloud providers, government agencies, telecom operators, and neoclouds. He declined to comment on IPO timing.
“AI is now measured by its ability to operate in the real world at scale, under power constraints, and with clear economic return,” said Sunghyun Park, co-founder and CEO of Rebellions. “That shifts the center of gravity toward inference infrastructure and software that makes that infrastructure usable.” Rebellions is one of a new generation of chip startups that have sought to challenge Nvidia’s once iron-clad dominance within the chip industry. As that dominance has begun to wane, other major tech companies like AWS, Meta, and Google — along with the new generation of startups — have also sought to produce their own chips.

ScaleOps raises $130M to improve computing efficiency amid AI demand
AI may be booming, but behind the scenes, companies are wasting vast amounts of expensive compute. GPUs sit idle, workloads are over-provisioned, and cloud costs continue to climb. ScaleOps believes the problem isn’t a shortage — it’s mismanagement. The startup, which builds software that automatically manages and reallocates computing resources in real time, has raised $130 million at an $800 million valuation, ScaleOps said Monday. The Series C funding round was led by Insight Partners, with participation from existing investors, including Lightspeed Venture Partners, NFX, Glilot Capital Partners, and Picture Capital. The company says its software reduces cloud and AI infrastructure costs by as much as 80%. ScaleOps was co-founded in 2022 by Yodar Shafrir, a former engineer at Run:ai, a GPU orchestration startup acquired by Nvidia, after he saw firsthand how difficult it was for companies to manage increasingly complex AI workloads. While tools like Kubernetes help run applications across large clusters of machines, they often rely on static configurations that struggle to keep up with fast-changing demand, leading to underused GPUs, performance issues, and costly inefficiencies. “As part of my role [at Run:ai], I met many customers, especially DevOps teams,” Shafrir, who is the company’s CEO, told TechCrunch. “While they really liked what Run:ai provided, they still struggled to manage their production workloads, especially as inference workloads became more common in the AI era. When I zoomed out, I realized the problem wasn’t just GPUs. It extended to compute, memory, storage, and networking. The same patterns kept repeating; teams were failing to manage resources efficiently.” DevOps teams often found themselves chasing down multiple stakeholders to resolve issues, and too often, those efforts fell short. Most existing tools offered visibility into problems but stopped short of delivering actual solutions. That gap revealed a significant market opportunity.
ScaleOps connects application needs with infrastructure decisions in real time and provides a fully autonomous solution that manages infrastructure end-to-end, Shafrir said. “Kubernetes is a great system. It’s flexible and highly configurable. But that’s also the problem,” Shafrir said. “Kubernetes relies heavily on static configurations. Applications today are highly dynamic, which requires constant manual work across teams. You need something that understands the context of each application — what it needs, how it behaves, and how the environment is changing.” There are several players in this space, including Cast AI, Kubecost, and Spot. While many companies have introduced automation tools, they often operate without full context, which can lead to performance issues and even downtime, limiting trust among teams running production environments, according to the CEO. The startup says its platform was built for production from the ground up. It is fully autonomous, context-aware, and works out of the box without requiring manual configuration — capabilities the company believes differentiate ScaleOps from competitors. The New York-headquartered company serves enterprise customers globally, particularly those operating Kubernetes-based infrastructure, with a footprint that spans large organizations as well as companies across Europe and India. ScaleOps says its platform is used by a range of enterprise clients, including Adobe, Wiz, DocuSign, Salesforce, and Coupa. The Series C funding comes roughly a year and a half after ScaleOps raised $58 million in its Series B round in November 2024. Since then, the team has seen strong demand for autonomous solutions to manage cloud infrastructure, Shafrir said, adding that the company is still in the early stages of its growth. The company’s total funding is about $210 million, according to a spokesperson.
ScaleOps said it has seen more than 450% year-over-year growth and that it has tripled its headcount over the past 12 months, with plans to more than triple it again by year-end. With the new capital, ScaleOps plans to roll out new products and expand its platform. As AI drives demand for compute, managing that infrastructure is becoming increasingly critical. The startup said it will continue building toward fully autonomous infrastructure.
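The static-configuration problem Shafrir describes can be made concrete with a toy rightsizing sketch: a workload’s CPU request is fixed at deploy time, while actual usage fluctuates, so recomputing the request from observed usage reclaims the gap. This is only an illustration of the general idea, not ScaleOps’ actual algorithm; the percentile choice, headroom factor, and all numbers below are invented.

```python
# Toy "rightsizing" sketch: compare a static CPU request (millicores)
# against a request derived from observed usage. Real systems would use
# continuous telemetry and act on memory, GPU, and replicas too.

def rightsize(usage_samples, headroom=1.2):
    """Suggest a CPU request: 95th-percentile observed usage plus headroom."""
    ordered = sorted(usage_samples)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]  # nearest-rank percentile
    return round(p95 * headroom)

static_request = 2000  # millicores reserved by a hand-written config
usage = [120, 180, 150, 300, 240, 210, 260, 190, 170, 280]  # observed

suggested = rightsize(usage)
print(f"static: {static_request}m, suggested: {suggested}m, "
      f"reclaimable: {static_request - suggested}m")
```

The point of the sketch is the gap itself: a config sized for a worst case that rarely occurs leaves most of the reservation idle, which is the waste autonomous resource management targets.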

Mantis Biotech is making ‘digital twins’ of humans to help solve medicine’s data availability problem
Large language models trained on vast datasets could speed genomics research, streamline clinical documentation, improve real-time diagnostics, support clinical decision-making, accelerate drug discovery, and even generate synthetic data to advance experiments. But their promise to transform biomedical research often runs into a bottleneck: beyond the structured data healthcare relies on, these models struggle in edge cases like rare diseases and unusual conditions, where reliable, representative data is scarce. New York-based Mantis Biotech claims it’s developing the solution to fill this data availability gap. The company’s platform integrates disparate sources of data to make synthetic datasets that can be used to build so-called “digital twins” of the human body: physics-based, predictive models of anatomy, physiology, and behavior. The company is pitching these digital twins for use in data aggregation and analysis. These digital twins could be used for studying and testing new medical procedures, training surgical robots, and simulating and predicting medical issues or even patterns of behavior. For example, a sports team could predict the likelihood of a specific NFL player developing an Achilles tendon injury based on their recent performance, training load, diet, and how long they’ve been active, Mantis’ founder and CEO Georgia Witchel explained to TechCrunch in a recent interview. To build these twins, Mantis’ platform first takes data from a variety of sources such as textbooks, motion capture cameras, biometric sensors, training logs, and medical imaging. Then, it uses an LLM-based system to route, validate, and synthesize the various data streams, and runs all that information through a physics engine to create high-fidelity renders of that dataset, which can then be used to train predictive models. “We’re able to take all these disparate data sources and then turn them into predictive models for how people are going to perform.
So anytime you want to predict how a human being is going to be performing, that is a really good use case for our technology,” Witchel said. The physics engine layer is key here, Witchel told TechCrunch, because it helps the platform enhance the available information by grounding the generated synthetic data and realistically modeling the physics of anatomy. “If I asked you to do hand-pose estimation for someone who is missing a finger, it would be really, really hard, because there are no publicly available datasets of labeled hand positions of someone who is missing a finger. We could generate that dataset really, really easily, because we just take our physics model and we say, remove finger X, regenerate model,” she said. Since Mantis’ platform fills gaps in data sources, Witchel thinks there’s potential for it to be used widely across the biomedical industry, where information on procedures or patients can be difficult to access, unstructured, or siloed across sources. She stressed edge cases such as rare diseases, where data is hard to obtain because there are often ethical and regulatory constraints around including patients’ data in public datasets or using it to train AI models. “You know how when you see a three-year-old running around, and they have a Barbie, and they’re holding it by one leg and smashing it against a table? I want people to have that mindset with our digital twins,” she said. “I think that’s going to open up people to this idea that humans can be tested on when you’re using virtual humans. I feel currently, people operate with the exact opposite mindset, which totally makes sense, because people’s privacy should be respected. In fact, I don’t really think people’s data should be exploited at all, especially when you have these digital twins.” For now, Mantis has seen success in professional sports, presumably because there is a need to model high-performing athletes. Witchel said one of the startup’s main clients is an NBA team.
“We create these digital representations of the athletes, where it basically shows here’s how this athlete has jumped, not just today, but for every single day in the past year, and here’s how their jumps are changing over time compared to the amount that they’re sleeping, or compared to how many times they lift their arms above their head,” she explained. The startup recently raised $7.4 million in seed funding led by Decibel VC, with participation from Y Combinator, a few angel investors, and Liquid 2. The funding will be used for hiring, advertising, marketing and go-to-market functions. The next step for Mantis, Witchel said, is to continue building out the tech, and eventually release the platform to the general public, targeting preventative healthcare. The company is also working to cater to pharmaceutical labs and researchers working on FDA trials, aiming to deliver insights into how patients are responding to treatments.
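Witchel’s “remove finger X, regenerate model” idea can be sketched in a few lines: once anatomy lives in a parametric model, producing an edge-case dataset is just a model edit plus re-sampling. The snippet below is a purely hypothetical illustration; the finger list, joint-angle ranges, and function names are invented, and Mantis’ actual physics engine is far richer than random sampling.

```python
# Hypothetical sketch: edit a parametric hand model (drop a finger),
# then re-sample labeled synthetic poses from the edited model.
import random

FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

def sample_pose(fingers, rng):
    """One labeled pose: a flexion angle in degrees per remaining finger."""
    return {f: round(rng.uniform(0, 90), 1) for f in fingers}

def synthetic_dataset(missing=None, n=1000, seed=0):
    """Generate n poses for a hand with one finger optionally removed."""
    fingers = [f for f in FINGERS if f != missing]  # the "model edit"
    rng = random.Random(seed)
    return [sample_pose(fingers, rng) for _ in range(n)]

# Usage: a labeled dataset for a hand missing the index finger,
# which (per Witchel) has no public real-world equivalent.
data = synthetic_dataset(missing="index", n=3)
```

The design point is that the edit happens in the model, not the data: every downstream sample automatically reflects the changed anatomy, which is what makes rare-case datasets cheap to produce.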

Samsung to Mass Produce Silicon Photonic Chips by 2028: Report
Samsung Electronics has outlined a roadmap to integrate light-based chips with AI semiconductors and challenge foundry leader TSMC.

Starcloud Reaches $1.1 Bn Valuation for its Data Centres in Space
The funding will support new satellites, manufacturing and hiring as the company targets demand for AI infrastructure.
