Latest AI News

Cursor admits its new coding model was built on top of Moonshot AI’s Kimi

AI coding company Cursor launched a new model this week called Composer 2, which it promoted as offering “frontier-level coding intelligence.” However, an X user posting under the name Fynn soon claimed that Composer 2 was “just Kimi 2.5” with additional reinforcement learning — Kimi 2.5 being an open source model recently released by Moonshot AI, a Chinese company backed by Alibaba and HongShan (formerly Sequoia China). As evidence, Fynn pointed to code that seemed to identify Kimi as the model. “[A]t least rename the model ID,” they scoffed. It was a surprising revelation, since Cursor is a well-funded U.S. startup that raised a $2.3 billion round last fall at a $29.3 billion valuation and is reportedly exceeding $2 billion in annualized revenue. The company also didn’t mention Moonshot AI or Kimi anywhere in its announcement. However, Cursor’s vice president of developer education, Lee Robinson, soon acknowledged, “Yep, Composer 2 started from an open-source base!” But he said, “Only ~1/4 of the compute spent on the final model came from the base, the rest is from our training.” As a result, he said, Composer 2’s performance on various benchmarks is “very different” from Kimi’s. Robinson also insisted that Cursor’s use of Kimi was consistent with the terms of its license, a point the Kimi account on X repeated in a subsequent post congratulating Cursor, where it said Cursor used Kimi “as part of an authorized commercial partnership” with Fireworks AI. “We are proud to see Kimi-k2.5 provide the foundation,” the Kimi account said. “Seeing our model integrated effectively through Cursor’s continued pretraining & high-compute RL training is the open model ecosystem we love to support.” So why not acknowledge Kimi upfront?
Beyond any potential embarrassment in not creating a model from scratch, building on top of a Chinese model might feel particularly fraught right now, with the so-called AI “arms race” often framed as an existential battle between the United States and China. (See, for example, Silicon Valley’s apparent panic after Chinese company DeepSeek released a competitive model early last year.) Cursor co-founder Aman Sanger acknowledged, “It was a miss to not mention the Kimi base in our blog from the start. We’ll fix that for the next model.”

1 month ago


Do you want to build a robot snowman?

Nvidia’s GTC conference had everything: trillion-dollar sales projections, graphics technology that can yassify video games, grand declarations that every company needs an OpenClaw strategy, and even a robot version of the beloved snowman Olaf from Disney’s “Frozen.” On the latest episode of the Equity podcast, TechCrunch’s Kirsten Korosec, Sean O’Kane, and I recapped CEO Jensen Huang’s keynote and debated what it means for Nvidia’s future. And yes, a big part of our discussion focused on poor Olaf, whose microphone had to be turned off when he started rambling. Even if the demo had gone flawlessly, Sean might still have had some reservations, as he noted these presentations always focus on “the engineering challenges” and not the “really messy gray areas” on the social side. “But what happens when a kid kicks Olaf over?” Sean asked. “And then every other kid who sees Olaf get kicked or knocked over has their whole trip to Disney ruined and it ruins the brand?” Read a preview of our conversation, edited for length and clarity, below.

Anthony: [CEO Jensen Huang] was basically saying that every company needs to have an OpenClaw strategy now. I think that is just a very grand statement that’s meant to be attention grabbing; I think it’s also interesting coming at this kind of transitional moment for OpenClaw. The founder has gone to OpenAI. So it’s now this open source project that potentially can flourish and evolve beyond its creator, or it could languish. If companies like Nvidia are investing a lot into it, then [it’s] more likely that it’ll continue to evolve. But it’ll be interesting to see a year from now, whether that looks like a prescient statement or everyone’s like, “Open what?”

Kirsten: In the case of Nvidia, it costs them nothing in the grand scheme of things to launch what they call NemoClaw, which is an open source project, which they built with the OpenClaw creator. But if they don’t do something, they have a lot to lose.
So really that message to me, the way I translated it when Jensen was like, “Every enterprise needs to have an OpenClaw strategy,” it was, “Nvidia needs to have a solution or strategy for enterprises, because if it’s successful, it is another way or another pathway for Nvidia to be part of numerous other companies.” So doing nothing is a greater risk than doing something that doesn’t go anywhere.

Sean: The real question here is why have we not talked about what is clearly the end game for Nvidia, and the thing that is going to turn it into the first $100 trillion company, which is an Olaf robot.

Anthony: How could I forget?

Kirsten: Anthony, just go to the end of the two and a half hours to watch this. So, the Olaf robot comes out, and this is something that Jensen loves to do. He loves to have these demos and some of them go better than others. It is also to demonstrate Nvidia’s technology in robotics, and I don’t know if Olaf was actually speaking in real time or if it was programmed — it felt a little programmed, or it had specific keywords that it used. But the greatest part about it is that they had to cut its mic at the end because it just started rambling and speaking to the crowd. And then it went over to its little passageway and was slowly lowered. And you could see it on the video. It was still talking, but no mic.

Sean: Now we just need to give this little robot a wheelbase. And I know the perfect founder who can provide it. I mean, these demos are always silly. I don’t want to get up on my soapbox, because I know that we’ve talked about this a little bit earlier this week, but this was an impressive demo up until the moment where it fell a little bit short.
This is another really good example, though, of [how] robotics is a really interesting engineering problem and a really interesting physics problem and a really interesting integration problem, and all of this stuff, but this was presented as, in partnership with Disney, and it’s supposed to be the future of Disney parks and things like that: You’re going to be able to walk around and see Olaf from “Frozen” and take pictures of them and everything. But these efforts never consider — or certainly don’t put front and center in events like this — all the other things you have to consider when you roll stuff out like this. There’s a really good YouTuber, Defunctland, that did a really good video about this — four hours long, not too long — about the history of Disney trying to get these kinds of robotics into their park, these automatons. The engineering challenges are really interesting and it’s fun to see that history, but it always comes back to the same question of: Okay, but what happens when a kid kicks Olaf over? And then every other kid who sees Olaf get kicked or knocked over has their whole trip to Disney ruined and it ruins the brand? There’s just so much on the social side of this. And that sounds silly, but this is the question that we’re kind of asking about humanoid robots, too. There’s so much hype about all this other stuff and we just don’t really hear as much conversation about the really messy gray areas on the social side of these things, and also just integrating them into people’s lives. We only ever really hear about the engineering challenges — which again, are really impressive.

Kirsten: I have a counterpoint and then we have to get to our next [topic]. This is a job creator, because Olaf will have to have a human babysitter in Disneyland, probably dressed up as Elsa or something else. You can imagine that actually, what we’re doing is creating jobs [with] this engineering experiment.

1 month ago


An exclusive tour of Amazon’s Trainium lab, the chip that’s won over Anthropic, OpenAI, even Apple

Shortly after Amazon CEO Andy Jassy announced AWS’s groundbreaking $50 billion investment deal with OpenAI, Amazon invited me on a private tour of the chip development lab at the heart of the deal, at (mostly*) its own expense. Industry experts are watching Amazon’s Trainium chip, created at that facility, for its implications for lower-cost AI inference and, potentially, a dent in Nvidia’s near monopoly. Curious, I agreed to go. My tour guides for the day were the lab’s director, Kristopher King (pictured below right), and director of engineering Mark Carroll (below left), as well as the team’s PR person who arranged the visit, Doron Aronson (pictured with yours truly later in the story). AWS has been Anthropic’s major cloud platform since the AI lab’s early days — a relationship significant enough to survive Anthropic later adding Microsoft as a cloud partner as well, and Amazon’s growing partnership with OpenAI. The OpenAI deal makes AWS the exclusive provider of the model maker’s new AI agent builder, Frontier, which could become an important part of OpenAI’s business if agents become as big as Silicon Valley thinks they will. We’ll see if that exclusivity stands exactly as announced. The Financial Times reported this week that Microsoft may believe OpenAI’s deal with Amazon violates its own deal with OpenAI, namely with Redmond getting access to all of OpenAI’s models and tech. What makes AWS so appealing to OpenAI? As part of this deal, the cloud giant has agreed to supply OpenAI with 2 gigawatts of Trainium computing capacity. This is a giant commitment, given that Anthropic and Amazon’s own Bedrock service are already consuming Trainium chips faster than Amazon can produce them. There are 1.4 million Trainium chips deployed across all three generations, and Anthropic’s Claude runs on over 1 million of the Trainium2 chips deployed, the company said.
It’s worth noting that while Trainium was originally geared toward faster, cheaper model training (a bigger priority a couple of years ago), it’s now tuned and used for inference as well. Inference — the process of actually running an AI model to generate responses — is currently the biggest performance bottleneck in the industry. Case in point: Trainium2 handles the majority of the inference traffic on Amazon’s Bedrock service, which supports the building of AI applications by Amazon’s many enterprise customers and allows the apps to use multiple models. “Our customer base is just expanding as fast as we can get capacity out there,” King said. “Bedrock could be as big as EC2 one day,” he added, referring to AWS’s behemoth compute cloud service. Beyond offering an alternative to Nvidia’s backlogged, hard-to-acquire GPUs, Amazon says its new chips, running on its new specialty Trn3 UltraServers, cost up to 50% less to run than classic cloud servers for comparable performance. Along with Trainium3, released in December, this AWS team also built new Neuron switches, and Carroll says that combo is transformative. “What that gives us is something huge,” Carroll said. The switches allow every Trainium3 chip to talk to every other chip in a mesh configuration, reducing latency. “That’s why Trainium3 is breaking all kinds of records,” particularly in “price per power,” he said. When trillions of tokens a day are involved, such improvements add up. In fact, Amazon’s chip team was lauded by Apple in 2024. In a rare moment of openness for the secretive company, Apple’s director of AI publicly described how it used another of the team’s chips — Graviton, a low-power, ARM-based server CPU and the first breakout chip this team designed. Apple also lauded Inferentia — a chip specifically designed for inference — and gave a nod to Trainium, which was new at the time.
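Carroll’s mesh point can be made concrete with a bit of link-counting arithmetic: wiring every chip directly to every other chip needs a number of links that grows quadratically, while reaching peers through a shared switch grows linearly. The sketch below is illustrative only — the chip counts and the one-uplink-per-chip assumption are hypothetical, not figures from Amazon.

```python
# Why switch fabrics matter at scale: compare the number of physical links
# needed for direct all-to-all wiring vs. a shared switch.
# Chip counts and the one-uplink-per-chip assumption are hypothetical.

def full_mesh_links(n: int) -> int:
    """Direct point-to-point links so every chip can reach every other chip."""
    return n * (n - 1) // 2

def switched_links(n: int, uplinks_per_chip: int = 1) -> int:
    """Links needed when chips reach each other through a shared switch."""
    return n * uplinks_per_chip

for n in (16, 64, 256):
    print(f"{n:>4} chips: {full_mesh_links(n):>6} direct links, "
          f"{switched_links(n):>4} switched links")
```

The quadratic blow-up in direct wiring is why large clusters route traffic through switches, whatever the specific topology Amazon’s Neuron switches use.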
These chips represent the classic Amazon playbook: See what people want to buy, then build an in-house alternative that competes on price. The catch for chips, historically, has been switching costs. Applications written for Nvidia’s chips must be re-architected to work with others — a time-consuming process that discourages developers from switching. But the AWS chip team proudly told me that Trainium now supports PyTorch, a popular open source framework for building AI models. That includes many of the ones hosted on Hugging Face, a vast library where developers share open source models. The transition, Carroll told me, requires “basically a one-line change, and then recompile, and then run on Trainium.” In other words, Amazon is attempting to chip away at Nvidia’s market dominance wherever possible. AWS also announced this month a partnership with Cerebras Systems, integrating that company’s inference chip on servers running Trainium for what Amazon promises will be superpowered, low-latency AI performance. But Amazon’s ambitions go beyond the chips themselves. It also designs the server that hosts the chips. Besides the networking components, this team has designed “Nitro,” a hardware-software combo that provides virtualization tech (which allows many instances of software to run separately on the same server); new state-of-the-art liquid cooling technology; and the server sleds (pictured below) that host this gear. All of that is to control cost and performance. Amazon’s custom chip-designing unit was born when the cloud giant bought Israeli chip designer Annapurna Labs in January 2015 for about $350 million. So this team has now had more than 10 years designing chips for AWS. The unit has retained its Annapurna roots and name — its logo is everywhere in the office.
This chip lab is located in a shiny, chrome-windowed building in Austin’s upscale “The Domain” district, a walkable area filled with shops and restaurants that’s sometimes called Austin’s Silicon Valley. The offices have your classic tech corporate vibe: desks in cubicles, gathering spots, and conference rooms. But tucked away at the back of a high floor in the building is the actual lab, with sweeping views of the city. The shelving-filled lab, about the size of two large conference rooms, is a noisy industrial space thanks to the fans on the equipment. It looks like a cross between a high school shop class and a Hollywood set for a high-end lab, except the engineers are dressed in jeans, not white lab coats. Note that this is not where the chips are manufactured, so no white hazmat suits were necessary. The Trainium3 is a state-of-the-art 3-nanometer chip, produced by TSMC, arguably the leader in 3-nanometer manufacturing, with other chips produced by Marvell. But this is the room where the magic of the “bring-up” occurs. “A silicon bring-up is when you get the chip for the first time, and it’s like a big overnight party. You stay here, like a lock-in,” King explains. After 18 months of work, the chip is activated for the first time to verify it works as designed. The team even filmed some of the Trainium3 bring-up and posted it on YouTube. Spoiler alert: It’s never problem-free. For Trainium3, the prototype chip was originally air-cooled, like previous versions. The current chip is now liquid-cooled, which offers energy advantages and was quite an engineering feat. During the bring-up, the dimensions for how the chip attached to the air-cooling heat sink were off, so the chip couldn’t be activated. Unfazed, the team “immediately got a grinder and just started grinding off the metal,” King said. Because they didn’t want the noise disrupting the bring-up pizza party atmosphere, they snuck off and did the grinding in a conference room.
Staying up all night and solving problems “is what silicon bring-up is all about,” King said. The lab even has a welding station, where hardware lab engineer and master welder Isaac Guevara demonstrated welding tiny integrated circuit components through a microscope. This is such insanely difficult work that senior leader Carroll openly admitted he couldn’t do it, to the guffaws of Guevara and the rest of the engineers in the room. The lab also contains both custom-made and commercial tools for testing and analyzing issues with chips. Here’s signal engineer Arvind Srinivasan demonstrating how the lab tests each tiny component on the chip: But the star of the lab is an entire row showcasing each generation of the “sleds” the team designed. Sleds are the trays that house the Trainium AI chips, Graviton CPU chips, and supporting boards and components. Stack them together on a rack with the networking component, also custom-designed by this team, and you get the systems that are at the heart of Anthropic Claude’s success. Here’s the sled that was shown off during the AWS re:invent conference in December: I expected my guides to crow about the OpenAI deal during the tour. But they didn’t. The reticence could have been related to the aforementioned potential legal haze that might hang over the deal. But the sense I got was that these boots-on-the-ground engineers (who are currently designing the next version, Trainium4) haven’t had much chance to work with OpenAI yet. Their day-to-day work has so far been focused on Anthropic’s and Amazon’s needs. Currently, the biggest chunk of Trainium2 chips is deployed in Project Rainier — one of the world’s largest AI compute clusters — which went live in late 2025 with 500,000 chips. It’s used by Anthropic. But there was a wall monitor in the main office displaying a quote about how OpenAI will be using Trainium. The pride was there, if subtle. 
In addition to this lab, the team also has its own private data center for quality and testing purposes. Located a short drive away, it doesn’t run customer workloads, so it’s housed at a co-location facility rather than an AWS data center. Security is tight: There are strict protocols to enter the building and to access Amazon’s area within. The data center’s cooling system is so loud that earplugs are mandatory, and the air is thick with the acrid smell of heated metal. It’s not a pleasant place for the average person to hang out. At this data center, there are rows and rows of servers filled with sleds that integrate all of Amazon’s newest custom chips: Graviton CPU, liquid-cooled Trainium3, Amazon Nitro, all happily computing away. The liquid runs on a closed system, meaning it is reused, which should also help reduce the environmental impact, the engineers said. Here’s what a current Trn3 UltraServer looks like: Multiple sleds are on top and bottom, with the Neuron switches in the middle. Hardware development engineer David Martinez-Darrow is seen here performing maintenance on a sled: While attention on the team has always been high, the scrutiny has really ratcheted up as of late. Amazon CEO Andy Jassy keeps a close eye on this lab, publicly bragging about its products like a proud dad. In December, he said Trainium was already a multibillion-dollar business for AWS and called it one piece of AWS tech he’s most excited about. He also gave the chip a shout-out when announcing the OpenAI agreement. The team feels the pressure, too. Engineers will work 24/7 for three to four weeks around each bring-up event to fix any issues so the chips can be mass-produced and put into data centers. “It’s very important that we get as fast as possible to prove that it’s actually going to work,” Carroll said. “So far, we’ve been doing really well.” *Disclosure: Amazon provided airfare and covered the cost of one night at a local hotel.
Honoring its Leadership Principle of Frugality, this was a back-of-the-plane middle seat and a modest room. TechCrunch picked up the other associated travel costs like Ubers and luggage fees. (Yes, I checked a bag for an overnight trip. I’m high maintenance that way.)

1 month ago


Delve accused of misleading customers with ‘fake compliance’

An anonymous Substack post published this week accuses compliance startup Delve of “falsely” convincing “hundreds of customers they were compliant” with privacy and security regulations, potentially exposing those customers to “criminal liability under HIPAA and hefty fines under GDPR.” Delve is a Y Combinator-backed startup that last year announced raising a $32 million Series A at a $300 million valuation. (The round was led by Insight Partners.) On Friday, the startup attempted to refute the accusations on its blog, calling the Substack post “misleading” and saying it “contains a number of inaccurate claims.” The Substack post is credited to “DeepDelver,” who described themselves as working at a (now former) Delve client. In response to emailed questions from TechCrunch, DeepDelver said that they and their collaborators “chose to remain anonymous out of fear for retaliation by Delve.” In their post, DeepDelver recounted receiving an email in December claiming the startup had “leaked a spreadsheet with confidential client reports.” While Delve CEO Karun Kaushik apparently assured customers in a subsequent email that they were in compliance and that no external party gained access to sensitive data, DeepDelver said they and other customers had become suspicious. “Having the shared experience of being underwhelmed with the Delve experience, and having the overall sense that something fishy was going on, we decided to pool resources and investigate together,” they wrote. Their conclusion?
That Delve “achieves its claim of being the fastest platform by producing fake evidence, generating auditor conclusions on behalf of certification mills that rubber stamp reports, and skipping major framework requirements while telling clients they have achieved 100% compliance.” DeepDelver went into considerable detail about those claims, accusing the startup of providing customers with “fabricated evidence of board meetings, tests, and processes that never happened,” then forcing those customers to “choose between adopting fake evidence or performing mostly manual work with little real automation or AI.” DeepDelver also claimed that virtually all of Delve’s clients seem to have gone through two audit firms, Accorp and Gradient, which they described as “part of the same operation,” one that operates primarily in India, with only a nominal presence in the United States. Those firms, they said, are just rubber-stamping reports that were generated by Delve. As a result, DeepDelver said the startup “inverts” the normal compliance structure: “By generating auditor conclusions, test procedures, and final reports before any independent review occurs, Delve places itself in the role of both implementer and examiner. This is not a technicality. It is a structural fraud that invalidates the entire attestation.” In addition to accusing Delve of misleading its customers, DeepDelver said the startup is helping those customers “mislead the public by hosting trust pages that contain security measures that were never implemented.” DeepDelver said that while their company was discussing its issues with Delve, the startup “sent us multiple boxes of donuts […] to keep us happy.” Nonetheless, DeepDelver’s employer supposedly unpublished its trust page and no longer relies on the startup for compliance. Delve responded to the accusations by saying it does not issue compliance reports at all. 
Instead, it’s an “automation platform” that ingests information about compliance, then provides auditors with access to that information. “Final reports and opinions are issued solely by independent, licensed auditors, not Delve,” the company said. Delve also said that its customers “can opt to work with an auditor of their choosing or opt to work with one from Delve’s network of independent, accredited third-party audit firms.” Those auditors, the startup said, are “established firms used broadly across the industry, including by other compliance platforms.” In response to the accusation that it’s providing customers with “fake evidence,” Delve countered that it’s simply offering “templates to help teams document their processes in accordance with compliance requirements, as do other compliance platforms.” “Draft templates are not the same as ‘pre-filled evidence,’” the company said. Delve added that it is “actively investigating any leaks” and is “still reviewing the Substack.” When asked about Delve’s response, DeepDelver told TechCrunch that they were “baffled by the laziness, clumsiness and brazenness of it.” “They are trying to snake their way out [of] being held accountable by denying having ‘pre-filled evidence’ but calling it ‘templates’ instead, effectively shifting the blame to customers for adopting the ‘templates’ as is,” DeepDelver said. 
“They’re claiming they are not the ones to ‘issue’ the report, which is easy to claim if you define issuing a report as providing the final stamp.” They added that there are “a number of very serious allegations” that Delve did not address at all: “The India accusation, the lack of AI (they only talk about ‘automations’), and the trust (lol) page containing controls that were never implemented.” Apparently DeepDelver isn’t done with their criticism, as they promised, “Part II will follow soon.” In addition, following the initial Substack post, an X user named James Zhou said they were able to gain access to sensitive information from Delve, such as employee background checks and equity vesting schedules. Dvuln founder Jamieson O’Reilly shared more details from what O’Reilly said was a conversation with Zhou about “several gaping security holes in Delve’s external attack surface.” TechCrunch sent an email seeking additional comment to the media contact address listed on Delve’s website. The email bounced, but after this article was published, I received a calendar invite for a “Delve demo” later this week. This post was initially published on March 21, 2026. It has been updated with emailed answers from DeepDelver, additional information about purported security vulnerabilities provided by Jamieson O’Reilly, and additional details about Delve’s response to TechCrunch.

1 month ago


Are AI tokens the new signing bonus or just a cost of doing business?

This week, a topic that has been boomeranging around Silicon Valley bounced into the spotlight: AI tokens as compensation. The idea is straightforward enough — rather than giving engineers only salary, equity, and bonuses, companies would also hand them a budget of AI tokens, the computational units that power tools like Claude, ChatGPT, and Gemini. Spend them to run agents, automate tasks, crank through code. The pitch is that access to more compute makes engineers more productive, and that more productive engineers are worth more. The tokens, in other words, are an investment in the person holding them. Jensen Huang, the leather-jacket-wearing CEO of Nvidia, seemed to capture everyone’s imagination when he floated the notion at the company’s annual GTC event earlier this week that engineers should receive roughly half their base salary again — in tokens. His top people, by his math, might burn through $250,000 a year in AI compute. He called it a recruiting tool and predicted it would become standard across Silicon Valley. It isn’t entirely clear where the idea was first, well, ideated. Tomasz Tunguz, a renowned VC in the Bay Area who runs Theory Ventures and focuses on AI, data, and SaaS startups — and whose writing on all things data has garnered a loyal following over the years — was talking about this in mid-February, writing that tech startups were already adding inference costs as a “fourth component to engineering compensation.” Using data from the compensation tracking site Levels.fyi, he put a top-quartile software engineer salary at $375,000. Add $100,000 in tokens and you’re at $475,000 fully loaded — meaning roughly one dollar in five is now compute. That’s no coincidence. Agentic AI has been taking off, and the release of OpenClaw in late January accelerated the conversation considerably. OpenClaw is an open-source AI assistant designed to run continuously — churning through tasks, spawning sub-agents, and working through a to-do list while its user sleeps.
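Tunguz’s fully loaded figure is simple arithmetic; here is a quick sketch using only the numbers cited above:

```python
# Fully loaded compensation per Tunguz's estimate: a top-quartile salary
# (via Levels.fyi) plus a $100,000 annual token budget.

base_salary = 375_000   # top-quartile software engineer salary
token_budget = 100_000  # annual AI-token allowance

fully_loaded = base_salary + token_budget
compute_share = token_budget / fully_loaded

print(f"Fully loaded: ${fully_loaded:,}")     # $475,000
print(f"Compute share: {compute_share:.0%}")  # 21% — roughly one dollar in five
```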
It’s part of a broader shift toward “agentic” AI, meaning systems that don’t just respond to prompts but take sequences of actions autonomously over time. The practical consequence is that token consumption has exploded. Where someone writing an essay might use 10,000 tokens in an afternoon, an engineer running a swarm of agents can blow through millions in a day — automatically, in the background, without typing a word. By this weekend, the New York Times had put together a smart look at the so-called tokenmaxxing trend, finding that engineers at companies including Meta and OpenAI are competing on internal leaderboards that track token consumption. Generous token budgets are quietly becoming a standard job perk, the paper reported, the way dental insurance or free lunch once was. One Ericsson engineer in Stockholm told the Times he probably spends more on Claude than he earns in salary, though his employer picks up the tab. Maybe tokens really will become the fourth pillar of engineering compensation. But engineers might want to hold the line before embracing this as a straightforward win. More tokens may mean more power in the short term, but given how fast things are evolving, it doesn’t necessarily mean more job security. For one thing, a large token allotment comes with large expectations. If a company is effectively funding a second engineer’s worth of compute on your behalf, the implicit pressure is to produce at twice the rate (or more). And there’s a muddier problem underneath that: at the point where a company’s token spend per employee approaches or exceeds that employee’s salary, the financial logic of headcount starts to look different to its finance team. If the compute is doing the work, the question of how many humans need to be coordinating it becomes harder to avoid.
Jamaal Glenn, an East Coast-based Stanford MBA and former VC turned financial services CFO, similarly points out that what may seem like a perk can be a clever way for companies to inflate the apparent value of a compensation package without increasing cash or equity — the things that actually compound for an employee over time. Your token budget doesn’t vest. It doesn’t appreciate. It doesn’t show up in your next offer negotiation the way a base salary or equity grant does. If companies successfully normalize tokens as pay, they may find it easier to keep cash comp flat while pointing to a growing compute allowance as evidence of investment in their people. That’s a good deal for the company. Whether it’s a good deal for the engineer depends on questions most engineers don’t yet have enough information to answer.
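Glenn’s vesting point can be illustrated with a toy comparison of how $100,000 a year in tokens versus $100,000 a year in equity behaves over a four-year stay. All the numbers here, including the 15% annual appreciation, are hypothetical assumptions for illustration, not figures from the article:

```python
# Toy model: tokens are consumed, equity can compound.
# Grant size, tenure, and growth rate are all hypothetical.

years = 4
annual_tokens = 100_000  # token allowance, spent each year
annual_equity = 100_000  # equity value at grant, each year
growth = 0.15            # assumed annual appreciation (hypothetical)

tokens_total = annual_tokens * years  # consumed; nothing retained

# Each year's grant appreciates for the years remaining after it is received.
equity_total = sum(
    annual_equity * (1 + growth) ** (years - y)
    for y in range(1, years + 1)
)

print(f"Tokens after {years} years: ${tokens_total:,.0f} spent, $0 retained")
print(f"Equity after {years} years: ${equity_total:,.0f} retained (if it appreciates)")
```

Under these assumptions the equity ends up worth roughly $499,000 against $400,000 of consumed tokens — the asymmetry Glenn is pointing at; flat or falling equity would of course change the picture.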

1 month ago


Edge AI for ASHA Workers: Neonatal Care in Rural India Gets a Tech Upgrade

Developed by Wadhwani AI, Shishu Mapaan uses video and AI to estimate anthropometric metrics of newborns, enabling faster, more accessible frontline care.

1 month ago


Why Wall Street wasn’t won over by Nvidia’s big conference

When Nvidia CEO Jensen Huang took the stage for his annual GTC keynote on Monday, the $4 trillion company’s stock started to drop. Wall Street investors, it seems, were unmoved by the leather jacket-clad founder’s bullish 2.5-hour speech. Instead, they placed more weight on AI’s uncertain future and fears of a bubble. The nervousness felt by Wall Street couldn’t be more different from the buzzy atmosphere in Silicon Valley, where confidence, not uncertainty, abounds. Huang talked for more than two hours about the company’s latest innovations, from new video game graphics tech and updated networking infrastructure to autonomous vehicle deals and a new chip designed with Groq to accelerate AI inference in the Vera Rubin system. He also threw out some eye-watering numbers about Nvidia’s business and beyond. Huang called the AI agent ecosystem a $35 trillion market and the physical AI and robotics industry a $50 trillion market. Huang also said he expects to see $1 trillion worth of purchase orders for the company’s Blackwell and Vera Rubin chips — just two of Nvidia’s many products — by the end of 2027. Shouldn’t that make investors excited? It’s not surprising that they aren’t, Futurum CEO Daniel Neuman told TechCrunch. “[AI] is so good, so transformational, and moving so fast that we don’t actually understand what it’s going to mean for all the things that are the societal constructs that we’ve come to understand,” Neuman said. “The markets hate uncertainty. The speed of innovation has actually created a great new uncertainty that I think most people never expected.” Some of that uncertainty comes from misleading information circulating in the market, said Neuman, who added that headlines about low enterprise adoption of AI aren’t painting the full picture — at least, based on conversations he’s having. “Enterprise AI adoption is going to hit inflection and scale very quickly,” Neuman said. “I actually think it’s happening.
When you say it’s not, I think what you’re probably saying is the [return on investment] and the receipts are still a little bit undefined and companies are citing the surveys and the reports that are largely six-month-old data. It just takes months to aggregate data.” This sentiment holds weight when you look at Nvidia’s numbers from past quarters. While companies may not be touting their AI ROI, they are increasingly purchasing Nvidia’s tech. The company continues to not only beat its lofty goals and quarterly estimates, but soar past them. Nvidia’s revenue was up 73% year-over-year last quarter. There is no sign that will change any time soon either. For example, just this week Nvidia confirmed Amazon made a plan to purchase 1 million GPUs, alongside other AI infrastructure, by the end of 2027 for Amazon Web Services (AWS),according to reporting from Reuters. Kevin Cook, a senior equity strategist at Zacks Investment Research, agreed with Neuman and joked to TechCrunch that investors not being happy doesn’t change the fact that the whole stock market is propped up by Nvidia, because its tech runs the rails for many of these businesses. “The economy is sort of orbiting around Nvidia,” Cook said. “It’s building this necessary infrastructure. All these different companies in hardware and software and physical AI — even Caterpillar is now physical AI — that are building off of these platforms.” None of this means there isn’t currently an AI bubble or couldn’t be one in the future. But while GTC may not have been a boon for Nvidia’s stock, the broader uncertainty doesn’t seem to be Nvidia’s problem. The company is clearly barreling full steam ahead, bringing seemingly the entire global economy right alongside it. “Nvidia, as you know, is a platform company,” Huang said in his GTC keynote. “We have technology. We have our platforms. We have a rich ecosystem, and today there are probably 100% of the $100 trillion dollars of industry here.


Delve accused of misleading customers with ‘fake compliance’

An anonymous Substack post published this week accuses compliance startup Delve of "falsely" convincing "hundreds of customers they were compliant" with privacy and security regulations, potentially exposing those customers to "criminal liability under HIPAA and hefty fines under GDPR." Delve is a Y Combinator-backed startup that last year announced raising a $32 million Series A at a $300 million valuation. (The round was led by Insight Partners.) On Friday, the startup attempted to refute the accusations on its blog, calling the Substack post "misleading" and saying it "contains a number of inaccurate claims." The Substack post is credited to "DeepDelver," who described themselves as working at a (now former) Delve client. DeepDelver recounted receiving an email in December claiming the startup had "leaked a spreadsheet with confidential client reports." While Delve CEO Karun Kaushik apparently assured customers in a subsequent email that they were in compliance and that no external party had gained access to sensitive data, DeepDelver said they and other customers had become suspicious. "Having the shared experience of being underwhelmed with the Delve experience, and having the overall sense that something fishy was going on, we decided to pool resources and investigate together," they wrote.

Their conclusion? That Delve "achieves its claim of being the fastest platform by producing fake evidence, generating auditor conclusions on behalf of certification mills that rubber stamp reports, and skipping major framework requirements while telling clients they have achieved 100% compliance." DeepDelver went into considerable detail about those claims, accusing the startup of providing customers with "fabricated evidence of board meetings, tests, and processes that never happened," then forcing those customers to "choose between adopting fake evidence or performing mostly manual work with little real automation or AI." DeepDelver also claimed that virtually all of Delve's clients seem to have gone through two audit firms, Accorp and Gradient, which they described as "part of the same operation," one that operates primarily in India, with only a nominal presence in the United States. Those firms, they said, are just rubber-stamping reports that were generated by Delve. As a result, DeepDelver said the startup "inverts" the normal compliance structure: "By generating auditor conclusions, test procedures, and final reports before any independent review occurs, Delve places itself in the role of both implementer and examiner. This is not a technicality. It is a structural fraud that invalidates the entire attestation." In addition to accusing Delve of misleading its customers, DeepDelver said the startup is helping those customers "mislead the public by hosting trust pages that contain security measures that were never implemented." DeepDelver said that while their company was discussing its issues with Delve, the startup "sent us multiple boxes of donuts already to keep us happy." Nonetheless, DeepDelver's employer supposedly unpublished its trust page and no longer relies on the startup for compliance.

Delve responded to the accusations by saying it does not issue compliance reports at all. Instead, it's an "automation platform" that ingests information about compliance, then provides auditors with access to that information. "Final reports and opinions are issued solely by independent, licensed auditors, not Delve," the company said. Delve also said that its customers "can opt to work with an auditor of their choosing or opt to work with one from Delve's network of independent, accredited third-party audit firms." Those auditors, the startup said, are "established firms used broadly across the industry, including by other compliance platforms." In response to the accusation that it's providing customers with "fake evidence," Delve countered that it's simply offering "templates to help teams document their processes in accordance with compliance requirements, as do other compliance platforms." "Draft templates are not the same as 'pre-filled evidence,'" the company said. Delve added that it is "actively investigating any leaks" and is "still reviewing the Substack." Following the initial Substack post, an X user named James Zhou said they were able to gain access to sensitive information from Delve, such as employee background checks and equity vesting schedules. Dvuln founder Jamieson O'Reilly shared more details from what O'Reilly said was a conversation with Zhou about "several gaping security holes in Delve's external attack surface." TechCrunch sent an email seeking additional comment to the media contact address listed on Delve's website. The email bounced, but I subsequently received a calendar invite for a "Delve demo" later this week. TechCrunch has also reached out to DeepDelver for additional comment. This post has been updated with additional information about purported security vulnerabilities provided by Jamieson O'Reilly, and additional details about Delve's response to TechCrunch.


Publisher pulls horror novel ‘Shy Girl’ over AI concerns

Hachette Book Group said it will not be publishing a novel called "Shy Girl" over concerns that artificial intelligence was used to generate the text. The novel was scheduled to be published in the United States this spring. Hachette said it will also discontinue the book in the United Kingdom, where it's already available. Although the publisher claimed the decision came after a thorough review of the text, reviewers on GoodReads and YouTube had been speculating that the book was likely AI-generated. And The New York Times said it asked Hachette about the "Shy Girl" concerns the day before the announcement. In an email to the NYT, author Mia Ballard denied using AI to write her novel, instead blaming an acquaintance she'd hired to edit the original, self-published version of "Shy Girl." Ballard said she's pursuing legal action, and that as a result of the controversy "my mental health is at an all time low and my name is ruined for something I didn't even personally do." Writer Lincoln Michel and other industry observers have noted that U.S. publishers rarely do extensive editing when they acquire titles that have already been published in other forms.


Coding Platform Cursor Admits Use of China’s Kimi K2.5 Model in Composer 2 After Backlash

Cursor accesses the Kimi K2.5 model through Fireworks AI, which provides hosted inference and reinforcement learning infrastructure.


New court filing reveals Pentagon told Anthropic the two sides were nearly aligned — a week after Trump declared the relationship kaput

Anthropic submitted two sworn declarations to a California federal court late Friday afternoon, pushing back on the Pentagon's assertion that the AI company poses an "unacceptable risk to national security" and arguing that the government's case relies on technical misunderstandings and claims that were never actually raised during the months of negotiations that preceded the dispute. The declarations were filed alongside Anthropic's reply brief in its lawsuit against the Department of Defense and come ahead of a hearing this Tuesday, March 24, before Judge Rita Lin in San Francisco. The dispute traces back to late February, when President Trump and Defense Secretary Pete Hegseth publicly declared they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology. The two people who submitted the declarations are Sarah Heck, Anthropic's Head of Policy, and Thiyagu Ramasamy, the company's Head of Public Sector. Heck is a former National Security Council official who worked at the White House under the Obama administration before moving to Stripe and then Anthropic, where she runs the company's government relationships and policy work. She was personally present at the February 24 meeting where CEO Dario Amodei sat down with Defense Secretary Hegseth and the Pentagon's Under Secretary Emil Michael. In her declaration, Heck calls out what she describes as a central falsehood in the government's filings: that Anthropic demanded some kind of approval role over military operations. That claim, she says, simply isn't true. "At no time during Anthropic's negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role," she wrote. She also claims that the Pentagon's concern about Anthropic potentially disabling or altering its technology mid-operation was never raised during negotiations.

Instead, she says, it appeared for the first time in the government's court filings, which gave Anthropic no opportunity to respond. Another detail in Heck's declaration sure to draw attention is that on March 4 — the day after the Pentagon formally finalized its supply-chain risk designation against Anthropic — Under Secretary Michael emailed Amodei to say the two sides were "very close" on the two issues the government now cites as evidence that Anthropic is a national security threat: its positions on autonomous weapons and mass surveillance of Americans. The email, which Heck attaches as an exhibit to her declaration, is worth reading alongside what Michael said publicly in the days afterward. On March 5, Amodei published a statement saying the company had been having "productive conversations" with the Pentagon. The day after that, Michael posted on X that "there is no active Department of War negotiation with Anthropic." A week after that, he told CNBC there was "no chance" of renewed talks. Heck's point appears to be: If Anthropic's stance on those two issues is what makes it a national security threat, why was the Pentagon's own official saying the two sides were nearly aligned on exactly those issues right after the designation was finalized? (She stops short of saying the government used the designation as a bargaining chip, but the timeline she lays out leaves the question hanging.) Ramasamy brings a different kind of expertise to the case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government customers, including classified environments. At Anthropic, he's credited with building the team that brought its Claude models into national security and defense settings, including the $200 million contract with the Pentagon announced last summer.

His declaration takes on the government's claim that Anthropic could theoretically interfere with military operations by disabling the technology or otherwise altering how it behaves, which Ramasamy says isn't technically possible. Per his telling, once Claude is deployed inside a government-secured, "air-gapped" system operated by a third-party contractor, Anthropic has no access to it; there is no remote kill switch, no backdoor, and no mechanism to push unauthorized updates. Any kind of "operational veto" is a fiction, he suggests, explaining that a change to the model would require the Pentagon's explicit approval and action to install. Anthropic, he says, can't even see what government users are typing into the system, let alone extract that data. Ramasamy also disputes the government's claim that Anthropic's hiring of foreign nationals makes the company a security risk. He notes that Anthropic employees have undergone U.S. government security clearance vetting — the same background check process required for access to classified information — adding in his declaration that "to my knowledge," Anthropic is the only AI company where cleared personnel actually built the AI models designed to run in classified environments. Anthropic's lawsuit argues that the supply-chain risk designation — the first ever applied to an American company — amounts to government retaliation for the company's publicly stated views on AI safety, in violation of the First Amendment. The government, in a 40-page filing earlier this week, rejected that framing entirely, saying that Anthropic's refusal to allow all lawful military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security call and not punishment for the company's views.


This 15-Year-Old Indian Founder is Building Balloon Rocket Launchers for India

Celestial Aerospace targets a commercial orbital launch by early 2029. Shreyans Jain says the next few years will test the team’s pace and discipline.

