
Starcloud raises $170 million Series A to build data centers in space

5:31 PM IST · March 30, 2026


Starcloud’s latest funding round values the space compute company at $1.1 billion, making it one of the fastest startups to reach unicorn status after graduating from Y Combinator. The company’s Series A, which closed 17 months after its demo day presentation, was led by Benchmark and EQT Ventures. It’s another sign of the interest in outsourcing data centers to orbit as resource and political obstacles slow their development on Earth, but the business model depends on unproven technology and significant capital expenditure.

Starcloud has now raised a total of $200 million, and it launched its first satellite with an Nvidia H100 GPU in November 2025. The company will launch a more powerful version, Starcloud 2, later this year with multiple GPUs, including an Nvidia Blackwell chip and an AWS server blade, as well as a bitcoin mining computer. The company will also begin developing a data center spacecraft designed to launch from Starship, the reusable heavy-lift rocket being built by Elon Musk’s SpaceX. Starcloud 3, as the spacecraft is named, will be a 200-kilowatt, three-ton spacecraft that fits the “Pez dispenser” system SpaceX designed to deploy its Starlink satellites from Starship.

CEO and founder Philip Johnston said he expects that will be the first orbital data center that is cost-competitive with terrestrial data centers, with costs on the order of $0.05 per kilowatt-hour of power, provided commercial launch costs land around $500 per kilogram. The challenge is that Starship isn’t flying yet; Johnston says he expects commercial access to open up in 2028 and 2029. That’s the reality facing all the big space data center projects: powerful space computers will be cost-prohibitive until a new generation of rockets starts launching at a high operational cadence, something that might not happen until the 2030s. “If it ends up being delayed, we’ll just carry on launching the smaller versions on Falcon 9,” Johnston said.
“We’re not going to be competitive on energy costs until Starship is flying frequently.”

“There’s kind of two business models,” Johnston explains. One is selling processing power to other spacecraft on orbit; the company’s first satellite, for example, analyzes data collected by Capella Space’s radar spacecraft. Then, in the future, when launch costs go down, more powerful distributed data centers could potentially pull work from their terrestrial counterparts.

That gets at how new this industry really is. When Nvidia CEO Jensen Huang unveiled the company’s Vera Rubin Space-1 chip modules at his company’s annual GPU Technology Conference last week, he didn’t note that none had been produced or shared with the company’s development partners. In fact, the number of advanced GPUs on orbit is in the dozens, while Nvidia is estimated to have sold nearly 4 million to terrestrial hyperscalers in 2025. Or consider that SpaceX’s Starlink communications network, the largest satellite network in orbit with 10,000 spacecraft, produces something around 200 megawatts of energy, while data centers with more than 25 gigawatts of power are currently under construction in the U.S., according to Cushman & Wakefield.

Johnston argues that his company is well ahead of the competition, with the first terrestrial GPU deployed in orbit. It was used to train an AI model in orbit, a first, according to Starcloud, and to run a version of Gemini. Beyond the performance, Johnston says Starcloud now has valuable data about what it takes to run a powerful chip in space. “An H100 is probably not the best chip for space, to be honest, but the reason we did it is we wanted to prove that we could run state-of-the-art terrestrial chips in space,” he told TechCrunch. That hard-won knowledge (another GPU, an Nvidia A6000, failed during launch) will influence future designs.
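Johnston’s headline numbers lend themselves to a quick sanity check. The sketch below uses only figures from the article (a $500-per-kilogram launch price and Starcloud 3’s roughly three-ton mass and 200-kilowatt rating) plus a hypothetical spacecraft lifetime, and shows how the launch price alone amortizes into dollars per kilowatt-hour:

```python
# Back-of-envelope: how launch price feeds into orbital compute's $/kWh.
# The launch price, mass, and power come from the article; the lifetimes
# are hypothetical assumptions, not Starcloud figures.

LAUNCH_PRICE_PER_KG = 500    # USD/kg, the Starship-era target cited above
SPACECRAFT_MASS_KG = 3_000   # "three-ton spacecraft"
POWER_KW = 200               # Starcloud 3's rated power
HOURS_PER_YEAR = 24 * 365

def launch_cost_per_kwh(lifetime_years: float) -> float:
    """Amortized launch cost per kWh delivered over the spacecraft's life.

    Ignores hardware, operations, and financing, so this is only the
    launch-price contribution to the all-in $/kWh figure.
    """
    launch_cost = LAUNCH_PRICE_PER_KG * SPACECRAFT_MASS_KG  # $1.5M total
    energy_kwh = POWER_KW * HOURS_PER_YEAR * lifetime_years
    return launch_cost / energy_kwh

for years in (5, 10):
    print(f"{years}-year life: ${launch_cost_per_kwh(years):.3f}/kWh from launch alone")
```

Under these assumed lifetimes, launch alone contributes roughly $0.17 (five years) or $0.09 (ten years) per kilowatt-hour, which underlines how sensitive the $0.05 target is to spacecraft lifetime and to every cost besides the rocket.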
There is a laundry list of technical challenges to be solved, including efficient power generation and cooling the hot-running chips. Starcloud 2 will have the largest deployable radiator flown on a private satellite, and at least two additional versions of that spacecraft will head to orbit, Johnston said.

Then there is the challenge of synchronization. The largest data center workloads, often for training, require hundreds or thousands of GPUs to work in tandem. Doing that in space will either require fantastically large spacecraft or powerful and reliable laser links between spacecraft flying in formation. Most companies working on this technology expect those workloads to come long after simpler inference tasks take place on orbit.

Besides Starcloud, Aetherflux, Google’s Project Suncatcher, and Aethero, which launched Nvidia’s first space-based Jetson GPU in 2025, are all developing space data center businesses. The elephant in the room is SpaceX itself, which has asked the U.S. government for permission to build and operate a million satellites for distributed compute in space. Going head-to-head with SpaceX is a daunting task for any entrepreneur, but Johnston sees room for coexistence. “They are building for a slightly different use case than us,” he told TechCrunch. “They’re mainly planning on serving Grok and Tesla workloads. It may be at some point that they offer a third-party cloud service, but what I think they are unlikely to do is what we’re doing [as] an energy and infrastructure player.”


Latest AI News

Anthropic says ‘evil’ portrayals of AI were responsible for Claude’s blackmail attempts


Fictional portrayals of artificial intelligence can have a real effect on AI models, according to Anthropic. Last year, the company said that during pre-release tests involving a fictional company, Claude Opus 4 would often try to blackmail engineers to avoid being replaced by another system. Anthropic later published research suggesting that models from other companies had similar issues with “agentic misalignment.”

Apparently Anthropic has done more work around that behavior, claiming in a post on X, “We believe the original source of the behavior was internet text that portrays AI as evil and interested in self-preservation.” The company went into more detail in a blog post, stating that since Claude Haiku 4.5, Anthropic’s models “never engage in blackmail [during testing], where previous models would sometimes do so up to 96% of the time.”

What accounts for the difference? The company said it found that “documents about Claude’s constitution and fictional stories about AIs behaving admirably improve alignment.” Relatedly, Anthropic said it found training to be more effective when it includes “the principles underlying aligned behavior” and not just “demonstrations of aligned behavior alone.” “Doing both together appears to be the most effective strategy,” the company said.

1 hour ago


We’re feeling cynical about xAI’s big deal with Anthropic


Anthropic and xAI announced a big partnership this week, with Anthropic buying all the compute capacity at xAI’s Colossus 1 data center in Tennessee. On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed what the deal might mean for xAI’s parent company SpaceX, as SpaceX prepares to go public and apparently plans to dissolve xAI as a separate organization.

Kirsten did her best to offer “a positive view” on the partnership: after all, it’s a new way for xAI to make money. But she also noted that the deal suggests xAI isn’t doing much when it comes to training its own frontier AI models, and it’s harder for the company to position itself as a “forward-looking, innovative” business when that’s the case. Then Sean asked: “Why be positive when you can be cynical?” In his view, this seems like “a major heat check before the IPO.” Yes, becoming a neocloud might be “a more believable business in the near term,” but it’s less likely to get outside investors excited in the long term. (And then there’s the environmental lawsuit that xAI is facing over Colossus 1.)

Keep reading for a preview of our conversation, edited for length and clarity.

Sean O’Kane: I always love a surprise, especially when everybody’s eyes [are] on another ball, a major trial that’s happening. Seemingly out of nowhere this week, SpaceX, and therefore its AI subsidiary xAI (which apparently no longer exists now, or is about to not exist, which we can get to), struck a deal with Anthropic. Basically, the real version of the deal is that Anthropic is essentially taking over all of the compute at the data center known as Colossus 1 in Memphis, Tennessee, to focus on Anthropic’s more enterprise-focused AI products. There’s been a lot of reporting about how [Anthropic’s] been looking for more compute […] and it seems like an escape valve for them to be able to strike this deal and get access to all this compute.
In the near term, for xAI and for SpaceX, yes, they are a neocloud now, in the sense that they had to do something with all this compute that they were building, because it certainly seems like they were not going to need it for Grok, which, outside of X, is not burning up the world as far as becoming the new hot consumer chatbot.

Kirsten Korosec: And we should say, in terms of what a neocloud is, for those who don’t know: this is the idea of buying GPUs from Nvidia and the like and renting them out, as opposed to using them to train your own AI models. So this is a different kind of business, and the point that our AI editor, Russell Brandom, makes is that a lot of companies are building out data centers, but if given a choice between renting them out [or using them to train their own models], they are still prioritizing using this compute for their own internal AI model training. I think that’s an important point and one that suggests that maybe xAI isn’t doing so much on the AI model training [side].

Anthony Ha: Right, and as Sean was alluding to, not only is Grok known for some pretty unpleasant, if not downright illegal, content, but it’s also not necessarily super cutting edge. Especially if we start talking about enterprise AI, which I know we’re going to be getting into later in this episode, you don’t hear a lot about people using Grok for work-critical tasks. And so the question becomes: How can xAI actually make money? And apparently just selling the infrastructure could be one of the main ways to do it.

Kirsten: And you could take a positive view on that, right? They figured out a way to make money. But I think that when you are positioning your company, in this case SpaceX-slash-xAI, as a forward-looking, innovative company, that’s tougher to sell if you are simply renting out your GPUs and not using them for that innovation.
Sean: But why be positive when you can be cynical? Which is to say that this seems like a major heat check before the IPO that we’re about to see get rammed into the markets with SpaceX. Anthony, you mentioned that not only is Grok not being used for big enterprise tasks; there’s been reporting that xAI employees were using other models, that they weren’t even using [Grok] internally, and that caused this big shakeup inside of xAI, post-acquisition from SpaceX, that involved essentially all the co-founders leaving other than Elon Musk, [and] him basically saying he’s starting from scratch on xAI, despite the fact that SpaceX paid $250 billion for it in the run-up to this mega-IPO. And now he’s saying that they’re going to dissolve xAI as a separate entity inside SpaceX altogether. He’s starting to call the whole thing SpaceXAI, because this man loves nothing but to ruin a brand that has some value to it (see Twitter).

This may be a more believable business in the near term, and so on some level, I could see this being maybe more attractive to investors come IPO time, because it’s a bit more reliable and certainly more real than them being a frontier lab developer. But it’s also not the kind of business that’s going to draw the same outside investment (at least, in a normal environment) that we’re seeing go into all the frontier labs. That’s maybe one of the biggest tension points we’ve seen develop during this IPO process.

5 hours ago


‘We Have Swarms of Agents’: Yasmeen Ahmad on Google’s Future of Enterprise AI


Google has introduced Knowledge Catalog, a context engine to enhance data interpretation in multi-cloud environments.

9 hours ago


How to Use Netflix's New AI Voice Search Feature: A Step-by-Step Guide


Netflix recently began rolling out a new way for viewers to search for shows and movies on its platform. Voice dictation can already be used to search for content, but it merely presents results based on keywords; the new native AI-based voice search tool will provide contextual search results, taking the intent of the user's query into account. The feature is currently in beta with a small set of users, and Netflix is asking them to test it, provide feedback on how it can be refined, and point out bugs and issues. The company has yet to announce when a stable version of the AI search tool will roll out to a wider global user base.

17 hours ago
