Data centers get ready — the Senate wants to see your power bills


1:26 AM IST · March 27, 2026


Two U.S. senators on Thursday fired the latest salvo in an increasingly active front against data centers and their energy use. Senators Josh Hawley and Elizabeth Warren sent a letter to the U.S. Energy Information Administration (EIA) asking it to collect details on energy use from data centers — and how that use is affecting the grid.

The senators urged the EIA "to establish a mandatory annual reporting requirement for data centers and other large loads," they wrote in the letter, which TechCrunch has viewed. "As electricity demand growth continues to accelerate after years of relative stagnation, the lack of reliable, standardized data on large load energy consumption poses significant risks to effective grid planning and oversight." Wired was first to report on the letter.

The letter isn't the first move by politicians to try to place new regulatory requirements on data centers. Sen. Bernie Sanders and Rep. Alexandria Ocasio-Cortez said Wednesday they would introduce legislation that would halt new data center construction until Congress could come to an agreement on how to regulate AI.

Energy use by data centers has exploded in recent years. Google's data centers, for example, doubled their consumption between 2020 and 2024. The trend isn't likely to change in the near future. By 2035, planned new data centers will nearly triple the sector's energy demand.

The EIA is a government agency tasked with collecting and analyzing data related to the energy system — sort of like a Census Bureau for the grid. It was established in 1977 under the Department of Energy in the wake of the oil shocks of the early 1970s. For decades, the EIA has gathered a wealth of information about energy use in the U.S., including costs, generating sources, and energy-efficiency programs. It also tracks how different sectors use energy, though it focuses on only four very broad categories: residential, commercial, industrial, and transportation.
Hawley and Warren are also asking the EIA to collect more granular information on data centers, including how energy consumption differs between AI computing tasks and general cloud services. The senators have very specific requests for what that data should look like, including hourly, annual, and peak energy loads and the rates companies pay. They also want to know about any grid upgrades required by the addition of new large loads, how those upgrades are paid for, and whether data center customers participate in demand response programs, in which utilities pay heavy users to reduce their use for a period of time.

The letter cites EIA administrator Tristan Abbey, who in December said the agency will be an "essential player" in collecting data on energy demand from data centers. Hawley and Warren requested that the agency reply to their letter by April 9.

It's possible the process is already underway, though the EIA hasn't publicly shared whether it is. Changes to the EIA's surveys must go through the Office of Management and Budget process, which requires a public comment period. "We get requests for analysis very often. We get requests for an actual new product less frequently," Abbey said at the public event in December. "It takes probably about two years to launch a new survey from scratch. But there are authorities that exist where you can avoid the two-year process by conducting surveys of smaller scope, but potentially a sharper signal."


Latest AI News

Get ready for the whisper-filled office of the future


How will work setups change if we spend more and more time talking to our computers? A recent feature in the Wall Street Journal looks at the rising popularity of dictation apps like Wispr, especially now that they can be connected to vibe coding tools, and what that might mean for office etiquette.

One VC said that visiting startup offices now feels like stepping into a high-end call center. And Gusto co-founder Edward Kim is apparently telling his team that in the future, offices will sound "more like a sales floor." (As someone still scarred from the time his desk was briefly relocated to a sales floor, let me say: Oh no.)

Kim claimed that he only types now when he absolutely has to. But he admitted that constantly dictating in the office can be "just a little awkward." Similarly, AI entrepreneur Mollie Amkraut Mueller said her husband became annoyed with her new habit of whispering to her computer, so their late-night work sessions now involve sitting apart, or "one of us will stay in our office." But Wispr founder Tanay Kothari insisted that this will all seem "normal" one day, just as it's become normal to spend hours staring at your phone.


Anthropic says ‘evil’ portrayals of AI were responsible for Claude’s blackmail attempts


Fictional portrayals of artificial intelligence can have a real effect on AI models, according to Anthropic. Last year, the company said that during pre-release tests involving a fictional company, Claude Opus 4 would often try to blackmail engineers to avoid being replaced by another system. Anthropic later published research suggesting that models from other companies had similar issues with "agentic misalignment."

Apparently Anthropic has done more work around that behavior, claiming in a post on X, "We believe the original source of the behavior was internet text that portrays AI as evil and interested in self-preservation." The company went into more detail in a blog post, stating that since Claude Haiku 4.5, Anthropic's models "never engage in blackmail [during testing], where previous models would sometimes do so up to 96% of the time."

What accounts for the difference? The company said it found that "documents about Claude's constitution and fictional stories about AIs behaving admirably improve alignment." Relatedly, Anthropic said it found training to be more effective when it includes "the principles underlying aligned behavior" and not just "demonstrations of aligned behavior alone." "Doing both together appears to be the most effective strategy," the company said.


We’re feeling cynical about xAI’s big deal with Anthropic


Anthropic and xAI announced a big partnership this week, with Anthropic buying all the compute capacity at xAI's Colossus 1 data center in Tennessee. On the latest episode of TechCrunch's Equity podcast, Kirsten Korosec, Sean O'Kane, and I discussed what the deal might mean for xAI's parent company SpaceX, as SpaceX prepares to go public and apparently plans to dissolve xAI as a separate organization.

Kirsten did her best to offer "a positive view" on the partnership — after all, it's a new way for xAI to make money. But she also noted that the deal suggests xAI isn't doing much when it comes to training its own frontier AI models, and it's harder for the company to position itself as a "forward-looking, innovative" business when that's the case.

Then Sean asked: "Why be positive when you can be cynical?" In his view, this seems like "a major heat check before the IPO." Yes, becoming a neocloud might be "a more believable business in the near term," but it's less likely to get outside investors excited in the long term. (And then there's the environmental lawsuit that xAI is facing over Colossus 1.) Keep reading for a preview of our conversation, edited for length and clarity.

Sean O'Kane: I always love a surprise, especially when everybody's eyes [are] on another ball, a major trial that's happening. Seemingly out of nowhere this week, SpaceX and therefore its AI subsidiary xAI — which apparently no longer exists now, or is imminently about to no longer exist, which we can get to — struck a deal with Anthropic. Basically, the real version of the deal is that Anthropic's essentially taking over all of the compute at the data center known as Colossus 1 in Memphis, Tennessee, to focus on Anthropic's more enterprise-focused AI products. There's been a lot of reporting about how [Anthropic's] been looking for more compute […] and it seems like an escape valve for them to be able to strike this deal and get access to all this compute.
In the near term, for xAI and for SpaceX, yes, they are a neocloud now, in the sense that they had to do something with all this compute that they were building, because it certainly seems like they were not going to need it for Grok — which, outside of X, is not burning up the world as far as becoming the new hot consumer chatbot.

Kirsten Korosec: And we should say, in terms of what a neocloud is, for those who don't know: this is the idea of buying GPUs from Nvidia and the like, and renting those out as opposed to using them for your own AI, training your own AI models. So this is a different kind of business, and the point that our AI editor, Russell Brandom, makes is that a lot of companies are building out data centers, but if given a choice between renting them out [or using them to train their own models], they are still prioritizing using this compute for their own internal AI model training. I think that's an important point and one that suggests that maybe xAI isn't doing so much on the AI model training [side].

Anthony Ha: Right, and as Sean was alluding to, most people would not necessarily think of Grok as — not only is it known for some pretty unpleasant, if not downright illegal, content, but also it's not necessarily super cutting edge. Especially if we start talking about enterprise AI, which I know we're gonna be getting into later in this episode, you don't hear a lot about people using Grok for work-critical tasks. And so the question becomes: How can xAI actually make money? And apparently just selling the infrastructure could be one of the main ways to do it.

Kirsten: And you could take a positive view on that, right? They figured out a way to make money. But I think that when you are positioning your company — in this case, SpaceX-slash-xAI — as a forward-looking, innovative company, that's tougher to sell if you are simply renting out your GPUs and not using them for that innovation.
Sean: But why be positive when you can be cynical? Which is to say that this seems like a major heat check before the IPO that we're about to see get rammed into the markets with SpaceX. Anthony, you mentioned not only is Grok not being used for big enterprise tasks, there's been reporting that xAI employees were using other models, they weren't even using [Grok] internally, and that caused this big shakeup inside of xAI, post-acquisition from SpaceX, that involved essentially all the co-founders leaving other than Elon Musk, [and] him basically saying he's starting from scratch on xAI, despite the fact that SpaceX paid $250 billion for it in the run-up to this mega-IPO. And now he's saying that they're going to dissolve xAI as a separate entity inside SpaceX altogether. He's starting to call the whole thing SpaceXAI, because this man loves nothing but to ruin a brand that has some value to it — see Twitter.

This may be a more believable business in the near term, and so on some level, I could see this being maybe more attractive to investors come IPO time, because it's a bit more reliable and certainly more real than them being a frontier lab developer. But it's also not the kind of business that's going to draw the same — at least, in a normal environment — outside investment that we're seeing go into all the frontier labs. That's maybe one of the biggest tension points we've seen develop during this IPO process.


‘We Have Swarms of Agents’: Yasmeen Ahmad on Google’s Future of Enterprise AI


Google has introduced Knowledge Catalog, a context engine to enhance data interpretation in multi-cloud environments.
