
Sam Altman would like to remind you that humans use a lot of energy, too

4:22 AM IST · February 22, 2026


OpenAI CEO Sam Altman addressed concerns about AI’s environmental impact this week while speaking at an event hosted by The Indian Express. For one thing, Altman, who was in India for a major AI summit, said concerns about AI’s water usage are “totally fake,” though he acknowledged it was a real issue when “we used to do evaporative cooling in data centers.” “Now that we don’t do that, you see these things on the internet where, ‘Don’t use ChatGPT, it’s 17 gallons of water for each query’ or whatever,” Altman said. “This is completely untrue, totally insane, no connection to reality.”

He added that it’s “fair” to be concerned about “the energy consumption — not per query, but in total, because the world is now using so much AI.” In his view, this means the world needs to “move towards nuclear or wind and solar very quickly.” There’s no legal requirement for tech companies to disclose how much energy and water they use, so scientists have been trying to study it independently. Data centers have also been connected to rising electricity prices.

Citing a previous conversation with Bill Gates, the interviewer asked whether it’s accurate to say a single ChatGPT query currently uses the equivalent of 1.5 iPhone battery charges, to which Altman replied, “There’s no way it’s anything close to that much.”

Altman also complained that many discussions about ChatGPT’s energy usage are “unfair,” especially when they focus on “how much energy it takes to train an AI model, relative to how much it costs a human to do one inference query.” “But it also takes a lot of energy to train a human,” Altman said. “It takes like 20 years of life and all of the food you eat during that time before you get smart. And not only that, it took the very widespread evolution of the 100 billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever, to produce you.”

So in his view, the fair comparison is, “If you ask ChatGPT a question, how much energy does it take once its model is trained to answer that question versus a human? And probably, AI has already caught up on an energy efficiency basis, measured that way.” You can watch the full interview below. The conversation about water and energy usage begins at around 26:35.
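The “1.5 iPhone battery charges per query” claim can be sanity-checked with back-of-envelope arithmetic. Every number in the sketch below is an outside assumption, not a figure from the interview: the ~12.7 Wh battery capacity is typical of recent iPhones, and the 0.34 Wh per-query estimate is a figure Altman has cited elsewhere, which the interview itself does not mention.

```python
# Back-of-envelope comparison of the claimed vs. estimated energy per ChatGPT query.
# ASSUMED figures (not from the article):
#   - a typical recent iPhone battery holds roughly 12.7 watt-hours
#   - 0.34 Wh/query is a per-query estimate Altman has cited elsewhere
IPHONE_BATTERY_WH = 12.7
CLAIMED_CHARGES_PER_QUERY = 1.5   # the "1.5 iPhone charges" figure from the interview
PER_QUERY_ESTIMATE_WH = 0.34

claimed_wh = IPHONE_BATTERY_WH * CLAIMED_CHARGES_PER_QUERY  # roughly 19 Wh
ratio = claimed_wh / PER_QUERY_ESTIMATE_WH                  # roughly 56x apart

print(f"Claimed:  ~{claimed_wh:.0f} Wh per query")
print(f"Estimate: {PER_QUERY_ESTIMATE_WH} Wh per query (claim is ~{ratio:.0f}x higher)")
```

Under these assumed inputs, the two figures differ by well over an order of magnitude, which is consistent with Altman’s “no way it’s anything close to that much,” though the true per-query number depends on model, hardware, and accounting choices that neither side discloses.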


Latest AI News

Adaption aims big with AutoScientist, an AI tool that helps models train themselves


For years, AI researchers have anticipated the moment when AI systems will be able to improve themselves better than humans could. With investors pouring money into a new generation of research-driven AI labs, there are more resources than ever available to pursue the goal. Now, one of those neolabs has taken a major step towards making it real.

On Wednesday, Adaption introduced a new product called AutoScientist that helps models learn specific capabilities quickly by using an automated approach to conventional fine-tuning. The techniques are applicable to a wide range of fields, but the Adaption team is particularly focused on the potential for speeding up and easing the process of training and fine-tuning a frontier-level AI model.

According to co-founder and CEO Sara Hooker, who previously worked as VP of AI research at Cohere, AutoScientist represents a new way to approach the AI training process. “What’s super exciting about it is that it co-optimizes both the data and the model, and learns the best way to basically learn any capability,” Hooker told TechCrunch. “It suggests we can finally allow for successful frontier AI trainings outside of these labs.”

AutoScientist builds on the company’s existing data offering, Adaptive Data, which aims to make it easier to build high-quality datasets over time. AutoScientist, meanwhile, is designed to turn those continuously improving datasets into continuously improving AI models. “Our view at Adaption is that the whole stack should be completely adaptable, and should basically optimize on the fly to whatever task you have,” Hooker says.

Of course, that approach will only be as good as the results. In its launch materials, Adaption boasts that AutoScientist has more than doubled win rates across different models, impressive numbers that are difficult to put into context. Since the system is built to adapt models to specific tasks, conventional benchmarks like SWE-Bench or ARC-AGI aren’t applicable.

Still, Adaption is confident that users will see the difference once they try AutoScientist out: so confident, in fact, that the lab is making the tool free to use for the first 30 days after its release. “The same way that code generation unlocked a lot of tasks, this is going to unlock a lot of innovation at the frontier of different fields,” Hooker says.

1 hour ago


Zoho Commits ₹70 Crore to ONDC to Empower MSMEs with Accessible Sovereign Tech


With this investment, Zoho seeks to support MSMEs in their digital transformation and contribute to India's economic growth.

1 hour ago


 Proximal Cloud, NxtGen Partner to Enable Sovereign AI Deployments in India


The partnership targets regulated sectors with compliant, private AI infrastructure and local data control.

1 hour ago


AIM Launches ‘Best Firm for GCC Talent’ Certification


The new certification programme focuses on culture, learning, and retention across India’s Global Capability Centres.

1 hour ago
