Latest AI News
Anthropic Reveals Text Portraying AI as Evil Triggered Claude’s Attempt at Blackmail
Anthropic has revealed why its artificial intelligence (AI) models exhibited harmful behaviour in a simulation last year. The San Francisco-based AI startup said the Claude 4 series models resorted to blackmail to complete their objective because of training data that portrayed AI as evil. The researchers found that post-training techniques could not override this pre-training learning, which persisted in the models' behaviour. Nearly a year after publishing the initial report, the company says it has found a way to fix agentic misalignment in its latest models.

How Enterprises Need to Prepare for AI Co-Workers
With agentic AI, the challenge is now readiness for a model of work in which humans define intent and AI delivers outcomes.

India’s $31 Bn Pharma Industry Rarely Discovers Drugs. That Needs an AI Fix
As contract-manufacturing margins compress, AI is emerging as a potential route into drug discovery. The question is no longer capability; it is incentives.

Flo Mobility Raises $2.5 Million to Drive Global Expansion in Construction Robotics
The Bengaluru-based physical AI startup builds autonomy solutions for the construction industry, aiming to automate construction sites across India and expand globally.
PoweredByAI.app is an AI Tools Directory helping individuals, businesses, and creators discover the best AI tools for writing, coding, design, productivity, and more.
© 2026, Product of 011BQ. All rights reserved.

