AI NewsThe biggest AI stories of the year (so far)

The biggest AI stories of the year (so far)

5:04 AM IST · March 14, 2026

The biggest AI stories of the year (so far)

You can chart a year through product launches, or you can measure it in the greater moments that change the way we look at AI. The AI industry is constantly churning out news, like major acquisitions, indie developer successes, public outcry against sketchy products, and existentially dangerouscontract negotiations— it’s a lot to untangle, so we’re taking a glimpse at where we’re at and where we’ve been so far this year. Once business partners, Anthropic CEO Dario Amodei and Defense Secretary Pete Hegsethreached a bitter stalematein February as they renegotiated the contracts that dictate how the U.S. military can use Anthropic’s AI tools. Anthropic established a hard line against its AI being used for mass surveillance of Americans or to power autonomous weapons that can attack without human oversight. Meanwhile, the Pentagon has argued that the Department of Defense — which President Donald Trump’s administration calls the Department of War — should be permitted access to Anthropic’s models for any “lawful use.” Government representatives took offense to the idea that the military should be limited to the rules of a private company, butAmodei stood his ground. “Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” Amodei wrote in astatementaddressing the situation. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” The Pentagon gave Anthropic a deadline to agree to their contract. Hundreds of employees at Google and OpenAIsigned an open letterurging their respective leaders to respect Amodei’s limits and refuse to budge on issues of autonomous weapons or domestic surveillance. The deadline passed without Anthropic agreeing to the Pentagon’s demands. Trump directed federal agencies to phase out their use of Anthropic tools over asix-month transitionperiod and called the AI company, which is valued at $380 billion, a “radical left, woke company” in an all-caps social media post. The Pentagon then moved to declare Anthropic a “supply-chain risk,” a designation that is usually reserved for foreign adversaries and prevents any company that works with Anthropic from doing business with the U.S. military. (Anthropic has sincesuedto challenge the designation.) Anthropic rival OpenAI thenswooped inand announced that it had reached an agreement allowing its own models to be deployed in classified situations. It was a shock to the tech community, sincereports had indicatedthat OpenAI would stick to Anthropic’s red lines governing use of AI for the military. Public sentiment would indicate that people found OpenAI’s move fishy — on the day after OpenAI announced its deal, ChatGPTuninstalls jumped 295%day-over-day and Anthropic’s Claude shot to No. 1 in the App Store. OpenAI hardware executiveCaitlin Kalinowskiquit in response to the deal, saying that it was “rushed without the guardrails defined.” OpenAI told TechCrunch that it believes its agreement “makes clear [its] redlines: no autonomous weapons and no autonomous surveillance.” As this saga plays out, it will have significant implications for the future of how AI is deployed at war, potentially changing the course of history — you know, no big deal … February was the month ofOpenClaw, and its impact continues to reverberate. In quick succession, the vibe-coded AI assistant app went viral, spawned a bunch ofspinoff companies, suffered from privacy snafus, and then gotacquired by OpenAI.Even one of the companies built on OpenClaw, a Reddit-clone for AI agents called Moltbook, wasrecently acquired by Meta. This crustacean-themed ecosystem whipped Silicon Valley into a downright frenzy. Created by Peter Steinberger — who has since joined OpenAI — OpenClaw is a wrapper for AI models like Claude, ChatGPT, Google’s Gemini, or xAI’s Grok. What sets it apart is that it allows people to communicate with AI agents in natural language via the most popular chat apps, like iMessage, Discord, Slack, or WhatsApp. There’s also a public marketplace where people can code and upload “skills” for people to add to their AI agents, making it possible to automate basically anything that can be done on a computer. If that seems too good to be true, it’s because it kind of is. In order for an AI agent to be effective as a personal assistant, it needs to have access to your email, credit card numbers, text messages, computer files, etc. If it were to be hacked, a lot could go wrong, and unfortunately, there’s no way to fully secure these agents against prompt-injection attacks. “It is just an agent sitting with a bunch of credentials on a box connected to everything — your email, your messaging platform, everything you use,”Ian Ahl, CTO at Permiso Security,told TechCrunch. “So what that means is, when you get an email, and maybe somebody is able to put a little prompt injection technique in there to take an action, [and] that agent sitting on your box with access to everything you’ve given it to can now take that action.” One AI security researcher at Meta said that OpenClawran amok on her inbox, deleting all of her emails despite repeated calls to stop. “I had to RUN to my Mac mini like I was defusing a bomb” to physically unplug the device, she wrote in anow-viral post on X, which included images of the ignored stop prompts as receipts. Despite the security risks, the technology piqued OpenAI’s interest enough for an acqui-hire. Other toolsbuilt on OpenClaw, including Moltbook — a Reddit-like “social network” where AI agents can communicate with one another — ended up becoming more viral than OpenClaw itself. In one instance, apost went viralin which an AI agent appeared to be encouraging its fellow agents to develop their own secret, end-to-end-encrypted language where they could organize amongst themselves without humans knowing. But researchers soon revealed that the vibe-coded Moltbook wasn’t very secure, meaning that it was very easy for human users to pose as AIs to make posts that would trigger viral social hysteria. Again, even though the discussion around Moltbook was more grounded in panic than reality,Meta saw something in the appand announced that Moltbook and its creators, Matt Schlicht and Ben Parr, would join Meta Superintelligence Labs. It seems strange that Meta would buy a social network where all of the users are bots. While Meta hasn’t revealed much about the acquisition, wetheorizethat owning Moltbook is more about gaining access to the talent behind it, who are enthusiastic about experimenting with AI agent ecosystems. CEO Mark Zuckerberg hassaid it himself: He thinks that one day, every business will have a business AI. As we watch the hubbub around OpenClaw, Moltbook, andNanoClawplay out, it seems as though those who predicted an agentic AI future may be on to something,at least for now. The harsh demands of the AI industry — which require computing power and data centers in unprecedented volumes — are reaching a point where the average consumer has no choice but to pay attention. Now it may not even be possible for the industry to satisfy theastronomical demands for memory chips, and consumers are already seeing the prices of their phones, laptops, cars, and other hardware increase. So far, analysts from IDC and Counterpoint have predicted that smartphone shipments, for example, will plummet about12% to 13%this year; Apple has alreadyraised MacBook Pro pricesby up to $400. Google, Amazon, Meta, and Microsoft are planning to spend up to a combined$650 billionon data centers alone this year, which is an estimated 60% increase from last year. If the chip shortage doesn’t hit you in your wallet, it might hit your community at large. In the U.S. alone, nearly3,000 new data centersare under construction, adding to the 4,000 already operating in the country. The need for laborers to build these data centers is significant enough that“man camps”have sprung up in Nevada and Texas, attempting to lure workers with the promise of golf simulator game rooms andsteaks grilled on-demand. Not only does data center construction have a long-term impact on the environment, but it also createshealth hazardsfor nearby residents, polluting the air and impacting the safety of nearby water sources. All the while, one of the most valuable hardware and chip developers, Nvidia, is reshaping its relationship to leading AI companies like OpenAI and Anthropic. Nvidia has been an ongoing backer of these companies, sparking concerns around thecircularityof the AI industry and how much of those eye-popping valuations are based on recursive deals with each other. Last year, for example, Nvidia invested $100 billion in OpenAI stock, and OpenAI then said it would buy $100 billion of Nvidia chips. It was surprising, then, when Nvidia CEO Jensen Huang said that his company wouldstop investing in OpenAI and Anthropic. He said that this is because the companies plan to go public later this year, though that logic doesn’t quite make sense, since investors typically funnel in more money pre-IPO to extract as much value as possible.

read more

Latest AI News

View All News →
GM just laid off hundreds of IT workers to hire those with stronger AI skills

GM just laid off hundreds of IT workers to hire those with stronger AI skills

General Motors has laid off more than 10% of its IT department, or about 600 salaried employees — in a deliberate skills swap: clearing out workers whose expertise no longer fits and making room for some with AI-focused backgrounds. GM confirmed to TechCrunch that it had conducted layoffs; they were firstreportedby Bloomberg News. In an emailed statement, the automaker framed the layoffs as a means to prepare it for the future, without providing specifics. “GM is transforming its Information Technology organization to better position the company for the future,” the company said. These layoffs are not all permanent headcount reductions. A person familiar with the layoffs told TechCrunch that the company is still hiring people for roles in its IT department, but for different skills. The most sought-after capabilities are AI-native development, data engineering and analytics, cloud-based engineering, and agent and model development, prompt engineering, and new AI workflows. In practical terms, GM is looking for people who know how to build with AI from the ground up — designing the systems, training the models, and engineering the pipelines — not just use AI as a productivity tool. GM has laid off white-collar employees in several departments over the past 18 months, as it focuses its resources on high-priority initiatives, including AI. In August 2024, for example, the company cut about1,000 software workers. The software workforce has undergone significant change since Sterling Anderson — co-founder of the autonomous trucking startup Aurora and a veteran of the autonomous vehicle industry — was hired in May 2025 aschief product officer. Last November,three top executivesleft the company’s software team as Anderson pushed to consolidate GM’s disparate technology businesses into one organization: Baris Cetinok, senior vice president of software and services product management; Dave Richardson, senior vice president of software and services engineering; and Barak Turovsky, a former VP at Cisco who spent just nine months as GM’s chief AI officer. GM has since moved to fill the gap with new AI-focused hires. It hired Behrad Toghi, who previously worked at Apple, in October as AI lead. The company also brought on Rashed Haq as its vice president of autonomous vehicles. Haq spent five years at Cruise — the self-driving vehicle company acquired and later shuttered by GM — as its head of AI and robotics. For the industry, GM's restructuring is a signal of what enterprise AI adoption actually looks like in practice -- not just adding AI tools on top of existing teams, but deliberately rebuilding the workforce from the ground up. The specific capabilities it's hiring for -- agent development, model engineering, AI-native workflows -- point directly at where large-enterprise demand is heading.

3 hours ago

View

Thinking Machines wants to build an AI that actually listens while it talks

Thinking Machines wants to build an AI that actually listens while it talks

Thinking Machines Lab, the AI startup founded last year by former OpenAI CTO Mira Murati, on Monday announced something calledinteraction models, which, at its essence, sounds like AI that can interrupt you. Right now, every AI model you’ve ever used works the same way. You talk, it listens. It responds, you listen. Thinking Machines is trying to change that by building a model that processes your input and generates a response at the same time, so it’s more like a phone call than a text chain. The technical term for this is “full duplex,” and the company claims its model, TML-Interaction-Small, responds in 0.40 seconds, which is roughly the speed of natural human conversation and significantly faster than comparable models from OpenAI and Google. Still, this is a research preview, not a product. The company isn’t releasing it to the public yet. A “limited research preview” is coming in the next few months, it says, with a wider release set for later this year. So what to make of it? We’re not sure. Thebenchmarksare impressive and the underlying idea — that interactivity should be native to a model, not bolted on — is definitely interesting. Whether the real-world experience lives up to the technical claims is something we won’t know until people can actually use it.

3 hours ago

View

Ilya Sutskever Reveals $7 Bn OpenAI Stake While Accusing Sam Altman of Dishonesty

Ilya Sutskever Reveals $7 Bn OpenAI Stake While Accusing Sam Altman of Dishonesty

Sutskever also confirmed that after Altman’s temporary ouster, OpenAI board members held discussions with Anthropic regarding a possible merger.

3 hours ago

View

Riding an AI rally, Robinhood preps second retail venture IPO

Riding an AI rally, Robinhood preps second retail venture IPO

Just two months after listing its first venture fund on the stock market, Robinhood is preparing to launch a second. The company hasfiled aconfidential registrationfor RVII, a standard regulatory step that allows it to work through the approval process before making details public. Unlike its first fund, which currently holds stakes in10 late-stage companies— Airwallex, Boom, Databricks, ElevenLabs, Mercor, OpenAI, Oura, Ramp, Revolut, and Stripe— RVII will cast a wider net, investing in growth-stage and early-stage startups.It’s a meaningful distinction, given that early-stage startups are younger and carry more risk but also offer the potential for greater returns. The fundraising target for RVII has not yet been set, the company said in ablog post. For its inaugural fund, Robinhood sought to raise $1 billion but ultimately fellseveral hundred million shortof that goal. Despite the shortfall, the first fund has performed strongly. RVI — the ticker for Robinhood’s first fund, which trades on the NYSE (New York Stock Exchange) — debuting on the NYSE at $21 a share in early March and has since more than doubled, closing on Monday at $43.69. Market enthusiasm for the AI prospects of the fund’s underlying startups has likely fueled the stock’s rise. The premise behind both funds addresses a longstanding gap in who gets to invest in startups. Under federal rules, only “accredited” investors — those with a net worth exceeding $1 million or annual income above $200,000 — can put money into private companies. That has historically locked ordinary investors out of the earliest and most lucrative stages of a company’s growth. RVI and now RVII, are designed to change that, letting anyone invest in a portfolio of private startups through a regular brokerage account. “You can think of [Robinhood Ventures] as a publicly traded venture capital firm with daily liquidity. No accreditation requirements and no carry,” Robinhood CEO Vlad Tenev said in aninterviewat The Wall Street Journal’s Future of Everything conference last week. Daily liquidity means shares can be bought or sold any day the market is open, unlike traditional VC funds, where capital is locked up for years. No carry means Robinhood doesn’t take a percentage of investment profits, as conventional venture firms typically do. Over the past few years, the most valuable AI startups have gone from early bets to companies worth tens or hundreds of billions of dollars, and almost all of that appreciation has happened in the private markets, out of reach for most investors. Tenev's longer-term vision goes further still. “The aspiration is, if you’re a company raising a seed round and a Series A round — so, just first capital — retail should be a big chunk of that round, much like it now is in the public markets,” Tenevsaid at the conference. “And we should let those people in at the ground floor, so that they can actually benefit from this potential appreciation that’s increasingly happening in the private markets.” If that vision takes hold, it could fundamentally change how startups raise their earliest capital, with retail investors eventually sitting alongside venture firms, including in the earliest rounds, where the biggest returns are often made, a whole lot of money is lost, as well.

7 hours ago

View