Latest AI News

ChatGPT Uninstalls Surge 295% After Pentagon Deal as Claude Climbs App Charts
US users sharply turned on ChatGPT following its defense partnership, driving a spike in uninstalls and boosting rival Claude’s downloads.
View

Claude Down For the 2nd Time in 24 Hours, Anthropic Investigates
In India, more than 300 users flagged issues during the same period. Around 59% of complaints were related to the Claude chat, while 18% involved the mobile app and 22% the website.
View

Tagbin Signs MoU with DRIIV to Boost AI Innovation, Digital Capacity
The five-year pact will deploy AI labs, command centres, and digital governance platforms.
View

IndiaAI Mission’s Genloop Launches ‘Data Analyst for Every Team’
The platform has been deployed across sales, product, marketing and operations teams in enterprises.
View

ChatGPT uninstalls surged by 295% after DoD deal
U.S. uninstalls of ChatGPT’s mobile app jumped 295% day-over-day on Saturday, February 28, as consumers responded to the news of OpenAI’s deal with the Department of Defense (DoD), which has been rebranded under the Trump administration as the Department of War. This data, which comes from market intelligence provider Sensor Tower, represents a sizable increase over ChatGPT’s typical day-over-day uninstall rate of 9%, as measured over the past 30 days. Meanwhile, U.S. downloads of OpenAI competitor Anthropic’s Claude jumped 37% day-over-day on Friday, February 27, and 51% as of Saturday, February 28, after the company announced that it would not partner with the U.S. defense department. Anthropic said it was unable to agree on the deal terms over concerns that its AI would be used to surveil Americans and to power fully autonomous weaponry, applications it argues AI is not yet ready to handle safely. A set of consumers seemingly favored Anthropic’s position on the matter, the data suggests. In addition, ChatGPT’s download growth was impacted by the news of its DoD partnership: its U.S. downloads dropped 13% day-over-day on Saturday, shortly after the news of the deal went public, and fell another 5% day-over-day on Sunday. (Before the partnership was announced, the app’s downloads had grown 14% day-over-day on Friday.) These rapid changes were also reflected in Claude’s App Store ranking, as the app hit No. 1 on the U.S. App Store on Saturday, where it continues to sit as of Monday, March 2. That’s a jump of more than 20 ranks compared with roughly a week before (February 22, 2026). Consumers are also sharing their opinions about OpenAI’s deal in the app’s ratings, where 1-star reviews for ChatGPT surged 775% on Saturday, then grew another 100% day-over-day on Sunday, Sensor Tower said. Five-star reviews declined over the same period, dropping by 50%. Other third-party data providers back up Sensor Tower’s findings.
Appfigures, for instance, noted that Claude’s total daily U.S. downloads on Saturday surpassed ChatGPT’s for the first time. It also saw Claude’s U.S. downloads increase, though its estimate puts that figure even higher: up 88% day-over-day on Saturday. It noted that Claude is now the No. 1 free iPhone app in six countries outside the U.S.: Belgium, Canada, Germany, Luxembourg, Norway, and Switzerland. A third market intelligence provider, Similarweb, said that Claude’s U.S. downloads over the past week were around 20x what they were in January, though it cautioned that factors beyond the political controversy could be contributing.
View

Cursor has reportedly surpassed $2B in annualized revenue
The AI coding assistant Cursor has surpassed $2 billion in annualized revenue, a metric calculated by multiplying the latest month’s revenue by 12, according to a Bloomberg source. This individual says the four-year-old startup saw its revenue run rate double over the past three months. The disclosure appears timed to counter a recent wave of skepticism. Last week, tweets went viral questioning whether Cursor’s momentum was stalling, citing high-profile defections by individual developers to competing tools, particularly Anthropic’s Claude Code. Founded in 2022, Cursor initially sold its product primarily to individual developers. Over the last year, however, it has focused more on landing large corporate buyers, which now account for approximately 60% of revenue, according to Bloomberg. While some individual developers and smaller startups have switched from Cursor to Claude Code, which is seen as more competitively priced, that attrition appears to be offset by higher-spending corporate customers, who tend to stick around longer. Beyond Claude Code, OpenAI’s coding tool Codex is also competing for share in the rapidly growing market for AI-assisted software development. Other startups in the space include Replit, Cognition, and Lovable. Cursor was last valued at $29.3 billion when it raised a $2.3 billion funding round co-led by Accel and Coatue in November. Cursor did not immediately respond to our request for comment.
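For readers unfamiliar with the metric, an annualized run rate is simply the most recent month’s revenue extrapolated across a full year. A minimal sketch of the arithmetic, using an illustrative monthly figure (not Cursor’s actual numbers, which haven’t been disclosed):

```python
def annualized_run_rate(latest_month_revenue: float) -> float:
    """Annualized revenue run rate: the latest month's revenue times 12."""
    return latest_month_revenue * 12

# Illustrative only: a month at roughly $167M implies a ~$2B run rate.
monthly = 167_000_000
print(f"${annualized_run_rate(monthly):,.0f}")  # $2,004,000,000
```

Note the limitation implied by the definition: the metric assumes the latest month repeats unchanged, so it can overstate revenue for a company whose growth later slows.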
View

No one has a good plan for how AI companies should work with the government
As Sam Altman discovered Saturday night, it’s a fraught time to do work for the U.S. government. Around 7 p.m., the OpenAI CEO announced he would be fielding questions publicly on X, as a way of demystifying his company’s decision to pick up the Pentagon contract that Anthropic had just walked away from. Most of the questions boiled down to OpenAI’s willingness to participate in mass surveillance and automated killing – the exact activities Anthropic had ruled out in its negotiations with the Pentagon. Altman typically punted to the public sector, saying it wasn’t his role to set national policy. “I very deeply believe in the democratic process,” he wrote in one response, “and that our elected leaders have the power, and that we all have to uphold the constitution.” An hour later, he confessed surprise that so many people seemed to disagree. “There is more open debate than I thought there would be,” Altman said, “about whether we should prefer a democratically elected government or unelected private companies to have more power. I guess this is something people disagree on.” It’s a telling moment for both OpenAI and the tech industry at large. In his Q&A, Altman employed a stance that’s standard in the defense industry, where military leaders and industry partners are expected to defer to civilian leadership. But what’s more telling is that, as OpenAI transitions from a wildly successful consumer startup into a piece of national security infrastructure, the company appears unequipped to manage its new responsibilities. Altman’s public town hall came at a heightened time for his company. The Pentagon had just blacklisted OpenAI rival Anthropic for insisting on contractual limitations for surveillance and automated weaponry. Days later, OpenAI announced it had won the same contract Anthropic had given up. Altman portrayed the deal as a quick way to deescalate the conflict – and it was surely a lucrative one.
But he seemed unprepared for how much blowback it generated from both the company’s users and its employees. OpenAI has been engaging with the U.S. government for years — but not like this. When Altman was making his case to the Congressional committees in 2023, for instance, he was still mostly following the social media playbook. He was bombastic about the company’s world-changing potential while acknowledging the risks and enthusiastically engaging with lawmakers — a perfect combination for stirring up investors while heading off regulation. Less than three years later, that approach is no longer tenable. AI is so obviously powerful and the capital needs are so intense that it’s impossible to avoid a more serious engagement with the government. The surprise is how unprepared both sides seem to be for it. The biggest immediate conflict is Anthropic itself, and U.S. Defense Secretary Pete Hegseth’s stated plan Friday to designate the lab as a supply chain risk. That threat looms over the whole conversation like an unfired gun. As former Trump official Dean Ball wrote over the weekend, the designation would cut Anthropic off from hardware and hosting partners, effectively destroying the company. It would be an unprecedented move against an American company, and while it might ultimately be reversed in court, it will cause damage in the interim and send shockwaves through the industry. As Ball describes the process, Anthropic was carrying out an existing contract under terms that had been established years earlier – only to have the administration insist on changing the terms. It’s far beyond anything that would fly between private companies, and sends a chilling message to other vendors. “Even if Secretary Hegseth backs down and narrows his extremely broad threat against Anthropic, great damage has been done,” Ball wrote.
“Most corporations, political actors, and others will have to operate under the assumption that the logic of the tribe will now reign.” It’s a direct threat to Anthropic, but also a serious problem for OpenAI. The company is already under intense pressure from employees to maintain some semblance of a red line. At the same time, right-wing media will be on alert for any sign of OpenAI being a less-than-staunch political ally. In the middle of everything is the Trump administration, doing its best to make the situation as difficult as possible. It can be argued that OpenAI didn’t set out to become a defense contractor, but by virtue of its massive ambitions, it’s been forced to play the same game as Palantir and Anduril. Making inroads during the Trump administration means picking sides. There are no apolitical actors here, and winning some friends will mean alienating others. It remains to be seen how high a price OpenAI will pay, either in lost business or lost employees, but it’s unlikely to emerge unscathed. It might seem strange that this crackdown is coming at a time when there are more prominent tech investors holding influential positions in Washington than ever, but most of them seem entirely happy with tribal logic. Among Trump-aligned venture capitalists, Anthropic has long been perceived as currying favor with the Biden administration in ways that would damage the larger industry – a perception underscored by Trump advisor David Sacks’ reaction to the ongoing conflict. Now that the reverse has happened, few seem willing to stand up for the broader principle of free enterprise. This is a difficult position for any company to be in – and while politically aligned players may benefit in the short term, they’ll be just as exposed when political winds inevitably shift. There’s a reason why, for decades, the defense sector was dominated by slow-moving, heavily regulated conglomerates like Raytheon and Lockheed Martin.
Operating as an industrial wing of the Pentagon gave them the political cover to stay focused on the technology without having to press reset every time the White House changed hands. Today’s startup competitors might move faster than their predecessors – but they’re much less prepared for the long term.
View

Investors spill what they aren’t looking for anymore in AI SaaS companies
Investors have been pouring billions into AI companies over the past few years, as the technology continues to hold sway in the Valley and thus the world. But not all AI companies are grabbing investor attention. Indeed, even as it seems every company these days is rebranding to include “AI” in its name, some startup ideas are just no longer in favor with investors. TechCrunch spoke with VCs to learn what investors aren’t looking for in AI software-as-a-service startups anymore. Popular SaaS categories for investors now include startups building AI-native infrastructure, vertical SaaS with proprietary data, systems of action (those helping users complete tasks), and platforms deeply embedded in mission-critical workflows, according to Aaron Holiday, a managing partner at 645 Ventures. But he also gave a list of companies that are considered quite boring to investors these days: Startups building thin workflow layers, generic horizontal tools, light product management, and surface-level analytics — basically, anything an AI agent can now do. Abdul Abdirahman, an investor at the firm F Prime, added that generic vertical software “without proprietary data moats” is no longer popular, and Igor Ryabenky, a founder and managing partner at AltaIR Capital, went deeper on that point. He said investors aren’t interested in anything, really, that doesn’t have much product depth. “If your differentiation lives mostly in UI [user interface] and automation, that’s no longer enough,” he said. “The barrier to entry has dropped, which makes building a real moat much harder.” New companies entering the market now need to build around “real workflow ownership and a clear understanding of the problem from day one,” he said. “Massive codebases are no longer an advantage. What matters more is speed, focus, and the ability to adapt quickly. 
Pricing also needs to be flexible: rigid per-seat models will be harder to defend, while consumption-based models make more sense in this environment.” Jake Saper, a general partner at Emergence Capital, also had thoughts on ownership. To him, the differences between Cursor and Claude Code are the “canary in the coal mine.” “One owns the developer’s workflow, the other just executes the task,” Saper continued. “Developers are increasingly choosing the execution over process.” He said any product dealing with “workflow stickiness” — meaning trying to attract as many human customers as possible to continuously use the product — might find itself in an uphill battle as agents take over the workflow. “Pre-Claude, getting humans to do their jobs inside your software was a powerful moat, but if agents are doing the work, who cares about human workflow?” he told TechCrunch. He also thinks integrations are becoming less popular, especially as Anthropic’s model context protocol (MCP) makes it easier than ever to connect AI models to external data and systems. This means someone doesn’t need to download multiple integrations or build their own custom integrations; they can just use MCP. “Being the connector used to be a moat,” Saper said. “Soon, it’ll be a utility.” Also no longer in vogue: “workflow automation and task management tools that enable the coordination of human work become less necessary if, over time, agents just execute the tasks,” Abdirahman said, citing as examples mainly public SaaS companies whose stocks are down as new AI-native startups arise with better, more efficient technology. Ryabenky said the SaaS companies struggling to raise right now are the ones that can easily be replicated. “Generic productivity tools, project management software, basic CRM clones, and thin AI wrappers built on top of existing APIs fall into this category,” he said.
“If the product is mostly an interface layer without deep integration, proprietary data, or embedded process knowledge, strong AI-native teams can rebuild it quickly. That is what makes investors cautious.” Overall, what remains attractive about SaaS is depth and expertise, with tools embedded in critical workflows, he said. Companies should now focus on integrating AI deeply into their products and updating their marketing to reflect that, Ryabenky continued. “Investors are reallocating capital toward businesses that own workflows, data, and domain expertise,” Ryabenky said. “And away from products that can be copied without much effort.”
View

A married founder duo’s company, 14.ai, is replacing customer support teams at startups
The customer service industry is in a bit of flux, thanks to AI. Investors and corporate leaders have rung alarm bells for the BPO (business process outsourcing) industry. On the other hand, AI-powered customer support startups such as Decagon, Parloa, and Sierra have picked up millions of dollars in funding from venture capitalists. 14.ai, a Y Combinator-backed startup, is building an AI-native agency that has replaced legacy customer support teams at many of the startups it has worked with. The company has raised $3 million in seed funding led by Y Combinator, with participation from General Catalyst, Base Case Capital, SV Angel, and the founders of Dropbox, Slack, Replit, and Vercel. The startup was founded by a married duo, Marie Schneegans and Michael Fester. The two met in Paris more than a decade ago and went on to build separate companies. Schneegans was a co-founder at corporate intranet company Workwell. Fester previously founded Snips, a company that worked on local-first assistants for smart devices, which was acquired by Sonos in 2019. After this, they wanted to build a company together, so they moved to the U.S. The duo picked customer service as the problem to tackle, but didn’t want to build a pure-play SaaS company. They founded 14.ai to operate as an AI-native customer support agency of sorts. “We’re not building software for customers. 14.ai is an AI-native customer service agency. We combine software and services in one package. For customers, operating software is hard, especially for customer service. We take over their entire operation, and we use our own purpose-built stack for customer service,” Fester said. The company said it can integrate with a support system within a day and start clearing the support ticket backlog very quickly. It can monitor tickets across various channels, including email, calls, chat, TikTok, Facebook, Telegram, and WhatsApp.
“We started working with a men’s health supplement company called Sperm Worms, run by a former YC founder, who had a large backlog of tickets. His team of customer service agents was in the Philippines, and they were unable to clear tickets efficiently. We took over on Thursday morning, and by Thursday afternoon, we had cleared tickets from all channels, including social media, SMS, email, chat, and voice,” Schneegans said. The company currently has six employees, who take turns being available around the clock for the clients they work with. The startup said that with the new funding, it aims to increase headcount over the next six months; 14.ai hires only AI engineers and plans to bring on more. The startup learns the workflows of customer support and other functions, such as sales and revenue growth, and tries to automate tasks through its software so humans spend less time on particular issues. “We are not just a support agency, but also a revenue growth engine, because we capture all kinds of conversations early on for a client and get insights from them,” Fester said. The company wants to take three key items off a startup’s balance sheet: ticketing systems, AI software add-ons, and human labor costs. The startup caters to clients across sectors, such as luxury skin care brand Yon-KA, smart glasses maker Brilliant Labs, and lighting company Creative Lighting. The startup also wants to improve its own product by experimenting and letting AI handle most tasks. To that end, it runs GloGlo, a glucose gummies brand for Type 1 diabetics, which it tries to operate autonomously with AI. Tom Blomfield, a partner at Y Combinator, thinks that 14.ai strikes the right balance between using AI and humans for customer service. He said that with the right integration, AI can solve 60% of tasks automatically, and the remaining 40% can be handled by humans.
“As the AI takes over more and more of the work, the balance between AI and humans will change over time. With the existing platforms, the customer is left to handle round after round of painful headcount reductions,” he told TechCrunch over email. “In contrast, 14.ai becomes the customer service department, both AI and human. They can reassign customer support agents between customers who are at different stages of the AI adoption journey, and carry out that load balancing much more effectively,” he added. Notably, AI-powered agencies are one of the things Y Combinator mentioned in its requests for startups in 2026.
View

Tech workers urge DOD, Congress to withdraw Anthropic label as a supply chain risk
Hundreds of tech workers have signed an open letter urging the Department of Defense to withdraw its designation of Anthropic as a “supply chain risk.” The letter also calls on Congress to step in and “examine whether the use of these extraordinary authorities against an American technology company is appropriate.” The letter includes signatories from major technology and venture capital firms, including OpenAI, Slack, IBM, Cursor, Salesforce Ventures, and more. It follows a dispute between the DOD and Anthropic after the AI lab last week refused to give the military unrestricted access to its AI systems. Anthropic’s two red lines in its negotiations with the Pentagon were that it didn’t want its technology to be used for mass surveillance of Americans or to power autonomous weapons that made targeting and firing decisions without a human in the loop. The DOD said it had no plans to do either of those things, but that it didn’t believe it should be limited by the rules of a vendor. After Anthropic CEO Dario Amodei declined to reach an agreement with Defense Secretary Pete Hegseth, President Donald Trump on Friday directed federal agencies to stop using Anthropic’s technology after a six-month transition period. Hegseth then moved to designate Anthropic a supply chain risk — a designation normally reserved for foreign adversaries that would blacklist the AI firm from working with any agency or company that does business with the Pentagon. In a post on Friday, Hegseth wrote: “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” But a post on X does not automatically make Anthropic a supply chain risk. The government needs to complete a risk assessment and notify Congress before military partners have to cut ties with Anthropic or its products.
Anthropic said in a blog post that the designation is “legally unsound” and that it would “challenge any supply chain risk designation in court.” Many in the industry see the administration’s treatment of Anthropic as harsh and clear retaliation. “When two parties cannot agree on terms, the normal course is to part ways and work with a competitor,” the open letter reads. “This situation sets a dangerous precedent. Punishing an American company for declining to accept changes to a contract sends a clear message to every technology company in America: accept whatever terms the government demands, or face retaliation.” Beyond concern over the government’s harsh treatment of Anthropic, many in the industry are still concerned about potential government overreach and use of AI for nefarious purposes. Boaz Barak, an OpenAI researcher, wrote in a social media post on Monday that blocking governments from using AI to do mass surveillance is also his “personal red line” and “it should be all of ours.” Moments after Trump publicly attacked Anthropic, OpenAI announced it had reached a deal of its own for its models to be deployed in the DOD’s classified environments. OpenAI CEO Sam Altman said last week that the firm has the same red lines as Anthropic. “If anything good can come out of the events of the last week, it would be if we in the AI industry start treating the issue of using AI for government abuse and surveilling its own people as a catastrophic risk of its own right,” Barak wrote. “We have done a good job of evaluations, mitigations, and processes, for risks such as bioweapons and cyber security. Let’s use similar processes here.”
View

Users are ditching ChatGPT for Claude. Here’s how to make the switch
Many users are switching to Claude following a string of controversies surrounding ChatGPT and its parent company, OpenAI. The tipping point came after Anthropic, the company behind Claude, refused to allow the Department of Defense to use its AI models for mass domestic surveillance or fully autonomous weapons. In response, President Trump ordered all federal agencies to stop using Anthropic’s products, and Defense Secretary Pete Hegseth announced plans to designate the company a supply-chain threat. Hours later, OpenAI announced its own agreement with the Pentagon, which it says includes safeguards, but the deal has still sparked widespread debate over privacy and the ethical use of AI. As a result, Claude has surged to the top of the free app rankings in Apple’s US App Store, overtaking ChatGPT. According to Anthropic, daily signups have hit record highs, free users have jumped by more than 60% since January, and paid subscribers have more than doubled this year. For many users, the recent controversy has made Claude a compelling alternative to ChatGPT. If you’re considering making the switch, this guide will walk you through transferring your data and closing your ChatGPT account. Breaking up with ChatGPT shouldn’t mean losing years of digital memory. Instead of starting from scratch with a new AI assistant, you can transfer your data to Claude so it can get up to speed on your preferences right away. There are a few ways to do this. One place to start is Settings. From there, go to Personalization and find the Memory section. Select “Manage” and review your stored information, updating anything that no longer accurately reflects your preferences. Once everything is up to date, copy the content you want to keep. Alternatively, you can export your entire chat history. Head to Settings, select Data Controls, and choose “Export Data.” ChatGPT will compile your chat records into text or JSON files and email them to you.
(Be aware that this may take a while if you have a lot of history.) You can also take a manual approach by copying key conversations from your history or asking ChatGPT to summarize your main preferences, frequently discussed topics, and any custom instructions you use. Once you’ve gathered your data, transferring it to Claude is straightforward. Open Claude, go to Settings, then Capabilities, and make sure Memory is turned on. (You’ll need to sign up for the Pro, Max, Team, or Enterprise plan to enable this feature.) Then, start a new conversation and use a prompt like, “Here’s some important context I’d like you to remember. Update your memory about me with this.” Then paste your information or summaries directly into the chat. For exported chat files, don’t paste raw logs. Instead, prompt Claude with something like: “Review this and summarize my key preferences.” We also recommend taking a moment to verify with Claude that your information has been saved accurately. You can always update your preferences as they change. To make a complete break from ChatGPT, simply canceling your subscription isn’t enough to remove your data. Here’s what to do:
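If you take the export route and are comfortable with a little scripting, you can condense the JSON export into just your own messages before handing it to Claude, rather than pasting raw logs. The sketch below assumes a conversations file shaped like past ChatGPT exports (a list of conversations, each with a "title" and a "mapping" of message nodes); the schema is undocumented and may differ in your export, so treat the field names as assumptions:

```python
def extract_user_messages(conversations):
    """Pull (title, user-message) pairs from a ChatGPT-style export.

    Assumes each conversation dict has a 'title' and a 'mapping' whose
    nodes carry a 'message' with an author role and text parts. The real
    export format is undocumented and may differ from this shape.
    """
    results = []
    for convo in conversations:
        title = convo.get("title", "Untitled")
        for node in convo.get("mapping", {}).values():
            msg = node.get("message") or {}
            role = (msg.get("author") or {}).get("role")
            if role != "user":
                continue  # skip assistant/system turns
            parts = (msg.get("content") or {}).get("parts") or []
            text = " ".join(p for p in parts if isinstance(p, str)).strip()
            if text:
                results.append((title, text))
    return results

# In practice you'd load the real file first, e.g.:
#   import json
#   conversations = json.load(open("conversations.json"))
# Tiny hand-made sample standing in for a real export:
sample = [{
    "title": "Trip planning",
    "mapping": {
        "n1": {"message": {"author": {"role": "user"},
                           "content": {"parts": ["I prefer trains over flights."]}}},
        "n2": {"message": {"author": {"role": "assistant"},
                           "content": {"parts": ["Noted!"]}}},
    },
}]

for title, text in extract_user_messages(sample):
    print(f"{title}: {text}")
```

The resulting list is compact enough to paste into a single Claude message with a "remember these preferences" prompt, and it strips out the assistant's replies, which Claude doesn't need.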
View

Anthropic’s Claude reports widespread outage
Anthropic experienced widespread disruptions on Monday morning, with thousands of users reporting problems accessing Claude services. The outage seems to be affecting Claude.ai as well as Claude Code, though the company said the Claude API is working as intended. Most users experience the error when attempting to log in, as in the screenshot below. “The issues we are seeing are related to Claude.ai and with the login/logout paths,” the company’s status page reads. Anthropic has not yet detailed what caused the outage, though the company said it has identified an issue and is implementing a fix. The disruption follows an influx of users to Claude, which seems to have benefited from the attention around the company’s fraught negotiations with the Pentagon. The chatbot app rose to the top of the App Store charts this weekend, overtaking archrival ChatGPT, after spending a long time below the top 20. U.S. President Donald Trump last week told federal agencies to stop using Anthropic products after a dispute over safeguards preventing the Department of Defense from using its AI models for mass domestic surveillance or fully autonomous weapons. Secretary of Defense Pete Hegseth said he would designate the company as a supply-chain threat, though Anthropic says it hasn’t yet received any formal notices.
View
