Latest AI News

Pentagon moves to designate Anthropic as a supply-chain risk

In a post on Truth Social, President Trump directed federal agencies to cease use of all Anthropic products after the company’s public dispute with the Department of Defense. The president allowed for a six-month phase-out period for departments using the products, but emphasized that Anthropic was no longer welcome as a federal contractor. “We don’t need it, we don’t want it, and will not do business with them again,” the president wrote in the post.

Notably, the president’s post did not mention any plans to designate Anthropic as a supply chain risk, as had been previously mentioned as a consequence. However, a subsequent tweet from Secretary of Defense Pete Hegseth made good on the threat. “In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security,” Secretary Hegseth wrote. “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

The Pentagon dispute centered on Anthropic’s refusal to allow its AI models to be used to power either mass domestic surveillance or fully autonomous weapons, which Secretary Hegseth found unduly restrictive. CEO Dario Amodei reiterated his stance in a public post on Thursday, refusing to compromise on the two points. “Our strong preference is to continue to serve the Department and our warfighters — with our two requested safeguards in place,” Amodei wrote at the time. “Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions.”

OpenAI has come out in support of Anthropic’s decision. Per the BBC, CEO Sam Altman sent a memo to staff on Thursday saying he shared the same “red lines” and that any OpenAI-related defense contracts would also reject uses that were “unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons.”

OpenAI co-founder Ilya Sutskever, who very publicly fell out with Altman in November 2023 and has since co-founded his own AI company, also waded into the conversation on Friday, writing on X: “It’s extremely good that Anthropic has not backed down, and it’s significant that OpenAI has taken a similar stance. In the future, there will be much more challenging situations of this nature, and it will be critical for the relevant leaders to rise up to the occasion, for fierce competitors to put their differences aside. Good to see that happen today.”

Anthropic, OpenAI and Google each received contract awards from the U.S. Defense Department last July. While some Google employees have come out in support of Anthropic, Google and its parent company have yet to comment.

2 months ago

Anthropic vs. the Pentagon: What’s actually at stake?

The past two weeks have been defined by a clash between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth as the two battle over the military’s use of AI. Anthropic refuses to allow its AI models to be used for mass surveillance of Americans or for fully autonomous weapons that conduct strikes without human input. At the same time, Secretary Hegseth has argued the Department of Defense shouldn’t be limited by the rules of a vendor, arguing any “lawful use” of the technology should be permitted. On Thursday, Amodei publicly signaled that Anthropic isn’t backing down — despite threats that his company could be designated as a supply chain risk as a result. But with the news cycle moving fast, it’s worth revisiting exactly what’s at stake in the fight.

At its core, this fight is about who controls powerful AI systems — the companies that build them, or the government that wants to deploy them. As we said above, Anthropic doesn’t want its AI models to be used for mass surveillance of Americans or for autonomous weapons with no humans in the loop for targeting and firing decisions. Traditional defense contractors typically have little say in how their products will be used, but Anthropic has argued from its inception that AI technology poses unique risks and therefore requires unique safeguards. From the company’s perspective, the question is how to maintain those safeguards when the technology is being used by the military.

The U.S. military already relies on highly automated systems, some of which are lethal. The decision to use lethal force has historically been left to humans, but there are few legal restrictions on military use of autonomous weapons. The DoD doesn’t categorically ban fully autonomous weapons systems. According to a 2023 DoD directive, AI systems can select and engage targets without human intervention, as long as they meet certain standards and pass review by senior defense officials.

That’s precisely what makes Anthropic nervous. Military technology is secretive by nature, so if the U.S. military were taking steps to automate lethal decision-making, we might not know about it until it was operational. And if it used Anthropic’s models, it could count as “lawful use.” Anthropic’s position isn’t that such uses should be permanently off the table. It’s that its models aren’t capable enough to support them safely yet. Imagine an autonomous system misidentifying a target, escalating a conflict without human authorization, or making a split-second lethal decision that no one can reverse. Put a less-capable AI in charge of weapons, and you get a very fast, very confident machine that’s bad at making high-stakes calls.

AI also has the power to supercharge lawful surveillance of American citizens to a concerning degree. Under current U.S. law, surveillance of American citizens is already possible, whether through the collection of texts, emails, or other communications. AI changes the equation by enabling automated large-scale pattern detection, entity resolution across datasets, predictive risk scoring, and continuous behavioral analysis.

The Pentagon’s argument is that it should be able to deploy Anthropic’s technology for any lawful use it deems necessary, rather than be limited by Anthropic’s internal policies on things like autonomous weapons or surveillance. More specifically, Secretary Hegseth has argued the Department of Defense shouldn’t be limited by the rules of a vendor and that it would engage in “lawful use” of the technology.
Sean Parnell, the Pentagon’s chief spokesperson, said in a Thursday X post that the department has no interest in conducting mass domestic surveillance or deploying autonomous weapons. “Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes,” Parnell said. “This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions.” He added that Anthropic has until 5:01 p.m. ET on Friday to decide. “Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOW,” he said.

Despite the DoD’s stance that it simply doesn’t believe it should be limited by a corporation’s usage policies, Secretary Hegseth’s concerns about Anthropic have at times seemed connected to cultural grievance. In a speech at SpaceX and xAI offices in January, Hegseth railed against “woke AI” in remarks that some saw as a preview of his feud with Anthropic. “Department of War AI will not be woke,” Hegseth said. “We’re building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge.”

The Pentagon has threatened to either declare Anthropic a “supply chain risk” — which effectively blacklists Anthropic from doing business with the government — or invoke the Defense Production Act (DPA) to force the company to tailor its model to the military’s needs. Hegseth has given Anthropic until 5:01 p.m. ET on Friday to respond. But with the deadline approaching, it’s anyone’s guess whether the Pentagon will make good on its threat.

This is not a fight either party can easily walk away from. Sachin Seth, a VC at Trousdale Ventures who focuses on defense tech, says a supply chain risk label for Anthropic could mean “lights out” for the company. However, he said, if Anthropic is dropped from the DoD, it could be a national security issue. “[The Department] would have to wait six to 12 months for either OpenAI or xAI to catch up,” Seth told TechCrunch. “That leaves a window of up to a year where they might be working from not the best model, but the second or third best.”

xAI is gearing up to become classified-ready and replace Anthropic, and it’s fair to say, given owner Elon Musk’s rhetoric on the matter, that the company would have no problem giving the DoD total control over its technology. Recent reports indicate that OpenAI may stick to the same red lines as Anthropic.

2 months ago

Musk bashes OpenAI in deposition, saying ‘nobody committed suicide because of Grok’

In a newly released deposition filed in Elon Musk’s case against OpenAI, the tech executive attacked OpenAI’s safety record, claiming that his company, xAI, better prioritizes safety. He went so far as to say that “Nobody has committed suicide because of Grok, but apparently they have because of ChatGPT.”

The comment came up in a line of questioning about a public letter Musk signed in March 2023. In it, he called on AI labs to pause development of AI systems more powerful than GPT-4, OpenAI’s flagship model at the time, for at least six months. The letter, which was signed by over 1,100 people, including many AI experts, stated there was not enough planning and management taking place at AI labs, as they were locked in an “out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

Those fears have since gained credibility. OpenAI now faces a series of lawsuits alleging that ChatGPT’s manipulative conversation tactics have led several people to experience negative mental health effects, with some dying by suicide. Musk’s comment suggests that these incidents could be used as fodder in his case against OpenAI.

The transcript of Musk’s video testimony, which took place back in September, was filed publicly this week, ahead of the expected jury trial next month. The lawsuit against OpenAI centers on the company’s shift from a nonprofit AI research lab to a for-profit company, which Musk claims violated its founding agreements. As part of his arguments, Musk claims that AI safety could be compromised by OpenAI’s commercial relationships, as such relationships would place speed, scale, and revenue above safety concerns.

However, since that recording, xAI has faced safety concerns of its own. Last month, Musk’s social network X was flooded with nonconsensual nude images generated by xAI’s Grok, some of which were said to be of minors. This led the California Attorney General’s office to open an investigation into the matter. The EU is also running its own investigation, and other governments have taken action, too, with some imposing blocks and bans.

In the newly filed deposition, Musk claimed he had signed the AI safety letter because “it seemed like a good idea,” not because he had just incorporated an AI company looking to compete with OpenAI. “I signed it, as many people did, to urge caution with AI development,” Musk said. “I just wanted … AI safety to be prioritized.”

Musk also responded to other questions in the deposition, including those about artificial general intelligence, or AGI — the concept of AI that can match or surpass human reasoning across a broad range of tasks — saying “it has a risk.” He also confirmed that he “was mistaken” about his supposed $100 million donation to OpenAI; the second amended complaint in the case puts the actual figure closer to $44.8 million. He also recalled why OpenAI was founded, which, from his perspective, was because he was “increasingly concerned about the danger of Google being a monopoly in AI,” adding that his conversations with Google co-founder Larry Page were “alarming, in that he did not seem to be taking AI safety seriously.” OpenAI was formed as a counterweight to that threat, Musk claimed.

2 months ago

Employees at Google and OpenAI support Anthropic’s Pentagon stand in open letter

Anthropic has reached a stalemate with the United States Department of War over the military’s request for unrestricted access to the AI company’s technology. But as the Pentagon’s Friday afternoon deadline for Anthropic’s compliance approaches, more than 300 Google employees and over 60 OpenAI employees have signed an open letter urging the leaders of their companies to support Anthropic and refuse the Pentagon’s demand for unrestricted use.

Specifically, Anthropic stood in opposition to the use of AI for domestic mass surveillance and autonomous weaponry. The open letter’s signatories seek to encourage their employers to “put aside their differences and stand together” to uphold the boundaries Anthropic has asserted. “They’re trying to divide each company with fear that the other will give in,” the letter says. “That strategy only works if none of us know where the others stand.” The letter specifically calls on executives at Google and OpenAI to maintain Anthropic’s red lines against mass surveillance and fully automated weaponry. “We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War’s current demands.”

Leaders at the companies have not yet formally responded to the letter. TechCrunch has reached out to Google and OpenAI for comment. However, informal statements suggest both companies are sympathetic to Anthropic’s side of the case. In an interview with CNBC on Friday morning, OpenAI CEO Sam Altman said that he doesn’t “personally think the Pentagon should be threatening DPA against these companies.” According to a CNN reporter, an OpenAI spokesperson confirmed that the company shares Anthropic’s red lines against autonomous weapons and mass surveillance.

Google DeepMind has not formally addressed the conflict, but Chief Scientist Jeff Dean, presumably speaking as an individual, did express opposition to mass surveillance by the government. “Mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression,” Dean wrote on X. “Surveillance systems are prone to misuse for political or discriminatory purposes.”

According to an Axios report, the military currently can use X’s Grok, Google’s Gemini, and OpenAI’s ChatGPT for unclassified tasks, and has been negotiating with Google and OpenAI to bring their technology over for use in classified work. While Anthropic has an existing partnership with the Pentagon, the AI company has remained firm in maintaining the boundary that its AI be used for neither mass domestic surveillance nor fully autonomous weaponry.

Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei that if his company doesn’t concede, the Pentagon will either declare Anthropic a “supply chain risk” or invoke the Defense Production Act (DPA) to force the company to comply with military demands. In a statement on Thursday, Amodei maintained his company’s position. “These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security,” the statement reads. “Regardless, these threats do not change our position: we cannot in good conscience accede to their request.”

2 months ago

Perplexity’s new Computer is another bet that users need many AI models

Starting this week, Perplexity subscribers will have a new agentic tool at their disposal. Perplexity Computer, in the company’s words, “unifies every current AI capability into a single system.” More specifically, Perplexity says it is a computer-use agent that can execute complex workflows independently using 19 different AI models, even creating subagents to handle specific problems. The tool is available now, only on the company’s highest subscription tier, the $200/month Perplexity Max. It runs entirely in the cloud, which might spare it some of the security concerns of other agentic tools like OpenClaw.

TechCrunch hasn’t done a hands-on demo of the new tool, but in example workflows on Perplexity’s website, it is shown handling tasks that involve collecting statistics, financial, or legal data; creating analyses; and sharing its findings as finished websites or visualizations. Perplexity invited the press to a background briefing with executives last week to discuss the product and lay out the agenda for the year. The event was intended to include a demonstration of the tool, but the company canceled the demo because of flaws found in the product hours before the event.

This tool represents the evolution of Perplexity, which made a splash early in the AI boom by wrapping frontier models in familiar user interfaces, particularly its search-engine-like answer service. It then moved on to launch its Comet web browser last summer. Competitors like Google have now changed their products to be more like those built at Perplexity, one executive said, but that’s a threat as much as a compliment. The company is changing in response to a shifting ecosystem: One of the first AI companies to offer advertising, it abandoned that business late last year, saying last week that it undermined users’ trust in the accuracy of its answers. But Perplexity’s total user base — in the tens of millions of users — pales in comparison to that of OpenAI, which claims 800 million weekly users and began testing ads in ChatGPT this year.

Now, Perplexity executives say they are aiming for a more boutique set of users, with products that serve people making “GDP-moving decisions.” Executives in the briefing, who asked not to be identified by name, described prioritizing enterprise subscriptions, particularly for deep research. “You don’t hear us talk about MAUs ever, because we’re not actually on a mission to get as many users as possible,” one executive said. Perplexity recently released a new benchmark for complex research tasks, called Draco, where (no surprise) its own deep research offering beats out competitors like Gemini. Perplexity says it is no longer reliant on other companies’ APIs for its web index and now has its own AI-optimized search API.

But the company is doubling down on packaging frontier models in a consumer-friendly user experience, arguing that there is value in orchestrating multiple third-party LLMs to obtain the most cost-effective and accurate answers to queries. “Multi-model is the future,” one Perplexity exec argued. Models, in their view, are specializing, not commoditizing. The company has found that its users frequently switch between models to obtain the results they are looking for, with December 2025 queries for visual outputs most often sent to Gemini Flash, software engineering handled by Claude Sonnet 4.5, and medical research routed to GPT-5.1.
If one LLM is better at coding tasks and another does a better job drafting marketing copy, Perplexity’s software can automatically choose the ideal one. Another example, executives said, is running Perplexity’s own modified open-source Chinese-built LLMs to answer queries more cheaply, a technique the company got dinged for hiding from its customers last year. But done transparently, the technique could prove an efficient way to optimize LLM queries. The company also offers users the opportunity to query multiple models at once, in a feature called Model Council. But the unit economics of offering multiple queries at flat subscription rates aren’t entirely clear.

Still, without expensive infrastructure projects on its books and with, the executives claimed, high margins on user fees, Perplexity believes it will remain competitive by allocating tokens to the best model for a purpose. And there is more coming: Perplexity’s Comet browser is coming to iOS next month, and the company is planning a developers’ conference, Ask, on March 11 in San Francisco to promote third-party use of its API. One executive said that instead of looking at the previous day’s number of queries each morning, he was now looking at the most recent revenue metrics. At least some customers are noticing a new focus on the bottom line, with the Perplexity subreddit featuring frequent complaints of new rate limits on free and subscription product tiers. However, the execs at the briefing dismissed such complaints. “Any discussions on the free tier being made worse or rate-limited is completely false,” an executive said.
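To make the routing idea above concrete, here is a minimal, illustrative sketch of task-based model routing in Python. The model names, the keyword-based classify_task() heuristic, and the call_model interface are assumptions for demonstration only, not Perplexity’s actual implementation or API.

```python
# Illustrative sketch of task-based model routing, in the spirit of the
# multi-model orchestration described above. Model names and the routing
# heuristic are assumptions, not Perplexity's actual system.
from typing import Callable, Dict

# Hypothetical task-to-model table, loosely mirroring the usage patterns
# mentioned in the article (visual -> Gemini Flash, coding -> Claude, etc.).
ROUTES: Dict[str, str] = {
    "visual": "gemini-flash",
    "coding": "claude-sonnet-4.5",
    "medical": "gpt-5.1",
    "default": "low-cost-model",
}

def classify_task(query: str) -> str:
    """Toy keyword classifier; a production router would use a learned model."""
    q = query.lower()
    if any(w in q for w in ("chart", "image", "diagram", "visualize")):
        return "visual"
    if any(w in q for w in ("bug", "refactor", "function", "code")):
        return "coding"
    if any(w in q for w in ("clinical", "symptom", "dosage")):
        return "medical"
    return "default"

def route(query: str, call_model: Callable[[str, str], str]) -> str:
    """Choose a model for the query and delegate the actual call to it."""
    model = ROUTES[classify_task(query)]
    return call_model(model, query)

if __name__ == "__main__":
    # Stub model call so the sketch runs without any provider SDK.
    stub = lambda model, q: f"[{model}] would handle: {q}"
    print(route("Refactor this function and fix the bug", stub))
```

A real system would add cost and latency signals, fallbacks, and the subagent spawning the company describes, but the core idea is the same: classify the request, then dispatch it to whichever model is expected to answer best and most cheaply.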

2 months ago

AI music generator Suno hits 2M paid subscribers and $300M in annual recurring revenue

Suno co-founder and CEO Mikey Shulman shared on LinkedIn that the AI music generator has amassed 2 million paid subscribers and $300 million in annual recurring revenue. Just three months ago, Suno announced a $250 million funding round that valued the company at $2.45 billion. At the time, Suno told The Wall Street Journal that annual revenue had hit $200 million, which would indicate that the company has seen major growth in a short time frame.

Suno lets users create music using natural language prompts, making it possible for people with little experience to generate audio with little effort. This has sparked concern from musicians and record labels, which have sued Suno for copyright infringement, since its AI model was likely trained on existing recorded music. But Warner Music Group recently settled its lawsuit and instead reached a deal that allows Suno to launch models that use licensed music from its catalog.

Suno has generated synthetic music that sounds real enough to top charts on Spotify and Billboard. Telisha Jones, a 31-year-old in Mississippi, used Suno to turn her poetry into the viral R&B song “How Was I Supposed to Know” and signed a record deal with Hallwood Media reportedly worth $3 million. Still, many musicians have spoken out against the use of AI in music, including Billie Eilish, Chappell Roan, Katy Perry, and more.

2 months ago

Who’s really running AI? Inside the billion-dollar battle over regulation with Alex Bores

The Pentagon is playing chicken with Anthropic over who gets to control how the military uses AI, while communities across the country are blocking data center construction. As the AI debate has been flattened to “doomers versus boomers,” one state legislator is attempting to walk a middle road.

On this episode of TechCrunch’s Equity podcast, Rebecca Bellan sits down with Alex Bores, a New York State Assemblymember and candidate for U.S. Congress. Bores sponsored New York’s first-of-its-kind AI safety law — the RAISE Act — and quickly became the target of a Silicon Valley lobbying group with $125 million to spend on attack ads.

Listen to the full episode to hear about:

Subscribe to Equity on YouTube, Apple Podcasts, Overcast, Spotify, and all the casts. You can also follow Equity on X and Threads, at @EquityPod.

2 months ago

ChatGPT reaches 900M weekly active users

ChatGPT has reached 900 million weekly active users, OpenAI announced Friday, putting the AI chatbot within striking distance of 1 billion. OpenAI also shared that it now has 50 million paying subscribers. “Subscriber momentum accelerated meaningfully to start the year, with January and February on track to be the largest months for new subscribers in our history,” the company wrote in a blog post. “People use ChatGPT to learn, write, plan, and build. As usage scales, the product improves in ways people feel immediately: faster responses, higher reliability, stronger safety, and more consistent performance.”

The new weekly active user figure marks a jump of 100 million users from the 800 million that OpenAI reported in October 2025. OpenAI shared the new numbers as part of its announcement that it has raised $110 billion in private funding, marking one of the largest private funding rounds in history. The new funding includes a $50 billion investment from Amazon, along with $30 billion each from Nvidia and SoftBank, at a $730 billion pre-money valuation. The round remains open, and the company expects more investors to join.

2 months ago

OpenAI raises $110B in one of the largest private funding rounds in history

OpenAI has raised $110 billion in private funding, the company announced Friday morning, in one of the largest private funding rounds in history. The new funding consists of a $50 billion investment from Amazon as well as $30 billion each from Nvidia and SoftBank, against a $730 billion pre-money valuation. Notably, the round remains open, and OpenAI expects more investors to join as it proceeds. “We are entering a new phase where frontier AI moves from research into daily use at global scale,” OpenAI said. “Leadership will be defined by who can scale infrastructure fast enough to meet demand, and turn that capacity into products people rely on.”

As part of the investment, OpenAI is launching significant infrastructure partnerships with both Amazon and Nvidia. As in previous rounds, it is likely that a significant portion of the dollar amount comes in the form of services rather than cash, although the precise split was not disclosed. The company’s previous round closed in March 2025, raising $40 billion against a $300 billion valuation. At the time, it was the largest private funding round on record.

As part of its Amazon partnership, OpenAI plans to develop a new “stateful runtime environment” where OpenAI models will run on Amazon’s Bedrock platform. The company will also expand its previously announced AWS partnership, which committed $38 billion in compute services, by $100 billion. OpenAI has committed to consuming at least 2GW of AWS Trainium compute as part of the deal, and also plans to build custom models to support Amazon consumer products. “We have lots of developers and companies eager to run services powered by OpenAI models on AWS,” said Amazon CEO Andy Jassy in a statement, “and our unique collaboration with OpenAI to provide stateful runtime environments will change what’s possible for customers building AI apps and agents.” The Information had previously reported that $35 billion of Amazon’s investment could be contingent on the company either achieving AGI or making its IPO by the end of the year. OpenAI’s announcement confirms the funding split, but says only that the additional $35 billion will arrive “in the coming months when certain conditions are met.”

OpenAI gave fewer details on the Nvidia partnership, but said it had committed to using “3GW of dedicated inference capacity and 2GW of training on Vera Rubin systems” as part of the deal. Nvidia’s participation in the round has been the subject of intense speculation, particularly as reports of a $100 billion investment in September gave way to reports of a smaller investment in the months that followed. In January, Nvidia CEO Jensen Huang dismissed the idea that the company was backing away from OpenAI, saying, “we will invest a great deal of money. I believe in OpenAI. The work that they do is incredible.”

2 months ago

Last 24 hours to get TechCrunch Disrupt 2026 tickets at the lowest rates of the year

Today is it! When the clock hits 11:59 p.m. PT, the lowest ticket rates of the year for TechCrunch Disrupt 2026 go up. No extensions. No second chances. The same access will cost more tomorrow. If you’re planning to attend, this is your final window to lock in up to $680 off your pass or up to 30% off group passes. After tonight, this year’s biggest savings disappear. Register now.

If you’re raising capital, hiring top talent, launching your startup, or hunting for your next portfolio company, missing Disrupt from October 13–15 at San Francisco’s Moscone West isn’t just inconvenient. It’s a missed opportunity to move ahead while others hesitate.

Here’s what you gain when you attend:

Founder Pass: Get the insights, tools, and investor access you need to scale.

Investor Pass: Discover breakout startups and expand your portfolio with curated matchmaking.

Disrupt has long been a stage for founders and investors who define eras. The voices you’ll hear are candid, tactical, and often unfiltered. The 2026 agenda drops soon. Keep an eye on the event site. Previous speakers have included leaders of industry-defining startups and top-tier venture firms, including:

Tonight at 11:59 p.m. PT, the lowest ticket rates of the year to TechCrunch Disrupt 2026 are gone. After today, you pay more. Register now. Lock in up to $680 in savings. Or bring your team and save up to 30% with community passes of four or more.

2 months ago

AVGC-XR Sector Can Create 20 Lakh Jobs in Karnataka: CM Siddaramaiah

At the 7th Bengaluru GAFX, CM Siddaramaiah spoke about implementing the AVGC-XR policy and a roadmap to ethical AI.

2 months ago
