
What the jury will actually decide in the case of Elon Musk vs. Sam Altman

6:44 AM IST · May 15, 2026

Nine California jurors are now deliberating over the future of OpenAI, the world-leading artificial intelligence lab. While the trial exploring Elon Musk's case against OpenAI's other cofounders and Microsoft has covered territory ranging from the breakup of the founders in 2018 to Altman's firing and rehiring in 2023, the jurors will be considering a set of fairly narrow questions. OpenAI has also made three arguments in its defense that the jury will weigh.

If Musk wins out, it could mean the end of OpenAI as a for-profit company, but it's not entirely clear what would result. Next week, the judge will begin a set of new hearings in which lawyers from both sides will debate the consequences of a verdict in favor of the plaintiffs. That process could be rendered moot by a negative verdict, however.

Musk's attorneys say the defendants clearly understood that Musk wanted to support a non-profit that would ensure the benefits of AI reached the world and prevent the technology from being controlled by any one organization. In particular, they say a $10 billion investment by Microsoft in 2023 into OpenAI's for-profit affiliate, the first to happen after the statute of limitations, was the event that turned Musk's concern into conviction. That deal, Musk's lawyers say, was different from previous investments and led to OpenAI's investors being enriched by the company's commercial products at the expense of the charitable mission of AI safety that Musk promoted.

OpenAI's attorneys have asked every witness to describe specific restrictions placed on Musk's donations, and none has, including his financial adviser Jared Birchall, his chief of staff Sam Teller, and his special adviser Shivon Zilis. They say everyone involved agreed that private fundraising would be required for OpenAI to achieve its goals, and note that Musk himself attempted to launch an OpenAI-affiliated for-profit that he would personally control, and later to merge OpenAI into his company Tesla.
They also note that the organization's other donors haven't said their charitable trust was violated. Importantly, a forensic accountant hired by OpenAI testified that all of Musk's donations had been spent by OpenAI well before the key date of August 5, 2021, evidence that his donations were already used for their purpose long before he brought his lawsuit, which would invalidate any charitable trust that may have existed.

Mainly, OpenAI's attorneys insist that the for-profit affiliate that conducts most of OpenAI's actual activity continues to fulfill the organization's mission, and has generated nearly $200 billion in equity value to support the non-profit foundation. Notably, Sam Altman argued that providing ChatGPT for free helps fulfill the mission of sharing the benefits of AI with the world.

The plaintiffs point to the multibillion-dollar valuations of stakes held by OpenAI founders like Greg Brockman and Ilya Sutskever, as well as by Microsoft itself, as a sign that Musk's donations were ultimately used for personal benefit rather than to support the charity's mission. They argue that the work at OpenAI's for-profit was commercially focused, while the foundation itself was left essentially dormant, without full-time employees and, ultimately, not even in control of the for-profit.

OpenAI says all of Musk's contributions were spent by the foundation by 2020, and that equity distributions came well after he left the organization in 2018. Even before then, evidence shows, the key players agreed that being able to compensate researchers with stock was essential to developing AGI, the hypothetical form of AI capable of performing any intellectual task a human can. OpenAI executives maintain that the for-profit's work meaningfully advanced the foundation's mission, including its safety activities.
They say the non-profit board continues to control the for-profit, and that it instituted new governance controls following "the blip," when Altman was fired by OpenAI's non-profit board in 2023 for lack of candor and then rehired just days later.

Musk's case focused on the events of the blip, when Microsoft CEO Satya Nadella, whose company depended on OpenAI's technology, was personally involved in bringing Altman back and creating a new board to govern OpenAI. Musk's attorneys note that Microsoft executives wondered whether their commercial agreement might conflict with the non-profit's goals, and suggest that Microsoft's commercial priorities led OpenAI away from its mission. They've focused attention on a clause in Microsoft's agreement with OpenAI that gave Microsoft veto rights over major corporate decisions at OpenAI. Microsoft's witnesses have insisted that the company's executives didn't know of any specific conditions on Musk's donations despite extensive due diligence, and that Microsoft never vetoed any decision by OpenAI. They note that the company's investments and compute power allowed OpenAI to achieve its biggest triumphs.

Musk has said that his skepticism of his cofounders grew over time, until, in the fall of 2022, he finally decided they had betrayed him when he learned of Microsoft's plans for the new $10 billion investment that took place in 2023. He didn't file his lawsuit until mid-2024. OpenAI's attorneys argue that the terms of that deal were spelled out in a term sheet for a previous fundraising round in 2018, which Musk received and his advisers reviewed, but which Musk said he didn't read in detail. They also point to numerous blog posts and other communications from over the years showing that Musk could have known what OpenAI was doing well before he took the company to court, including tweets in which Musk criticized the company years before the suit. Zilis, Musk's adviser, even voted to approve these transactions as a member of the OpenAI board.
Ultimately, OpenAI's attorneys emphasize that Musk's formal role in the organization ended in 2018 and his last donations took place in 2020. The real reason Musk filed his suit, they say, is that he realized he was wrong about OpenAI after its launch of ChatGPT revolutionized the business of artificial intelligence. They argue that OpenAI has operated under its current structure since its first Microsoft investment in 2018, and that forcing the organization to restructure eight years later is unreasonable.

There is evidence that Musk was planning his own competing AI efforts while he was still the chair of OpenAI, and that he hired OpenAI employees to work on AI at Tesla. OpenAI's attorneys argue that these efforts undermined OpenAI at a time when it was using Musk's donations to pursue its mission. They noted that Zilis, the mother of three of Musk's children, didn't disclose her personal relationship with Musk to other OpenAI board members for years. And they argue that Musk withheld his donations in 2017 in an effort to win control of a planned for-profit affiliate of OpenAI. Finally, as Bill Savitt, OpenAI's lead attorney, told the jury: "Mr. Musk abandoned OpenAI for dead in 2018."

Research repository ArXiv will ban authors for a year if they let AI do all the work

ArXiv, a widely used open repository for preprint research, is doing more to crack down on the careless use of large language models in scientific papers. Although papers are posted to the site before they are peer-reviewed, arXiv (pronounced "archive") has become one of the main ways that research circulates in fields like computer science and math, and the site itself has become a source of data on trends in scientific research.

ArXiv has already taken steps to combat a growing number of low-quality, AI-generated papers, for example by requiring first-time posters to get an endorsement from an established author. And after being hosted by Cornell for more than 20 years, the organization is becoming an independent nonprofit, which should allow it to raise more money to address issues like AI slop.

In its latest move, Thomas Dietterich, the chair of arXiv's computer science section, posted Thursday that "if a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, this means we can't trust anything in the paper." That incontrovertible evidence could include things like "hallucinated references" and comments to or from the LLM, Dietterich said. If such evidence is found, a paper's authors will face "a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted by a reputable peer-reviewed venue."

Note that this isn't an outright prohibition on using LLMs, but rather an insistence that, as Dietterich put it, authors take "full responsibility" for the content, "irrespective of how the contents are generated." So if researchers copy-paste "inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content" directly from an LLM, they're still responsible for it.
Dietterich told 404 Media that this will be a "one-strike" rule, but moderators must flag the issue and section chairs must confirm the evidence before the penalty is imposed. Authors will also be able to appeal the decision. Recent peer-reviewed research has found that fabricated citations are on the rise in biomedical research, likely due to LLMs, though to be fair, scientists aren't the only ones getting caught using citations that were made up by AI.

13 hours ago


The haves and have nots of the AI gold rush

The vibes around the current AI boom aren't great, even in the tech industry, according to a lengthy social media post from Menlo Ventures partner Deedy Das. Das described San Francisco as "pretty frenetic right now," where "the divide in outcomes is the worst I've ever seen." Using a "back of the envelope AI calculation," he projected that there are around 10,000 people, founders and employees at companies like OpenAI, Anthropic, and Nvidia, who have "hit retirement wealth of well above $20M," while everyone else worries "they can work their well-paying (but <$500k) job for their whole life and never get there." Plus, "layoffs are in full swing," and "many software engineers feel that their life's skill is no longer useful," leading to confusion about the best career paths and "a deep malaise about work (and its future)," Das said.

This prompted some eye-rolling on X, with entrepreneur Deva Hazarika arguing that "most of the people in this post" are "incredibly fortunate and can simply make a choice to be happy." Another user suggested it's "pretty damn novel & also kinda nasty" that in the current cycle, "the same technology is both the lottery ticket & the thing eating your fallback."

13 hours ago


OpenAI co-founder Greg Brockman reportedly takes charge of product strategy

OpenAI co-founder and president Greg Brockman is officially taking the reins of the company's product strategy, according to Wired. This seems to solidify an existing arrangement, with Brockman overseeing OpenAI's products on an interim basis while Fidji Simo, the company's CEO of AGI deployment, is out on medical leave. Wired also reports that in a staff memo, Brockman described plans to combine ChatGPT and the company's programming product Codex into a single unified experience. "We're consolidating our product efforts to execute with maximum focus toward the agentic future, to win across both consumer and enterprise," Brockman reportedly said.

This is just the latest OpenAI shakeup since CEO Sam Altman declared a "code red" at the end of last year and said the company needed to refocus on the core ChatGPT experience. Since then, OpenAI has halted "side quests" including the video generator Sora and OpenAI for Science, and it's been highlighting its ambitions to build an AI "super app." TechCrunch has reached out to OpenAI for comment. The company told Wired that Simo, who remains on medical leave, worked with Brockman on these changes.

17 hours ago


From Panchatantra to Prompts: How AI Platforms are Saving Indian Bedtime Stories

Personalised narration, familiar characters, and educational themes are turning bedtime stories into an interactive experience powered by artificial intelligence.

1 day ago
