Latest AI News

Google rolls out a native Gemini app for Mac
Google announced on Wednesday that it’s introducing a native Gemini app for Mac, catching up to rivals like OpenAI and Anthropic, which have had Mac apps for quite some time. “Now, you can bring up Gemini from anywhere on your Mac with a quick shortcut (Option + Space) to get help instantly, without ever switching tabs,” Google explained in its blog post. “Whether you’re drafting a market report and need to verify a date or building a budget in a spreadsheet and need the right formula, you can get an answer and get right back to work.” When using the new app, you’re able to share anything on your screen with Gemini, including local files, to get help with what you’re looking at in the moment. For example, if you’re looking at a complex chart, you could ask “What are the three biggest takeaways here?” and receive a summary. The app also supports generating images with Nano Banana and videos with Veo. The native macOS app is available to all Gemini users on macOS versions 15 and up, globally, starting today. It’s available for download at gemini.google/mac.

Can AI judge journalism? A Thiel-backed startup says yes, even if it risks chilling whistleblowers
After helping lead the lawsuit that bankrupted media firm Gawker, Aron D’Souza says he saw something broken in the American media system: People who felt harmed by coverage had little recourse to fight back. His solution is software. D’Souza says his latest startup, Objection, aims to use AI to adjudicate the truth of journalism. And for the price of $2,000, anyone can pay to challenge a story, triggering a public investigation into its claims. (D’Souza is also the founder of the Enhanced Games, an Olympics-style competition that allows performance-enhancing drugs and is set to debut in Las Vegas next month.) Objection launched on Wednesday with “multiple millions” in seed funding from Peter Thiel and Balaji Srinivasan, as well as VC firms Social Impact Capital and Off Piste Capital. Thiel, who funded the Gawker lawsuit partly in defense of the individual right to privacy, has long been critical of the media. D’Souza says his goal is to restore trust in the Fourth Estate, which he argues has collapsed over decades. Critics, including media lawyers, warn Objection could make it harder to publish the kind of reporting that holds powerful institutions to account, particularly if that reporting relies on confidential sources. Anonymous sources have played a key role in major award-winning investigations into corruption and corporate wrongdoing. These are often people who are at risk of losing their jobs or facing other retaliation for sharing important information. It’s the journalist’s job — alongside their publication’s editors, peers, and lawyers — to ensure that those sources are reliable and not acting out of pure malice and to verify the information they provide. But that’s not enough for D’Souza, who said “using a fully anonymized source who hasn’t been independently verified” would lead to a lower evidence and trust score on Objection.
Under the platform’s rubric, primary records like regulatory filings and official emails carry the most weight, while anonymous whistleblower claims are ranked near the bottom. Those inputs are collected in part by a team of freelancers — former law enforcement agents and investigative journalists — and are ultimately fed into what Objection calls an “Honor Index,” a numerical score the company says reflects a reporter’s integrity, accuracy, and track record. “Protecting a source’s information is a vital way of telling an important story, but there’s an important power asymmetry there,” D’Souza told TechCrunch in an exclusive interview. “The subject gets reported upon, but then there’s no way to critique the source.” His solution presents a lose-lose for journalists: either divulge sensitive source information to Objection’s “cryptographic hash” that determines “if it’s high quality reporting,” or face demerits for protecting sources who share important information at great personal risk. If technology like Objection takes off, it could chill whistleblowing, experts argue. Jane Kirtley, a lawyer and professor of media law and ethics at the University of Minnesota, says Objection fits into a long pattern of attacks that erode public trust in the press. “If the underlying theme is, ‘Here’s yet another example of how the news media are lying to you,’ that’s one more chink in the armor to help destroy public confidence in independent journalism,” she said, adding that clearly journalists need to do their part to be as transparent as possible in their reporting. Kirtley pointed to existing journalistic standards, like the Society of Professional Journalists’ Code of Ethics, which advises reporters to use anonymous sources only when there is no other way to obtain the information. She also cited longstanding industry practices like peer criticism and internal editorial review as built-in accountability methods. 
More broadly, she questioned whether Silicon Valley entrepreneurs who are not steeped in journalistic traditions are equipped to evaluate what serves the public interest. D’Souza says Objection is not an attempt to silence whistleblowers: “It’s an attempt to fact-check; it’s the same as [X’s] Community Notes. The wisdom of the crowd plus the power of technology to create new methods of truth-telling.” When asked if Objection could make it harder for media to publish important stories holding power to account, he said “If it raises the standards of transparency and trust, that’s a good thing.” He calls Objection a “trustless system” with transparent methodology that relies on a jury of large language models from OpenAI, Anthropic, xAI, Mistral, and Google, prompted to act as average readers and evaluate evidence claim by claim. The company’s chief technologist, ex-NASA and SpaceX engineer Kyle Grant-Talbot, leads the technical development on the platform, which D’Souza says is designed to apply scientific rigor to disputes over facts. The proposal comes as AI systems themselves face scrutiny over bias, hallucinations, and transparency — all of which could complicate their use as arbiters of truth. While Objection can be applied to any published content, including podcasts and social media, D’Souza’s focus remains largely on legacy and written media outlets. “Each objection is limited to a single factual allegation,” D’Souza said in a follow-up email. “This means that even where reporting is long and complex, an objection will be limited to a narrow factual issue within it. A user may choose to file multiple objections to different parts of the same article, but these will all proceed independently of each other.” Objections cost $2,000, a steep price for most Americans but relatively minor for wealthy individuals or corporations that might otherwise turn to the courts. D’Souza said he expects the platform to serve people who feel misrepresented in the media. 
But critics note that those who are most able to use Objection are likely to be the same powerful actors who already have other avenues to push back. “The fact that this is a pay-to-play kind of system … tells me that they are less concerned about providing helpful information for the general public and much more concerned with giving the already powerful a means to basically browbeat their journalistic opponents,” said Kirtley. First Amendment and defamation lawyer Chris Mattei was even more blunt, saying the platform “seems like a high-tech protection racket for the rich and powerful.” “At a time when so many try to obscure the truth, we should be encouraging whistleblowers with knowledge of wrongdoing,” said Mattei, who is a leading litigator. “The purpose of this company seems to be the opposite.” The system also only evaluates evidence submitted to it, including party submissions and material gathered by its investigators, raising questions about how it handles incomplete or undisclosed information, which is common in investigative reporting. When asked how he would prevent misuse, such as companies targeting unfavorable coverage or the system itself lacking sensitive evidence, D’Souza said journalists can submit their own evidence to protect their reputations. That effectively requires reporters to participate in a system they didn’t opt into, one that could further put their credibility on the line. If they don’t, the system may return an “indeterminable” result, potentially casting doubt on reporting that is accurate but difficult to verify publicly. Even when Objection finds no issue with a story, a companion feature called “Fire Blanket” can still introduce doubt about its credibility. The tool, currently active on X via platform APIs, flags disputed claims in real time by posting warnings — injecting the company’s own “under investigation” labels into public conversations while the claim is still under review. 
Eugene Volokh, a First Amendment scholar at UCLA, said the platform itself would not likely violate free speech protections, framing it instead as part of the broader ecosystem of criticism that surrounds journalism. He compared the concept to opposition research that’s aimed at reporters instead of politicians, and dismissed the idea that it would have a chilling effect on whistleblowers. “All criticism creates a chilling effect,” he told TechCrunch. Whether anyone adopts it, or simply tunes it out, may determine whether Objection reshapes journalism or fades into the growing ecosystem of tools attempting to do so. Or as Kirtley said: “Why would you believe that AI would necessarily give you more reliable information about the truth or fals[ity] of fact than a journalist who had researched and written the story? I mean, why would you just assume that? I wouldn’t assume that at all.” Editor’s note: Because D’Souza’s proposal centers on transparency and accountability, we have published a link to the full transcript, edited lightly for length and clarity.

AI learning app Gizmo levels up with 13M users and a $22M investment
Since its launch in 2021, Gizmo, an AI-powered learning platform that transforms students’ notes into interactive study materials, has attracted more than 13 million users across over 120 countries. This is a significant jump from the more than 300,000 users the platform had when TechCrunch last covered it in 2023. And as user adoption increases, investor interest is following suit. The company recently secured $22 million in Series A funding, according to its announcement on Tuesday. The funding will go toward expanding Gizmo’s engineering and AI teams, as well as its presence in the U.S. college market. The company, which had just seven employees prior to the raise, plans to scale to around 30, CEO Petros Christodoulou tells TechCrunch. The company’s momentum comes at a time when student behavior is shifting. Academic performance in the U.S. has hit a historic low, according to the 2025 National Assessment of Educational Progress. Excessive screen time and reduced attention spans have been noted in previous studies as contributing factors. Plus, with many young learners drawn to platforms like TikTok and YouTube, the biggest challenge for edtech startups is how to sustain engagement. Gizmo is betting that gamifying learning may be the solution. Designed for teenagers and young adults, the app leans on game mechanics to drive engagement: Features like leaderboards, streaks, limited daily lives for incorrect answers, and the ability to challenge friends are designed to keep users coming back. Other micro-learning platforms have also gained traction in recent years, such as Anki, Quizlet, and Nibble, alongside newer entrants such as Yuno and Knowt, all of which have attempted to redirect screen-time habits into productive learning. Still, for a young learning app like Gizmo to attract this much interest in just a few years is notable. For comparison, Yuno touts 1 million app downloads, and Knowt has more than 7 million users.
The Series A round was led by Shine Capital, with participation from Ada Ventures, Seek Investments, GSV, and NFX, which previously led Gizmo’s $3.5 million seed round.

LinkedIn data shows AI isn’t to blame for hiring decline… yet
LinkedIn’s Blake Lawit, the chief global affairs and legal officer of the Microsoft-owned professional networking site, confirmed in an interview at the Semafor World Economy Summit this week that the company’s data shows a decline in hiring of around 20% since 2022. However, he pushed back on the idea that AI was to blame. “At LinkedIn… we have an economic graph which is over a billion members. We’ve got companies, jobs, skills. It’s really an amazing real-time view of what’s happening in the labor market. And we’ve looked — because everyone wants to know the answer to this question: Is AI impacting jobs right now? We’ve looked and, honestly, we haven’t seen it,” he said during his interview. Instead, the executive suggested that the decline in hiring was more closely tied to a rise in interest rates. “We have not seen the sort of impacts that you would expect to see in areas that everyone is talking about AI… like industries, whether or not it’s customer support, or administrative, or marketing — all these places that if we were seeing impacts [from] AI that’s where it would be,” Lawit continued. “Yes, hiring’s down, but not down more,” he added. Lawit also noted that LinkedIn’s data didn’t indicate that hiring of college-aged young adults getting their first jobs was “down more,” either, when compared with hiring of people in the middle of or later in their careers. Still, he didn’t rule out that things could change: “Doesn’t mean it’s not going to happen in the future, but not yet.” On that point, however, Lawit had a warning of sorts. He noted that over the last several years, the skills needed to do the average job have changed 25%. With the rise of AI, LinkedIn expects that figure to reach 70% by 2030. “So, even if you’re not changing jobs, your job’s changing on you,” he said.

Hightouch reaches $100M ARR fueled by marketing tools powered by AI
Historically, marketers relied on designers and other creative professionals to develop images and videos for personalized online ad campaigns. In late 2024, seven-year-old startup Hightouch launched an AI-powered service that allows marketing professionals to create custom content for brands such as Domino’s, Chime, PetSmart, and Spotify without involving brand design teams or ad agencies. The offering has been highly successful. Since introducing its AI product 20 months ago, Hightouch has added $70 million in annualized recurring revenue (ARR), it tells TechCrunch, bringing the startup to a total of $100 million in ARR. “Before GenAI, it was impossible for someone without many, many years of design skills to create consumer-level assets,” said Kashish Gupta, Hightouch’s co-CEO. The company is also led by co-CEO Tejas Manohar, a former engineering manager at Segment, a customer data platform acquired by Twilio for $3.2 billion in 2020. However, Hightouch’s approach goes beyond what standard AI models can do on their own. Hightouch says that many brands initially attempted to generate ad campaigns using general foundational models — broad AI systems that power tools like chatbots but lack knowledge of specific brands — only to find the resulting images and videos failed to meet “on-brand” standards. “Foundation models didn’t know about specific consumer brands, whether it was colors or fonts, tone, or assets,” Gupta says. “The LLMs would hallucinate products that didn’t exist, and you can’t do advertising and emails on products that don’t exist.” To ensure brand consistency, Hightouch connects directly to its customers’ existing creative tools, such as the popular design platform Figma, photo libraries, and content management systems (CMS). By pulling from these sources, the platform “learns” a company’s specific brand identity.
Hightouch’s AI agents then use these photos, designs, and customer insights to help marketers build personalized ad campaigns autonomously, without having to wait on designers or developers. The goal of Hightouch’s AI is to create images and videos that look like they were made by professional designers, avoiding the “fake” or generic look often associated with AI. “For example, Domino’s will never generate a pizza,” Gupta says. “They’ll always use existing images of pizza, and they’ll place it into an ad where the background might be generated, and other things might be generated around it.” The company, which now employs approximately 380 people, was valued at $1.2 billion in February 2025 when it raised an $80 million Series C funding round led by Sapphire Ventures. Pictured above, left to right: Tejas Manohar, Josh Curl, and Kashish Gupta
Full transcript: Conversation with Aron D’Souza on Objection and AI in journalism
Editor’s note: Because D’Souza’s proposal centers on transparency and accountability, we are publishing the full transcript of our conversation, which took place on April 14, 2026. This transcript has been lightly edited for clarity and length. You can read the related story here.

Aron D’Souza: I actually took a lot away from our conversation that we had last week. And I think it’s very intellectually interesting — if you had all the resources in the world, how would you reinvent journalism, or let’s say truth-telling, to improve the quality of our society?

Rebecca: Let’s start at the top. What is your company that you’re launching? What’s it called? How much backing have you got? Who are your backers, and what problem is it trying to solve?

Aron: Let me start with the problem. In 1970, according to Gallup, courts, scientists, and journalists were all about equally trusted — between 70 and 80% of Americans trusted those three institutions. Today, trust in courts is about the same, scientists have gone down a little bit since the COVID pandemic, but trust in journalism has plummeted from 70% to 30% in 50 years. That’s the core problem I’m trying to solve. The company we’re building is called Objection, and it uses AI plus human investigators to fact-check any public reporting. It’s the first accountability system that’s been created for journalism writ large. This builds on the experience I had leading the Gawker lawsuit for Peter Thiel, where it took 10 years and $10 million for Hulk Hogan to get justice. That’s too slow and too expensive for most people to access. I thought to myself, let me build a system that allows everyone to get access to justice, access to fact-finding, access to truth, much cheaper and more efficiently. We’ve raised a multimillion-dollar seed round with Peter Thiel, Balaji Srinivasan, and a few other venture capitalists.

Rebecca: Okay, and you’re launching this week?

Aron: We are launching tomorrow.
Rebecca: So this is a platform that gives people access to object to reporting?

Aron: Yes. The first question is a philosophical one: What is truth? I like to say that truth is not a vibe, truth is a process. The two very good processes for finding truth in our society are courts and the scientific method. Adversarial courts, where you have a plaintiff and a defendant, often with two opposite sides of the story, with mountains of evidence being adjudicated by an impartial judge. On the other hand, the scientific method, which is about the repeatability of an experiment — if you’re going to write a story about the launch of Objection and you’re an objective journalist, ideally another objective journalist should write almost identically the same story. These are two very good methods of finding truth, and ones that are very trusted by our society, which can be useful tools for reimagining how we can find truth. Because this is ultimately the most difficult problem our society faces. We cannot have a civilization without a shared sense of truth, and we don’t have an agreed system of truth-finding. I’m trying to build one.

Rebecca: Why is this not just a better-funded, AI-enabled version of Pravda? Elon Musk, 2018 — he went on a Twitter rant after getting some negative coverage about Tesla, wanting to build a reputation system for essentially disciplining critics.

Aron: As far as I know, he never even built that.

Rebecca: Right, so this feels similar.

Aron: The criticism that billionaires are involved in this project — well, virtually every media outlet is owned by a billionaire. Your own outlet is owned by Apollo Global Management, Leon Black, who has his own checkered history. [Disclaimer: TechCrunch is no longer owned by Apollo.] Your answer to that is a division of editorial and advertising responsibilities, and I think in a similar way one can answer that criticism: There is a division between my investors and the software that we are building.
Rebecca: That’s not my question. My question was: this is like Pravda — this is very similar to Pravda, and it’s something you’re actually implementing.

Aron: Pravda was never built. He went on this Twitter rant, and I thought it was intellectually very interesting, because one of the core tenets of journalism is that journalists hold power to account. Who holds journalism to account? We would all be uncomfortable if that were government; you end up in a Chinese Communist Party-style apparatus. So what is the effective, private-sector-driven approach? The important material difference — because Pravda never published white papers and I don’t really know how they would have approached it — is that we are building a trustless system. I use the analogy of Encyclopedia Britannica versus Wikipedia. Britannica said: we have high editorial standards, we only have the most robust professors from Oxford and Cambridge writing our articles, and we have a storied editorial board, and a serious publication process. Wikipedia came up with a radically different approach: don’t trust the people, trust the software. By allowing everyone to contribute as easily as possible, it was proven — after only a few years — by the journal Nature that Britannica had more errors than Wikipedia. [Fact check: The Nature report found that Wikipedia was just about as good as Britannica, not that it had fewer errors.] We are building something that requires no trust in me, no trust in my investors. It requires trust in a process that is fully documented on our website, and you can download all the technical white papers.

Rebecca: You say trust the system, but the system itself is built and governed by people, including you. What could make someone trust this system?

Aron: Just look at the technology, look at the process. We don’t say we don’t trust Wikipedia because we don’t trust Jimmy Wales as a person.

Rebecca: No, but I don’t know who Jimmy Wales is.
I do know who Peter Thiel is, and I know who Balaji Srinivasan is. Peter Thiel funded a lawsuit that bankrupted a media company, and you led that lawsuit. Why should journalists trust a system that he backs to be neutral?

Aron: Would you call the accountability brought upon Gawker as a negative — they published an unauthorized sex tape, and a free and independent jury in Pinellas County—

Rebecca: That wasn’t my question. I’ll answer yours if you answer mine.

Aron: Okay — reiterate the question for me.

Rebecca: Why should journalists trust a system that is backed by Peter Thiel and run by the person who brought down a media company? Balaji Srinivasan is also pretty anti-institution, pro-network-states. This isn’t necessarily neutral infrastructure. It’s coming from actors with a history of hostility towards the press.

Aron: I think it’s a history of healthy skepticism. And the institutions that have been built to fact-check — like ProPublica, etc. — haven’t actually done a very good job. There’s been a lot of money put into fact-checking, particularly on social media, for the last decade.

Rebecca: What makes you think they haven’t done a good job? I mean, PolitiFact exists, ProPublica exists — I don’t think they’re out here publishing mistakes or misinformation.

Aron: If they were actually effective at what they were doing, the decline in trust in media would have ceased.

Rebecca: I agree to disagree there. I think the fact that trust in media has dropped doesn’t necessarily mean there’s no truth in news media. The problem is perception, and that perception has been shaped by powerful people with large audiences claiming “fake news” because they don’t like news reporting on their bad behavior.

Aron: But then why has that same pattern of behavior not affected trust in, say, science or courts? President Trump, for example—

Rebecca: We’ve got MAHA going on. I think there is a lot of mistrust of science today.

Aron: That’s not borne out by the evidence.
If you look at the Gallup poll on trust in science, courts, and journalists — scientists do take a hit post the COVID pandemic. But courts, for example: the president, who has a much larger audience than the Supreme Court Justices, is constantly critical of the court system, yet we have very strong trust in the court system regardless. Powerful actors routinely criticize courts, yet trust in courts is remarkably good — even, I would say as a lawyer, too high, because although courts are very good at getting to the truth, that process is very expensive and very slow.

Rebecca: So the process is also slow and expensive?

Aron: The one thing we can definitely agree on is that lawyers don’t deserve to get paid $2,000 an hour for what they do and for dragging out what should generally be a very simple process into something that takes decades. As someone who is a lawyer and a legal academic — when you actually see the practical consequences of how a defamation or libel trial works, it’s disgusting. The entire legal system is designed to benefit the lawyers. Everything takes longer than it should, everyone’s billing by the hour, and you end up with a situation where individuals have no access to justice. Peter Thiel’s client Hulk Hogan was a single-digit millionaire and he couldn’t afford to sue Gawker. We can discuss the merits of Gawker and the First Amendment implications, but at the end of the day, someone who’s a variable celebrity should be able to access justice, and I think that is a major failure of the current environment.

Rebecca: That’s not the journalist’s fault. Publishing someone’s nudes nonconsensually — I don’t call that journalism. Your platform seems to be focused mainly on journalists. You say this is about truth and accountability. Is this really aimed at journalists, or is this anyone with mass influence, including podcasters?

Aron: You’ll see there is a live test case about Joe Rogan on the platform.
Joe’s show is one I’ve been on, and his format engenders trust in his audience in a new and unique way. But he does have a very large audience and exerts a large amount of power, so it would be inappropriate to exclude someone like that. Podcasters, YouTubers, TikTokers — under-30s trust TikTok as much as they trust the New York Times for their news. It’s scary.

Rebecca: Why is your pitch centered on holding journalists accountable rather than on the people and platforms with larger audiences and looser standards? Journalists do have a code of ethics. It’s not that journalists aren’t held accountable — there is a peer review process happening within organizations, anyone can bring a defamation suit, anyone can respond with rebuttals or reporting that counters their arguments.

Aron: Those are fair points. But the nature of social media is that many platforms have addressed the content creator misinformation problem. Community Notes on X have been very effective.

Rebecca: Community Notes are also used for articles that are posted.

Aron: And Community Notes is a really good example of a trustless system — the entire source code is available to read on the X website. I will challenge you to read it, because it is complicated, but it’s extremely well thought out and nuanced in such a way that no one person can control. Yes, there are lots of problems out there. I’ve chosen to focus on legacy media in particular because the gatekeeping of saying there are editorial standards and robust internal processes doesn’t mesh well with the modern world. When I talk to journalists off the record over a glass of wine, they always say it was great to be a journalist in the ’80s — big expense accounts, large budgets, a lot of time for in-depth investigative reporting. Because of structural changes in news media over the last 30 to 40 years, that’s just not possible. Part of that economic change has meant incentives have changed.
In 1990, the job of a writer at the New York Times was to sell subscriptions. Now journalists have to be very focused on getting clicks, individual headlines, what goes viral — they’re beholden to algorithms. That has fundamentally changed the incentives.

Rebecca: I feel like my job is being mansplained a little here. That’s not necessarily how we operate. Of course you want a headline people will want to read, but I don’t sit here thinking, what story is going to go viral and get me a lot of clicks? I sit here thinking, how can I hold power to account? How can I report accurately and truthfully about trends that I’m seeing? A lot of our stories don’t do well, and a lot of them do — it’s actually really unclear what makes stories perform. Sometimes the most mundane story generates the most views. There’s some truth to what you say, of course. But just because some models have shifted toward more click-driven stories doesn’t mean there’s necessarily less truth within the story itself.

Aron: When budgets were larger and incentives were different, it was much easier to write a truthful, non-clickbaity story, not beholden to algorithms. And it’s really disappointing that journalists in America earn less than a new Uber driver. It’s an important profession that serves the public interest — it’s literally the only profession outside of government enumerated in the Bill of Rights.

Rebecca: It’s the Fourth Estate. You say it’s an important profession. Can a journalist fully comply with your system while protecting a source’s identity?

Aron: I actually thought about that from our last conversation, and there’s a technological solution here. I’d love your opinion on it because I asked my team to build it. And this is the beauty about building software, is that you can iterate rapidly. Protecting a source’s information is a vital way of telling an important story, but there’s an important power asymmetry: The subject gets reported upon, but there’s no way to critique the source.
And there’s no capacity to observe the editorial process, because as an outsider, we are not privy to those editorial meetings where editors, lawyers, etc., have approved the use of that source. What I’ve asked my engineers to build is a cryptographic hash. You could come onto Objection, upload information about the recording from the source, pieces of identity verification, etc. The AI will read that set of data and say, okay, this is high-quality reporting — and, Rebecca, you are hereby issued a certificate to say you can use that anonymous source in a certain way, and it’s been independently verified in a fully trustless, open source system. There are some limitations from an engineering perspective — it requires AI models, so if you’re writing about Sam Altman, we couldn’t pass that data into OpenAI. But there are enough foundational models that the problem could be solved.

Rebecca: Okay, let’s take a step back for a minute because I don’t know that we’ve fully explained the model to the reader. You can pay to object to a piece of content — whether that’s an article, a podcast, a YouTube video. Once that objection is up, there’s a combination of human and AI … ?

Aron: Human investigators, mostly ex-law enforcement professionals: CIA, FBI, MI6 agents.

Rebecca: Have you hired these people already?

Aron: Yes. It’s basically like a fleet of Uber drivers — contractors and freelancers, just like most journalists are freelancers. There are some investigative journalists too who have worked at important publications. They go in and investigate the case. They look at the article line by line, sentence by sentence, claim by claim. Anyone who’s been quoted on the record gets a call: Did you actually say that? Was it taken out of context? Is it a full, fair, and accurate analysis? Importantly, every piece of information they capture goes into a public data room.
So even a simple article, like Joe Rogan endorsing ivermectin, now has hundreds of pieces of evidence attached to it.

Rebecca: Okay. When you get to the point where there are anonymous sources — which I should add are important for getting out stories that are truthful, and for whistleblowers who want to hold power to account, who want to say if there's something bad happening in business or in government — what happens there?

Aron: We have a ratings rubric for evidence. A level one piece of evidence is an unimpeachable primary source document, something that could be tendered in court. All the way down to a level five piece of evidence, which is a rumor.

Rebecca: Most reporters will not report on rumors.

Aron: Most won't, but there have been plenty of rumored capital raises and deals reported on in my lifetime.

Rebecca: Capital raise deals are not rumors. Those are based on someone with inside knowledge of the deal leaking that information, usually for their own gain.

Aron: Yeah, sometimes it's a banker, or—

Rebecca: A company wants to drum up some support for their next raise, whisper whisper.

Aron: And there are often bad incentives going on there. The anonymous source can be cryptographically measured in this fully open source model.

Rebecca: So that would require the journalist to provide information about the source to your platform. But that would never happen. What do you mean "fully open source model"? I would have to provide the name of the person?

Aron: Or as much information as you feel comfortable providing, and it would just give an evidence score on the back.

Rebecca: So this addition was based off of our previous conversation? Because previously you said that using anonymous sources equals weaker evidence, and that will give the journalist a lower rating.

Aron: Using a fully anonymized source that hasn't been independently verified would lead to that, yes.

Rebecca: Anonymous sources are anonymous for a reason.
You tend not to provide identifying information about them publicly because it could reveal who they are. A source will always be someone we vet as trustworthy. Sometimes you can mess up. My reporting process is to determine who this source actually is, how they would have that information, whether I can have eyes directly on primary source documents — and they share this with me only because they know none of that information is going public and they're not going to lose their job or face retaliation.

Aron: Rebecca, you work at a high-quality outlet.

Rebecca: The outlets you have a problem with are also high quality.

Aron: Well, Gawker, for example—

Rebecca: You can bring up Gawker all you want. I don't know that a celebrity rag that published Hulk Hogan's dick pics is the standard by which we want to be critiquing journalists writ large. Would the Pentagon Papers—

Aron: What about Fox News, The Sun, The Daily Mail?

Rebecca: I don't spend a lot of time reading them, so I can't say whether or not they are inaccurate. Are they biased? Yes. Do they choose to report certain information over other information? I would say so, and that is the same for publications on the left. Does that make individual journalists less truthful, or change the fact that you still need anonymous sources to hold power to account and protect whistleblowers? Would the Pentagon Papers pass your system? The Facebook Files? The Uber Files?

Aron: Let's ask the question in court. In the system designated by the will of the people and the Constitution — the judicial apparatus, from the Supreme Court on down — are anonymous sources allowed?

Rebecca: Yes, actually. There are DOJ and court processes that do allow for internal, non-shared sources.

Aron: Only in extremely rare proceedings, never in a commercial trial. Usually it's about a victim of sexual abuse, a minor, or national security-related proceedings.
What’s called an in-camera hearing, or in Britain a Star Chamber. And it’s worth pointing out that in those proceedings, both the prosecution and the defense get access to that source. They get to cross-examine them in private. There is no situation in court where only one side is allowed to present evidence the other side can’t see. Rebecca:But there’s a fairness doctrine in journalism. It’s not that only one side gets to present — when I have information from a source I’m protecting, I don’t just publish it. I will go to the company and say, here is the information I have. I’m not saying who I got it from, but this is what I plan to publish. What is your response? Aron:That’s the unique power asymmetry of an anonymous source, because you can only critique the output of the information, not the credibility of the source. If they were on the record, it’s much easier. Rebecca:There’s still a balancing act that needs to happen. Can you give me a concrete example of a real investigative story that your system would improve? Aron:I was reading a story in the Financial Times this morning about an early investor in OpenAI — it didn’t quote who that individual was — who was concerned about their business model. This is the number one story in the FT today, and it’s completely based on an anonymous source. We’re not talking about national security here. We’re not talking about protection of a minor. Rebecca:In a lot of ways, you kind of are. A lot of people think AI has an existential risk to national security — if everything is reliant on systems we can’t control or understand, that’s a national security risk. OpenAI has been sued multiple times after minors used its chatbots and then allegedly committed suicide with the aid of those chatbots. To have an early investor say they don’t agree with the business direction of OpenAI is to bring about a larger conversation: Can we agree that incentives change a business structure? 
You said that for news — it's the same for Big Tech. Incentives to keep you clicking, keep you responding, create a platform that is not necessarily designed to benefit all humanity but to generate more attention.

Aron: What you're talking about is the ticking time bomb thesis. What is a national security threat — is it an abstract risk far in the future, or is it a terrorist with a bomb walking the streets of New York? Where do you draw the line?

Rebecca: The point is that I don't see why journalist-protected sources should only relate to an imminent national security threat. They're there for holding businesses accountable for their actions, too.

Aron: The standard set by our constitutional body — the courts — is that the use of an anonymous source is not allowed.

Rebecca: But journalism surfaces truths before they make it to courts.

Aron: Journalism surfaces truth before it makes it to court partly because courts are so inefficient and slow. That's one of my core criticisms.

Rebecca: But also there are so many journalists out there. Courts are slow and there's a whole due process, but they have different goals. For courts, it's legal liability. For science, reproducibility. Journalism is about public accountability. Different goals require different structures.

Aron: And then who designs and authorizes these structures?

Rebecca: The media has been going for a long time. It has taken a long time to develop the methods required for journalists to hold themselves and each other accountable.

Aron: I think that's a fundamental problem. For an institution to hold itself accountable is not true accountability.

Rebecca: They hold themselves accountable because they don't want to get sued. And the whole world holds journalists accountable. You can publish something and have very powerful people, like yourself, respond.

Aron: I'll accept that I'm powerful-adjacent.
Rebecca: Journalism has layered verification: multiple sources required, primary documents, rigorous internal fact-checking, legal team review, public accountability through courts and public scrutiny, rebuttals, competing reporting, Community Notes.

Aron: And we're building a system that feeds into that ecosystem. Community Notes has been a very powerful tool in truth verification, wouldn't you agree?

Rebecca: Sure, yeah. But you can't reproduce anonymous sources. They're anonymous for a reason. Journalists have a duty of care, and they spend years developing those sources. They are deeply personal. This is not something AI can replicate. This is not something another journalist could just slot into. And by downgrading the credibility of anonymous sources, the model inherently feels like it would chill whistleblowing. What would you say to critics who say this is an attempt by Big Tech billionaires to silence whistleblowers?

Aron: The criticism of Big Tech billionaires is a moot one — most media outlets are owned by multi-billionaires.

Rebecca: Okay, taking the billionaires away: How do you respond to the idea that this is an attempt to silence whistleblowers?

Aron: It's an attempt to fact-check; it's the same as Community Notes. The wisdom of the crowd plus the power of technology to create new methods of truth-telling. Ultimately, when public trust in journalism has declined so much over the last 50 years, someone should be saying there's something going on here, and there's a need for a major institutional reform and rethink. I didn't cause the decline in trust in journalism. This has been happening for five decades. Take a step back. Don't think of yourself as a journalist. Think about having a blank slate in front of you. How would you create the perfect system using the technology we have today, the distribution mechanics we have today, the advent of artificial intelligence?
You wouldn’t just replicate a private equity billionaire owning a media outlet and saying, trust me, this editorial committee is the best way to do it. Rebecca:I don’t know that I would do what you are doing. I don’t know that I would make it so that whistleblowing becomes a deterrent. I’m sure there are ways to improve journalism — I would give it a whole lot more funding. I trust my colleagues and peers to adjudicate. A lot of us went to journalism school. It is kind of a trade, but there is a code of ethics you must adhere to. Aron:But there’s no professional licensure. It’s not like being a doctor or a lawyer — you have to pass an exam set by the state, be a member of the Bar Association or the AMA. Rebecca:Which keeps it even more open. I thought you weren’t a fan of gatekeeping. I’m curious — who will use this the most in its first year? Who do you imagine is the customer? Aron:The customers are individuals who’ve been misrepresented by the media. I’ve done thousands of interviews over the last two years for the Enhanced Games, and it’s amazing — sometimes they spell my name wrong, get basic details wrong. Rebecca:And for readers, the Enhanced Games is your other company — basically a version of the Olympic Games that allows for enhancements via drugs or supplements. Go on. Aron:When there are basic errors, that concerns me. Rebecca:Basic errors are human. But was there a specific moment — an article that mischaracterized you as a person? Aron:Sometimes, yes, certainly. Rebecca:I mean, journalism is up for interpretation. There is no one truth, especially when things are evolving constantly. Not everything’s binary. Aron:That’s actually the agree-to-disagree point. I don’t believe we are condemned to live in a post-truth era. There are still facts. Rebecca:There are still facts. But AI determining truth assumes it’s always objective, extractable, computable. Many journalistic truths are interpretive, contextual, evolving. 
For example: Did OpenAI act irresponsibly? That's not a binary.

Aron: According to Gallup, the principal concern Americans have about news is the blending of fact and opinion; keeping the two separate is something courts and science do very well. [Note: TechCrunch was unable to find this Gallup poll. A 2018 Pew Research Center report found Americans struggled to distinguish between fact and opinion in news media.] In a scientific article in Nature, you see method, data, and then analysis — very clear signposting that distinguishes fact from opinion. Same in court: When a judge issues an opinion, they go through the facts as plainly as possible and then deliver the opinion. That's a really important distinction. If you ask me: Have I been mischaracterized? Yes. In questions of fact? Not that often. People say some very mean things. But that's their opinion.

Rebecca: You say a significant portion of people who've been written about in the media feel misrepresented. These aren't just everyday people; these are powerful people being written about, and maybe they don't like it. Why wouldn't a company like OpenAI or xAI file objections about every negative story about them?

Aron: I would actually hope that the journalists writing those stories feel confident — they'll say, yes, my reporting will stand up to the highest levels of criticism and transparency.

Rebecca: You could say that about a lot of things. There are several advocacy groups that have criticized Elon Musk's companies, and he has sued them into the ground. Are they operating in truthfulness? I think a lot of the time they are. But they can't afford the legal fees. And what you're asking — putting every article up for a score — means journalists have to do extra labor to avoid being diminished in the public eye for work they've already done the legwork on. How is this different from SLAPP suits? It's just cheaper and more scalable.
Aron: SLAPP suits only exist in the United States, and one of the reasons they exist is that the cost of litigation is perilous to most people. We cannot have a legal system accessible only to billionaires and the largest corporations. The fact that Elon Musk can in theory sue an advocacy group out of existence — I think that's truly dysfunctional. Part of what Objection offers is reform of that, because parties could take a battle about facts onto Objection, and it would take a couple of days and a few thousand dollars rather than decades and millions. Keeping disputes about facts out of court is the most important thing we can do, because court is not survivable for anyone. It doesn't benefit anyone except the lawyers.

Rebecca: You say journalism's model is broken because of bad incentives. What about the incentives of your platform? More disputes equals more revenue. Conflict becomes profitable; outrage becomes monetized. Your business model depends on people attacking journalism.

Aron: No, our business model depends on adjudication. We want to be a trusted source of adjudication, not just for media disputes but for everything. If you have a breach of contract dispute, ideally it gets arbitrated on Objection rather than in court. We make money by adjudicating. We don't make money by selling clicks and outrage.

Rebecca: At the end of the day, you're going to get more money the more people submit objections. How can you ensure wealthy actors aren't just using this to pummel journalists or podcasters or whoever speaks ill of them, which is protected speech?

Aron: You could level the same criticism at legacy media. Rupert Murdoch owns one of the most powerful media outlets in the world and has arguably installed multiple prime ministers in Australia and the UK, and several presidents in the United States. Traditional legacy media suffers from that exact same issue in a different way.

Rebecca: My question is for you, though.
Have you built a system that lets powerful actors continuously litigate journalism in public? Do you have any protections in place to ensure people aren't weaponizing the system?

Aron: The system is so accessible that everyone should be able to use it.

Rebecca: What does it cost?

Aron: It costs as little as $2,000.

Rebecca: $2,000 is not accessible. Most Americans don't have that.

Aron: You're owned by one of the largest private equity funds in the world.

Rebecca: You know how private equity works, right? If it doesn't make dollars, it doesn't make sense. We're operating under very slim margins. We do not have money. And being owned by a private equity firm doesn't mean we have endless resources to litigate stories. What would happen is they would tell me: Don't make this a problem on my balance sheet. Pull back your reporting. That's most likely what I would be told.

Aron: Or your private equity owners would say, wow, Rebecca, you're doing great reporting, your trust scores are amazing, we're one of the most trustworthy outlets out there. And that's increasing public trust, driving up the subscriber base. You could put it right on your website: one of the most trusted outlets in the world.

Rebecca: Whether or not my overlords would be pleased about a good score doesn't change the fact that submitting one of these objections is rather expensive.

Aron: To hire a lawyer for one hour at a big New York law firm now costs $2,000. To prosecute a defamation case costs five to ten million.

Rebecca: But submitting one of these objections costs $2,000. That is not accessible. We have a world where big tech companies spend millions on lobbying like it's nothing, to pass laws favorable to their businesses. I can imagine this would just become another form of lobbying, another way to buy influence. I don't see how normal institutions or everyday people could possibly compete with what big tech could throw at this financially.
Aron: I think $2,000 is very accessible, particularly to the people who are written about in the press — there are 150,000 of them—

Rebecca: Most Americans are living paycheck to paycheck.

Aron: If it were truly an accessibility issue, and someone who felt aggrieved by an article did not have the funds to bring an objection, we'll give them a complimentary subscription. I'll pay for it out of my own pocket. There are real costs associated with it — human investigators are involved, it's not just AI powering a data center — but it's exponentially cheaper than litigation.

Rebecca: Exponentially cheaper for powerful people who want to attack their critics. Let's go back to the AI, because I don't think we've established for the reader what it is.

Aron: It's a jury of foundational models. All the major foundational models — OpenAI, Grok, Anthropic, Mistral, etc. — are prompted to behave as if they are everyday Americans serving as jurors. The jury system is highly trusted by courts, judges, and the general public as an effective way of finding truth. We prompt the individual models to behave as, say, a 50-year-old man in Brooklyn versus a 25-year-old woman in Portland, based on statistical evidence of the demographics of the United States.

Rebecca: Okay. How does the system handle incomplete information, conflicting testimony, or evolving facts?

Aron: That's the beauty of the adversarial court system, which we model ourselves on. There are always conflicting facts in an adversarial trial. Truth is not a vibe — truth is a process. Truth is the argument that is better evidenced and better structured. Judges are fallible individuals, but what they're good at is saying: Who is making the better argument in front of me? That's about sourcing and demonstrating evidence.

Rebecca: You're asking journalists to expose their underlying evidence. Is your platform open source? Can anyone look at how your AI makes decisions?

Aron: Yes.
We want to be as trustless as possible, so the full white paper, methodology, and algorithm are all outlined in excruciating mathematical detail on our website, Objection.ai. Every case ends in a judgment in which the AI models outline every step of their reasoning in as much detail as possible, along with every piece of evidence they used to reach that conclusion.

Rebecca: So journalists should expose their underlying evidence. Should AI companies be held to the same standard? Should OpenAI, Anthropic, and xAI open source their models, disclose their training data, show their reasoning process?

Aron: That's a very interesting question. I'm not involved in any of those companies, but I think there is an ethical argument that powerful institutions, particularly those wielding potentially monopolistic power, have an obligation of transparency. We ask government officials to disclose every gift they receive, and the Freedom of Information Act applies to every email they send, because citizens are subjected to their monopoly power. In return, we demand transparency, and transparency reduces corruption. Similarly, organizations that wield great power should aspire to as much transparency as possible. I hope you publish this entire interview — it'd be very interesting for a portion of your audience, or they could run it through an AI model and see if you accurately quoted me. And I do agree that, subject to designing the right commercial incentives for the further development of artificial intelligence, there should be some degree of open sourcing. The fact that—

Rebecca: Some degree of open sourcing. But AI is already acting as an arbiter of truth at a much larger scale than journalism. Why should journalists be required to operate with more transparency than the systems increasingly shaping public knowledge?

Aron: That's a very fair question, and I would say both need to aspire to the highest levels of transparency.
It was a vast strategic misstep by Mark Zuckerberg to change the Facebook feed from a chronological one to an algorithmic one around 2009–2010. It exploded his ad revenue, of course. And for TikTok, that targeting algorithm is the source of all their power. What Musk almost did at X — exposing how the algorithm works, exposing how the Community Notes algorithm works — is really a powerful gesture in the interest of transparency. There has to be some dimension of commercial incentives, though. Building AI foundational models is very expensive.

Rebecca: When you say commercial incentives, what do you mean?

Aron: The analogy is drug development. The development of pharmaceutical drugs is very important socially, and science builds upon knowledge, so should the recipes be completely exposed publicly? Well, patent protection lasts for 20 years and gives a monopoly right to commercialize a drug and make a profit, and then in due course the patents expire and generics become available. There might be an argument that AI models could follow a similar convention: closed for a short period, probably measured in months rather than years, during which they can be exploited commercially, and then open sourced so that human knowledge can be built upon them.

Rebecca: Would you say the same thing about anonymous sources?

Aron: Using that train of logic — well, you'd say the anonymous source gets to be anonymous for a period of time.

Rebecca: No, I don't like that. We'll have to agree to disagree on that one. I'm curious about the tribunal model of ethical judgment, which feels like a shaming mechanism against individual journalists. You mention science a lot — do individual scientists have a public truth score?

Aron: In fact, they do. It's called an impact factor rating. The methodology is very similar structurally to what we use, based on chess Elo ratings — it's about how well cited your article is.
If other highly cited scientists are citing your article, that raises your impact factor rating. This is actually the primary metric by which scientists are judged.

Rebecca: But again, that's peer review, replication, institutional processes. Scientists are not given a universal truth score by a startup. Why is your model closer to Yelp than to science?

Aron: Science was once a startup. In the medieval era, scientists were viewed as servants of the courts of great monarchs, exploring ideas in private. Then in the early 1700s, the Royal Society of London convened the first scientific journal and created the process of peer review. People could submit articles, they would be circulated among eminent peers who would post their comments publicly, and only after passing rigorous peer review was an article published in the Proceedings of the Royal Society, which over 300 years later is still one of the preeminent scientific journals. It took well over 100 years to move from the old model to the new. With each technological leap, there are new models of truth-telling.

Rebecca: Why should people trust your system when fact-checking organizations, Community Notes, defamation law, and basic public scrutiny already exist? I think a lot of mistrust in media is also just a fundamental misunderstanding of how media operates.

Aron: People can see the full architecture of our system. It's fully transparent — how every decision is made, how the algorithms work. It's not a black box. It's fully documented, and I hope eminent mathematicians, lawyers, and academics will look at those white papers, tear them to shreds, and give me feedback. We'll improve our algorithm. We aspire to a very high level of transparency. I didn't let journalists into the office at the Enhanced Games because we were doing something that was pushing boundaries, and I thought journalists would be very critical of the sausage-making.
But if you want to come sit in our office one day, see how every decision is made, see the source code, talk about the incentives, we'd be very happy to have you.

Rebecca: Maybe I'll take you up on that. I imagine you wouldn't be as frank in front of a journalist as you would be in private — which is exactly the problem with whistleblowers. If they knew that their identity might be exposed or their credibility downgraded, they wouldn't come forward. I've done enough interviews with enough companies to know that they hold a lot back, and that's why we need anonymous sources to figure out what's actually going on. But let's not harp on that. What was your actual catalytic moment for this? Not the abstract philosophy. Was it Gawker? With your power and influence, why spend your time building this specifically instead of a transparency tool for AI companies, for social platforms, for political misinformation?

Aron: The Objection platform works for all of those — for AI, for politicians, for legacy journalists. You can take an output from ChatGPT and file an objection against it. We do hope our human-based investigations and AI adjudication will form part of AI training data, and I hope someone will file an objection against content produced by AI models. I wish Sam or Elon would reply, because I think that would be a very powerful test of the system.

Rebecca: Do you think the Pentagon Papers were not trustworthy because they relied on anonymous sources? Watergate? Deep Throat?

Aron: Those are easy examples to cherry-pick from history, situations where anonymous sourcing worked. But you're also giving examples from a time—

Rebecca: I don't have to cherry-pick. I can keep going.

Aron: From a time when, say, the Graham family owned the Washington Post, lavished it financially, and were great stewards of that institution. Would that same thing happen in an era of operating at a loss?

Rebecca: I have a lot of feelings about Jeff Bezos owning the Washington Post.
But those aren’t just big historical cases. There have been several Pulitzer Prize-winning stories relying on anonymous sources published way more recently. Aron:I think the glory days of investigative journalism were in an era where classified ads in particular sustained high-quality reporting, and economic incentives have changed dramatically since then. Rebecca:I think attention has changed dramatically. We went to the moon this week and nobody cares, because there’s just too much going on. There’s still been high-quality investigative journalism. Hannah Dreier from the New York Times onmigrant child laborin the U.S.The New Yorkeron how felony murder laws imprison people for crimes they didn’t commit.The Washington Poston the history and impact of the AR-15. Aron:Do you think the journalism establishment would give a YouTuber like Nick Shirley, who did amazing investigative work on Medicare-Medicaid fraud in Minnesota, a Pulitzer Prize? Rebecca:I agree that there’s gatekeeping with prizes. But that’s about prizes — not about truthful reporting. Aron:Well, Pulitzer Prize-winning stories are a heuristic for truth-telling and high-quality investigative journalism. Rebecca:That’s the standard we all hold ourselves to. That’s what most journalists want to achieve: journalism that makes an impact, that protects sources, that highlights injustice. Aron:And I agree, and I think virtually all journalists have very good intentions. But it’s the incentives of media proprietors, social media algorithms, and AI amplification that have created the problem. Rebecca:Social media algorithms, AI amplification. So why not attack those things? If your system makes important stories highlighting injustice and holding power to account harder to publish, is that an acceptable outcome for you? Aron:If it raises the standards of transparency and trust, that’s a good thing. 
If those stories, which are so important to the functioning of our democracy, get passed through another filter, and on the other side the public says, you know what, we trust journalism more — the fundamental point is that only 30% of Americans trust journalists today.

Rebecca: I think we're just in a post-trust era, because people don't trust AI companies either.

Aron: I think there is a valid criticism of AI companies.

Rebecca: You're one of them now, technically. So why would anyone trust you? And why would journalists listen? What happens if nobody gives a shit?

Aron: It's actually a two-sided market, which is the important thing to note. Subjects of media reporting care. I've talked to people whose lives have been ruined, whose careers have been destroyed by one article.

Rebecca: Did they do anything wrong to incur the wrath of said article?

Aron: Sometimes they were just an easy scapegoat, an easy heuristic.

Rebecca: More often than not, with people highlighted in the press — sometimes there can be a disproportionate reaction, but often when you're reporting about someone's misdeeds, it's unfair to blame the messenger.

Aron: But we also live in an era where it's impossible to forget. If you did something bad when you were young in the '90s, maybe it was written about in the local newspaper, on microfilm at the local library. Now, one quick Google search away, it's the first thing that comes up.

Rebecca: I think that's a very human thing. In tribes, if someone misbehaved, they would be banished.

Aron: There's something very powerful about forgiveness. Think of a particular example: a young man, about 30, who applied for a job at the Enhanced Games. Really smart, passed the interview with flying colors. My head of HR Googled him and found a couple of stories from when he was in college, an allegation, never convicted. And I thought: What if a journalist found this? A sexual assault allegation from college, and the Enhanced Games hired him — it could become a whole story.
We did not hire him.

Rebecca: Yeah, but sometimes things are a whole story and then nothing happens. Big Balls, for example — part of Elon Musk's DOGE team — a young man who frequently posted misinformation and misogynist and antisemitic content. That didn't lead to his firing; that didn't lead to anything. There have been countless stories exposing wrongdoing where nothing happened.

Aron: I didn't talk to Big Balls, but I did talk to another DOGE staff member — I won't name him — a young man who had one article written about him in the news media, no other public profile whatsoever, and now when I had a meeting with a New York venture capital fund, it was the first thing that came up.

Rebecca: He's not banished. He works at a venture capital fund. Were the articles about him truthful?

Aron: There were dimensions that were factually correct, but it did paint him in a very negative personal light. People deserve to be forgiven over time. We're adjusting to an era in which it is not possible to forget, and that's something we need to structurally address.

Rebecca: So if an article is accurate, but the framing paints someone in a bad light because of truthful things they did — things people don't agree with — or because the journalist chose to highlight certain actions and maybe missed the part where they volunteer at a soup kitchen every weekend, on your system, would the journalist get a bad rating?

Aron: Not at all. But distinguishing fact from opinion is very important.

Rebecca: One thing that bothers me is that people think the opinion section of the New York Times is the same as reporting. That's not the same as journalism.

Aron: I look at the Times, and an opinion piece is almost intermeshed on the homepage — it's labeled "opinion" in tiny font at the top. People say, "the New York Times said X," as if it were fact, when it's actually just some lobbyist's opinion. The Gallup surveys, time and time again, show that distinguishing fact from opinion is essential.
Rebecca:But do you not agree that people have a right to their free speech? Giving them scores, attacking them at a systematic level, where people with money can really hammer you down — how does that not hinder free speech? Aron:Do you think the Michelin Guide rating restaurants is socially negative? Chefs are artists, should they be subjected to a ratings and scoring system? Rebecca:They have to opt in to the Michelin system, don’t they? Aron:I don’t believe it’s an opt-in system. You can reject a Michelin star, sure, but the Michelin Guide is a very useful tool for finding good, trustworthy restaurants. That said, restaurants don’t wield power in the same way The New York Times does. Rebecca:These are very different stakes. Aron:But in almost every profession of great importance, there is some kind of regulator. To be a nail technician in the United States, you need a license— Rebecca:There is a regulator for journalists. If you print lies, you will be held accountable. Aron:That’s actually a lot less true in the United States in particular. If you publish lies, that’s not necessarily something you can be found liable for, because under New York Times v. Sullivan, you have to show— Rebecca:Libel, defamation — you have to show the journalist had malicious intent. Aron:Actual malice. The standard for a civil judgment in the United States is basically impossible to reach, versus in the UK, continental Europe, Australia, and New Zealand, where there is no actual malice standard — it’s much easier to get sued for printing something. Rebecca:That’s also our First Amendment right to free speech. Aron:It’s also very effective lobbying done in the 1970s by major media outlets. Rebecca:Don’t talk to me about lobbying. 
We’ve got Greg Brockman and a lot of your friendsspending tens of millions of dollars in lobbyingfor small congressional district seats, just to pass laws with the slightest scrutiny for AI companies — who are by some estimation about to become the arbiters of truth. Aron:I would say yes to that — and we’re friends, right? Mark Zuckerberg has shown himself not to be an arbiter of truth. As has Rupert Murdoch. I don’t think since the 1960s that we’ve really said we trust any single arbiter of truth, and I think that’s very damaging. At any rate, I better go get dinner. [End of interview]
View

OpenAI updates its Agents SDK to help enterprises build safer, more capable agents
Agentic AI is the tech industry’s newest success story, and companies like OpenAI and Anthropic are racing to give enterprises the tools they need to create these automated little helpers. To that end, OpenAI has now updated its agents software development kit (SDK), introducing a number of new features designed to help businesses create their own agents that run on the backs of OpenAI’s models. The SDK’s new capabilities include a sandboxing ability, which allows the agents to operate in controlled computer environments. This is important because running agents in a totally unsupervised fashion can be risky due to their occasionally unpredictable nature. With the sandbox integration, agents can work in a siloed capacity within a particular workspace, accessing files and code only for particular operations, while otherwise protecting the system’s overall integrity. Relatedly, the new version of the SDK also provides developers with an in-distribution harness for frontier models that will allow those agents to work with files and approved tools within a workspace, the company said. (In agent development, the “harness” refers to the components of an agent other than the model it runs on. An in-distribution harness often allows companies to both deploy and test agents running on frontier models, which are considered to be the most advanced, general-purpose models available.) “This launch, at its core, is about taking our existing Agents SDK and making it so it’s compatible with all of these sandbox providers,” Karan Sharma, who works on OpenAI’s product team, told TechCrunch. The hope is that this, paired with the new harness capabilities, will allow users “to go build these long-horizon agents using our harness and with whatever infrastructure they have,” he said. Such “long-horizon” tasks are generally considered to be more complex, multi-step work.
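The article doesn’t document the SDK’s actual sandbox API, but the underlying idea, confining an agent’s file access to a designated workspace directory, can be sketched in plain Python. This is an illustrative sketch only: the `Workspace` class and its methods are hypothetical and not part of OpenAI’s Agents SDK.

```python
from pathlib import Path


class Workspace:
    """Illustrative sandbox: confines file access to a single directory.

    A conceptual sketch of workspace-scoped access, not OpenAI's actual
    sandbox integration.
    """

    def __init__(self, root: str):
        self.root = Path(root).resolve()

    def _resolve(self, relative: str) -> Path:
        # Resolve the requested path and refuse anything that escapes the
        # workspace root, e.g. "../secrets.txt" or a traversal via symlinks.
        candidate = (self.root / relative).resolve()
        if self.root not in candidate.parents and candidate != self.root:
            raise PermissionError(f"{relative!r} is outside the workspace")
        return candidate

    def read(self, relative: str) -> str:
        return self._resolve(relative).read_text()

    def write(self, relative: str, content: str) -> None:
        path = self._resolve(relative)
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(content)
```

An agent tool wired through such an object can touch `Workspace("/job/ws").read("data.csv")` but not `read("../../etc/passwd")`, which is the "siloed capacity" the article describes, reduced to its simplest form.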
OpenAI said it will continue to expand the Agents SDK over time, but the new harness and sandbox capabilities are launching in Python first, with TypeScript support planned for a later release. The company said it’s also working to bring more agent capabilities, like code mode and subagents, to both Python and TypeScript. The new Agents SDK capabilities are being offered to all customers via the API and will use standard pricing.
View

DeepL, known for text translation, now wants to translate your voice
DeepL, a translation company best known for its text tools, released a voice-to-voice translation suite today that covers use cases like meetings, mobile and web conversations, and group conversations for frontline workers through custom apps. The company is also releasing an API that lets outside developers and businesses build on top of DeepL’s tech for customized use cases, such as call centers. “After spending so many years in text translation, voice was a natural step for us,” DeepL CEO Jarek Kutylowski told TechCrunch in an interview. “We have come a long way when it comes to text translation and document translation. But we thought there wasn’t a great product for real-time voice translation.” Kutylowski said that the challenges in creating a real-time translation product center on striking a balance between reducing latency — the delay between someone speaking and the translated audio playing back — and maintaining accurate results. DeepL is releasing add-ons for platforms like Zoom and Microsoft Teams, where listeners can either hear real-time translation while others are speaking in their native languages or follow real-time translated text on screen. This program is currently in early access, and the company is inviting organizations to join a waitlist. The company also has a product for mobile and web-based conversations that can take place in person or remotely. DeepL also lets users participate in group conversations in settings like training sessions or workshops, allowing participants to join through a QR code. DeepL said that its voice-to-voice tech can also learn and adapt to custom vocabulary, such as industry-specific terms and company and personal names. Kutylowski said that AI is reimagining what customer service will look like in the coming years. He noted that a translation layer helps companies provide support in languages where qualified staff are scarce and expensive to hire.
The company said that it controls the entire voice-to-voice stack. However, the current system converts the speech to text, applies translation, then converts that back to speech. DeepL believes that since it has worked on text translation for years, it has an edge in translation quality. Going forward, the company wants to develop an end-to-end voice translation model that skips the text step entirely. DeepL faces competition from several well-funded startups working in adjacent corners of the space. Sanas, which last year raised $65 million from Quadrille Capital and Teleperformance, uses AI to modify a speaker’s accent in real time — a tool aimed primarily at call center agents. Dubai-based Camb.AI focuses on speech synthesis and translation for media and entertainment companies as well as Amazon Web Services, helping them dub and localize video content at scale. Palabra, backed by Reddit co-founder Alexis Ohanian’s firm Seven Seven Six, is building a real-time speech translation engine designed to preserve both the meaning and the speaker’s original voice, putting it in more direct competition with what DeepL is now building.
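The cascaded architecture described above (speech to text, text translation, then speech synthesis) can be made concrete with a toy sketch. The three stage functions below are stubs standing in for real ASR, MT, and TTS models; none of this is DeepL’s API, it only illustrates why each stage in the cascade adds latency and why an end-to-end speech model is attractive.

```python
def transcribe(audio_chunk: bytes) -> str:
    """Speech-to-text stage. Stub: pretends the 'audio' is already text."""
    return audio_chunk.decode("utf-8")


def translate(text: str, target_lang: str) -> str:
    """Machine-translation stage. Stub: a one-entry toy dictionary stands in
    for a real MT model; unknown input passes through untranslated."""
    toy_dictionary = {("hello", "de"): "hallo"}
    return toy_dictionary.get((text, target_lang), text)


def synthesize(text: str) -> bytes:
    """Text-to-speech stage. Stub: a real system would generate audio."""
    return text.encode("utf-8")


def translate_speech(audio_chunk: bytes, target_lang: str) -> bytes:
    # The cascade: three sequential stages, each adding delay. This is the
    # latency-vs-accuracy tradeoff the article mentions, and the text step
    # in the middle is what an end-to-end voice model would eliminate.
    return synthesize(translate(transcribe(audio_chunk), target_lang))
```

For example, `translate_speech(b"hello", "de")` walks the full chain and returns `b"hallo"`; in a production system each stub would be a streaming model call, and the end-to-end alternative would map audio to audio directly.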
View

America's Multi-Billion-Dollar AI Data Centre Boom is Now Grinding to a Halt
The US' data centre expansion is stalling under the weight of power shortages, supply chain failures, and rising public opposition.
View

Tredence to Host Third Edition of AI Foundry in Hyderabad on May 23
The core sessions will include design thinking workshops and team presentations, culminating in insights into responsible AI practices and addressing scalability and governance challenges.
View

Google Rolls Out Gemini AI App For Mac Users
The company has launched its first native Gemini app for macOS, giving MacBook users worldwide direct access to its AI assistant without relying on browsers.
View

The 1,000-GCC Powerhouse: Why Bengaluru Remains India’s Unrivalled Tech Hub
Bengaluru’s GCCs employ over 6 lakh professionals and contribute more than $22 billion annually.
View
