Meta sued over AI smart glasses’ privacy concerns, after workers reviewed nudity, sex, and other footage

12:48 AM IST · March 6, 2026

Meta is facing a new lawsuit over its AI smart glasses and their lack of privacy, after an investigation by Swedish newspapers found that workers at a Kenya-based subcontractor are reviewing footage from customers’ glasses, including sensitive content like nudity, people having sex, and people using the toilet. Meta claimed it was blurring faces in images, but sources disputed that the blurring consistently worked, reports noted. The news prompted the U.K. regulator, the Information Commissioner’s Office, to investigate the matter.

Now the tech giant is facing a lawsuit in the United States as well. In the newly filed complaint, plaintiffs Gina Bartone of New Jersey and Mateo Canu of California, represented by the public interest-focused Clarkson Law Firm, allege that Meta violated privacy laws and engaged in false advertising. The complaint alleges that the Meta AI smart glasses are advertised with promises like “designed for privacy, controlled by you” and “built for your privacy,” claims that would not lead customers to assume their glasses’ footage, including intimate moments, was being watched by overseas workers. The plaintiffs say they believed Meta’s marketing and saw no disclaimer or other information that contradicted the advertised privacy protections. The suit charges Meta and its glasses manufacturing partner, Luxottica of America, with conduct that violates consumer protection laws.

Clarkson Law Firm, which over the years has filed other major lawsuits against tech giants, including Apple, Google, and OpenAI, points to the scale of the issues at hand: in 2025, over seven million people bought Meta’s smart glasses, meaning their footage is fed into a data pipeline for review that they can’t opt out of.
Meta told the BBC that when people share content with Meta AI, it uses contractors to review the information to improve people’s experience with the glasses, which is explained in its privacy policy, and pointed to the Supplemental Meta Platforms Terms of Service without specifying where this was noted. The news outlet, however, found a mention of human review in Meta’s U.K. AI terms of service. A version of that policy that applies to the U.S. states: “In some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human).”

The complaint mainly points to how the glasses were marketed, showing examples of ads that touted their privacy benefits, described their privacy settings, and promised an “added layer of security.” “You’re in control of your data and content,” one ad read, explaining that smart glasses owners got to choose which content was shared with others.

The rise of smart glasses and other “luxury surveillance” tech, like always-listening AI pendants, has prompted a broad backlash. One developer published an app capable of detecting when smart glasses are nearby.

Meta did not have a comment on the litigation itself, as it was just filed. However, spokesperson Christopher Sgro offered the following statement on the overall issue: “Ray-Ban Meta glasses help you use AI, hands-free, to answer questions about the world around you. Unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device. When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do. We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed.”

Updated after publication with Meta’s statement.

Latest AI News

Google adds Gemini-powered dictation to Gboard, which could be bad news for dictation startups

Google announced Rambler, a new AI-powered voice dictation feature for Gboard, its widely used Android keyboard app, at its Android Show: I/O Edition 2026 event on Tuesday morning. The launch puts Google in direct competition with the likes of Wispr Flow and Typeless, a growing crop of AI-powered dictation apps that have built audiences on desktop and mobile in recent years, most of which have yet to establish a strong foothold on Android.

Like other dictation apps, Rambler removes filler words like “ums” and “ahs.” It also understands midsentence corrections like, “I am going to meet you on Wednesday at our usual coffee shop at 3 p.m. … um, 2 p.m.” Google said it is using Gemini-based multilingual models that also support code switching, meaning users can move between languages midsentence, say, from English to Hindi, and Rambler will follow along without losing context. It’s a capability that reflects how many multilingual speakers actually communicate, and one that most Western dictation apps have been slow to support.

The company said that Gboard will clearly indicate to users when the Rambler feature is in use. It doesn’t store any voice recordings and uses the audio only to transcribe what users speak. Google said during the briefing that, because Rambler can be used across all apps, it is like “reinventing the keyboard.”

On privacy, Ben Greenwood, director of Android Core Experiences, said Google uses a combination of on-device and cloud-based processing and has “invested significantly over many years” to ensure features are “safe and private,” a calculated message to users weighing Rambler against third-party dictation apps that may handle data differently.

In the past few years, a host of dictation apps, including Wispr Flow, Willow, Superwhisper, Monologue, Handy, and Typeless, has cropped up. But until now, most of that activity has been on desktop and iOS, leaving Android relatively underserved.
Google itself released AI Edge Eloquent, an offline-first dictation app powered by its on-device Gemma AI models, on iOS last month. Rambler is Google’s clearest move yet to close that gap. These new features will be limited to Samsung Galaxy and Google Pixel phones for an initial summer rollout but will eventually reach other Android devices.

The core advantage here is distribution: Gboard is the default keyboard for the vast majority of Android users worldwide, meaning Rambler arrives preinstalled for hundreds of millions of people. When a platform player enters a market at the operating-system level, stand-alone apps need a compelling reason, whether better accuracy, deeper features, or stronger privacy guarantees, to justify a separate download. For dictation startups, the question is no longer whether they can build something good; it’s whether they can build something good enough that users actively go looking for it.
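To make the filler-word cleanup concrete, here is a deliberately simple sketch of the idea. This is not Google’s implementation: Rambler uses Gemini-based models, whereas this toy uses a hand-picked word list (the `strip_fillers` name and the `FILLERS` set are invented for illustration), and it makes no attempt at the harder midsentence-correction behavior described above.

```python
import re

# Toy filler list; a real system infers fillers from context with a language model.
FILLERS = {"um", "uh", "ah", "umm", "er", "hmm"}

def strip_fillers(transcript: str) -> str:
    """Drop standalone filler words from a raw transcript string."""
    kept = []
    for token in transcript.split():
        # Compare with punctuation removed, case-insensitively,
        # so "Um," and "uh..." still match the filler list.
        core = re.sub(r"[^\w]", "", token).lower()
        if core in FILLERS:
            continue
        kept.append(token)
    return " ".join(kept)

print(strip_fillers("I am, um, going to meet you at, uh, 2 p.m."))
# Leftover punctuation around removed fillers is one reason word lists
# fall short and model-based cleanup is used instead.
```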

Report: Google and SpaceX in talks to put data centers into orbit

Google and SpaceX are in talks to launch orbital data centers in space, reports The Wall Street Journal, citing sources familiar with the matter. The potential deal comes as SpaceX gears up for its $1.75 trillion IPO later this year, selling investors on the idea that data centers in space will be the cheapest place to put AI compute within the next few years. It also follows SpaceX’s deal with Anthropic last week to use computing resources from xAI’s data center in Memphis, Tennessee, with the potential to work together on orbital ones in the future. (SpaceX acquired xAI in February.)

Google is reportedly talking to other rocket-launch companies as well. The company also plans to launch prototype satellites by 2027 as part of an initiative called Project Suncatcher, announced late last year.

Elon Musk has created hype for orbital data centers, claiming they are cheaper to operate. Advocates also point out they are free from the local backlash that U.S. ground-based buildouts attract. However, as TechCrunch recently reported, today’s terrestrial data centers are much cheaper than those in orbit once satellite construction and launch costs are factored in.

Google invested $900 million in SpaceX in 2015, according to regulatory filings. TechCrunch has reached out to Google and SpaceX for comment.

Anthropic warns investors against secondary platforms offering access to its shares

As investors scramble to get their hands on shares of AI companies of all stripes, Anthropic this week updated its website to warn investors that a slew of private and secondary investment platforms that offer access to shares in the AI company are not, in fact, allowed to do so. The company named Open Doors Partners, Unicorns Exchange, Pachamama Capital, Lionheart Ventures, Hiive (new offerings), Forge Global (new offerings), Sydecar, and Upmarket as companies that are not authorized to provide access to buy or sell its shares. “Any sale or transfer of Anthropic stock, or any interest in Anthropic stock, offered by these firms is void and will not be recognized on our books and records,” the company’s support page reads.

Reached for comment, Forge Global claimed to have been included erroneously. “We are working with Anthropic to remove Forge’s name from this alert,” the platform told TechCrunch. “Forge does not facilitate transactions in any private company’s shares without the explicit approval of the company.” Sydecar, meanwhile, said it only acts in an administrative capacity. “The company does not buy or sell securities or solicit transactions in any private companies. Further, Sydecar requires sponsors to attest that they have reviewed relevant documents relating to the transferability of shares and that they have the required approvals and consents from the company,” the company said in an emailed statement.

Anthropic’s update comes alongside a rise in the number of investment platforms offering exposure to AI companies’ shares (and thus their growth) via secondary markets where existing shareholders sell their shares, “tokenized” securities, special purpose vehicles (SPVs), or secondary market holdings. Anthropic, rumored to be raising fresh funding at a $900 billion valuation, has been especially in demand, with some secondary market brokers telling TechCrunch last month that it’s one of the “hardest” stocks to source.
"Anthropic is right to take seriously concerns around unauthorized share sales and investment scams," Hiive spokesperson Dakota Betts said in an emailed statement. "We share those concerns. They are a major reason why Hiive invested heavily in legal, compliance, and diligence infrastructure from the beginning, and all share transfers facilitated by Hiive are approved by the issuer."

Over the past year, some crypto companies, like crypto exchange OKX, have spun up investment products selling exposure to AI companies. These often take the form of pre-IPO perpetual futures contracts, which are derivative instruments that track the value of private companies on secondary markets but don't offer ownership of actual shares. SPVs are different from those derivative instruments, offering investors a chance to buy shares of an entity that holds at least some stake in Anthropic. That equity could come from an official investor, or have been acquired when an investor was forced to liquidate its holdings, as happened during the bankruptcy of FTX. In other cases, the equity claim may be entirely fraudulent.

Anthropic says both its preferred and common stock are subject to transfer restrictions, which means any share sale or transfer not approved by its board of directors will be considered invalid. According to Anthropic, any third-party platform (specifically SPVs and retail investment firms) that claims to sell its shares directly or via forward contracts is unauthorized to do so. "We do not permit special purpose vehicles (SPVs) to acquire Anthropic stock and any transfer of shares to an SPV are void under our transfer restrictions," the company's blog reads. "Offers to invest in Anthropic’s past or future financing rounds through an SPV are prohibited."

Note: This story was updated to include comments from Hiive and Sydecar.

Musk mulled handing OpenAI to his children, Altman testifies

OpenAI CEO Sam Altman finally took the stand this morning to defend himself against his former cofounder Elon Musk’s lawsuit challenging OpenAI’s corporate structure. Altman was asked out of the gate what he thought of Musk’s allegation that OpenAI’s other founders “stole a charity” when they launched a for-profit subsidiary to market products based on the company’s AI models. “It feels difficult to even wrap my head around that framing,” Altman said after several seconds of silence. “We created one of the largest charities in the world. This foundation is doing incredible work and will do much more.”

Musk’s attorneys have been at pains to point out that OpenAI’s foundation, which now has assets on the order of $200 billion, didn’t have full-time employees until earlier this year. OpenAI board chair Bret Taylor testified today that this was simply because of the challenge of converting OpenAI equity to cash, which was accomplished with the organization’s most recent restructuring in 2025.

The central question posed by Musk’s lawyers is whether the company’s commitment to safety was left behind as its commercial power grew. But Altman said that in 2017, during a pivotal period when the founders wrestled with how to obtain the funding to power their AI models, Musk’s “specific plans on safety made me worry.” He described a “particularly hair-raising moment” in the debate when Musk was asked what would happen if he died while controlling a hypothetical OpenAI for-profit.
In Altman’s telling, Musk said “maybe OpenAI should pass to my children.” Altman said that Musk’s focus on controlling the initial for-profit gave him pause because OpenAI was dedicated to keeping advanced AI out of the hands of a single person, and Altman, with his experience running the prominent startup accelerator Y Combinator, knew that “founders who had control usually did not give it up.”

Altman also testified that Musk's management tactics, which might have worked for engineering and manufacturing, didn't work at OpenAI. "I don't think Mr. Musk understood how to run a good research lab," Altman said. "He had demotivated some of our most key researchers. He had at one point required Greg and Ilya to make a list of the researchers and list out their accomplishments and stack rank them and take a chainsaw through a bunch. That did huge damage for a long time to the culture of the organization." Indeed, Altman cast himself as defending the "sweat equity" of fellow cofounders Greg Brockman and Ilya Sutskever, the two people effectively running OpenAI at the time while Musk and Altman had other jobs.

After that clash went unresolved, Musk ultimately left OpenAI's board and started competing AI initiatives at Tesla and his own AI startup, xAI. But Altman kept in touch with the mercurial businessman, updating him on OpenAI's work and seeking his funding and advice. OpenAI's lawyers noted that Musk had been kept up to date and asked to participate in the investments that his lawsuits now claim corrupted the nonprofit. During one discussion of a Microsoft investment into OpenAI in 2018, Altman said that "unlike a lot of meetings with Mr. Musk, this was a good vibes meeting," where Musk spent a "long conversation showing us memes on his phone."
