Ring’s Jamie Siminoff has been trying to calm privacy fears since the Super Bowl, but his answers may not help

12:53 PM IST · March 9, 2026

When Ring founder and CEO Jamie Siminoff decided to use the company’s first-ever Super Bowl commercial to introduce Search Party — an AI-powered feature that uses Ring camera footage to help find lost dogs — he expected Americans to love it. Instead, the TV spot set off a firestorm. Practically since the moment it aired in February, Siminoff has been making the rounds on CNN, NBC, and in the pages of the New York Times, explaining that his critics fundamentally misunderstand what Ring is building. He sat down with TechCrunch a few days ago to make his case again, and while he was candid and plainly eager to reframe the narrative, some of his answers may well raise fresh questions among those already uneasy about the growth of home surveillance.

The feature at the center of the controversy is fairly mundane on the surface, and something we covered in a straightforward way when it was first released. A dog goes missing; Ring alerts nearby camera owners to ask whether the animal shows up in their footage; users can respond or ignore the request entirely and stay invisible to everyone involved. Siminoff leaned heavily on this throughout our conversation — the idea that doing nothing counts as opting out, that no one is conscripted into anything. “It is no different than finding a dog in your backyard, looking at the collar and deciding whether or not to call the number,” he said.

What he believes actually prompted the backlash was the visual in the Super Bowl spot: a map showing blue circles pulsing outward from house after house as cameras switched on across a neighborhood grid. “I would change that,” he said. “It wasn’t our job to try to poke anyone to try and get some response.”

But Ring picked a rocky moment to make its case. Nancy Guthrie — the 84-year-old mother of Today Show anchor Savannah Guthrie — had vanished from her Tucson home in late January.
Footage from a Google Nest camera at the property, capturing a masked figure trying to smother the lens with foliage, had swept across the internet and plopped home surveillance cameras squarely into the center of a national argument about safety, privacy, and who gets to watch whom.

Siminoff leaned into the Guthrie case rather than away from it. In a separate interview with Fortune, he contended it was an argument for putting more cameras on more houses. “I do believe if they had more [footage from Guthrie’s home], if there was more cameras on the house, I think we might have solved” the case, he said. Ring’s own network, he noted, had turned up footage of a suspicious vehicle two and a half miles from the Guthrie property. Whether you find that heartening or disturbing depends on your point of view. Siminoff clearly believes video is an unqualified social good, but some might look at the same statements and see a company founder using a kidnapping to get more of his products into consumers’ hands.

Either way, the discomfort with Search Party isn’t simply about those blue concentric circles in the ad. The feature sits alongside two others — Fire Watch, which crowdsources neighborhood fire mapping, and Community Requests, which allows local law enforcement to ask Ring users in a given area whether they have relevant footage from an incident. Ring relaunched Community Requests in September through a partnership with Axon, the company that makes police body cameras and Tasers and operates the evidence management platform Evidence.com. (Axon and Ring announced the partnership in April of last year, shortly after Siminoff rejoined the company after stepping away in 2023.) A previous version of that partnership involved Flock Safety, which operates AI-powered license plate readers. Ring ended that arrangement several days after the Super Bowl ad aired, with Siminoff citing the “workload” it would create when he talked with us.
Asked directly, Siminoff declined to address whether Flock’s reported data-sharing with U.S. Customs and Border Protection also played a role. (Dozens of towns across the U.S. have cut ties with Flock over exactly those concerns.) Still, the timing of Ring’s decision was notable. While Siminoff believes some customers are misreading his products, he knows Ring can’t afford to dismiss their anxieties, particularly right now.

None of this is happening in isolation. Just days ago, NPR reported on its own investigation, compiled from dozens of accounts from people who found themselves caught in the Department of Homeland Security’s expanding surveillance apparatus, including U.S. citizens with no immigration status issues at all. One woman, a constitutional observer trailing an ICE vehicle in Minneapolis in late January, described a masked federal agent leaning out the window, photographing her, and then calling out her name and home address. “Their message was not subtle,” she told NPR. “They were, in effect, saying, we see you. We can get to you whenever we want to.”

Siminoff seems to understand that his answers about Ring’s own data practices take on added weight as a result. When we talked, he pointed to end-to-end encryption as Ring’s strongest privacy protection and confirmed that when it’s enabled, not even Ring employees can view the footage, since decryption requires a passphrase tied to the user’s own device. He described this as an industry first for residential camera companies.

The facial recognition question is where things get more tangled. Ring rolled out a feature called Familiar Faces in December, two months before the Super Bowl ad aired.
It allows users to catalog up to 50 frequent visitors — family members, delivery drivers, neighbors — so that instead of a generic motion alert, Ring owners get a notification that reads “Mom at Front Door.” Siminoff described the feature enthusiastically during our conversation, saying that he gets alerts, for example, when his teenage son pulls into the driveway. He compared it to the facial recognition now routine at TSA checkpoints — the implication being that the public has already made its peace with this kind of thing. When asked about consent from people who appear on a Ring camera but never agreed to be catalogued, he said simply that Ring adheres to applicable local and state laws.

He was also careful when asked whether Amazon draws on Ring’s facial recognition data. “Amazon does not access that data,” he said, then added: “In the future, if we could see a feature where the customer wanted to opt in to do something with that, maybe you could see that happening.”

He further volunteered that end-to-end encryption is an opt-in feature: users have to manually enable it in the Ring app’s Control Center. And according to Ring’s own support documentation, the tradeoff for enabling it is steep. The full list of features disabled by end-to-end encryption includes event timelines, rich notifications, quick replies, video access on Ring.com, shared user access, AI video search, 24/7 video recording, pre-roll, snapshot capture, bird’s eye view, person detection, AI video descriptions, video preview alerts, virtual security guard, and Familiar Faces, which requires processing in the cloud. In other words, the two things Ring is actively promoting as flagship capabilities — AI-powered recognition of who’s at your door, and true privacy from Ring itself — are mutually exclusive. You can have one or the other, but not both.
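That tradeoff follows directly from how end-to-end encryption works: if the decryption key is derived from a passphrase that never leaves the user's device, the cloud only ever holds ciphertext, so a cloud-side feature like Familiar Faces has nothing to analyze. Here is a minimal sketch of that model, assuming a passphrase-derived key and a toy HMAC-counter keystream standing in for a real cipher such as AES-GCM; Ring's actual implementation is not public, and every name and parameter below is illustrative.

```python
import hashlib
import hmac

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # The key is derived on-device from the user's passphrase; the server never sees it.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream built from HMAC in counter mode; a real system would use AES-GCM.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # XOR with the keystream; applying the same operation again decrypts.
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream is its own inverse

salt, nonce = b"per-user-salt", b"per-clip-nonce"
key = derive_key("correct horse battery staple", salt)
clip = b"raw doorbell video bytes"
ciphertext = encrypt(key, nonce, clip)  # this is all the cloud ever stores
```

The point of the sketch is the recovery condition: only the holder of the exact passphrase can turn the stored bytes back into video, which is also why every server-side AI feature goes dark the moment encryption is enabled.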
As for whether Ring users should worry about their footage ending up in front of a federal immigration agency, Siminoff said no — community requests run only through local law enforcement channels — and pointed to Ring’s transparency report on government subpoenas. He didn’t take up what happens when that boundary proves porous.

Unsurprisingly, Siminoff is building toward something bigger than doorbell cameras. Ring has more than 100 million cameras in the field and is now quietly dipping a toe into enterprise security with a new “elite” camera line and a security trailer product. He said that small businesses have been pulling Ring into their spaces already, whether Ring markets to them or not. He’s also open to outdoor drones — “if we could get the cost in a place where it made sense” — and on license plate detection, which Ring’s now-former partner Flock Safety has made its core business, he declined to say never. (Ring is “definitely not” working on it today, he said when asked whether it’s something Ring might explore. After a beat, he added that “it’s very hard to say we’re never going to do something in the future.”)

Siminoff frames all of it through a belief he says he has held from the start of the company: that each home is a node controlled by its owner, and residents should be able to choose whether to participate in neighborhood-level cooperation when something happens. Alas, in a moment when an NPR investigation has documented federal agents photographing and identifying civilians who were doing nothing more than observing arrests, and when a kidnapping case has become a national talking point about both cameras and privacy, the question isn’t just about whether Ring’s opt-in framework is designed well.
It’s whether what Ring is building — including a network of tens of millions of cameras, AI-powered search, and facial recognition — can remain as benign as Siminoff may well intend it, regardless of who is in power, what partnerships get struck, and how the data flows.

Latest AI News

The Android Show I/O Edition: Google Showcases Gemini Intelligence on Android With New AI-Backed Widget Creation Tool

Google is bringing Gemini Intelligence, its new suite of AI-powered tools for its operating system, to Android, the Mountain View-based tech giant announced during the Android Show I/O Edition event. The company hosted the event as part of Google I/O, which is scheduled to take place from May 19 to May 20. Slated to roll out to select Android devices soon, Gemini Intelligence will expand Google's multistep task automation feature beyond the Samsung Galaxy S26 series and Pixel 10 lineup. The company also announced that it is integrating Gemini into Chrome on Android, similar to the browser's desktop version.


Threads tests a Meta AI integration that works similarly to Grok

Threads is testing a Meta AI integration that works similarly to X’s Grok. Users with a public account will be able to mention Meta AI in a post or a reply to get more context. The feature is currently in beta testing in Malaysia, Saudi Arabia, Mexico, Argentina, and Singapore. Meta told TechCrunch in an email that the feature is designed to help people get real-time context about trends and breaking stories, as well as receive recommendations, all within conversations. Now, users can mention Meta AI to ask questions like, “why are people talking about the World Cup this month?,” “whose Met Gala looks are trending right now?” or “how are the Knicks doing in the playoffs?” Meta AI will then process the invocation and respond as a public reply authored by the @meta.ai account. Meta AI will respond in the language used in the post it was mentioned in. By integrating Meta AI into its platform, Threads is positioning itself as not just a destination for chatting about news and trends, but also a place where you can get information and recommendations without having to leave the app. The idea is similar to Grok’s role on X, which is filled with posts of users asking the AI chatbot questions like “is this real?” or “explain this.” Of course, giving an AI chatbot this level of visibility carries risks, as seen on X when Grok generated posts praising Hitler. Still, Meta AI notably has stronger safeguards in place than Grok, though it remains to be seen whether it will be prone to similar issues. Meta notes that if you want to see fewer Meta AI replies in your feed, you can mute @meta.ai, use the “Not interested” option on any Meta AI post, or hide a Meta AI reply that appears directly on your post. The company says it plans to learn from early feedback and will continue improving the experience before expanding it to more people.


Google’s ‘Create My Widget’ feature will let you vibe code your own widgets

Google on Tuesday unveiled a new “Create My Widget” feature for Android that allows users to vibe code their own custom widgets. The feature will first launch on the latest Samsung Galaxy and Google Pixel phones this summer. To create a widget, users will be able to describe what they want using natural language. For example, you could ask the feature to “suggest three high-protein meal prep recipes every week” in order to get a custom dashboard that you can add and resize on your home screen. Or, if you’re a cyclist who only cares about wind speed and rain, you can create a weather widget that just surfaces those exact stats on your home screen. Gemini can also pull information from the web and connect with Google apps like Gmail and Calendar to build a single, personalized dashboard. For instance, if you’re planning a family reunion in Berlin, it can gather your flight and hotel details, surface restaurant reservations, and even add a countdown. The feature signals Google’s latest push to bring generative AI deeper into the Android experience, as tech companies race to make customization tools more accessible to everyday users. “This is like you asking your personal assistant a question, and having them just bring you the answer on repeat,” said Ben Greenwood, Director, PM, Android Core Experiences, during a briefing with reporters. “So think of it as asking Gemini things about the world, things about its knowledge of what’s going on and events, as well as things about your personal data. Those are sort of the two areas that unlock an enormous number of use cases that we’re super excited about.” The company announced the new feature alongside the unveiling of Gemini Intelligence, which will bring additional features like advanced autofill, an AI-powered voice dictation feature for Gboard, and more.


The AI legal services industry is heating up. Anthropic is getting in on the action.

Anthropic announced Tuesday that it is launching a host of new chatbot features designed to provide automated assistance to law firms. The new features expand Claude for Legal — the law-focused offering that launched earlier this year — giving users a new set of legal plugins and MCP connectors designed for specific areas of law. The new tools come amid hot competition in the legal AI space. In March, the AI law startup Harvey, which uses agentic AI to automate legal workflows, raised $200 million at a valuation of $11 billion. Last month, a rival startup, Legora, raised a $600 million Series D and launched a high-profile ad campaign featuring Jude Law. Legora offers similar services to Harvey — automated solutions built to simplify the often byzantine legal processes that have traditionally involved entire teams of humans. Anthropic’s new tools are designed to help law firms automate specific clerical functions — things like document search and review, case law research, deposition prep, document drafting, and other related areas. The plugins — each a bundle of functions and automated tools — are designed to work across legal fields like commercial, privacy, corporate, employment, product, and AI governance, Anthropic says. Anthropic is also offering a number of Model Context Protocol connectors. MCPs connect specific data sources and third-party systems to AI models, allowing the models to interact with them directly. In this case, the new MCP connectors integrate Claude into a variety of software applications that are already routinely used by law firms — document management applications like DocuSign and file search platforms like Box. Legal research services like Thomson Reuters (which operates Westlaw) can also be connected. The new connectors and plugins are being made available to all paying Claude customers, the company said. The new features also build upon other plugins designed for the legal industry that the company launched in February.
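For readers unfamiliar with the protocol: an MCP connector is essentially a small server that advertises tools to a model and then executes the calls the model makes, which is what lets Claude query a firm's systems directly. The following is a minimal sketch of that discover-then-call shape, with a hypothetical keyword-search tool over an in-memory document store; the tool name, the documents, and the simplified dispatch are all illustrative, not any vendor's real API.

```python
import json

# Hypothetical in-memory stand-in for a firm's document store.
DOCUMENTS = {
    "engagement-letter.txt": "Standard engagement letter for commercial matters.",
    "nda-template.txt": "Mutual non-disclosure agreement template.",
}

# The connector advertises its tools, each with a name and input schema.
TOOLS = [{
    "name": "search_documents",
    "description": "Keyword search over the firm's document store.",
    "inputSchema": {"type": "object", "properties": {"query": {"type": "string"}}},
}]

def handle(request: dict) -> dict:
    # Dispatch the two core MCP interactions: listing tools and calling one.
    if request["method"] == "tools/list":
        return {"tools": TOOLS}
    if request["method"] == "tools/call":
        query = request["params"]["arguments"]["query"].lower()
        hits = [name for name, text in DOCUMENTS.items() if query in text.lower()]
        return {"content": [{"type": "text", "text": json.dumps(hits)}]}
    return {"error": "unknown method"}

# A model first discovers the available tools, then invokes one.
listing = handle({"method": "tools/list"})
result = handle({"method": "tools/call",
                 "params": {"name": "search_documents",
                            "arguments": {"query": "non-disclosure"}}})
```

A production connector would speak JSON-RPC over stdio or HTTP and enforce authentication; the sketch keeps only the discover-then-call pattern that lets a model reach into systems like a document manager or a file search platform.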
“The legal sector is facing mounting pressure to adopt AI, and the firms and in-house teams that move are pulling ahead fast,” a spokesperson for the company said. “Claude is making a deeper push into knowledge work, with the legal sector emerging as one of its most significant and fastest-growing industries.” As AI companies have sought to court law firms, AI-related failures have caused real problems in court. Dozens of lawyers have been caught using AI to generate error-ridden legal documents, as has at least one major law firm. Last year, California issued a first-of-its-kind fine against an attorney who had used ChatGPT to draft an appeal riddled with fake quotes. Federal judges have also been caught using it to draft rulings, a trend that drew the scrutiny of Congressional leaders last year. Meanwhile, AI-generated lawsuits are said to be clogging the arteries of justice — overwhelming courts with stacks of bizarrely argued legal “slop.”
