Latest AI News

What to Expect at AWS Summit Bengaluru 2026?

The summit will feature 150+ sessions, hands-on workshops, and networking opportunities for business leaders and developers.

17 days ago


SpaceX is working with Cursor and has an option to buy the startup for $60 billion

SpaceX said it has struck a deal with Cursor to develop a next-generation “coding and knowledge work AI,” which includes a surprising provision — an option to buy the popular software development platform for $60 billion later this year.

Partnering with and potentially purchasing a leader in the hottest AI product category can only be seen in the context of SpaceX’s much-anticipated public offering. Investors seeking more upside in the IPO might see the Cursor engagement as another way to extract value from Elon Musk’s increasingly sprawling tech conglomerate.

The deal won’t shock those who follow the industry closely. Last week, it was reported that xAI would begin renting computing power from its data centers to Cursor, with the coding startup using tens of thousands of xAI chips to train its latest AI model. And last month, two of Cursor’s most senior engineering leaders, Andrew Milich and Jason Ginsberg, left the company to join xAI, where both report directly to Musk.

SpaceX described the partnership as a project combining Cursor’s “product and distribution to expert software engineers” with SpaceX’s Colossus supercomputer, which the company claims has the equivalent compute power of a million Nvidia H100 chips. SpaceX also said that at some undisclosed point later this year, it will either pay Cursor $10 billion for its work or acquire the company for $60 billion.

Last week, TechCrunch reported that Cursor was eyeing a $50 billion valuation in an upcoming private fundraising round. That figure itself reflects an astonishing series of leaps: Cursor was valued at just $2.5 billion in January of last year, climbed to $9 billion by last May, and was assigned a $29.3 billion post-money valuation when it closed on $2.3 billion in Series D funding in November. Either figure would represent a significant expense for SpaceX, which is widely believed to be losing money following its acquisitions of xAI and the social media network X and is planning extensive capital investment. The brief statement did not say if either deal could be paid in SpaceX stock.

In the meantime, the move could shore up weaknesses at each company, but it also reveals them. Neither Cursor nor xAI has proprietary models that can match the leading offerings from Anthropic and OpenAI — the same companies now competing directly with Cursor for the developer market. Cursor still uses and sells access to Claude and GPT models even as both firms roll out their own coding tools, an awkward arrangement that this new SpaceX partnership may be designed to eventually escape.

17 days ago


Unauthorized group has gained access to Anthropic’s exclusive cyber tool Mythos, report claims

A group of unauthorized users has reportedly gained access to Mythos, the cybersecurity tool recently announced by Anthropic. Much has been made of Mythos and its purported power — an AI product designed for enterprise security that, in the wrong hands, could become a potent hacking tool, according to the company. Now, Bloomberg has reported that a “private online forum,” the members of which have not been publicly identified, has managed to gain access to the tool through a third-party vendor.

“We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,” an Anthropic spokesperson told TechCrunch. The company said that, so far, it has found no evidence that the supposedly unauthorized activity has impacted Anthropic’s systems in any way.

The unauthorized group tried a number of different strategies to gain access to the model, including using the “access” enjoyed by the person who was interviewed by Bloomberg. That person is currently employed at a third-party contractor that works for Anthropic, the outlet reported. Members of the group are part of a Discord channel that seeks out information about unreleased AI models, according to Bloomberg. The group has been using Mythos regularly since gaining access to it, and provided evidence to Bloomberg in the form of screenshots and a live demonstration of the software.

Bloomberg reports that the group, which supposedly gained access to the tool on the very same day it was publicly announced, “made an educated guess about the model’s online location based on knowledge about the format Anthropic has used for other models.” The group in question is “interested in playing around with new models, not wreaking havoc with them,” the source told the outlet.

Mythos was released to a select number of vendors, including big names like Apple, as part of an initiative called Project Glasswing. The limited release was designed to prevent the tool’s use by bad actors, who Anthropic said could weaponize it against corporate security instead of bolstering it. If true, unauthorized use of Mythos could spell trouble for Anthropic, which restricted the release precisely to address those enterprise security concerns.

17 days ago


Meta will record employees’ keystrokes and use the data to train its AI models

Meta has found a new source of training data for its AI models: its own employees. The company plans to use data culled from the mouse movements and keystrokes of its own staff in its push to build more capable and efficient artificial intelligence.

The story, which was first reported by Reuters, shows the lengths to which tech companies are going to find new sources of training data — the lifeblood of AI models that helps the programs learn how to more effectively carry out tasks and respond to user queries.

When reached for comment by TechCrunch, a Meta spokesperson provided the following statement: “If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus. To help, we’re launching an internal tool that will capture these kinds of inputs on certain applications to help us train our models. There are safeguards in place to protect sensitive content, and the data is not used for any other purpose.”

This trend would seem to reveal the troublesome privacy implications of the AI industry, as yesterday’s internal corporate activity increasingly becomes fodder for a new kind of training-data supply chain. Last week it was reported that old startups were being scavenged for their corporate communications (from Slack archives, Jira tickets, and other internal messaging platforms), which could be converted into AI fuel.
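For a sense of what this kind of capture involves, below is a minimal sketch that records keystrokes and mouse events as timestamped records of the sort that could later become training examples. Meta has not described its internal tool; this sketch uses the open-source pynput library (which requires a desktop session to run), and every name in it is illustrative.

```python
# A minimal sketch of input-event capture of the kind described in the
# article: keystrokes and mouse movements recorded as timestamped events
# that could later be turned into agent training examples. Meta has not
# described its internal tool; pynput is used here purely for illustration.
import time
from pynput import keyboard, mouse

events = []  # each entry: (timestamp, event_type, payload)

def on_press(key):
    events.append((time.time(), "key_press", str(key)))

def on_move(x, y):
    events.append((time.time(), "mouse_move", (x, y)))

def on_click(x, y, button, pressed):
    events.append((time.time(), "mouse_click", (x, y, str(button), pressed)))

# Listeners run on background threads; capture briefly, then stop.
key_listener = keyboard.Listener(on_press=on_press)
mouse_listener = mouse.Listener(on_move=on_move, on_click=on_click)
key_listener.start()
mouse_listener.start()
time.sleep(5)
key_listener.stop()
mouse_listener.stop()

print(f"captured {len(events)} input events")
```

A production pipeline would presumably attach application context to each event and filter sensitive content, as Meta's statement suggests, but those safeguards are not sketched here.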

17 days ago


Clarifai deletes 3 million photos that OkCupid provided to train facial recognition AI, report says

The AI platform Clarifai deleted 3 million photos that it says it got from OkCupid to train its facial recognition AI, according to Reuters. The company also deleted any models that were trained using that data.

Per the FTC’s investigation, Clarifai asked OkCupid — whose executives had invested in the company — to share data in 2014. The dating app then provided these user-uploaded photos, reports say, along with other demographic and location data. Under OkCupid’s own privacy policies, this sharing should have been prohibited. “We’re collecting data now and just realized that OKCupid must have a HUGE amount of awesome data for this,” Clarifai founder and CEO Matthew Zeiler wrote in an email to OkCupid co-founder Maxwell Krohn, according to court documents reviewed by Reuters.

Though this incident appears to have taken place 12 years ago, the FTC did not open an investigation until 2019, when a New York Times article about Clarifai mentioned that the company had used images from OkCupid to build an AI tool that could estimate someone’s age, sex, and race based on their face. The FTC and OkCupid, which is owned by Match Group, settled the lawsuit last month. At the time, OkCupid and Match Group did not admit to the allegations that they deceived users by violating OkCupid’s own privacy policies, but Clarifai’s confirmation that it has deleted the data implies that the company did indeed get access to those photos. The FTC also alleged that since 2014, Match Group and OkCupid deliberately concealed this behavior and attempted to obstruct its investigation. OkCupid and Clarifai did not immediately respond to TechCrunch’s requests for comment.

While the FTC is not able to fine companies for this type of first-time offense, the agency declared that OkCupid and Match are “permanently prohibited from misrepresenting or assisting others in misrepresenting” the nature of their data collection and sharing — in other words, the companies are now formally barred from conduct the FTC already considered impermissible.

17 days ago


Sam Altman throws shade at Anthropic’s cyber model, Mythos: ‘fear-based marketing’

OpenAI and Anthropic continue to take swipes at each other. This week, during a podcast appearance, OpenAI CEO Sam Altman called out his competitor’s new cybersecurity model, arguing that the company was using fear to make its product sound more impressive than it actually is.

Anthropic announced Mythos earlier this month, releasing the model to a small cohort of enterprise customers. The company has claimed that Mythos is too powerful to be released to the public out of concern that cybercriminals will weaponize it. Critics have said this rhetoric is overblown.

During an appearance on the podcast Core Memory, Altman implied that Anthropic’s “fear-based marketing” was a way to keep AI in the hands of a small and exclusive elite. “There are people in the world who, for a long time, have wanted to keep AI in the hands of a smaller group of people,” he said. “You can justify that in a lot of different ways.” He added: “It is clearly incredible marketing to say, ‘We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million.’”

Fear-based marketing was not invented by Anthropic. Arguably, much of the AI industry has leveraged scare tactics and hyperbole to make its tools sound powerful. Ongoing rhetoric about how AI may lead to the end of the world hasn’t just come from Luddite doomer activists; it has also come from the people selling this technology to the public — Altman included.

17 days ago


ChatGPT’s new Images 2.0 model is surprisingly good at generating text

It used to be easy enough to distinguish between human-made and AI-generated imagery — just two years ago, you couldn’t use image models to create a menu for a Mexican restaurant without inventing new culinary delights like “enchuita,” “churiros,” “burrto,” and “margartas.” Now, when I ask the brand-new ChatGPT Images 2.0 model for a menu of Mexican food, it creates something that could immediately be used in a restaurant without customers noticing that something’s off. (However, ceviche priced at $13.50 might make me question the quality of the fish.) For comparison, here’s the result I got from DALL-E 3 two years ago, at a time when ChatGPT did not generate images.

AI image generators have historically struggled to spell because they generally used diffusion models, which work by reconstructing images from noise. “The diffusion models […] are reconstructing a given input,” Asmelash Teka Hadgu, founder and CEO of Lesan AI, told TechCrunch in 2024. “We can assume writings on an image are a very, very tiny part, so the image generator learns the patterns that cover more of these pixels.” Researchers have since explored other mechanisms for image generation, like autoregressive models, which make predictions about what an image should look like and function more like an LLM. Unfortunately, OpenAI declined to answer a question in a press briefing this week about what kind of model is powering ChatGPT Images 2.0.

The company did, however, explain that the new model has “thinking capabilities,” which give it the ability to search the web, make multiple images from one prompt, and double-check its creations — this allows Images 2.0 to create marketing assets in various sizes, as well as multi-paneled comic strips. OpenAI also says that Images 2.0 has a stronger understanding of non-Latin text rendering in languages like Japanese, Korean, Hindi, and Bengali. The model’s knowledge cutoff is December 2025, which could impact how accurately it can generate certain prompts involving recent news.

“Images 2.0 brings an unprecedented level of specificity and fidelity to image creation. It can not only conceptualize more sophisticated images, but it actually brings that vision to life effectively, able to follow instructions, preserve requested details, and render the fine-grained elements that often break image models: small text, iconography, UI elements, dense compositions, and subtle stylistic constraints, all at up to 2K resolution,” OpenAI said in a press release.

These capabilities mean that image generation isn’t as rapid as typing a question to ChatGPT, but generating something complex like a multi-paneled comic still takes just a few minutes. All ChatGPT and Codex users will be able to access Images 2.0 starting Tuesday; paid users will be able to generate more advanced outputs. The company will also make the gpt-image-2 API available, with pricing dependent on the quality and resolution of outputs.
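OpenAI has not yet published details of the gpt-image-2 API, but below is a minimal sketch of what calling it might look like, assuming the model is exposed through the OpenAI Python SDK's existing Images endpoint. The model name comes from the article; the parameter shape and response handling are assumptions, so check the official docs before relying on any of this.

```python
# A minimal sketch of calling the new model via OpenAI's existing Images
# endpoint. The model name "gpt-image-2" comes from the article; the
# parameter shape and response handling mirror the current Python SDK and
# are assumptions, not confirmed details.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="gpt-image-2",  # per the article; availability and name assumed
    prompt=(
        "A one-page menu for a Mexican restaurant with tacos, enchiladas, "
        "churros, and margaritas, including prices, in a rustic print style"
    ),
    size="1024x1536",  # portrait layout; supported sizes are an assumption
)

# Depending on the API version, images come back as a URL or base64 data.
image = result.data[0]
print(image.url if getattr(image, "url", None) else "received base64 image payload")
```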

17 days ago


AI research lab NeoCognition lands $40M seed to build agents that learn like humans

Investors are aggressively courting AI researchers to build startups that can make AI more reliable and efficient. Yu Su, an Ohio State professor leading an AI agent lab, said he initially resisted the pressure from VCs to commercialize his work. He finally took the leap last year and spun his work out into a startup when he saw that foundation model advances could make agents truly personalized.

NeoCognition, a startup Su describes as a research lab developing self-learning AI agents, has just emerged from stealth with $40 million in seed funding. The round was co-led by Cambium Capital and Walden Catalyst Ventures, with participation from Vista Equity Partners and angels including Intel CEO Lip-Bu Tan and Databricks co-founder Ion Stoica.

“Today’s agents are generalists,” Su told TechCrunch. “Every time you ask them to do a task, you take a leap of faith.” According to Su, the issue lies in a lack of consistency. Current agents, whether from Claude Code, OpenClaw, or Perplexity’s computer tools, successfully complete tasks as intended only about 50% of the time, he said. Since agents are still so unreliable, they are not ready to be trusted as independent workers, Su told TechCrunch.

NeoCognition intends to change that by developing an agent system that can self-learn to become an expert in any domain, similar to how humans learn. Su argues that while human intelligence is broad, its real power is our ability to specialize. When we enter a new environment or profession, we can rapidly master its unique rules, relationships, and consequences. NeoCognition is building agents to mirror this approach. “For humans, our continued learning process is essentially the process of building a world model for any profession, any environment,” Su said. “We believe for agents to become experts, they need to learn autonomously to build a model of any given micro world.”

Su views this capacity for rapid specialization as the critical missing link to getting AI to work reliably on its own. While it is possible today to train agents for autonomous tasks, they must be custom-engineered for a specific vertical; NeoCognition says its difference is building generalist agents capable of self-learning and specializing in any domain.

NeoCognition intends to sell its agent systems primarily to enterprises, including established SaaS companies, which can use them to build agent-workers or to enhance existing product offerings. Su highlighted that the investment from Vista Equity Partners is especially valuable for this reason: as one of the largest private equity firms in the software space, Vista can provide NeoCognition with direct access to a vast portfolio of companies looking to modernize their products with AI. NeoCognition currently has about 15 employees, the majority of whom hold PhDs.
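To make the "world model of a micro world" idea concrete, here is a toy sketch of an agent that records the outcomes of its actions and gradually specializes in a small domain. It is purely illustrative, assumes an invented reward signal, and reflects nothing about NeoCognition's actual system.

```python
# A toy sketch of an agent building a "world model" of a micro world:
# it records observed outcomes per (state, action) and exploits what it
# has learned. Every name, state, and reward here is hypothetical.
import random
from collections import defaultdict

class MicroWorldAgent:
    def __init__(self, actions):
        self.actions = actions
        # world model: (state, action) -> list of observed rewards
        self.model = defaultdict(list)

    def act(self, state, explore=0.2):
        # Occasionally explore; otherwise pick the action with the best
        # average observed outcome in this state.
        known = {a: sum(r) / len(r) for a in self.actions
                 if (r := self.model[(state, a)])}
        if not known or random.random() < explore:
            return random.choice(self.actions)
        return max(known, key=known.get)

    def learn(self, state, action, reward):
        self.model[(state, action)].append(reward)

# Usage: the agent specializes in a tiny "domain" with one good action.
agent = MicroWorldAgent(actions=["submit", "retry", "escalate"])
for _ in range(200):
    action = agent.act("ticket_open")
    agent.learn("ticket_open", action, reward=1.0 if action == "submit" else 0.0)
print(agent.act("ticket_open", explore=0.0))  # -> "submit"
```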

17 days ago


Apple’s John Ternus will run one of the world’s most powerful companies; the job is a minefield

Over his 15-year reign as Apple’s top banana, Tim Cook has become instantly recognizable, powerful beyond imagination, and exceedingly wealthy. Most estimates peg Cook’s current net worth at roughly $3 billion, assets that he amassed largely through performance-based equity awards as Apple’s market cap has grown more than 11x on his watch to roughly $4 trillion.

But the job comes with plenty of baggage, too. Cook has had to navigate two Trump administrations and one Biden administration – each with its own posture toward Big Tech, China, and regulation. He faced down the FBI over encryption, spent years in court defending the App Store against accusations that Apple had turned the iPhone into an illegal monopoly, and made compromises to stay in the Chinese market that attracted a whole lot of unwanted attention from human rights groups. Not least, Cook watched the company’s most ambitious hardware bet — the Vision Pro headset — bomb with consumers. That’s saying nothing of AI, where the outcome is still unknown. Incoming CEO John Ternus inherits all of it. Here’s a walk through some of Cook’s biggest battles over the years.

Surely we all remember that 2016 FBI encryption fight? After a mass shooting at a holiday gathering in San Bernardino, California, the FBI demanded that Apple help unlock the gunman’s iPhone. Cook refused, arguing that encryption was the only meaningful countermeasure against exposing people’s private data and that being forced to break it would set a dangerous precedent. The standoff eventually ended when the FBI found another way in, but it cemented Apple’s identity as a privacy company and set up years of tension with governments around the world. Ternus will inherit that identity and the obligations that come with it.

The App Store antitrust wars haven’t been a walk in the park for Cook, either. Epic Games sued Apple in federal court over its requirement that apps use Apple’s in-app payment system and its 30% cut of sales (and when the judge pressed Cook on why users couldn’t simply pay developers directly at lower prices, his answers did little to deflect her skepticism). Apple largely prevailed in 2021, with the court declining to call it a monopoly, but it was ordered to allow developers to link to external payment options. It complied in the narrowest sense, charging a 27% commission on those external purchases (some discount!), and courts found it in contempt. The Ninth Circuit Court of Appeals upheld that ruling in late 2025, and after a rehearing request was denied last month, Apple is now preparing to petition the Supreme Court, which had already declined to hear its prior appeal. A lower court still must determine what fee Apple can actually charge.

The Epic saga is just one front in a much wider antitrust war. The U.S. Department of Justice sued Apple in March 2024, accusing it of unlawfully dominating the smartphone market by restricting third-party app and device developers — think competing smartwatches, digital wallets, and messaging services — in ways that make it harder for users to switch away from the iPhone. A federal judge denied Apple’s motion to dismiss that case, meaning it could grind through the courts for years.

And just this week, Apple revealed it faces a potential $38 billion fine in India, where regulators have found that it abused its dominant position in the app market and say Apple has refused to hand over required financial data — a case complicated by the fact that Apple’s market share in India is still relatively modest, around 9%, giving it an unusual angle to contest the findings. Ternus inherits this fight mid-stream, with the App Store’s revenue model under direct judicial threat.

China has been a constant and increasingly uncomfortable balancing act, too. Cook built Apple’s manufacturing operation around Chinese supply chains, making the company deeply dependent on a country whose government grew both more assertive and less predictable over time. He also made uncomfortable concessions to operate in the Chinese market — most notably removing VPN apps from the Chinese App Store and storing Chinese users’ iCloud data on state-controlled servers. Cook proved adept during Trump’s first term at insulating Apple from tariffs and trade-war risks, in part by cultivating a personal relationship with Trump – who remarked upon news of Cook’s retirement that he’s “an incredible guy!” Apple has already signaled that Cook will continue to help Ternus negotiate geopolitical terrain as executive chairman — an acknowledgment that these relationships are tricky and that Cook’s institutional knowledge remains highly valuable.

Yet AI is perhaps the most immediate and unresolved challenge that Ternus is being handed. Apple’s AI chief, John Giannandrea, formally leaves the company this month following numerous delays in the rollout of a more capable AI-powered Siri. Rather than relying solely on its own models, Apple has turned to both Google’s Gemini and OpenAI’s ChatGPT to power some Apple Intelligence features. Longtime market research analyst Bob O’Donnell told Reuters on Monday that Ternus’s biggest challenge will likely be “getting a better AI story and offering together that relies more on Apple’s own capabilities and less on third parties,” though some have argued that the company will look smarter in hindsight for waiting out the expensive competition now playing out among today’s biggest AI outfits.

Last but not least, executive turnover at Apple more broadly is less discussed but meaningful. Ternus is inheriting a largely rebuilt leadership team following the recent departures of several other Apple execs over the last year or less, including its longtime COO, general counsel, and head of UI design. It’s a challenge and an opportunity that will require him to put his own stamp on things relatively quickly.

The through line connecting most of these challenges is that Cook’s greatest skill was his ability to manage complicated relationships with governments and partners while keeping the business humming. Whether Ternus has that same skill, or whether Cook’s continued presence as executive chairman is meant to cover for any gaps there, may prove among the more interesting questions of the transition. A much scarier question hanging over Ternus’s tenure is whether the world that made Apple the most valuable company on the planet could actually end. Many industry watchers believe AI agents will become the primary way people interact with services, rendering the App Store and its 30% cut a distant memory. Couple that with the possibility of compelling new hardware that erodes the iPhone’s grip on our lives, like whatever OpenAI has in the works, and Ternus could find himself maneuvering through much more than complex relationships and litigation.

17 days ago


GRAI believes AI can make music more social, not replace artists

Today’s AI music startups, like Suno and Udio, offer technology that leverages AI for music generation. But a new company, GRAI, believes that most people don’t want to use AI to generate music from scratch — they’d rather remix tunes, share them with friends, or play around with a track’s style, just for fun. Of course, whether an artist wants anyone to play around with their tracks, and to what extent, is something they should get to decide. Music lab GRAI, now backed by a $9 million seed round, wants to put that control in artists’ hands while also capitalizing on the power of AI to transform how consumers engage with music.

The company, built by Belarusian founders who previously sold their video-creation app Vochi to Pinterest, is experimenting with new AI music products. Today, this includes the remixing app Music with Friends for iOS and another AI music playground for Android. These apps, and others that may ship in the future, will help inform the company about how consumers want to engage with music beyond AI-enabled creation or listening alone.

“The idea that we’re building the company around is what the next thing can be in music AI interaction and consumption,” explains GRAI co-founder and CEO Ilya Liasun, who is currently based in Poland alongside much of the team. He says the main reason the founders started GRAI is that music has become one of the last major consumer categories that hasn’t gone “creator-first.” “We have problems — discovery is broken, listening is passive, and social context is almost non-existent,” Liasun notes. Meanwhile, he doesn’t think that AI will kill artists and labels, as some fear. Instead, the team at GRAI believes that AI could lead to new ways to engage with music beyond just creating a tune through generative AI technology.

The company intends to aim its products at Gen Z and Gen Alpha users, who tend to discover new music through culture — meaning friends, fandoms, and short-form content like TikTok. These users don’t want to be creators or music producers; they just want to participate somehow.

To power its social apps, GRAI developed its own taste and participation graph as well as its own infrastructure. It’s building a “derivatives pipeline” along with real-time audio systems that preserve the identity of original tracks while allowing them to be transformed. As Liasun puts it, the company’s goal is to work with artists and their labels to make this type of activity legal. And the end result isn’t more unwanted AI music. “We don’t want to share new genAI slop with the streaming services. We actually focus on the interaction part,” Liasun says. The idea is that users could play with tracks inside GRAI’s apps, perhaps remixing a favorite tune or changing its style. Ultimately, those modified tracks could create a new source of royalty payments for the artists and labels.

Rather than building first and seeking permission later, the company says it is talking to the labels up front. “The main idea here is that we want to build a future system in which artists will have the ability to opt in and opt out.” That, he says, is a core belief at GRAI: “first, ask owners, and then integrate it.” (Liasun declined to disclose whether the company already has agreements in place, or with whom.)

If this type of music remixing becomes popular, GRAI believes it could help people discover new artists and songs outside of larger platforms like Reels, TikTok, or YouTube. With its initial apps, GRAI hopes to gather consumer feedback — even negative feedback — to help it find out what works and what doesn’t.

The company, co-founded by CTO Dima Kamarouski and president Andrei Avsievich, raised its $9 million seed round co-led by Khosla Ventures and Inovo VC. Other investors also participated, including Tensor Ventures, Tiny.VC, Flyer One Ventures, the a16z Scout Fund, and various angels, such as Andrew Zhai (ML at Cursor, co-founder of Genova Labs, ex-Pinterest); Greg Tkachenko (founder of Unreal Labs, ex-Snap); Rob Reid (founder of Rhapsody); and Dima Shvets (of MirAI and Reface).
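As a rough illustration of how a "derivatives pipeline" could preserve the identity of an original track for royalty purposes, here is a toy data model in which every remix keeps a pointer back to its source. GRAI has not described its schema; every name, field, and rate below is hypothetical.

```python
# A toy data model for a "derivatives pipeline": every remix keeps a
# reference to its source track so plays of the derivative can be credited
# back to the original rights holder. Purely illustrative; GRAI has not
# published its schema.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    artist: str
    opted_in: bool = True  # artists can opt in or out of remixing

@dataclass
class Derivative:
    derivative_id: str
    source: Track          # identity of the original is preserved
    transformation: str    # e.g. "style: synthwave"
    plays: int = 0

def royalties(derivatives, rate_per_play=0.002):
    """Aggregate per-artist royalties owed for plays of opted-in derivatives."""
    owed = {}
    for d in derivatives:
        if d.source.opted_in:
            owed[d.source.artist] = owed.get(d.source.artist, 0.0) + d.plays * rate_per_play
    return owed

original = Track("trk_001", artist="Some Band")
remix = Derivative("der_001", source=original, transformation="style: synthwave", plays=5000)
print(royalties([remix]))  # {'Some Band': 10.0}
```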

17 days ago


YouTube expands its AI likeness detection technology to celebrities

YouTube is expanding its new “likeness detection” technology, which identifies AI-generated content such as deepfakes, to people within the entertainment industry, the company announced on Tuesday.

The technology works similarly to YouTube’s existing Content ID system, which detects copyright-protected material in users’ uploaded videos, allowing rights owners to request removal or share in the video’s revenue. Likeness detection does the same, but for simulated faces. The feature is meant to help protect creators and other public figures from having their identities used without their permission — a common problem for celebrities who find their likenesses have been used in scam advertisements.

The technology was first made available to a subset of YouTube creators in a pilot program last year before expanding more broadly to include politicians, government officials, and journalists this spring. Now, YouTube says the technology is being made available to those in the entertainment industry, including talent agencies, management companies, and the celebrities they represent. The company has support from major agencies like CAA, UTA, WME, and Untitled Management, which offered feedback on the new tool.

Use of the likeness detection tool does not require entertainers to have their own YouTube channels. Instead, the feature scans AI-generated content for visual matches of an enrolled participant’s face. Users can then choose to request removal of the video for privacy policy violations, submit a copyright removal request, or do nothing. YouTube notes that it won’t remove all content, as it permits parody and satire under its rules. In the future, the technology will support audio as well, the company says.

Relatedly, YouTube has also been advocating for similar protections at the federal level with its support for the NO FAKES Act in Washington, D.C., which would regulate the use of AI to create unauthorized recreations of an individual’s voice and visual likeness. The company hasn’t yet said how many removals of AI deepfakes have been managed by the tool so far, but noted in March that the number of removals was still “very small.”
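To make the mechanism concrete, here is a minimal sketch of the matching step a likeness-detection system implies: comparing a face embedding extracted from an uploaded video against the embeddings of enrolled participants. YouTube has not published its method, so the embedding size, similarity metric, and threshold below are all assumptions.

```python
# A minimal sketch of likeness matching: compare a face embedding from an
# uploaded video against embeddings of enrolled participants. YouTube has
# not published its method; the metric and threshold are assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_likeness_matches(frame_embedding, enrolled, threshold=0.85):
    """Return enrolled identities whose stored embedding exceeds the threshold."""
    return [name for name, emb in enrolled.items()
            if cosine_similarity(frame_embedding, emb) >= threshold]

# Usage with stand-in vectors; a real system would use a trained
# face-recognition model to produce the embeddings.
rng = np.random.default_rng(0)
celebrity = rng.normal(size=512)
enrolled = {"enrolled_celebrity": celebrity}
upload = celebrity + rng.normal(scale=0.1, size=512)  # a near-duplicate face
print(find_likeness_matches(upload, enrolled))  # ['enrolled_celebrity']
```

In practice the hard problems are upstream of this comparison: detecting faces frame by frame, deciding whether a match is AI-generated rather than genuine footage, and routing the result into the removal-request workflow the article describes.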

17 days ago


Bond, a new social media platform, wants to use AI to help you kick your doomscrolling habit

Legacy social media sites have been designed to keep us hooked to our devices, eyes glued endlessly to retina-frying feeds of memes and dumb videos, in order to create more engaged platforms for advertisements. In recent years, however, a swell of companies has sought to capitalize on users’ burnout, pushing users toward IRL experiences or offering products without addictive features like endless scroll. Bond, which officially launched on Tuesday, is one of those sites. Dino Becirovic, Bond’s co-founder and CEO, says that his site offers an AI-powered solution to Americans’ screen addiction.

The site works like this: much like on a normal social media platform, users post about what they’ve been up to lately. Bond allows users to update their profiles by posting what it calls “memories” via a variety of mediums, including pictures, video, and audio files. Unlike other sites, Bond is designed to act as a kind of idea generator for what the user should go and do in the real world. Experiences stored within Bond become fodder for its AI system, which learns what kind of personalized, event-based recommendations to make to the user, Becirovic says. For instance, if you’ve been posting a lot about how much you like pho and how you haven’t had it in a while, Bond’s system might recommend a nearby Vietnamese restaurant that is getting good reviews. Or, if you’re into heavy metal, Bond might point out that Iron Maiden is coming to your city next week. The more you post about your experiences, the better the system’s recommendations get, Becirovic says. In other words, the system is designed to get you off the app and back out into the real world, where you can do more stuff, instead of just “bed rotting” and “doomscrolling,” as the kids these days say.

The layout looks a little bit like Instagram, although there is no actual feed. Instead, user profiles are presented in a kind of cluster formation. Clicking on a profile brings up the user’s current stories. These stories disappear from your public-facing profile after 24 hours, Becirovic said, but they then get stored in your private profile. Users can search through their own archive of memories whenever they want.

Bond’s team includes people who previously built major social media apps, including TikTok, Twitter, and Facebook, the company says. Becirovic previously worked at Kleiner Perkins and Index Ventures, while Bond’s founding researcher, Arthur Bražinskas, co-led integration of user signals at Google Gemini.

What is the revenue path for a company like this? Most social media sites are just giant vehicles for advertising — and that’s where they make the lion’s share of their revenue. Bond doesn’t have ads, so how’s it going to turn a buck? Interestingly enough, Becirovic envisions a scenario in which, eventually, users can license their own data from Bond’s archives, selling it to companies that want to use it for AI-training purposes. In this scenario, Bond would take a very small cut of the profits via a licensing fee, generating ongoing revenue and positioning itself as a data provider to AI companies looking to tune up their models. “The idea behind this licensing model is that you can monetize your memories,” he said. “If we become this platform with the right incentive structure to get billions of people to create about their daily lives, we will naturally become a really attractive place for people to want to train GPT six and seven, and all the other variants that are going to come in the future.”

In another scenario, Bond would use its accumulated data to act as a product recommendation tool that integrates with e-commerce sites. “Our users would opt into this experience. If we are able to do this, we believe we could capture some value from the transaction with merchants by enabling a better user experience, driving conversion, and/or increasing throughput,” Becirovic told TechCrunch in an email.

Becirovic said that Bond would never sell users’ data for advertising purposes, and users can “delete any memories by either deleting them in the Memory tab or using natural language in Memory chat.” He added: “Users can also delete their profile if they are not getting value from Bond. As the product grows, we will introduce more privacy control features to our users for them to manage their data.” Becirovic said Bond will improve its encryption over time, though he is a little vague about the platform’s current protections: “E2EE encryption is a priority for us in the near future after launch. In the meantime, we store all user data securely in our database and ensure it is protected,” he said.

At the moment, Becirovic seems mostly focused on making Bond cool. “Monetization is not a short-term priority,” he said. “Our initial focus is on creating an application users get more value from the more they capture their memories.”
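As a rough illustration of the recommendation loop Becirovic describes, the sketch below scores real-world options against a user's recent memories. Bond has not published its approach; the token-overlap scoring is a stand-in for whatever embedding model a production system would use, and every name is hypothetical.

```python
# A toy sketch of memory-driven recommendations: score nearby real-world
# options against a user's recent "memories" and surface the best match.
# Token overlap stands in for a real embedding model; all data is invented.
def tokens(text: str) -> set[str]:
    return set(text.lower().split())

def recommend(memories: list[str], options: list[str]) -> str:
    """Pick the option sharing the most vocabulary with recent memories."""
    interest = set().union(*(tokens(m) for m in memories))
    return max(options, key=lambda option: len(tokens(option) & interest))

memories = ["craving pho again", "loved that pho spot downtown"]
options = [
    "new pho restaurant nearby getting good reviews",
    "iron maiden concert in town next week",
]
print(recommend(memories, options))  # -> the pho restaurant
```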

17 days ago

