Latest AI News

The Facebook insider building content moderation for the AI era

When Brett Levenson left Apple in 2019 to lead business integrity at Facebook, the social media giant was in the thick of the Cambridge Analytica fallout. At the time, he thought he could simply fix Facebook’s content moderation problem with better technology. The problem, he quickly learned, ran deeper than technology. Human reviewers were expected to memorize a 40-page policy document that had been machine-translated into their language, he said. Then they had about 30 seconds per piece of flagged content to decide not just whether that content violated the rules, but what to do about it: block it, ban the user, limit the spread. Those quick calls were only “slightly better than 50% accurate,” according to Levenson. “It was kind of like flipping a coin, whether the human reviewers could actually address policies correctly, and this was many days after the harm had already occurred anyway,” Levenson told TechCrunch. That sort of delayed, reactive approach is not sustainable in a world of nimble and well-funded adversarial actors. The rise of AI chatbots has only compounded the problem, as content moderation failures have resulted in a string of high-profile incidents, like chatbots providing teens with self-harm guidance or AI-generated imagery evading safety filters. Levenson’s frustration led to the idea of “policy as code” — a way to turn static policy documents into executable, updatable logic tightly coupled to enforcement. That insight led to the founding of Moonbounce, which announced on Friday it has raised $12 million in funding, TechCrunch has exclusively learned. The round was co-led by Amplify Partners and StepStone Group. Moonbounce works with companies to provide an additional safety layer wherever content is generated, whether by a user or by AI. The company has trained its own large language model to look at a customer’s policy documents, evaluate content at runtime, provide a response in 300 milliseconds or less, and take action.
Depending on customer preference, that action could look like Moonbounce’s system slowing down distribution while the content awaits a later human review, or it might block high-risk content in the moment. Today, Moonbounce serves three main verticals: platforms dealing with user-generated content, like dating apps; AI companies building characters or companions; and AI image generators. Moonbounce is supporting more than 40 million daily reviews and serving over 100 million daily active users on the platform, Levenson said. Customers include AI companion startup Channel AI, image and video generation company Civitai, and character roleplay platforms Dippy AI and Moescape. “Safety can actually be a product benefit,” Levenson told TechCrunch. “It just never has been because it’s always a thing that happens later, not a thing you can actually build into your product. And we see our customers are finding really interesting and innovative ways to use our technology to make safety a differentiator, and part of their product story.” Tinder’s head of trust and safety recently explained how the dating platform uses these types of LLM-powered services to reach a 10x improvement in detection accuracy. “Content moderation has always been a problem that plagued large online platforms, but now with LLMs at the heart of every application, this challenge is even more daunting,” Lenny Pruss, general partner at Amplify Partners, said in a statement. “We invested in Moonbounce because we envision a world where objective, real-time guardrails become the enabling backbone of every AI-mediated application.” AI companies are facing mounting legal and reputational pressure after chatbots have been accused of pushing teenagers and vulnerable users toward suicide, and image generators like xAI’s Grok have been used to create nonconsensual nude imagery. Clearly, internal safety guardrails are failing, and it’s becoming a liability question.
Levenson said AI companies are increasingly looking outside their own walls for help beefing up safety infrastructure. “We’re a third party sitting between the user and the chatbot, so our system isn’t inundated with context the way the chat itself is,” Levenson said. “The chatbot itself has to remember, potentially, tens of thousands of tokens that have come before… We’re solely worried about enforcing rules at runtime.” Levenson runs the 12-person company with his former Apple colleague Ash Bhardwaj, who previously built large-scale cloud and AI infrastructure across the iPhone maker’s core offerings. Their next focus is a capability called “iterative steering,” developed in response to cases like the 2024 suicide of a 14-year-old Florida boy who became obsessed with a Character AI chatbot. Rather than issuing a blunt refusal when harmful topics arise, the system would intercept the conversation and redirect it, modifying prompts in real time to push the chatbot toward a more actively supportive response. “We hope to be able to add to our actions toolkit the ability to steer the chatbot in a better direction to, essentially, take the user’s prompt and modify it to force the chatbot to be not just an empathetic listener, but a helpful listener in those situations,” Levenson said. When asked whether his exit strategy involved an acquisition by a company like Meta, bringing his work on content moderation full circle, Levenson said he recognizes how well Moonbounce would fit into his old employer’s stack, as well as his own fiduciary duties as a CEO. “My investors would kill me for saying this, but I would hate to see someone buy us and then restrict the technology,” he said. “Like, ‘Okay, this is ours now, and nobody else can benefit from it.’”
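Moonbounce’s actual model and API are not public, but the “policy as code” idea the article describes can be sketched in a few lines: each written policy rule becomes an executable check paired directly with an enforcement action. Everything below (the rule names, keywords, and actions) is hypothetical; a real system would score content with a trained model rather than match keywords.

```python
from dataclasses import dataclass
from enum import Enum

# Enforcement actions described in the article: allow, slow distribution
# pending human review, or block outright.
class Action(Enum):
    ALLOW = "allow"
    SLOW_DISTRIBUTION = "slow_distribution"  # hold for later human review
    BLOCK = "block"

@dataclass
class PolicyRule:
    name: str
    keywords: tuple  # naive stand-in for an LLM classifier
    action: Action

# Hypothetical rules: a policy document compiled into checkable logic.
RULES = [
    PolicyRule("self_harm", ("self-harm", "hurt myself"), Action.BLOCK),
    PolicyRule("harassment", ("idiot", "loser"), Action.SLOW_DISTRIBUTION),
]

def evaluate(content: str) -> Action:
    """Return the enforcement action for a piece of content at runtime."""
    text = content.lower()
    for rule in RULES:
        if any(keyword in text for keyword in rule.keywords):
            return rule.action
    return Action.ALLOW

print(evaluate("thinking about self-harm").value)  # block
print(evaluate("have a great day").value)          # allow
```

The point of the pattern is the tight coupling: updating the policy means editing `RULES`, and enforcement changes immediately, rather than a reviewer re-reading a 40-page document.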

1 month ago

Google Introduces Gemma 4 Open-Source AI Model, Enables Building Autonomous Agents

Google on Thursday introduced the Gemma 4 artificial intelligence (AI) model, the first in the Gemma 4 family, which comes with several improvements over its predecessors. While Gemma 3 focused on text and visual reasoning capabilities, the Mountain View-based tech giant says the latest iteration brings agentic capabilities and advanced reasoning to the open-source model. Available in four different sizes, the latest large language model (LLM) will be available across Google's developer platforms and can be downloaded via third-party repositories to run locally.

1 month ago

Google Vids Will Now Let All Users Generate Veo 3.1 AI Videos for Free, New Features Added

Google Vids, the company's artificial intelligence (AI)-powered video creation platform, has received a major update. In August 2025, the Mountain View-based tech giant expanded the platform to all users, but the ability to generate AI videos from text prompts remained limited to paying subscribers. On Thursday, however, the company announced that everyone can now generate Veo 3.1-powered AI videos for free. Although the number of free video generations is capped, the update opens the platform's core use case to non-paying users.

1 month ago

OpenAI Brings ChatGPT to Apple CarPlay, but It Cannot Access Navigation and Live Location Data

OpenAI has announced that ChatGPT is now available in Apple CarPlay, bringing its voice-based assistant experience to users on the move. The rollout enables iPhone users to access the chatbot directly from supported car infotainment systems using voice commands while driving. It works with devices running iOS 26.4 or later and is available across all ChatGPT plans. The integration adds in-car support for conversations and ongoing chats, extending how users can interact with the service beyond traditional smartphone use.

1 month ago

How One Developer is Rethinking Go Using Rust

Lisette is the answer to a burning question: how far can Go be evolved without losing its productivity?

1 month ago

Microsoft Launches Three Models to Reduce Dependence on OpenAI

All three models are available through Microsoft Foundry and the MAI Playground in the US.

1 month ago

Google DeepMind Launches Gemma 4 Amid Competition from Chinese Open Models

The models can be fine-tuned efficiently across Android devices, laptop GPUs, developer workstations and accelerators for research and production use.

1 month ago

Microsoft takes on AI rivals with three new foundational models

Microsoft AI, the tech giant’s research lab, announced the release of three foundational AI models on Thursday that can generate text, voice, and images. The release signals Microsoft’s continued push to build out its own stack of multimodal AI models, and to compete with rival AI labs, even though it remains tied to OpenAI. MAI-Transcribe-1 transcribes speech across 25 different languages into text and is 2.5 times faster than Microsoft’s Azure Fast offering, according to a company press release. MAI-Voice-1 is an audio-generating model that can produce 60 seconds of audio in one second and lets users create a custom voice. MAI-Image-2 is an image-generating model. MAI-Image-2 was originally released on MAI Playground, a new large language model testing software, on March 19. Now, all three models are being released on Microsoft Foundry, and the transcription and voice models are available in MAI Playground as well. The models were developed by Microsoft’s MAI Superintelligence team, an AI research team led by Mustafa Suleyman, the CEO of Microsoft AI, that was formed and announced in November 2025. “At Microsoft AI, we’re building Humanist AI. We have a distinct view when creating our AI models — putting humans at the center, optimizing for how people actually communicate, training for practical use,” Suleyman wrote in the blog post. “You’ll see more models from us soon in Foundry and directly in Microsoft products and experiences.” In an increasingly crowded LLM market, MAI hopes a selling point for these models is that they are cheaper than those from Google and OpenAI, the company wrote in the blog post. MAI-Transcribe-1 starts at $0.36 per hour. MAI-Voice-1 starts at $22 per 1 million characters, and MAI-Image-2 starts at $5 per 1 million tokens for text input and $33 per 1 million tokens for image output.
Despite releasing its own models, Suleyman reaffirmed Microsoft’s commitment to its partnership with OpenAI in an interview with VentureBeat, although a recent renegotiation of that partnership is what allowed Microsoft to pursue this superintelligence research, Suleyman told The Verge. Microsoft has invested more than $13 billion into the AI research lab and hosts its models in its various products through a multi-year partnership. Microsoft takes the same stance with chips; it both produces its own and buys from outside players.

1 month ago

OpenAI acquires TBPN, the buzzy founder-led business talk show

OpenAI has acquired popular tech industry talk show TBPN — Technology Business Programming Network — making this the AI giant’s first acquisition of a media company. The show will report to OpenAI’s chief political operative, Chris Lehane. TBPN, hosted by former tech founders John Coogan and Jordi Hays, is a daily live show that airs on YouTube and X for three hours, focusing on tech, business, AI, and defense. The show has gained a cult following in Silicon Valley as a safe space where industry power players can speak candidly and be questioned by fellow insiders. It has a reputation for being something of a SportsCenter for the tech industry: a place where top tech CEOs like Mark Zuckerberg, Satya Nadella, Marc Benioff, and, yes, Sam Altman, come to chop it up, react to the news of the day, and occasionally make some of their own. TBPN will continue to live on as its own brand, which OpenAI will help scale. Not that it necessarily needed help on that front; TBPN has grown into an empire that’s on track to pull in more than $30 million this year, according to The Wall Street Journal. OpenAI already has its own podcast for long-form conversations with the people building tech at the company. OpenAI will also tap the founders’ “amazing comms and marketing instincts” outside the show, according to OpenAI’s head of AGI deployment, Fidji Simo, who said TBPN will “bring AI to the world in a way that helps people understand the full impact of this technology on their daily lives.” Simo went even further, noting that TBPN’s prowess is necessary for an atypical company like OpenAI, where “the standard communications playbook just doesn’t apply.” She said TBPN will have editorial independence and continue to “run their programming, choose their guests, and make their own editorial decisions.” Still, the acquisition might give some pause.
After all, OpenAI is a valuable AI lab on the brink of an IPO buying a buzzy talk show that often discusses the company and its competitors. And once the deal closes, TBPN will operate under OpenAI’s strategy team and report to Chris Lehane, the man who invented the phrase “vast right-wing conspiracy” as a tool to deflect press scrutiny of the Clinton White House. Lehane, who has been described as a master of the “political dark arts,” is also behind the crypto industry super PAC Fairshake, which spent hundreds of millions to kneecap anti-crypto candidates in the 2024 election. He joined OpenAI that same year and has been in President Trump’s ear ever since, whispering recommendations for sweeping and controversial policies like preventing states from regulating AI and easing environmental restrictions that might slow data center construction. OpenAI CEO Sam Altman, who said in a social media post that TBPN is his favorite tech show, seems to believe the acquisition won’t change TBPN’s commentary and even criticism of the company. “I don’t expect them to go any easier on us, am sure I’ll do my part to help enable that with occasional stupid decisions,” he wrote. TBPN, meanwhile, sees the acquisition as a means to do more than just commentary. “While we’ve been critical of the industry at times, after getting to know Sam and the OpenAI team, what stood out most was their openness to feedback and commitment to getting this right,” Hays said in a statement. “Moving from commentary to real impact in how this technology is distributed and understood globally is incredibly important to us.” Got a tip or documents about the AI industry? From a non-work device, contact Rebecca Bellan confidentially at [email protected] or Signal: rebeccabellan.491.

1 month ago

Slack Upgrades Slackbot With New AI Features to Turn It Into an Enterprise Agent

Slack is bringing more than 30 new artificial intelligence (AI) features to Slackbot. The new capabilities for the Salesforce-owned platform's AI assistant are aimed at turning it into an AI agent for enterprise needs. Most of the new features take advantage of the existing apps inside Slack, which connect third-party enterprise tools and platforms with the workspace messaging platform. Now, with the agentic extension, Slackbot can perform complex work-related tasks autonomously on behalf of the user.

1 month ago

Google now lets you direct avatars through prompts in its Vids app

Google on Thursday added new features to its video editor app Vids, including directing and customizing avatars through text prompts, Veo 3.1 support, the ability to export videos to YouTube, and recording with a Chrome extension. Users will be able to use natural language prompts to direct avatars to “act” in a scene. This can include the avatar interacting with a product, a prop, or a piece of equipment. The company said that despite the dynamic nature of the output, Vids maintains character consistency. Google said that based on the theme of the video, users can customize characters by tweaking appearance, changing apparel, and creating new backgrounds through prompts. Last month, Google added its Lyria 3 and Lyria 3 Pro music creation models to Vids to let users add sound effects or music to their clips. With this rollout, Google is bringing the Veo 3.1 video generation model, which can create eight-second clips, to the video editing tool. The company is giving all users 10 free generations per month. The company said Google AI Ultra and Workspace AI Ultra accounts can generate up to 1,000 Veo videos per month. What’s more, Google is adding the ability to export finished videos directly to YouTube, saving the hassle of downloading and uploading them to the channel. All exported videos are private by default, so you can review a video before making it public. The company is also adding a new screen recording Chrome extension to the video suite, allowing users to capture the screen with audio or video. Google has steadily added features to Vids since first unveiling the product in 2024 to cater to enterprise content creation. Last year, the company brought AI avatars to Vids and expanded access to consumers. In February, the company added 2D and 3D cartoon-style avatars and added support for seven new voiceover languages, including French, German, Italian, Korean, Portuguese, Spanish, and Japanese.
Google Vids faces competition from the likes of Synthesia, HeyGen, D-ID, and Lemon Slice.

1 month ago

No One Wants to Talk About the Dirty Secret Messing Up Indian IT M&As

Cultural clashes, client churn, and weak integration rigour derail value in IT acquisitions.

1 month ago
