Latest AI News

The White House wants AI companies to cover rate hikes. Most have already said they would.
The proliferation of AI data centers plugging into the national electrical grid has helped increase consumer electricity prices, driving up the average national electricity price by more than 6% in the last year. That’s not a good look for the incumbents ahead of this fall’s elections, and President Donald Trump addressed the challenge in his State of the Union speech last night. “We’re telling the major tech companies that they have the obligation to provide for their own power needs,” Trump said. “They can build their own power plants as part of their factory, so that no one’s prices will go up.” The hyperscalers in question don’t need to be told. They have already made public commitments in recent weeks to cover electricity costs by building their own power sources, paying higher rates, or both, part of a broader effort to solve PR problems around data center expansion and win over skeptical communities. On January 11, Microsoft announced its policy “to ensure that the electricity cost of serving our datacenters is not passed on to residential customers.” On January 26, OpenAI committed to “paying its own way on energy, so that our operations don’t increase your energy prices.” On February 11, Anthropic made the same pledge to “cover electricity price increases that consumers face from our data centers.” Yesterday, Google announced the largest battery project in the world to support a data center in Minnesota. What these commitments mean in practice, and who will determine which data centers are responsible for which price increases, remains unknown. The White House has not released the text of the proposed pledge. “A handshake agreement with Big Tech over data center costs isn’t good enough,” Arizona Democratic Senator Mark Kelly said on social media. 
“Americans need a guarantee that energy prices won’t soar and communities have a say.” White House spokesperson Taylor Rodgers said that next week, companies will send representatives to formally sign the pledge at the White House. Amazon, Google, Meta, Microsoft, xAI, Oracle, and OpenAI are reportedly among those set to attend, though none of the companies have confirmed their attendance. Even if tech companies commit to taking on electricity costs, on-site power plants may not be a panacea: they can still have adverse impacts on the surrounding environment, and will stress supply chains for natural gas, turbines, photovoltaics, and batteries, depending on how companies aim to power their compute.

3 days left: Save up to $680 on your TechCrunch Disrupt 2026 ticket
Time is running out! Just 3 days left before Super Early Bird pricing ends on February 27 at 11:59 p.m. PT. This is your last chance to secure the lowest ticket rates for TechCrunch Disrupt 2026. If 2026 is your year to fundraise, hire, scale, or launch, you cannot afford to miss it. Lock in your pass now before prices jump. This is the moment to act. From October 13–15 at Moscone West in San Francisco, 10,000+ founders, operators, and investors gather for three days of high-signal conversations, deal-making, and actionable insights. Disrupt is not just content — it’s access to accelerated growth. You don’t attend Disrupt to sit in the audience. You go to gain leverage. Every session, every conversation, and every connection is designed to accelerate your growth and compound your momentum. Last year, more than 20,000 curated meetings took place on-site. In 2026, upgraded tools will make those connections even more targeted and efficient. One conversation can change your trajectory — and at Disrupt, that is the point. Disrupt has long been the stage for founders and investors who define eras, with past speakers including category-defining leaders and top-tier VCs. In 2025, Disrupt featured 200+ onstage conversations with 250+ tech and VC leaders across AI, hardware, space, startup growth, and venture. Expect the same high-caliber content this year, and check the event page as the 2026 agenda rolls out. Startup Battlefield returns with 200 pre-Series A companies competing for $100,000 in equity-free funding, global visibility, and direct investor access. Alumni include Discord, Cloudflare, and Trello. If you want to see what’s next and hear directly from top VCs on scaling a viable startup, the Disrupt Stage is where it happens first. With 300+ startup exhibitors, the venue, especially the Expo Hall, is where deal flow and discovery collide. You won’t just observe trends; you’ll see them before they scale. 
From October 11 to 17, Disrupt Side Events take place across the Bay Area, including breakfasts, cocktail hours, panels, and founder meetups that extend the connections beyond the main stage. The main event is powerful. The surrounding ecosystem makes it even stronger. Super Early Bird pricing ends this Friday, February 27, at 11:59 p.m. PT. If you want to be in the room where capital moves, companies scale, and ideas turn into breakthroughs, now is the time to lock in your discounted ticket. Register now before it’s too late. Save up to $680 on your individual pass, or up to 30% on group passes.

About 12% of US teens turn to AI for emotional support or advice
AI chatbots have become embedded in the lives of American teenagers, according to a report published Tuesday by the Pew Research Center. While the most common uses of AI among this demographic include searching for information (57%) and getting help with schoolwork (54%), teens are also using AI to fill roles that would typically be occupied by friends or family. Sixteen percent of U.S. teens say they use AI for casual conversation, while 12% use AI chatbots for emotional support or advice. Some teens may find solace in talking to chatbots, but mental health professionals are wary. General-purpose tools like ChatGPT, Claude, and Grok are not designed for such uses, and in the most extreme cases, these chatbots can have life-threatening psychological effects. “We are social creatures, and there’s certainly a challenge that these systems can be isolating,” Dr. Nick Haber, a Stanford professor researching the therapeutic potential of LLMs, told TechCrunch recently. “There are a lot of instances where people can engage with these tools and then can become not grounded to the outside world of facts, and not grounded in connection to the interpersonal, which can lead to pretty isolating — if not worse — effects.” Pew’s survey also shows a discrepancy between teenagers’ self-reported AI usage and the extent to which their parents think they engage with this technology. About 51% of parents said that their teen uses chatbots, while 64% of teens reported using them. The majority of parents are okay with their teens using AI to search for information (79%) or get help with schoolwork (58%), but far fewer parents approve of their teens using AI chatbots for casual conversation (28%) or to get emotional support or advice (18%). In fact, 58% of parents are not okay with their child using AI for such purposes. AI safety is a contentious topic among leading tech companies, to say the least. 
But one popular chatbot maker, Character.AI, chose to disable the chatbot experience for users under the age of 18. This decision followed public outcry and lawsuits filed over two teenagers’ suicides, which took place after prolonged conversations with the company’s chatbots. OpenAI, meanwhile, decided to sunset its particularly sycophantic GPT-4o model, which sparked backlash from people who had come to rely on the model for emotional support. Though a majority of teens use AI chatbots in some way, they have mixed feelings about the impact of this kind of technology on society. When asked how they think AI will impact society over the next 20 years, 31% of teens said the impact would be positive, while 26% said it would be negative.

Have hard-won scaling lessons to share? Take the stage at TechCrunch Founder Summit 2026
If you’ve built, backed, or operated inside high-growth startups, your experience could shape how the next wave of founders scales. On June 9 in Boston, TechCrunch Founder Summit 2026 will bring together 1,000+ founders and investors for a focused day on the realities of growth. We’re inviting seasoned founders, VCs, and startup operators to lead interactive roundtable discussions rooted in real-world execution — the wins, the missteps, and the lessons that only come from doing the work. Submit your topic by April 17 to be considered. Learn more and apply today. Whether you lead an interactive roundtable or a Q&A-style breakout, every session at Founder Summit is built for depth. Each is a 30-minute, discussion-driven conversation led by two to four speakers, depending on format. No slides. No polished decks. Just candid insight and practical takeaways founders can apply immediately. If you’ve scaled revenue from zero to $50 million, navigated a difficult fundraise, rebuilt a team after hypergrowth, expanded internationally, or redefined your go-to-market strategy, this is the room to share what actually works. TechCrunch will also amplify your participation through agenda placement, editorial inclusion on TechCrunch.com, and social promotion across its channels. TC Founder Summit takes place on June 9, and speaker selections are made well ahead of the event. If you have scaling insight founders need to hear, now is the time to submit your topic. Have more than one strong idea? Submit them all. Lead the conversation. Share what you’ve learned. Help founders build smarter. Submit to speak before the April 17 deadline.

OpenClaw creator’s advice to AI builders is to be more playful and allow yourself time to improve
Peter Steinberger, the creator of the viral AI agent OpenClaw who has since been hired by OpenAI, has some advice for those experimenting with AI technology, including AI agents. From his own experience, the best way to build today is to explore, be playful, and not expect to be an expert at what you do right away. “I wish I could say that I had the unified plan in the beginning, but a lot of it was just exploration,” Steinberger said. “I wanted things, and those things didn’t exist, and … let’s say I prompted them into existence.” The developer was chatting with OpenAI’s Head of Developer Experience, Romain Huet, on the first episode of the company’s new Builders Unscripted podcast. There, he spoke about what OpenClaw was like in its early days and how he didn’t have a plan when he got started. Steinberger explained that he began by building a tool that would integrate with WhatsApp, but then set it aside for a bit and focused on other things, as he assumed the AI labs would build something like what he was working on in the near future. “I just experimented a lot. My mission was, kind of like, to have fun and inspire people,” Steinberger noted. By last November, however, the developer was surprised that no AI labs had started to build what he wanted to use. That led him to create the initial prototype of what’s now OpenClaw. “Where it really clicked was where I was at this weekend trip in Marrakesh, and I found myself using it way more because it was so convenient … There was no really good internet. [But] WhatsApp just works everywhere,” he said. The tool made it easy for him to find restaurants, look up things on his computer, send texts to friends, and more. The more he played with the technology, the more Steinberger realized how good modern AI models have become at problem-solving, much like human coders. “Now they can just, like, actually come up with the solutions themselves, even though you never programmed them at all,” he noted. 
Throughout the process of building, Steinberger said that his workflow improved — and he stresses to other developers that’s something that can take time, so don’t give up. “There’s these people that … write software in the old way, and the old way is going to go away,” he pointed out. They then decide to try vibe coding but are disappointed with the results. “I think vibe coding is a slur,” said Steinberger, basically suggesting that it’s not as simple a process at first as the term makes it sound. “They try AI, but they don’t understand that it’s a skill,” he said, then compared the process of coding with AI to learning guitar. “You’re not going to be good at guitar on the first day,” he said. Instead, he recommends that people approach learning with a more playful attitude. If he writes a prompt now, he has a gut feeling as to how long it will take, and if it takes longer, he reflects on what may have gone wrong and adapts. “My … advice always is, approach it in a playful way. Build something that you always wanted to build. If you’re at least a little bit of a builder, there has to be something on the back of your mind that you want to build. Like, just play.” This ability to experiment and have fun is what’s most important, especially at a time when people are worried their jobs will be overtaken by AI. “If your identity is: I want to create things. I want to solve problems. If you’re high agency, if you’re smart, you will be in more demand than ever,” Steinberger said.

OpenAI COO says ads will be ‘an iterative process’
Last month, OpenAI said that it would introduce ads for users of the free and Go tiers in ChatGPT. The company rolled out ads to U.S.-based users earlier this month amid criticism from rivals like Anthropic, which published a string of Super Bowl ads. On the sidelines of the India AI summit, TechCrunch asked OpenAI COO Brad Lightcap about how the company is approaching ads. Lightcap said that the process is iterative and the company has to get user privacy and trust right. “Well, this is going to be an iterative process for sure. This is something we are committed to getting right. What does that look like? It means obviously maintaining user trust at a very high level. It means getting privacy right,” Lightcap said. He also noted that ads can add to users’ product experience if they are done right, and urged people to give OpenAI a few months to see how the rollout fares. “It means really creating a delightful product experience. We think ads done right can be additive to a product experience. And so it’ll take iteration, it’ll take time, but we’re just starting out. So maybe give us a few months and see how it goes,” he said. Lightcap didn’t specify whether the company is thinking about rolling out ads beyond the U.S. market at the moment. Earlier this month, Sam Altman hit back at Anthropic with a long post on X about the Super Bowl ads, calling the OpenAI rival “dishonest” and accusing it of making an expensive product that serves “rich people.” “More importantly, we believe everyone deserves to use AI and are committed to free access, because we believe access creates agency. More Texans use ChatGPT for free than the total number of people who use Claude in the US, so we have a differently-shaped problem than they do,” Altman wrote. Various outlets have reported that OpenAI is charging $60 per 1,000 impressions, an unusually high rate. Last month, Adweek noted that OpenAI is asking advertisers for a minimum commitment of $200,000. 
Earlier this week, The Information reported that Shopify is allowing its merchants to advertise on ChatGPT through its Shop Campaigns ad network, joining early testers like Target, Williams Sonoma, and Adobe.

Gemini can now automate some multi-step tasks on Android
Google on Wednesday announced a series of updates to its Gemini AI-powered features on the Android operating system, the most notable being a new way to use the AI to handle multi-step tasks like ordering an Uber or food delivery. These automations join other Gemini improvements shipping today, including an expansion of scam detection for phone calls and Circle to Search updates that now let you identify all the items on your phone’s screen. The automations, explains Google, allow users to essentially offload their to-do list to Gemini. In practice, however, the types of things that Gemini can manage are still limited. The company says that the feature, which is in beta, will initially support select apps in the food, grocery, and rideshare categories. It will also be limited to the Gemini app on certain devices, including the Pixel 10, Pixel 10 Pro, and Samsung Galaxy S26 series. And it will initially be available only in the U.S. and Korea. AI-powered automations could potentially go wrong, of course, so Google has added some protections. For starters, the automations can’t be kicked off without an explicit command from the device’s owner. As they run, you can watch their progress in real time and stop the task if it’s making a mistake or getting stuck. Google also notes that the automations take place in a secure, virtual window on your phone where they can only access limited apps, not the rest of the data on your device. The feature ties into the growing trend of using AI to automate more tasks in users’ personal lives. ChatGPT, for instance, lets users create tasks that can be run on schedules or at specific times, as well as offering an agent that can complete a variety of computer-based tasks like navigating a calendar, generating a slideshow, or running code. Anthropic’s Cowork, meanwhile, brings the capabilities of its Claude AI to non-coding tasks, letting non-developers automate everyday file and task management. 
And, of course, an AI tool called OpenClaw recently went viral for its ability to manage everyday tasks like sending emails, managing calendars, checking into flights, and more. Another Gemini update arriving now is the expansion of a Scam Detection feature for phone calls, which is becoming available on Samsung Galaxy S26 series devices in the U.S. (The feature is already offered on Pixel phones in the U.S., Australia, Canada, India, Ireland, and the U.K.) Google is also using its Gemini on-device model to detect scam texts in the U.S., Canada, and the U.K. on Pixel 10 series devices, and soon on Galaxy S26 series phones as well. Finally, Google says its Circle to Search feature, which lets you use gestures like scribbles and circling to initiate searches, can now search for everything you’re seeing on the phone screen, not just a single object. That means you can search every item of clothing and every accessory in an outfit you like, or learn more about a group of items and their related topic on the screen. Google has been steadily releasing Gemini updates to its Android ecosystem at regular intervals through new operating system updates and updates targeted toward its flagship phone, the Google Pixel, via its frequent updates known as Pixel Drops. Meanwhile, Apple has been struggling to release a more comprehensive AI feature set, which is set to include an AI-powered Siri — a launch that was recently pushed back again to later in the year.

Perplexity AI Unveils ‘Perplexity Computer’ to Orchestrate Multiple AI Models
The product is available to Perplexity Max subscribers.

Khosla’s Keith Rabois backs Comp, which wants to bolster HR teams with AI
After graduating from Cornell University, Christophe Gerlach spent nearly two years investing exclusively in HR tech startups for General Atlantic. Investing was exciting, but Gerlach was yearning to get back into entrepreneurship. While at Cornell, Gerlach (pictured above, right) built and sold a food delivery startup alongside classmate Pedro Bobrow (pictured above, left), a Brazilian native. Then in late 2022, Gerlach and Bobrow (previously a product manager at Lyft) teamed up again, merging their sector expertise and cultural roots to launch Comp, an HR tech startup focused on Brazil. Comp is building AI-powered HR software that can assist with tasks like recruiting, setting compensation policies, and designing performance review systems. The startup also provides “forward-deployed” experts — former HR executives — who work with customers to design strategies for compensation, performance, and recruiting. While companies in Brazil often hire compensation consultants, Gerlach says Comp’s forward-deployed HR executives shouldn’t be viewed as consultants, but rather as extensions of existing HR teams. These executives also play a critical role in refining Comp’s technology. “Our forward-deployed HR execs do all the work manually at first, and then they use that work to train the AI how to think in best practices,” Gerlach said. The idea, of course, is that over time, Comp’s AI agents will become fully autonomous and capable of performing traditional HR functions. While Comp currently offers AI-supported HR services augmented by professionals, its goal is to displace both traditional consultancies and HR software. As Gerlach puts it: “Rippling sells software to junior HR teams to make them more productive. We become the HR team.” To that end, Comp this week raised a $17.25 million Series A round led by Khosla Ventures, marking the VC firm’s first-ever investment in a Brazilian company. 
Khosla general partner Keith Rabois has joined Comp’s board of directors as part of the deal. Comp is positioning itself as an AI alternative to traditional compensation consultancies like Mercer, Korn Ferry, and Willis Towers Watson. It also competes with global HR platforms such as Rippling and Workday. Gerlach says Comp launched in Brazil because many companies in the country lack traditional HR software, which has allowed the startup to introduce a new, automated model rather than competing with established platforms. The business model already seems to be gaining traction in Brazil: Comp’s clients include Nubank, QuintoAndar, Creditas, and “pretty much every unicorn in Brazil,” Gerlach said. The startup is now eyeing an expansion into the U.S. and other countries. Other investors in Comp’s Series A included existing backers Kaszek and Canary, as well as new investors Abstract Ventures and Endeavor Catalyst.

Amazon’s AI-powered Alexa+ gets new personality options
Amazon is introducing a new feature that will allow users to change the personality of its AI assistant, Alexa+. On Wednesday, the company launched three new Alexa+ personality styles — Brief, Chill, and Sweet — that will change the AI assistant’s tone. In the Brief style, Alexa will give shorter, more direct responses, while the Chill style will see Alexa answer more like a laid-back friend. Enabling the Sweet style, meanwhile, will have Alexa become warmer and more enthusiastic, offering encouragement and positivity, says Amazon. The idea of infusing an AI with a personality has been a complicated issue for model makers. A flattering and affirming AI model, like OpenAI’s GPT-4o, led some users to develop an unhealthy dependency on the technology. In a few cases, it even exacerbated users’ existing mental health issues, leading to crises or even suicides, multiple lawsuits have alleged. Still, chatbot users have shown a preference for controlling how their AI responds, even writing custom instructions to dictate the AI’s personality. To address this need, OpenAI launched new ChatGPT features in December that allow users to adjust the AI’s base style and tone in terms of its warmth, enthusiasm, and use of emoji, among other things. Despite this, some users are complaining that the latest model is too reassuring by default. Amazon says its new styles for Alexa have been built on five dimensions that contribute to its personality: expressiveness, emotional openness, formality, directness, and humor. Each style represents specific levels of all five factors. For instance, Brief isn’t just concise; it’s also casual, direct, and uses minimal humor, the company says. 
To change Alexa’s style, you can either speak to the AI assistant via a device, like an Echo speaker, or access the feature in the Alexa app’s Device Settings under “Personality Style.” The company notes that these three styles are only the first to ship, and others will be on the way in the future.

Adobe Firefly’s video editor can now automatically create a first draft from footage
The video editor in Adobe Firefly is getting a new feature called Quick Cut that uses AI to edit footage and B-roll to create a first draft of the final video based on user instructions. Typically, you have to upload your footage and B-roll into a video editor, and manually arrange transitions. With Quick Cut, users can describe what they want the video to be in natural language, and the tool will automatically edit out irrelevant parts of the footage, and put together the different takes while using appropriate footage to make transitions between cuts. Users can also pick frames from the B-roll and use one of the video models available within Firefly to create short transitions. You can use the prompt box within the Firefly video editor to specify settings like aspect ratio and pacing between transitions, or add optional B-roll footage. Users can apply Quick Cut to the entire project, a particular timeline, or selected clips. Adobe stressed that the aim of Quick Cut is to deliver a first draft, so editors will still need to adjust elements, paste takes together, and work on transitions to put together the video. “As we talk to our users, who are creators and marketers, the biggest problem they actually communicate is the need for fast turnaround, the need for time-saving techniques that just let them get to their creative vision as fast as possible,” Mike Folgner, product lead for AI and next-generation video tools, told TechCrunch. “One thing we do know is that some of the mundane parts that come with video [editing], like just getting the selects in order, that’s not really where they find joy and difference. They find joy in putting their spin on it. So Quick Cut is meant to help creators who have a set of media find the story very quickly and just get to a story cut as fast as possible,” he added. Adobe has been pushing regular updates to its video-related tools. 
In December, it rolled out a new timeline-based video editor that brought layers and prompt-based editing — the editor treats different objects as layers and allows you to edit them using prompts, or use tools like resize and rotate. The company has also added prompt-based editing capabilities to Firefly, letting users tell the video model how to edit video elements, colors, and camera angles, as well as a timeline view that lets you adjust frames, sounds, and other characteristics easily.

Jira’s latest update allows AI agents and humans to work side by side
Enterprise software giant Atlassian is rolling out a new way for humans and AI agents to work together that it hopes will help teams produce “10x the work without 10x the chaos.” Atlassian announced “agents in Jira” on Wednesday. This update gives users of the company’s project management software Jira the ability to assign and manage work for their digital agents from the same dashboard they use for their human employees. Agents in Jira allows enterprises to assign tasks and tickets to AI agents, just as they would to people. It also tracks how the work is coming along and sets deadlines, among other things. Users can also loop in AI agents in the middle of an existing project. The feature is now available in open beta. The update is meant to give users the same visibility into the work their agents are doing as into their human employees’ work, Tamar Yehoshua, Atlassian’s new chief product and AI officer, told TechCrunch. “Atlassian has been in the business, for decades, of collaboration software helping people get work done,” Yehoshua said. “Now, you enter agents, and agents are now doing a lot of that work, and so you want to be able to coordinate between humans and agents.” But Atlassian understands that just giving people more avenues to automate doesn’t necessarily mean less work, Yehoshua said. That’s why the key part of this update is that everything happens within the same dashboard, she said. “You’ve been hearing in the zeitgeist lately that all of these agents are creating more work for people, and in some ways, more chaos,” Yehoshua said. “What we’re really good at is putting order to that chaos.” As enterprises continue to figure out how and where they can find a return on their AI investments, this kind of view could prove beneficial. The ability to compare the work of agents versus humans on the same project could help enterprises figure out where to deploy agents to begin with and which tasks should remain human-led. 
This announcement is just the first of many, Yehoshua said, as the company looks to increasingly add AI tools into its existing software products. “The goal is to enable people to work more productively with AI and I think this is a step,” Yehoshua said. “It’s only the beginning of the journey. It’s a long journey, but this is a really important step of how to integrate AI into the workflows that you already have, which I’m really excited about.”
