Latest AI News

How to Use the Gemini-Powered Auto Browse Feature in Google Chrome
Gemini in Google Chrome first launched in September 2025 as a side-panel integration that could answer questions about webpages. However, it was not until January that the feature really began gaining popularity. The reason? The new Auto Browse feature. Similar to Perplexity's Comet or the Dia browser, it can perform tasks such as booking tickets and making purchases autonomously by temporarily taking control of the browser. Gemini in Chrome is currently only available in the US, but is expected to expand to more regions soon.
View

Microsoft’s New Copilot Cowork Can Take Actions and Autonomously Complete Tasks
Microsoft introduced Copilot Cowork, an agentic AI tool for enterprises, on Monday. Built using Anthropic's Claude artificial intelligence (AI) models and the Work IQ intelligence layer, the tool is aimed at transforming Copilot's capabilities from chats to actions. In practice, it will gain agentic capabilities by drawing context from the Microsoft 365 suite of applications and then performing these tasks without needing connectors or complex integration with enterprise systems. The Redmond-based tech giant says that users will be able to automate a range of tasks using Copilot Cowork.
View

Sandbar secures $23M Series A for its AI note-taking ring
Sandbar, a startup by former Meta employees Mina Fahmi and Kirak Hong, attracted much attention last year when it showed off its note-taking wearable, the Stream ring. The company has now raised $23 million in a Series A funding round led by Adjacent and Kindred Ventures.

The company’s smart ring is focused on note-taking, similar to products by Plaud or Omi, rather than health tracking like Oura’s products. The ring has a microphone that’s off by default but can be activated using a flat, touch-sensitive panel at the top. You can hold this touch panel to record notes, chat with an AI assistant in the accompanying phone app, and access media controls to play, pause, skip tracks, and adjust the volume. Notably, the mic on the ring seems to be tuned for proximity, so you have to lift your hand to your face to take notes.

Fahmi, who previously worked at startups like CTRL-Labs and Magic Leap, said Sandbar had been working on the ring for over two years before coming out of stealth last year, following a testing phase with friends and early adopters. “The response [to the launch] was a lot warmer than we expected, which is really encouraging and meaningful,” Fahmi told TechCrunch. “A lot of people said they could see themselves wearing this.”

Fahmi said the startup is seeing promising traction from its early users: the first batch of pre-orders for the ring sold out last year, which spurred Sandbar to open up a second batch to meet demand. He said some users use the ring over 50 times a day for tasks like planning presentations, trips, or meals. The startup plans to start shipping the smart ring this summer.

Sandbar said it is focusing on refining its app experience and what users can do with their recorded notes. The company is working on a web platform, improving its user interface, and reducing the latency of model responses. In the long term, it wants to enable agentic workflows so that users can take action using their notes.
Fahmi pointed out that Sandbar is building conversational exchanges into its product, as many of its users ask the app’s AI assistant about notes they didn’t manage to finish recording. “Something that we think is necessary is back and forth conversation. Unlike a lot of experiences where you just say one command and it’s either transcribed or acted upon, like, via a smart speaker, Stream is really good at iterative tasks which begin, maybe in conversation or editing a note, but hopefully expand to multi-turn conversations, where you’re Claude Coding in your terminal and you are clarifying things [via voice],” Fahmi said.

Sandbar’s phone app currently only works with the Stream ring, but the company said it is considering opening up access to people who don’t own the ring. The app can be used on its own to take notes in case the ring is charging or has been misplaced.

Sandbar currently has 15 employees, who have previously worked at companies like Amazon, Fitbit, Equinox, Google, and Apple. With the new funding, it plans to double its software and machine learning teams and hire marketing staff.

The category of hardware devices for note-taking is growing. Companies like Plaud are producing devices that can take notes for meetings, and Pebble aims to ship a cheap $75 ring this year. Then there are startups like Taya, which are taking a premium approach by designing their products as jewelry to target a wider user base.

Adjacent’s Nico Wittenborn has experience investing in voice-focused startups — he backed Blinkist, which can summarize entire books, when he was with Insight Venture Partners. He thinks that Sandbar’s Stream has a better form factor than other note-taking devices, and that the action of lifting your hand to take a note signals the intent of a private use case, unlike other note-takers that might record conversations around you.
Wittenborn also thinks some of the hardware out there caters only to “tech bros,” and that Sandbar’s form factor makes it suited for widespread adoption. The startup previously raised $13 million from True Ventures last November, bringing its total funding to $36 million to date.
View

Google rolls out new Gemini capabilities to Docs, Sheets, Slides, and Drive
Google announced on Tuesday that it’s bringing a slew of new Gemini-powered AI capabilities to Docs, Sheets, Slides, and Drive. The new features let users do things like quickly generate fully formatted first drafts, slides, and sheets based on information from their Gmail, Chat, and Drive. The tools are designed to make the apps more personal and capable of helping users get things done faster, right within the platforms themselves, instead of needing to switch to a separate tool or chatbot.

A new “Help me create” tool in Docs lets users describe what they want to create, and Gemini will follow their instructions and gather information from Drive, Gmail, and Chat to generate a first draft. For example, you can ask Gemini to “draft a newsletter for our neighborhood association using the meeting minutes from my January HOA meeting and the list of upcoming events.”

Once you have a first draft, Gemini can help refine specific sections without regenerating the entire document. You can also use the “Help me write” tool to do things like improve clarity or add details where needed. Additionally, if you have multiple people working on a draft with differing voices and tones, you can now use a new “Match writing style” feature to help unify the documents. Gemini will suggest edits to make the tone and voice consistent throughout the draft.

Docs is also getting a new “Match the format” tool that lets you mirror the structure and style of another document. For example, if you find a travel itinerary template you like, Gemini can fill it in with your own trip details by pulling information from your emails, such as flight confirmations, hotel bookings, and rental car reservations.

As for Sheets, Gemini is evolving from a tool you work in to a collaborative partner, Google says. With a single prompt, it will pull relevant data from across your Gmail, Chat, and Drive to quickly create a fully formatted spreadsheet.
For example, you could ask it to “organize my upcoming move to Chicago. Create a checklist for packing by room, a contact list for utilities, and a spreadsheet to track moving company quotes from my inbox.”

For more complex tasks, you can now use a “Fill with Gemini” tool to populate tables even faster. The feature can instantly generate custom text, categorize and summarize data, or pull in real-time information from Google Search. For instance, if you’re managing your college applications, you might have a tracker for all your application details. Instead of manually looking up each school’s deadlines, tuition, and other information, you can set up column headers for the details you need, then let Gemini fill in the table automatically by pulling relevant information from the web.

Over on Slides, you can now have Gemini generate a fully editable slide in your deck that matches your overall theme, drawing on context from your files, emails, and the web. If you don’t like a slide, you can ask Gemini to adjust it by asking it to do things like “match the colors to the rest of my deck” or “make this more minimal.” In the future, Google says Slides will let you create a complete presentation from a single prompt, using relevant context when needed. For instance, you will be able to ask Gemini to “create a 5-slide deck for my upcoming Tokyo trip.”

Google also announced that it’s making Drive no longer just a place to store your files, but more of an active collaborator. Now, when you search in Drive using natural language, Gemini will surface an “AI Overview” at the top of your results, like the ones you see on Google Search. The overview summarizes the most relevant information from your files while citing its sources, so you don’t need to open a document to find what you’re looking for. A new “Ask Gemini in Drive” feature lets you ask complex questions across your documents, emails, calendar, and the web.
For example, you could select all of your tax-related files and ask, “What should I ask my tax advisor before filing this year’s taxes?” and get a detailed answer based on your actual data. All the new features are rolling out today in beta and will first be available to Google AI Ultra and Pro subscribers. They’re available in English worldwide for Docs, Sheets, and Slides, and in the U.S. for Drive.
View

Zoom introduces an AI-powered office suite, says AI avatars for meetings arrive this month
Zoom’s AI-powered avatars, which can represent users in online meetings, will be available to use later this month, the company announced on Tuesday, alongside news of other tools and services. Notably, the company is introducing its own AI Docs, Slides, and Sheets apps, an AI agent builder for non-technical people, and a voice translator for meetings. The company said its AI-powered productivity apps will be available as a preview in the spring.

The AI avatars, announced last year, are the long-anticipated photorealistic avatars that can mimic your appearance, expressions, and lip and eye movements. Designed to stand in for you when you’re not “camera-ready,” Zoom says the avatars will work in online meetings as well as in its asynchronous video messaging product. Alongside the AI avatars, the company is adding deepfake detection technology for meetings to alert participants of possible audio or video impersonation.

Other new tools include a suite of AI-powered office apps: AI Docs, Slides, and Sheets. The company said that, based on meeting transcripts and data from other services, users can create document drafts, spreadsheets with data, or presentations.

In addition, Zoom’s AI Companion 3.0 is now coming to its desktop app, after first arriving on the web in September. The company said that the AI Companion’s monthly active users more than tripled year-over-year in Q4 FY 2026. Workvivo, its app for employee communication, will receive the AI assistant as well. The assistant can connect to services such as Slack, Salesforce, ServiceNow, Gmail, Outlook, Asana, and Jira to let users ask questions across different knowledge bases.

Zoom is not alone in creating AI-first office software. There are established companies like Canva and new startups like Context that are trying to do the same. Salesforce-owned Slack has also been adding more AI features to its team communication apps.
To address the growing interest in agentic workflows, users can now create custom agents, using natural language prompts, that work across surfaces. Once created, agents can be mentioned in chat to get tasks done. For developers, Zoom is making available its speech, vision, and language intelligence APIs, which can be deployed on-premises or in the cloud. Plus, the company is updating its chat experience by using AI to surface key insights and summarize threads. To complement these changes, Zoom says it plans to unify design across surfaces such as desktop, mobile, and web for easier access to AI tools like notes, meeting questions, and transcriptions.
View

Adobe is debuting an AI assistant for Photoshop
Adobe announced on Tuesday that its AI assistant for Photoshop is now available to users in beta on the web and in the mobile apps. The company is also adding new AI-powered image editing capabilities to Firefly, its tool for media generation and editing.

The creative tooling company first announced an AI assistant for Photoshop during its MAX event in October. The feature, now rolling out to users, can help them remove objects or people from images, change colors, or adjust lighting through prompts. Users can also use natural language to instruct the AI assistant to add a soft glow, crop in a specific format, enhance shadows, or transform the background to give an image a different look. Adobe said that paid users of Photoshop will be able to create unlimited generations with the AI assistant through April 9, while free users will get 20 generations to start with.

In addition, the company is adding a new feature called AI markup in public beta, which lets people draw markers on the screen and use the AI assistant to transform the marked objects. For instance, you can draw a flower, or mark an object for removal to modify the background.

What’s more, Adobe is adding new image editing tools to its Firefly media creation tool. Firefly is getting Generative Fill, which has been present in Photoshop for a few years now, for replacing or adding objects and modifying the background accordingly. Firefly is also gaining a generative remove feature for object removal, a generative expand feature for increasing an image’s size using AI, and a generative upscale feature. The company is also adding a one-click tool to remove the background from images.

The company said in February that it is allowing unlimited generations for Firefly subscribers to encourage increased usage.
Over time, it has also added more than 25 third-party video and image generation models, including Google’s Nano Banana 2, OpenAI’s Image Generation, Runway’s Gen-4.5, and Black Forest Labs’ Flux.2 Pro.
View

YouTube expands AI deepfake detection to politicians, government officials, and journalists
YouTube is expanding its likeness detection technology, which identifies AI-generated deepfakes, to a pilot group of government officials, political candidates, and journalists, the company announced Tuesday. Members of the pilot group will gain access to a tool that detects unauthorized AI-generated content and lets them request its removal if they believe it violates YouTube policy.

The technology itself launched last year to roughly 4 million YouTube creators in the YouTube Partner Program, following earlier tests. Similar to YouTube’s existing Content ID system, which detects copyright-protected material in users’ uploaded videos, the likeness detection feature looks for simulated faces made with AI tools. These tools are sometimes used to try to spread misinformation and manipulate people’s perception of reality, as they leverage the deepfaked personas of notable figures — like politicians or other government officials — to say and do things in these AI videos that they didn’t in real life.

With the new pilot program, YouTube aims to balance users’ free expression with the risks associated with AI technology that can generate a convincing likeness of a public figure. “This expansion is really about the integrity of the public conversation,” said Leslie Miller, YouTube’s Vice President of Government Affairs and Public Policy, in a press briefing ahead of Tuesday’s launch. “We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we’re also being careful about how we use it,” she noted.

Miller explained that not all of the detected matches would be removed when requested. Instead, YouTube would evaluate each request under its existing privacy policy guidelines to determine whether the content is parody or political critique, which are protected forms of free expression.
The company noted it’s advocating for these protections at a federal level, too, with its support for the NO FAKES Act in D.C., which would regulate the use of AI to create unauthorized recreations of an individual’s voice and visual likeness.

To use the new tool, eligible pilot testers must first prove their identity by uploading a selfie and a government ID. They can then create a profile, view the matches that show up, and optionally request their removal. YouTube says it plans to eventually give people the ability to prevent uploads of violating content before they go live or, possibly, to monetize those videos, similar to how its Content ID system works. The company would not confirm which politicians or officials would be among its initial testers, but said the goal is to make the technology broadly available over time.

These AI videos will be labeled as such, but the placement of the labels isn’t consistent. For some, the label appears in the video’s description, while videos on more “sensitive topics” will carry the label on the video itself. This is the same approach YouTube takes with all AI-generated content. “There’s a lot of content that’s produced with AI, but that distinction’s actually not material to the content itself,” explained Amjad Hanif, YouTube’s Vice President of Creator Products, of the label’s placement. “It could be a cartoon that is generated with AI. And so I think there’s a judgment on whether it’s a category that maybe merits from a very visible disclaimer,” he said.
YouTube isn’t currently sharing how many of these AI deepfakes creators have removed using the detection technology, but it noted that the amount of content removed so far has been “very small.” “I think for a lot of [creators], it’s just been the awareness of what’s being created, but the volume of actually removal requests is really, really low because most of it turns out to be fairly benign or additive to their overall business,” Hanif said. That may not be the case with deepfakes of government officials, politicians, or journalists. In time, YouTube intends to bring its deepfake detection technology to more areas, including recognizable spoken voices and other intellectual property like popular characters.
View

Legora reaches $5.55 billion valuation as AI legaltech boom endures
Legora, an AI platform for lawyers, is now valued at $5.55 billion following a $550 million Series D set to fuel its growth in the U.S. That’s despite growing competition with rival Harvey, but also with Microsoft Copilot and generalist large language models (LLMs). Publicly listed legal software companies saw their stocks drop when Anthropic unveiled a legal plugin for Claude.

Legora is built on top of LLMs, and mostly on Claude, but its positioning as a platform that supports lawyers with complex cases gives CEO Max Junestrand some peace of mind. “It’s amazing that everybody can have their own pocket lawyer in Claude, but we’re not solving for the same use case,” he said via livestream at the Techarena conference in Stockholm.

With a focus on embedding itself into its clients’ workflows, Legora’s platform is now used by 800 law firms and legal teams — and investors took note. Its Series D was led by Accel, with participation from existing investors Benchmark, Bessemer, General Catalyst, ICONIQ, Redpoint Ventures, and Y Combinator, and new backers including Alkeon Capital, Bain Capital, Firstmark Capital, Menlo Ventures, Salesforce Ventures, Sands Capital, and Starwood Capital.

There are other signs that investors are bullish about AI legaltech. Legora’s Series D and valuation jump come just a few months after its October 2025 $150 million Series C round, led at a $1.8 billion valuation. Its competitor, Harvey, which is backed by a16z, is already valued at $8 billion and is now reportedly seeking to raise at an $11 billion valuation. According to Dealroom, the two are also on almost identical trajectories with regard to revenue. Both are also branching out globally: Harvey is pushing hard into Europe, and Legora in the opposite direction.

Formerly known as Judilica, then Leya, the startup is an alum of Stockholm’s SSE Business Lab, a known breeding ground for unicorns.
But after participating in YC’s winter 2024 batch, Legora is now headquartered in New York and keen to keep pushing in the U.S. market, where its growth has exceeded the expectations it had coming out of Europe. “It’s nine to one in terms of legal spending; it turns out the Americans love to sue each other much more than we like to do in Europe,” Junestrand joked while speaking to Techarena’s audience.

The team has grown globally, from 40 to 400 team members over the past year, according to a press release. In addition to New York and Stockholm, Legora has offices in Bangalore, London, and Sydney, with more to follow. Alongside its Series D, Legora announced it would open offices in Houston and Chicago, with plans to open additional local hubs and grow to more than 300 employees across its U.S. offices by the end of 2026.
View

Google gives in to users’ complaints over AI-powered ‘Ask Photos’ search feature
In a slight capitulation to those who don’t want AI infused into their everyday apps, Google said it’s now offering a toggle that allows users of its Google Photos app to return to the previous, and often faster, “classic” search experience instead of the newer AI-powered option known as “Ask Photos.”

The Ask Photos feature, launched in the U.S. in 2024, lets users search their photos using natural language queries, including complex requests. The product’s rollout was briefly paused last summer as the company worked to address issues around latency, following user feedback. Some Google Photos users never warmed up to the AI-powered experience, complaining that Ask Photos still failed to find some of their photos and that searches were less accurate than before. While Google offered an option to disable the use of Gemini in Google Photos, it was buried in the settings and was often overlooked.

The company said it will offer users an easier and more visible way to switch between the two search experiences. Via a new toggle button on the search screen, users can turn the Ask Photos AI search off and view the classic results instead. Google said it will still lead with whichever results best fit the user’s query, however.

In the announcement, shared by Google Photos lead Shimrit Ben-Yair, the company suggested that the move was driven by users’ complaints about the Ask Photos feature. In a post on X, Ben-Yair wrote, “We’ve heard your feedback that you want more control over the type of results you see when searching in Google Photos.” The exec also noted that Google had improved the quality of some of the most popular searches, also based on user feedback. “We know search in Photos is one of the most loved and used features and we’re committed to getting this experience right, so please keep the feedback coming! It helps us build a more magical experience for everyone,” she said.
View

Thinking Machines Lab inks massive compute deal with Nvidia
OpenAI co-founder Mira Murati’s two-year-old AI research lab has signed a sizable deal with semiconductor giant Nvidia. Murati’s Thinking Machines Lab announced on Tuesday that it has entered into a multi-year strategic partnership with the chipmaker. The size of the deal was not disclosed; it includes the AI research lab deploying at least one gigawatt of Nvidia’s Vera Rubin systems, which were released earlier this year, starting in 2027.

Nvidia is also making a strategic investment in Thinking Machines Lab, which has raised more than $2 billion since its February 2025 founding from investors including Andreessen Horowitz, Accel, and Nvidia, among others, including rival chipmaker AMD’s venture arm. The seed-stage company is valued at more than $12 billion and is working to build AI models that create reproducible results. The company has not released any products.

TechCrunch reached out to Thinking Machines Lab and Nvidia for more information regarding the specifics of the deal terms and investment. Thinking Machines Lab declined to comment beyond the release. The partnership also includes a commitment to develop training and serving systems for Nvidia architecture, according to an Nvidia press release. “Nvidia’s technology is the foundation on which the entire field is built,” Murati said in the deal’s blog post. “This partnership accelerates our capacity to build AI that people can shape and make their own, as it shapes human potential in turn.”

Thinking Machines Lab has seen a number of high-profile exits in its young history. The company’s co-founder, Andrew Tulloch, left the startup for a role at Meta in October. Earlier this year, three additional co-founders, Barret Zoph, Luke Metz, and Sam Schoenholz, left to return to OpenAI.

This deal comes as AI companies remain hungry for any compute they can get. Nvidia CEO Jensen Huang has predicted that companies could spend $3 trillion to $4 trillion on AI infrastructure by the end of the decade.
While we don’t know the value of this specific deal, the scale is believable: in 2025, rival OpenAI reportedly inked a historic $300 billion compute deal with Oracle.
View

Nasscom to Train 1.5 Lakh Developers in AI Skills, Host ₹10 Lakh AI Hackathon
Three-phase program to culminate in national Agentic AI hackathon with ₹10 lakh prize.
View

The Hardest Part of AI Adoption Isn’t the Technology
Deloitte’s Rohit Tandon believes organisations often treat AI as an experiment rather than a business transformation opportunity.
View
