
Google Chrome has its own version of Windows’ troubled Recall feature


Google has announced a number of AI features for the Chrome web browser, one of which can search through your browsing history using plain language. It’s a bit like a toned-down version of Microsoft’s Recall feature, which did this on the level of the entire operating system.

The example Google gives involves typing a question like “What was that ice cream shop I looked at last week?” into your search history. Chrome then digs through your history, pulls up sites relevant to your question, and suggests one as the “AI Best Match.”
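Google hasn’t explained how the matching works beyond crediting its AI models, but conceptually this is a relevance-ranking problem: score each saved page against the question and surface the best match. As a toy sketch only (invented example data, and in no way Google’s actual implementation, which presumably uses Gemini embeddings rather than keyword overlap), here is how a plain-language query could be ranked against page titles:

```python
import math
import re
from collections import Counter

# Invented browsing-history entries (title, url) for illustration only.
HISTORY = [
    ("Best ice cream shops in Portland", "https://example.com/icecream"),
    ("Python tutorial: list comprehensions", "https://example.com/python"),
    ("Laptop deals: lightweight ultrabooks", "https://example.com/laptops"),
]

def tokens(text):
    """Lowercase word counts for a piece of text."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def best_match(query):
    """Return the history entry whose title best matches the query."""
    q = tokens(query)
    return max(HISTORY, key=lambda entry: cosine(q, tokens(entry[0])))

print(best_match("What was that ice cream shop I looked at last week?"))
```

The real feature almost certainly understands meaning rather than shared keywords (“dessert place” should still find the ice cream shop), which is where the “latest Google AI and Gemini models” would come in.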


As with Recall, Google clarifies that the feature is entirely optional and can easily be turned off in settings. It also notes that it doesn’t include browsing data from Incognito mode.


While this does seem genuinely useful, many of the same concerns raised about Recall apply here. Google says the feature is powered by the “latest Google AI and Gemini models,” but it doesn’t say whether its AI is aware of every website you visit. Nor does it say whether you can block access to certain sites, especially those with sensitive data such as medical records or banking information.

Another unknown is whether Google only sees the titles of the pages you visited or whether it is contextually aware of what you do on a site. For example, if you asked it something like, “what was the app I was talking to my friend, Luke Larsen, on” or “what was the site I bought a laptop on,” I’m curious whether it would be able to provide an answer.

These caveats are important, as privacy and security concerns are ultimately what gave Microsoft so much trouble with Recall, which still hasn’t shipped after being pulled from the Copilot+ PC launch.

According to Google, the free update will be available in the U.S. in the coming weeks.


In addition to the search history feature, Google announced that it is bringing some new Google Lens features to Chrome on desktop.

Similar to how it works on mobile devices, you can now use the Google Lens icon in the address bar to unlock these capabilities. From there, you can select just about any object from a photo or video and ask further questions about it. You can even use multisearch to refine it further by color or other details.

The obvious example might be to search for an object in an image to shop for yourself, but you could also do something like solve an equation written in a YouTube video or identify a plant in a photo. Google indicates that in some cases, you may even get an AI Overview as a response.

Luke Larsen
Former Digital Trends Contributor
Luke Larsen is the Senior Editor of Computing, managing all content covering laptops, monitors, PC hardware, Macs, and more.
This upcoming AI feature could revolutionize Google Chrome

One of the latest trends in the generative AI space is AI agents, and Google may be prepping its own agent to be a feature of an upcoming Gemini large language model (LLM).

The development, called Project Jarvis, is an AI agent built into the Google Chrome browser that will be able to execute common tasks after being given a short query or command, with more independence than current tools. The inclusion of AI agents in the next Chrome update has the potential to be the biggest overhaul since the browser launched in 2008, according to The Information.

Google expands its AI search function, incorporates ads into Overviews on mobile

Google announced on Thursday that it is "taking another big leap forward" with an expansive round of AI-empowered updates for Google Search and AI Overview.
Earlier in the year, Google incorporated generative AI technology into its existing Lens app, which lets users identify objects within a photograph and search the web for more information about them. Rather than returning a list of potentially relevant websites, the app now returns an AI Overview based on what it sees. At the I/O conference in May, Google promised to expand that capability to video clips.
With Thursday's update, "you can use Lens to search by taking a video, and asking questions about the moving objects that you see," Google's announcement reads. The company suggests that the app could be used to, for example, provide personalized information about specific fish at an aquarium simply by taking a video and asking your question.
Whether this works on more complex subjects, like analyzing your favorite NFL team’s previous play, or on fast-moving objects, like identifying the makes and models of cars in traffic, remains to be seen. If you want to try the feature for yourself, it’s available globally (though only in English) through the Google app on iOS and Android. Navigate to Search Labs and enroll in the “AI Overviews and more” experiment to get access.

You won't necessarily have to type out your question either. Lens now supports voice questions, which allows you to simply speak your query as you take a picture (or capture a video clip) rather than fumbling across your touchscreen in a dimly lit room. 
Your Lens-based shopping experience is also being updated. In addition to the links to visually similar products from retailers that Lens already provides, it will begin displaying "dramatically more helpful results," per the announcement. Those include reviews of the specific product you're looking at, price comparisons from across the web, and information on where to buy the item. 

Meta and Google made AI news this week. Here were the biggest announcements

From Meta's AI-empowered AR glasses to its new Natural Voice Interactions feature to Google's AlphaChip breakthrough and ChromaLock's chatbot-on-a-graphing calculator mod, this week has been packed with jaw-dropping developments in the AI space. Here are a few of the biggest headlines.

Google taught an AI to design computer chips
Deciding how and where all the bits and bobs go into today's leading-edge computer chips is a massive undertaking, often requiring agonizingly precise work before fabrication can even begin. Or it did, at least, before Google released its AlphaChip AI this week. Similar to AlphaFold, which generates potential protein structures for drug discovery, AlphaChip uses reinforcement learning to generate new chip designs in a matter of hours, rather than months. The company has reportedly been using the AI to design layouts for the past three generations of Google’s Tensor Processing Units (TPUs), and is now sharing the technology with companies like MediaTek, which builds chipsets for mobile phones and other handheld devices.
