
Google Gemini can now tap into your search history

Google Gemini app on Android.
Nadeem Sarwar / Digital Trends

Google has announced a wide range of upgrades for its Gemini assistant today. To start, the new Gemini 2.0 Flash Thinking Experimental model now accepts file uploads as input and has received a speed boost.

The more notable update, however, is a new opt-in feature called Personalization. In a nutshell, when you ask Gemini a question, it takes a peek at your Google Search history and offers a tailored response.


Down the road, Personalization will expand beyond Search. Google says Gemini will also tap into other ecosystem apps such as Photos and YouTube to offer more personalized responses. It’s somewhat like Apple’s delayed AI features for Siri, which even prompted the company to pull its ads.


Search history drives Gemini’s answers

Gemini personalization feature.
Google

Starting with the Google Search integration, if you ask the AI assistant for nearby cafe recommendations, it will check whether you have previously searched for that kind of information. If so, Gemini will try to fold that information (and the names you came across) into its response.

“This will enable Gemini to provide more personalized insights, drawing from a broader understanding of your activities and preferences to deliver responses that truly resonate with you,” says Google in a blog post.

Giving Search history access to Gemini.
Google

The new Personalization feature is tied to the Gemini 2.0 Flash Thinking Experimental model and will be available to free users as well as paid Gemini Advanced subscribers. The rollout begins today, starting with the web version, and will soon reach the mobile client, too.

Google says Personalization currently supports more than 40 languages and will be expanded to users across the globe. The feature certainly sounds like a privacy concern, but it is opt-in and comes with the following guardrails:

Warning banner in Gemini.
Google
  1. It will only work when users have connected Gemini with their Search history, enabled Personalization, and activated the Web & App Activity system.
  2. When Personalization is active in Gemini, a banner in the chat window will let users quickly disconnect their Search history.
  3. Gemini will explicitly disclose which user data it is currently using, such as saved info, previous chats, or Search history.

To make the responses even more relevant, users can tell Gemini to reference their past chats, as well. This feature has been exclusive to Advanced subscribers so far, but it will be extended to free users worldwide in the coming weeks.

Integrating Gemini with more apps

Apps that work across Gemini.
Nadeem Sarwar / Digital Trends

Gemini has the ability to interact with other applications — Google’s as well as third-party — using an “apps” system, previously known as extensions. It’s a neat convenience, as it allows users to get work done across different apps without even launching them.

Google is now bringing access to these apps within the Gemini 2.0 Flash Thinking Experimental model. Moreover, the pool of apps is being expanded to include Google Photos and Notes. Gemini already has access to YouTube, Maps, Google Flights, Google Hotels, Keep, Drive, Docs, Calendar, and Gmail.

Users can also enable the apps system for third-party services such as WhatsApp and Spotify by linking them with their Google account. Aside from pulling information and getting tasks done across different apps, it also lets users execute multi-step workflows.

For example, with a single voice command, users can ask Gemini to look up a recipe on YouTube, add the ingredients to their notes, and find a nearby grocery shop. In a few weeks, Google Photos will also be added to the list of apps that Gemini can access.

Multi-app workflow in Gemini.
Screenshot: Google

“With this thinking model, Gemini can better tackle complex requests like prompts that involve multiple apps, because the new model can better reason over the overall request, break it down into distinct steps, and assess its own progress as it goes,” explains Google.

Google is also expanding the context window to 1 million tokens for the Gemini 2.0 Flash Thinking Experimental model. AI tools such as Gemini break text down into tokens, with an average English word translating to roughly 1.3 tokens.

The larger the context window, the more input the model can accept in one go. With the expanded window, Gemini 2.0 Flash Thinking Experimental can now process much bigger chunks of information and work through more complex problems.
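
To put those figures in perspective, here is a minimal Python sketch that estimates a prompt's token count from its word count, using the roughly 1.3 tokens-per-word figure cited above, and checks the estimate against a 1-million-token window. The function names and the heuristic itself are illustrative assumptions for back-of-the-envelope math, not Gemini's actual tokenizer.

# Back-of-the-envelope token math, using the ~1.3 tokens-per-word
# heuristic for English text mentioned above. Illustrative only;
# this is not Gemini's real tokenizer.
def estimate_tokens(text: str, tokens_per_word: float = 1.3) -> int:
    return int(len(text.split()) * tokens_per_word)

def fits_in_context(text: str, context_window: int = 1_000_000) -> bool:
    # True if the rough estimate fits within the 1-million-token window
    # quoted for Gemini 2.0 Flash Thinking Experimental.
    return estimate_tokens(text) <= context_window

if __name__ == "__main__":
    prompt = "Summarize this quarterly report and list the three biggest risks."
    print(estimate_tokens(prompt))   # 13 tokens estimated for 10 words
    print(fits_in_context(prompt))   # True

By that heuristic, a 1-million-token window works out to roughly 770,000 English words of input in a single request.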
