
Gemini in Google Maps may be the best use of mobile AI yet

Google Maps on the Asus Zenfone 11 Ultra.
Andy Boxall / Digital Trends

We scarcely need reminding that Google is putting AI into everything, but its latest push is probably one of the most interesting and immediately helpful yet, as Google Maps has now entered its Gemini era.

Vast experience

Before going into the many AI updates happening across all of Google’s “Geo” (the collective name for all its mapping tools) departments, it’s helpful to understand just how rich Google’s location data already is. Built up over the last 20 years, Google’s mapping coverage spans more than 250 countries and territories across the world, and Google Maps alone has more than 2 billion active users each month.


Google already uses AI in its mapping products, such as the Lens overlay in Maps’ AR mode, which puts live place information in front of you on the map. Google is now using AI to improve the photorealistic 3D tour in Immersive View, a feature first launched last year, adding live data about a location, including weather, parking, and turn-by-turn information. Immersive View expands to 150 cities this week and will include university campuses, too.

While helpful, it’s in Google Maps where AI and Google Gemini are likely to make the most impact. A new feature called Ask Maps allows you to use the search bar for more complex questions, such as asking about things to do in a city, and Gemini will return a curated list of ideas and suggestions. Further down the results page, Gemini will summarize user reviews, and you’ll be able to ask further questions about granular details, such as whether a location is quiet or noisy.

Driving with Maps and Waze

Screenshots from new AI features in Google Maps.
Google

The Ask Maps feature leverages another huge strength of Google Maps, as there are more than 500 million contributors and editors and more than 100 million map updates daily. It creates a vast, real-time database for Gemini to pull from when you search for the latest information, in addition to using sources from across the web. Don’t worry, the summaries will be balanced, so you’ll get to see the good and bad.

Ask Maps will launch in the U.S. this week, and it joins some updates to driving navigation. Further leaning on its successful crowdsourcing system, drivers using Maps will be able to add real-time information on any weather disruptions. Another new feature, Enhanced Navigation, examines a planned route and can recommend places to stop along the way while expanding information on lane details, restrictions, crosswalks, and more. When you arrive, Maps will highlight parking options and, once you’ve stopped, prompt you to save your parking location.

The Enhanced Navigation suite will launch in 30 U.S. cities in November, while the Explore and weather reporting system will be available globally this week. Google-owned Waze will also begin testing a new reporting system, which will use Gemini for conversational voice control. For example, you can verbally tell Waze there’s a double-parked car holding you up, and Waze will add an alert to the map so other drivers know the road is partially blocked. Waze also shares this data, along with information it brings in from city partners, across other Google Geo products. It will launch as a beta for trusted testers globally first, with an expanded launch expected early next year.

Gemini has already replaced Google Assistant on many Android smartphones and is being used extensively across other Google products, from Google Drive to Gmail. Not all of its functionality is immediately helpful to everyone, but Gemini in Maps looks to be one of the more interesting additions, and one many people will likely find a reason to try right away.

Andy Boxall
Andy is a Senior Writer at Digital Trends, where he concentrates on mobile technology, a subject he has written about for…
Thanks to Gemini, you can now talk with Google Maps
Gemini’s Ask about place chip in Google Maps.

Google is steadily rolling out contextual improvements to Gemini that make it easier for users to derive AI’s benefits across its core products. For example, opening a PDF in the Files app automatically shows a Gemini chip to analyze it. Likewise, summoning it while using an app triggers an “ask about screen” option, with live video access, too.
A similar treatment is now being extended to the Google Maps experience. When you open a place card in Maps and bring up Gemini, it now shows an “ask about place” chip right above the chat box. Gemini has been able to access Google Maps data for a while now using the system of “apps” (formerly extensions), but it now proactively appears inside the Maps application.

The name is pretty self-explanatory. When you tap the “ask about place” button, the selected location is loaded as a live card in the chat window so Gemini can offer contextual answers.

Gemini app finally gets the world-understanding Project Astra update
Gemini Live App on the Galaxy S25 Ultra broadcast to a TV showing the Gemini app with the camera feature open

At MWC 2025, Google confirmed that its experimental Project Astra assistant will roll out widely in March. It seems the feature has started reaching users, albeit in a phased manner, beginning with Android smartphones.
On Reddit, one user shared a demo video that shows a new “Share Screen With Live” option when the Gemini Assistant is summoned. Moreover, the Gemini Live interface also received two new options for live video and screen sharing.
Google has also confirmed to The Verge that the aforementioned features are now rolling out. So far, Gemini has only been capable of contextual on-screen awareness courtesy of the “Ask about screen” feature.

Project Astra is the future of Gemini AI

Cost-cutting strips Pixel 9a of the best Gemini AI features in Pixel 9
Person holds Pixel 9a in hand while sitting in a car.

The Pixel 9a has been officially revealed, and while it’s eye candy, there are some visible cutbacks compared to the more premium Pixel 9 and 9 Pro series phones. Less visible cutbacks include lower RAM than the Pixel 9 phones, which can limit the new mid-ranger’s ability to run AI applications despite using the same Tensor G4 chipset.

Google's decision to limit the RAM to 8GB, compared to the 12GB on the more premium Pixel 9 phones, sacrifices its ability to run certain AI tasks locally. Ars Technica has reported that as a result of the cost-cutting, the Pixel 9a runs an "extra extra small" (XXS) variant of the Gemini Nano 1.0 model that drives on-device AI functions, instead of the "extra small" variant on the Pixel 9.
