Developers can now add live Google Maps data to Gemini-powered AI app outputs


Google is adding a new feature for third-party developers building on top of its Gemini AI models that rivals like OpenAI’s ChatGPT, Anthropic’s Claude, and the growing array of Chinese open source options aren’t likely to get anytime soon: grounding with Google Maps.
With this addition, developers can connect the reasoning capabilities of Google’s Gemini AI models with live geospatial data from Google Maps, allowing applications to provide detailed, location-relevant answers to user questions, such as opening hours, reviews, or the atmosphere of a specific place.
Using data from more than 250 million places, developers can now build more intelligent and responsive location-aware experiences.
This is especially useful for applications where proximity, real-time availability, or location-specific personalization are important, such as local search, delivery services, real estate, and trip planning.
When the user’s location is known, developers can provide the latitude and longitude in the request to improve response quality.
By tightly integrating real-time and historical map data into the Gemini API, Google enables applications to generate informed, location-specific responses with factual accuracy and contextual depth uniquely possible through its mapping infrastructure.
Merging AI and geospatial intelligence
The new feature can be accessed in Google AI Studio, where developers can try a live demo powered by the Gemini Live API. Models that support grounding with Google Maps include:
- Gemini 2.5 Pro
- Gemini 2.5 Flash
- Gemini 2.5 Flash Lite
- Gemini 2.0 Flash
In one demonstration, a user asked for recommendations for Italian restaurants in Chicago.
Using Maps data, the assistant pulled up the top-rated options and clarified a misspelled restaurant name before identifying the correct location with accurate business data.
Developers can also get a context token to embed a Google Maps widget into their app’s UI. This interactive component displays photos, reviews, and other familiar content typically found in Google Maps.
Integration is handled via the generateContent method in the Gemini API, where developers include googleMaps as a tool. They can also enable a Maps widget by setting a parameter in the request. The widget, rendered using a returned context token, can provide a visual layer alongside the AI-generated text.
Use cases across industries
The Maps grounding tool is designed to support a wide range of practical use cases:
- Itinerary generation: Travel apps can create detailed daily plans with route, timing, and location information.
- Personalized local recommendations: Real estate platforms can highlight listings near kid-friendly amenities such as schools and parks.
- Detailed location questions: Applications can answer specific questions, such as whether a café offers outdoor seating, using community reviews and Maps metadata.
Developers are encouraged to enable the tool only when the geographic context is relevant, to optimize both performance and costs.
According to the developer documentation, pricing starts at $25 per 1,000 grounded prompts – a cost that can add up quickly for high-volume applications.
Combining Search and Maps for improved context
Developers can use Grounding with Google Maps alongside Grounding with Google Search in the same request.
While the Maps tool provides factual data such as addresses, opening hours and reviews, the Search tool adds broader context from web content, such as news or event listings.
For example, if a user asks about live music on Beale Street, the combined tools can supply location data from Maps and event details from Search.
According to Google, internal testing shows that using both tools together leads to significantly improved response quality.
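A combined request would presumably just list both tools together. In the sketch below, `googleMaps` is the tool name given in the article; `googleSearch` follows the same camelCase convention used for Grounding with Google Search, but the exact field names and body shape are assumptions, not the documented schema.

```python
# Hypothetical generateContent body enabling Maps and Search grounding
# in the same request (field names are assumptions; see lead-in).
combined_request = {
    "contents": [{"parts": [{
        "text": "Is there live music on Beale Street tonight, and where?"
    }]}],
    "tools": [
        {"googleMaps": {}},    # factual place data: addresses, hours, reviews
        {"googleSearch": {}},  # broader web context: news, event listings
    ],
}
```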
Unfortunately, it doesn’t appear that the Google Maps grounding includes live vehicle traffic data – at least not yet.
Customization and flexibility for developers
The experience is built for customization. Developers can customize system prompts, choose from different Gemini models, and configure voice settings to tailor interactions.
The demo app in Google AI Studio is also remixable, allowing developers to test ideas, add features, and iterate designs within a flexible development environment.
The API returns structured metadata (including source links, place IDs, and citation ranges) that developers can use to build inline citations or verify AI-generated output.
This supports transparency and increases trust in user-facing applications. Google also requires that Maps-based resources be clearly attributed and linked back to the source using their URI.
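One way to use those citation ranges is to splice numbered source markers into the generated text. The helper below is a generic sketch of that idea; the `endIndex`/`sourceIndices` support records are stand-ins for whatever structure the API actually returns, and the sample data is invented.

```python
def add_inline_citations(text: str, supports: list[dict]) -> str:
    """Insert [n] markers at the end index of each cited span.

    Each support record is assumed to carry an 'endIndex' character
    offset and a list of 'sourceIndices' pointing into a source list
    (hypothetical field names, not the documented schema).
    """
    # Apply from the end of the string so earlier offsets stay valid.
    for support in sorted(supports, key=lambda s: s["endIndex"], reverse=True):
        marker = "".join(f"[{i + 1}]" for i in support["sourceIndices"])
        text = text[:support["endIndex"]] + marker + text[support["endIndex"]:]
    return text

# Invented example data for illustration.
answer = "Cafe Lura has outdoor seating. It is open until 10pm."
supports = [
    {"endIndex": 30, "sourceIndices": [0]},
    {"endIndex": 53, "sourceIndices": [0, 1]},
]
print(add_inline_citations(answer, supports))
# → Cafe Lura has outdoor seating.[1] It is open until 10pm.[1][2]
```

Each marker could then be rendered as a link to the corresponding Google Maps URI, satisfying the attribution requirement mentioned above.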
Implementation considerations for AI builders
For engineering teams integrating this capability, Google recommends the following:
- Provide the user’s location context, when known, for better results.
- Display Google Maps source links directly below the relevant content.
- Enable the tool only when the query has a clear geographic context.
- Monitor latency, and disable grounding when performance is critical.
Grounding with Google Maps is currently available worldwide, although it is prohibited in several regions (including China, Iran, North Korea, and Cuba) and is not authorized for emergency-response use.
Availability and access
Grounding with Google Maps is now generally available via the Gemini API.
With this release, Google continues to expand the capabilities of the Gemini API, allowing developers to build AI-powered applications that understand and respond to the world around them.