When Pokémon Go launched in 2016, millions of players wandered around parks, streets, and city landmarks chasing virtual creatures. What most people didn’t realise at the time is that those same gameplay moments were quietly building one of the largest visual datasets of the real world ever created.
Nearly a decade later, that massive pool of images is being used to help delivery robots find their way through cities. According to new reporting by MIT Technology Review, the data collected through Pokémon Go has helped train an AI system that can determine location with centimetre-level precision just by analysing nearby buildings and landmarks.
Pokémon Go Players Helped Create A 30-Billion-Image Dataset
Over the years, Pokémon Go players have collectively captured billions of images while interacting with real-world landmarks inside the game. These scans happened whenever players visited PokéStops, gyms, or completed in-game tasks that asked them to photograph statues, buildings, and other points of interest.
The result is an enormous dataset. According to reporting from MIT Technology Review, Niantic’s augmented reality games, including Pokémon Go and Ingress, have generated roughly 30 billion images tied to precise location metadata.
Each upload carries more than the picture itself. It also records where the phone was positioned, the direction it was facing, how the device was moving, and other sensor readings. Combined across millions of players, that information forms detailed multi-angle views of real-world locations.
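To make the description above concrete, here is a minimal sketch of what one such record might look like. The field names and values are illustrative assumptions, not Niantic's actual schema.

```python
from dataclasses import dataclass

@dataclass
class GeotaggedCapture:
    """Hypothetical record pairing one image with its sensor metadata."""
    image_id: str
    latitude: float          # approximate GPS fix at capture time
    longitude: float
    heading_degrees: float   # compass direction the camera was facing
    accel: tuple             # raw accelerometer sample (x, y, z)
    timestamp: float         # seconds since the Unix epoch

# One made-up capture of a landmark, facing west:
capture = GeotaggedCapture(
    image_id="img-0001",
    latitude=51.5007,
    longitude=-0.1246,
    heading_degrees=270.0,
    accel=(0.0, 0.0, 9.8),
    timestamp=1_700_000_000.0,
)
print(capture.heading_degrees)  # 270.0
```

Aggregating millions of records like this, taken by different players from different angles, is what turns casual gameplay into multi-view mapping data.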
Many of these images are concentrated around more than one million landmark locations that players frequently visited in-game, such as battle arenas or notable public sites.
Niantic Spatial Is Turning That Data Into A World Model
The dataset is now being used by Niantic Spatial, an AI company created by Niantic to develop advanced mapping technology.
Its core system is called a Visual Positioning System, or VPS. Instead of relying only on GPS signals, VPS determines location based on what a camera sees in the surrounding environment. By comparing real-world images to its database, the system can estimate a position within a few centimetres.
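The core idea of a visual positioning system can be sketched in a few lines: describe what the camera sees as a feature vector, find the most similar entry in a database of geotagged reference images, and report that entry's position. This toy example uses made-up three-number descriptors and 2D positions; real systems like Niantic's use learned image features and full pose estimation, so treat this only as an illustration of the matching step.

```python
import math

# Hypothetical reference database: (descriptor, position_in_metres) pairs.
reference_db = [
    ((0.9, 0.1, 0.3), (12.0, 4.0)),
    ((0.2, 0.8, 0.5), (30.5, 9.2)),
    ((0.4, 0.4, 0.9), (55.1, 2.7)),
]

def locate(query_descriptor):
    """Return the stored position whose descriptor is nearest the query."""
    _, best_position = min(
        reference_db,
        key=lambda pair: math.dist(pair[0], query_descriptor),
    )
    return best_position

# A query close to the first reference descriptor matches its position:
print(locate((0.85, 0.15, 0.25)))  # (12.0, 4.0)
```

The practical appeal is that the answer depends on what the camera sees, not on satellite signal quality, which is why the approach holds up in street canyons where GPS drifts.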
That level of precision is difficult for traditional GPS to achieve in dense cities. Tall buildings can cause signals to bounce and interfere with each other, which often leads to location errors. In urban environments, a GPS marker on a phone can drift by dozens of metres, sometimes placing a person on the wrong street.
Niantic Spatial’s approach aims to solve that problem by using visual landmarks rather than satellite signals alone.
Delivery Robots Are The First Real-World Test
The first major deployment of the technology comes through a partnership with Coco Robotics.
Coco operates roughly 1,000 small sidewalk delivery robots across cities such as Los Angeles, Chicago, Miami, Jersey City, and Helsinki. These machines typically travel at around five miles per hour and carry items like takeaway meals or groceries.

Image Credit: Niantic Spatial
So far, Coco says its robots have completed more than half a million deliveries while travelling millions of miles.
However, navigation has always been one of the hardest challenges for delivery robots. Dense city areas with underpasses, high-rise buildings, and busy streets often disrupt GPS signals, making it difficult for autonomous machines to determine their exact position.
With Niantic Spatial’s VPS, Coco’s robots use four cameras to scan their surroundings.

Image Credit: Coco Robotics
By matching those images with Niantic’s massive database, the robots can more accurately determine where they are and where they need to go.
That precision could help robots stop at the correct pickup spot outside restaurants and arrive directly at a customer’s doorstep instead of a few metres away.
How Pokémon Go Gameplay Became Mapping Data
Pokémon Go’s design made it particularly effective at generating real-world mapping data.
Players were encouraged to visit specific locations and point their phone cameras at landmarks from different angles. Over time, the game even introduced features that rewarded users for scanning locations in detail.
Because millions of people contributed images of the same places at different times of day, lighting conditions, and weather situations, Niantic ended up with a diverse and highly detailed dataset.
According to Niantic Spatial CTO Brian McClendon, the company now has thousands of images for many of these locations. That allows the system to recognise landmarks reliably even when conditions change.
The Technology Could Go Beyond Delivery Robots
The Coco Robotics partnership is only the beginning of what Niantic Spatial calls a “living map” of the world.
The idea is to continuously update digital representations of cities as new data comes in. Robots navigating streets could capture additional images and feed them back into the system, gradually improving its accuracy.
Niantic Spatial CEO John Hanke says the long-term goal is to create a model that helps machines better understand real-world environments. While the technology originally aimed to improve augmented reality experiences, the rapid growth of robotics has opened up a different use case.
As robots begin to share sidewalks and public spaces with humans, systems that understand real-world surroundings with high precision could become essential for navigation and safety.
For now, though, the most immediate result is a surprising one. The same game that once sent players out hunting Pikachu may soon help robots deliver pizza more accurately.
