We’ve all been there — your GPS says a turn is coming up, and you trust the machine. Then you find yourself going the wrong way or stuck on a private road with nowhere to turn around. Getting accurate street-level maps is difficult, and even technically accurate maps can be confusing without sufficient detail. Researchers from MIT and the Qatar Computing Research Institute (QCRI) have combined two neural network architectures to generate more accurate GPS maps from easily accessible satellite images.
It takes time and money to gather accurate mapping data, and municipalities around the world are constantly making tweaks to their roadways. Only companies with access to mountains of user data and fleets of mapping vehicles have any hope of updating their maps on a regular basis, and even Google can’t keep its data on all parts of the world current.
One possible solution is to use satellite images to generate accurate street maps, but buildings, trees, and overpasses often obscure important details like lane counts and exits. The new paper from MIT and QCRI explains how a tool called RoadTagger can predict lane counts and other road features even when they’re not visible.
If you were to look at a satellite image of an obscured roadway, you could probably guess how many lanes there are and which one you’d need to be in to take a particular exit. Teaching a machine to do that across millions of images is a major computational problem, though. RoadTagger consists of two parts: a convolutional neural network (CNN), the kind often used for image recognition tasks, and a graph neural network (GNN), which models relationships between connected data points.
RoadTagger’s CNN scans the raw image data and identifies roads. The GNN then splits each road into 20-meter segments, or “tiles,” with each tile becoming a node in a graph connected to the tiles before and after it along the road. For each node, the CNN extracts features like road type and lane count, and the GNN shares that data with the adjacent nodes, propagating information along the entire length of the road. If a tile is obscured or unclear, the network can use the features gathered from other nodes to estimate conditions in that section.
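To make that flow concrete, here is a minimal sketch in PyTorch of how a per-tile CNN encoder and a simple message-passing GNN could be wired together. It is only illustrative, not the researchers’ actual architecture: the layer sizes, the GRU-based node update, and the mean aggregation over neighboring tiles are assumptions chosen for clarity.

```python
# Illustrative sketch of a RoadTagger-style pipeline (NOT the authors' code).
# Assumptions: 64x64 RGB tile crops, 64-dim features, mean-aggregation message passing.
import torch
import torch.nn as nn

class TileEncoder(nn.Module):
    """CNN that turns one satellite-image tile (~20 m of road) into a feature vector."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, tiles):           # tiles: (num_tiles, 3, H, W)
        return self.net(tiles)          # -> (num_tiles, feat_dim)

class RoadGNN(nn.Module):
    """Graph network that passes tile features between neighboring tiles along the road."""
    def __init__(self, feat_dim=64, num_lane_classes=6, steps=4):
        super().__init__()
        self.update = nn.GRUCell(feat_dim, feat_dim)
        self.lane_head = nn.Linear(feat_dim, num_lane_classes)
        self.steps = steps

    def forward(self, feats, adjacency):
        # adjacency: (num_tiles, num_tiles) 0/1 matrix linking consecutive tiles.
        h = feats
        deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1)
        for _ in range(self.steps):
            msg = adjacency @ h / deg   # average the neighboring tiles' features
            h = self.update(msg, h)     # update each tile with its neighbors' context
        return self.lane_head(h)        # per-tile lane-count logits

# Toy usage: one road split into 5 consecutive tiles.
tiles = torch.randn(5, 3, 64, 64)                  # placeholder tile crops
adj = torch.zeros(5, 5)
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1              # chain: tile i <-> tile i+1

encoder, gnn = TileEncoder(), RoadGNN()
lane_logits = gnn(encoder(tiles), adj)             # (5, 6): lane-count prediction per tile
```

The key point the sketch captures is the propagation step: after a few rounds of message passing, an obscured tile’s prediction is informed by the clearer tiles up and down the road.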
The team tested RoadTagger with real-world images from 20 US cities. RoadTagger managed to correctly infer lane counts in obscured sections 77 percent of the time, and it correctly identified road types 93 percent of the time. A future version is already planned that will boost that accuracy and add support for identifying features like parking lots and bike lanes.
Now read:
- ‘Universal Lego Sorter’ Uses AI to Recognize Any Lego Brick
- OpenAI Releases Fake News Bot It Previously Deemed Too Dangerous
- MIT Taught Self-Driving Cars to See Around Corners with Shadows
from ExtremeTech: https://www.extremetech.com/extreme/305225-mit-uses-ai-to-create-updated-street-maps-from-satellite-imagery