Apple has been granted a patent (number 10,760,922) for “augmented reality maps” on an iPhone. The idea is to take advantage of a portable electronic device’s imaging and display capabilities and combine a video feed with data describing objects in the video.
In some examples, the data describing the objects in the video can be the result of a search for nearby points of interest. For example, a user visiting a foreign city can point an iPhone at a particular view and capture a video stream of it. The user can also enter a search term, such as “museums.” The system can then augment the captured video stream with results for nearby museums that fall within the view of the video stream. This lets the user supplement their view of reality with additional information available from search engines.
Apple notes, however, that if a user wants to visit one of those museums, they must currently switch applications, or at a minimum switch out of the augmented reality view, to get directions to it. The tech giant says such systems can fail to orient a user with a poor sense of direction, forcing the user to correlate the directions with objects in reality. What’s more, in some instances street signs might be missing or indecipherable, making it difficult for the user to find the directed route. Apple wants to alleviate these problems.
Here’s how the augmented reality maps feature would work: a user points an iPhone to capture and display a real-time video stream. The iPhone detects its geographic position, camera direction, and tilt. The user sends a search request to a server for nearby points of interest, and the device receives search results based on that request together with its geographic position, camera direction, and tilt.
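The patent doesn’t publish code, but the filtering the flow implies can be sketched in a few lines. The snippet below is a hypothetical illustration in Python (all names are invented for this example, not Apple’s implementation): given the device’s position and compass heading, it keeps only the points of interest whose bearing falls inside the camera’s horizontal field of view — which is what would let the system show only results within the view of the video stream.

```python
import math

# Hypothetical sketch of the filtering step implied by the patent -- not
# Apple's code. A POI is kept only if its compass bearing from the device
# falls within the camera's horizontal field of view, centered on the
# device's heading.

def bearing(lat, lon, poi_lat, poi_lon):
    """Compass bearing (degrees clockwise from north) from the device to a
    POI, using a flat-earth approximation that is fine at city scale."""
    d_lat = poi_lat - lat
    d_lon = (poi_lon - lon) * math.cos(math.radians(lat))
    return math.degrees(math.atan2(d_lon, d_lat)) % 360

def visible_pois(pois, lat, lon, heading, fov_degrees=60):
    """Return names of POIs whose bearing lies within the field of view.

    pois is a list of (name, latitude, longitude) tuples; heading is the
    camera direction in degrees clockwise from north.
    """
    result = []
    for name, poi_lat, poi_lon in pois:
        # Signed angular offset from the heading, wrapped to [-180, 180).
        delta = (bearing(lat, lon, poi_lat, poi_lon) - heading + 180) % 360 - 180
        if abs(delta) <= fov_degrees / 2:
            result.append(name)
    return result
```

For example, a device in central Paris facing due east (heading 90°) would keep a museum to its east and drop one to its west, even though both are nearby.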
The iPhone visually augments the captured video stream with data related to each point of interest. The user then selects a point of interest to visit, and in response the iPhone augments the video stream with a directional map to the selected point of interest.
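One concrete piece of that “visually augments” step is deciding where on screen each point-of-interest label should be drawn over the live video. A minimal sketch, again with hypothetical names and a deliberately simple model: a POI’s angular offset from the camera heading maps linearly to a horizontal pixel position across the field of view.

```python
# Hypothetical overlay-placement sketch, not Apple's implementation.

def overlay_x(angular_offset, fov_degrees, screen_width):
    """Map a POI's angular offset from the camera heading (degrees;
    negative = left of center) to a horizontal pixel position, assuming a
    simple linear mapping across the camera's field of view."""
    normalized = angular_offset / fov_degrees + 0.5  # 0 = left edge, 1 = right edge
    return max(0.0, min(1.0, normalized)) * screen_width
```

A POI dead ahead lands at the center of the screen, and one at the edge of a 60° field of view lands at the screen’s edge; a real implementation would also use the device’s tilt to place labels vertically, but the same idea applies.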