
Apple granted patent for ‘single-pass object scanning’

Apple has been granted a patent (11580692 B2) for “single-pass object scanning.” It involves the ability of Macs, iPads, and iPhones to scan an object and create a 3D model of it.

About the patent

The patent relates to generating three-dimensional geometric representations of physical environments, and in particular, to systems, methods, and devices that generate geometric representations based on depth information detected in physical environments.

In the patent Apple notes that physical environments and objects therein have been modeled (e.g., reconstructed) by generating three-dimensional (3D) meshes, utilizing 3D point clouds, and by other means. The reconstructed meshes represent 3D surface points and other surface characteristics of the physical environments’ floors, walls, and other objects. Such reconstructions may be generated based on images and depth measurements of the physical environments, e.g., using RGB cameras and depth sensors.
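To give a sense of how such reconstructions come together, here is a minimal sketch, in Swift, of back-projecting a depth map into a 3D point cloud using pinhole camera intrinsics. The `DepthMap` and `Intrinsics` types and their fields are illustrative assumptions for this example, not anything specified in the patent.

```swift
import simd

// Illustrative sketch (not Apple's implementation): turn per-pixel depth
// measurements into 3D points in the camera's coordinate frame.
struct DepthMap {
    let width: Int
    let height: Int
    let values: [Float]            // depth in meters, row-major

    func depth(x: Int, y: Int) -> Float { values[y * width + x] }
}

struct Intrinsics {
    let fx: Float, fy: Float       // focal lengths in pixels
    let cx: Float, cy: Float       // principal point in pixels
}

/// Back-projects every valid depth sample into a 3D point.
func pointCloud(from depthMap: DepthMap, intrinsics k: Intrinsics) -> [SIMD3<Float>] {
    var points: [SIMD3<Float>] = []
    points.reserveCapacity(depthMap.width * depthMap.height)
    for y in 0..<depthMap.height {
        for x in 0..<depthMap.width {
            let z = depthMap.depth(x: x, y: y)
            guard z > 0 else { continue }          // skip missing/invalid samples
            // Pinhole model: pixel offset from the principal point,
            // divided by focal length, scaled by depth.
            let px = (Float(x) - k.cx) / k.fx * z
            let py = (Float(y) - k.cy) / k.fy * z
            points.append(SIMD3<Float>(px, py, z))
        }
    }
    return points
}
```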

However, Apple says that existing techniques for generating 3D models from images and depth information captured in a physical environment may be inaccurate and inefficient on a mobile device, for example, when a user captures photos, video, or other sensor data while walking around a room.

What’s more, existing techniques may fail to provide sufficiently accurate and efficient object detection in real-time environments. Apple wants to overcome such limitations.

Summary of the patent

Here’s Apple’s abstract of the patent: “Various implementations disclosed herein include devices, systems, and methods that generates a three-dimensional (3D) model based on a selected subset of the images and depth data corresponding to each of the images of the subset. For example, an example process may include acquiring sensor data during movement of the device in a physical environment including an object, the sensor data including images of a physical environment captured via a camera on the device, selecting a subset of the images based on assessing the images with respect to motion-based defects based on device motion and depth data, and generating a 3D model of the object based on the selected subset of the images and depth data corresponding to each of the images of the selected subset.”
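Read loosely, the process in the abstract boils down to: capture frames along with device motion and depth, score each frame for motion-related defects, keep only the low-defect frames, and reconstruct the object from those. Below is a rough sketch in Swift of what such a frame-selection step could look like; the types, field names, scoring heuristic, and thresholds are illustrative assumptions for this example, not Apple’s actual pipeline.

```swift
import Foundation

// Hypothetical per-frame capture record: the image itself is omitted; only
// the motion and depth metadata used for defect scoring is kept here.
struct CapturedFrame {
    let timestamp: TimeInterval
    let angularVelocity: Double      // rad/s from the gyroscope at capture
    let translationSpeed: Double     // m/s estimated from device tracking
    let meanDepth: Double            // meters, from the depth sensor
    let exposureDuration: Double     // seconds
}

/// Higher scores mean more expected motion blur. Blur grows with how far
/// image content moves during the exposure; closer subjects (smaller depth)
/// amplify the apparent translational motion.
func defectScore(for frame: CapturedFrame) -> Double {
    let rotationalBlur = frame.angularVelocity * frame.exposureDuration
    let translationalBlur = (frame.translationSpeed * frame.exposureDuration)
        / max(frame.meanDepth, 0.1)
    return rotationalBlur + translationalBlur
}

/// Keeps frames whose defect score is below a threshold, thinning them
/// temporally so neighboring keepers are not near-duplicates.
func selectFrames(_ frames: [CapturedFrame],
                  maxDefect: Double = 0.01,
                  minSpacing: TimeInterval = 0.25) -> [CapturedFrame] {
    var selected: [CapturedFrame] = []
    for frame in frames.sorted(by: { $0.timestamp < $1.timestamp }) {
        guard defectScore(for: frame) <= maxDefect else { continue }
        if let last = selected.last, frame.timestamp - last.timestamp < minSpacing {
            continue
        }
        selected.append(frame)
    }
    return selected
}
```

In a pipeline like the one the abstract describes, the selected frames and their corresponding depth data would then feed the 3D reconstruction step, such as fusing depth into a mesh or point cloud.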

Dennis Sellers

Dennis Sellers is the editor/publisher of Apple World Today. He’s been an “Apple journalist” since 1995 (starting with the first big Apple news site, MacCentral). He loves to read, run, play sports, and watch movies.
