Apple has been granted a patent (number 10,733,804) for a “method and system for representing a virtual object in view of a real environment.” The patent covers augmented reality (AR) and lighting features that could appear in future iPhones and iPads and, possibly, in Macs and the rumored “Apple Glasses,” an AR head-mounted display.
In AR, a view of a real environment, such as a video image of that environment, is combined with an overlay of one or more virtual objects in a spatial relationship to it. In AR apps, the virtual objects ideally integrate into the view so seamlessly that real and virtual objects are indistinguishable.
Apple says this means it’s important to illuminate or display the virtual objects with the same lighting conditions visible in the real world, and to let the virtual objects alter that illumination, for example by casting shadows onto parts of the real scene. However, in augmented reality scenes the lighting conditions are typically unknown and arbitrary, so it’s difficult or even impossible to keep the lighting of real and virtual objects consistent.
Apple says a possible way to achieve consistent lighting for the real and virtual objects in AR applications is to estimate the light emitted from the real environment. Common state-of-the-art approaches require additional equipment, such as mirrors, or special cameras, such as a fish-eye camera, to estimate environment light, which restricts their applicability. Further, most of these approaches can only estimate the directions of environment light, not the positions of any light source.
There are various existing solutions to the problem, but Apple apparently doesn’t find any of them satisfactory. The company wants a method of representing a virtual object in a view of a real environment that broadens the applicability of augmented reality applications, particularly in environments with unknown lighting conditions.
Here’s the summary of the invention: “The invention relates to a method of representing a virtual object in a view of a real environment which comprises the steps of providing image information of a first image of at least part of a human face captured by a first camera, providing at least one human face specific characteristic, determining at least part of an image area of the face in the first image as a face region of the first image, determining at least one first light falling on the face according to the face region of the first image and the at least one human face specific characteristic, and blending in the virtual object on a display device in the view of the real environment according to the at least one first light. The invention also relates to a system for representing a virtual object in a view of a real environment.”
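To make the summary concrete, here is a minimal sketch of the kind of pipeline the patent describes: estimate the light falling on a detected face region, then shade a virtual object with that estimate. It assumes a single directional light and Lambertian (matte) skin reflectance, and uses synthetic surface normals in place of a real face model; the function names and data are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def estimate_light_direction(normals, intensities):
    """Estimate a single directional light from per-pixel surface normals
    and observed intensities in the face region, assuming Lambertian
    shading (intensity ~ n . l).  Solves for l by least squares over
    the lit pixels only (hypothetical helper, not Apple's method)."""
    lit = intensities > 0
    light, *_ = np.linalg.lstsq(normals[lit], intensities[lit], rcond=None)
    norm = np.linalg.norm(light)
    return light / norm if norm > 0 else light

def shade_virtual_object(object_normals, light_dir, albedo=0.8):
    """Shade a virtual object's surface with the estimated light so it
    matches the real scene's illumination before blending."""
    return albedo * np.clip(object_normals @ light_dir, 0.0, None)

# Synthetic stand-in for a face region: known normals lit by a
# ground-truth directional light.
rng = np.random.default_rng(0)
normals = rng.normal(size=(500, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
true_light = np.array([0.3, 0.5, 0.81])
true_light /= np.linalg.norm(true_light)
intensities = np.clip(normals @ true_light, 0.0, None)

est = estimate_light_direction(normals, intensities)
shading = shade_virtual_object(normals, est)
```

A real implementation would derive the normals from a fitted 3D face model and handle multiple light sources, skin albedo variation, and camera response, but the core idea is the same: the known geometry and reflectance of a human face turn the face image into a light probe.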