Apple Visual Intelligence is a feature on newer iPhones and iPads that uses the camera to identify and provide information about objects, text, and places around you.
It lets you quickly access details such as translations, summaries, or even contact information simply by pointing your device at something, essentially acting as a real-time visual search tool powered by AI.
To use Visual Intelligence in iPadOS, do the following:
• Press and hold the Camera Control button.
• Direct your camera at the object you want to learn more about, then take a photo by pressing the Camera Control button.
• Tap “Ask” to query ChatGPT about the object in the photo.
• Tap “Search” to run a Google search for similar images or information about the object.
• If you point your camera at text, you can use the on-screen options to summarize or translate the text, or have it read aloud (a short code sketch of this kind of text recognition follows this list).
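Visual Intelligence itself is built into the system, so there is nothing to install or program. For readers curious what this kind of on-device text recognition looks like to developers, here is a minimal Swift sketch using Apple's Vision framework. It is an illustrative analogy only, not how Visual Intelligence is implemented, and the function name recognizedLines(in:) is made up for the example.

```swift
import UIKit
import Vision

// A minimal sketch of on-device text recognition with the Vision framework.
// This illustrates the kind of text recognition that summarize/translate
// options depend on; it is not Apple's Visual Intelligence implementation.
func recognizedLines(in image: UIImage) throws -> [String] {
    guard let cgImage = image.cgImage else { return [] }

    // Configure a text-recognition request (accurate mode, with language correction).
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    request.usesLanguageCorrection = true

    // Run the request against the captured still image.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    // Return the top candidate string for each detected text region.
    let observations = request.results ?? []
    return observations.compactMap { $0.topCandidates(1).first?.string }
}
```

In a hypothetical app, you would call this with a UIImage captured from the camera and pass the returned lines to a translation or text-to-speech step.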
When using Visual Intelligence, you should note that:
• It requires taking a photo; it doesn’t work with a live camera view.
• You can’t use previously taken photos.
Visual Intelligence and other Apple Intelligence features are integrated across your apps with iPadOS 18.3 and later. If you have an earlier version of iPadOS 18, go to Settings > Apple Intelligence & Siri, then tap Get Apple Intelligence to turn them on.