Apple patent involves a ‘Customizable Chip for AI Applications’

FIG. 1 is a diagram illustrating operations performed by layers of a chip customized for AI applications.

Apple is expected to go all in with artificial intelligence this year. Now the company has been granted a patent (number US 11907823 B2) for a “Customizable Chip for AI Applications.”

About the patent 

Apple is quietly working on artificial intelligence (AI) tools that could challenge those of OpenAI Inc., Alphabet Inc.'s Google, and others, but the company has yet to devise a clear strategy for releasing the technology to consumers, according to a July 2023 report from Bloomberg's Mark Gurman.

The company has built its own framework to create large language models — the AI-based systems at the heart of new offerings like ChatGPT and Google's Bard — the article adds. With that foundation, known as "Ajax," Apple has also created a chatbot service that some engineers call "Apple GPT," according to Gurman. Since the tech giant makes its own A-series and M-series processors for iPhones, iPads, Macs, and other devices, making a customizable chip for AI applications seems a likely next step.

Apple's patent involves neural networks. In information technology, a neural network is defined as a system of programs and data structures that approximates the operation of the human brain. A neural network usually involves a large number of processors operating in parallel, each with its own small sphere of knowledge and access to data in its local memory. Apple's patent involves a neural network implemented as a customized integrated circuit.
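To make that definition concrete, here is a minimal sketch (not Apple's implementation) of the basic building block: each "neuron" combines inputs with its own local weights, and a layer is many such units that could in principle run in parallel, each holding its own small sphere of knowledge.

```python
def neuron(inputs, weights, bias):
    """One unit: weighted sum of inputs plus bias, passed through ReLU."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU activation

def layer(inputs, weight_rows, biases):
    """Apply every neuron in the layer to the same inputs.

    Each neuron keeps its own weights -- its local "memory" -- so the
    units are independent and could be evaluated in parallel.
    """
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two inputs feeding a layer of two neurons, each with its own weights.
outputs = layer([1.0, 2.0], [[0.5, -0.25], [1.0, 1.0]], [0.0, -1.0])
print(outputs)  # → [0.0, 2.0]
```

Real networks stack many such layers, but the parallel-units-with-local-weights structure is the part the hardware described below is arranged around.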

Example applications include security, access control, identification/authentication, machine vision, artificial intelligence, engineering, manufacturing, robotics, systems control, autonomous vehicles, and other situations involving some form of object or pattern recognition, object detection, or automated decision-making based on an image.

Summary of the patent 

Here’s Apple’s (very technical) abstract of the patent: “In one embodiment, a computing device includes an input sensor providing an input data; a programmable logic device (PLD) implementing a convolutional neural network (CNN), wherein: each compute block of the PLD corresponds to one of a multiple of convolutional layers of the CNN, each compute block of the PLD is placed in proximity to at least two memory blocks, a first one of the memory blocks serves as a buffer for the corresponding layer of the CNN, and a second one of the memory blocks stores model-specific parameters for the corresponding layer of the CNN.”
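In plainer terms, the abstract describes one compute block per convolutional layer, with each block sitting next to two dedicated memory blocks: one buffering that layer's data, the other holding that layer's weights. A hypothetical sketch of this arrangement (names and structure here are illustrative, not from the patent's claims):

```python
from dataclasses import dataclass, field

@dataclass
class MemoryBlock:
    purpose: str                 # "buffer" or "parameters"
    contents: list = field(default_factory=list)

@dataclass
class ComputeBlock:
    layer_index: int
    buffer: MemoryBlock          # buffers data for this CNN layer
    parameters: MemoryBlock      # model-specific weights for this layer

def build_pld(num_layers):
    """One compute block per CNN layer, each paired with two memories."""
    return [
        ComputeBlock(i, MemoryBlock("buffer"), MemoryBlock("parameters"))
        for i in range(num_layers)
    ]

# A three-layer CNN mapped onto the programmable logic device.
pld = build_pld(3)
print(len(pld), pld[0].buffer.purpose, pld[0].parameters.purpose)
```

Keeping each layer's buffer and weights physically close to the block that uses them is a common way to cut the data movement that dominates power and latency in neural-network accelerators.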

Dennis Sellers
Dennis Sellers is the editor/publisher of Apple World Today. He’s been an “Apple journalist” since 1995 (starting with the first big Apple news site, MacCentral). He loves to read, run, play sports, and watch movies.