Wednesday, December 18, 2024
Opinions

Apple’s AI: What data does it use, and does it promise privacy?

Apple Intelligence — the personal intelligence system for iPhone, iPad, and Mac — combines the power of generative models with personal context to deliver intelligence that’s useful and relevant to the user.

By Shigraf Aijaz 

Ever since its advent, AI has been evolving rapidly, especially after the groundbreaking launch of OpenAI’s ChatGPT in November 2022. With this rapid development, however, privacy has become a pressing concern: as AI systems are integrated into our personal lives, they gain access to more and more of our personal information. This has left every privacy-conscious and tech-savvy person wondering whether they can use powerful AI systems without sacrificing their privacy. 

Apple’s new artificial-intelligence system appears to be the answer to this problem. As with all of its other products, Apple promises the utmost privacy and security with its AI as well. This article breaks down Apple Intelligence and gauges how well Apple fulfils its privacy promises. 

Understanding Apple’s AI: What Does It Do?

Apple Intelligence is Apple’s homegrown generative AI system, and it promises users a vast array of exciting features. Through a partnership with OpenAI, it also integrates ChatGPT into Siri, making the voice assistant “smarter” than it used to be. The new Siri comes with a new voice designed to sound more natural, along with the ability to understand the context of a conversation. 

Apart from the changes to Siri, Apple Intelligence is also designed to help people in their everyday lives while focusing on privacy. Like every other AI chat model, it can help users write emails, create memory movies based on prompts, and perform natural-language search. 

It also offers Writing Tools that can rewrite, proofread, and summarise text for you. Apple Intelligence stays aware of your incoming mail and notifications and sorts them according to what is important to you. It can also summarise your notifications, conversations, audio recordings, and messages: you can record a phone call or a note in the Notes app and choose to receive only the summary later on. 

Other implementations of Apple Intelligence are in image creation. Apple has introduced the new Image Playground app and tool, which generates relevant images from prompts. For example, if a friend is talking about shopping, the tool can produce an image of a shopping mall.

Another interesting AI feature from Apple is, to put it crudely, a mimic of Google search. The feature, known as Visual Intelligence, works with the new iPhone 16’s Camera Control button and lets you get search results about whatever you point your camera at. 

Apple’s Privacy Promise: Does It Hold Up?

Apple has long prided itself on being a champion of user privacy, often going as far as positioning privacy as a fundamental human right. With the launch of Apple Intelligence, then, the most pressing question is whether Apple can balance AI functionality with privacy. 

That is primarily because every generative AI model so far has been known to slurp up its users’ personal information, which is why it is crucial to understand how much of your personal information is available online and to take proactive measures to remove it from the internet. Apple, however, has taken big steps to maintain its reputation, its integrity, and its users’ trust. 

To ensure user privacy and security, Apple has enforced on-device data processing and built an extensive custom infrastructure known as Private Cloud Compute (PCC). PCC handles the cloud processing Apple Intelligence needs when a query cannot be fulfilled locally on the user’s device. Here is a breakdown of both security measures:
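Conceptually, this split is a simple routing decision: handle the request on the device when possible, and reach out to a PCC node only when the request is too heavy for local hardware. The sketch below is purely illustrative; the function name, the word-count threshold, and the return values are my assumptions, not Apple’s actual logic.

```python
# Illustrative local-first router: prefer on-device processing, and
# fall back to a private cloud node only for oversized requests.
# The threshold and field names here are hypothetical.

def handle_query(query: str, local_capacity_words: int = 50) -> dict:
    """Route a request, keeping it on the device when possible."""
    if len(query.split()) <= local_capacity_words:
        # Small request: personal data never leaves the device.
        return {"processed_at": "on-device", "data_left_device": False}
    # Heavy request: only the request itself is sent (encrypted) to a
    # validated cloud node; nothing is kept there long-term.
    return {"processed_at": "private-cloud", "data_left_device": True}

print(handle_query("summarise my last three emails"))
```

Under a model like this, the privacy question reduces to what happens on the occasional cloud path, which is exactly what PCC is meant to answer.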

How Does Apple Use On-Device Processing for Privacy?

Apple has taken a privacy-first approach with its on-device processing, which means that, unlike other AI systems that send your data to the cloud for analysis, Apple Intelligence does most of the heavy lifting on the device itself. 

In other words, on-device or ‘local’ processing ensures that the user’s data never leaves their phone or computer. This significantly narrows the attack surface and limits the paths an attacker could take to steal your data. 

This approach means that essential personal information such as your messages, emails, and photos never leaves your device. Unlike other systems, it ensures that your data is not handed over to third-party platforms, where it could sit on a server, potentially vulnerable to breaches or abuse. 

However, this does not mean that an attack can never occur; the method only narrows the attack surface compared with having your data stored in third-party cloud storage. The approach also allows Apple to be transparent about its data-processing methods. 

Additionally, it ensures that you keep complete control over your data and don’t have to blindly trust a company with your personal information. But this approach also means that you must proceed with caution and keep your device itself private and secure. Even with Apple’s focus on on-device processing, users should be proactive about their data privacy across the internet.

Apple’s PCC: How Secure Is It?

Private Cloud Compute is built with a wide array of cloud-security technologies. PCC runs on a custom-built Apple processor and a custom PCC server operating system that is a stripped-down hybrid of macOS and iOS. 

Moreover, PCC servers don’t include persistent storage, meaning they have no hard drive that keeps processed data long-term. They do, however, include Apple’s Secure Enclave, a dedicated hardware encryption-key manager that randomises each file system’s encryption key at every boot. This essentially means that once a server reboots, it is wiped of all data; as an extra precaution, that data is also rendered cryptographically unrecoverable. 
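The effect of a per-boot key can be shown with a toy example: data encrypted under a key that is discarded at reboot becomes unrecoverable from the server itself. This is only a sketch of the idea, using a one-time-pad XOR in place of real hardware encryption; the Secure Enclave itself works very differently.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """One-time-pad encrypt/decrypt (key must match the data length)."""
    return bytes(a ^ b for a, b in zip(data, key))

request = b"user request being processed"
boot_key = secrets.token_bytes(len(request))  # fresh key at every boot
stored = xor(request, boot_key)               # what reaches the file system

# While the server is up, the boot key decrypts the data...
assert xor(stored, boot_key) == request

# ...but a "reboot" discards boot_key and generates a new one, so the
# old ciphertext can no longer be decrypted by the server.
boot_key = secrets.token_bytes(len(request))
assert xor(stored, boot_key) != request
```

Discarding the key, rather than scrubbing every disk sector, is what makes the wipe cryptographic: without the key, the leftover bytes are indistinguishable from random noise.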

Apart from these security features, PCC also uses Apple’s Secure Boot to validate the integrity of the operating system through a code-verification feature. The feature was first introduced with iOS 17; in PCC it is known as the Trusted Execution Monitor and runs in a much stricter mode: once the system restarts and completes its boot sequence, it locks down and refuses to load any new code whatsoever. 
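The core idea of boot-time code verification can be reduced to a digest comparison: measure the code, compare it against a known-good value, and refuse to run anything that does not match. The snippet below is a bare-bones illustration of that idea; Apple’s actual Secure Boot relies on cryptographic signatures and a hardware root of trust, not a single hard-coded hash, and the image names here are made up.

```python
import hashlib

# Known-good digest, recorded ahead of time (hypothetical value).
TRUSTED_DIGEST = hashlib.sha256(b"approved server image v1").hexdigest()

def may_load(image: bytes) -> bool:
    """Allow code to run only if its digest matches the trusted one."""
    return hashlib.sha256(image).hexdigest() == TRUSTED_DIGEST

assert may_load(b"approved server image v1") is True
# Any tampering, however small, changes the digest and is rejected.
assert may_load(b"approved server image v1 + implant") is False
```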

Moreover, PCC does not allow privileged access, which drastically limits remote management options. Apple has also secured data in transit from user devices to validated PCC nodes through end-to-end encryption. 

Final Words 

As AI continues to evolve and integrate into our lives, privacy remains a significant question. Apple’s privacy-first approach with Apple Intelligence has set new grounds for AI privacy and security. Even this method, however, comes with its own set of challenges, with production and implementation costs being a significant setback: on-device processing requires powerful hardware, which ultimately translates into a higher price that consumers have to pay. In other words, private and functional AI is likely to come with a significant price tag!

Shigraf is an experienced cybersecurity journalist who is zealous about spreading cyber and internet security knowledge. She has extensive knowledge in writing insightful topics regarding online privacy, DevOps, AI, cybersecurity, cloud security, and much more. Her work relies on vast and in-depth research.

LinkedIn:  https://www.linkedin.com/in/shigraf-aijaz-a9354516a/ 

Twitter:  https://x.com/shigraf3?lang=en

MuckRack: https://muckrack.com/profile/portfolio

Guest Author