December 22, 2024

What will Apple’s AI do with your data?

Apple’s splashy announcement at this week’s Worldwide Developers Conference that it is adding artificial intelligence to its products and partnering with ChatGPT maker OpenAI has raised many questions about how Apple’s AI products will work. The confusion is understandable: Apple is simultaneously launching its own suite of AI models and integrating ChatGPT into its devices and software. So it’s natural to wonder where one ends and the other begins, and, perhaps more urgently, what both companies will do with the personal data they receive from users. The stakes are especially high for Apple, a company whose hallmarks are security and privacy.

Difference between Apple Intelligence and ChatGPT

If Apple has its own AI, why does it need ChatGPT? The answer is that the two serve different purposes. Apple Intelligence, the collective name for all of Apple’s AI tools, is intended above all to be a personal assistant, with an emphasis on “personal.”

It draws on specific information about your relationships and contacts, the messages and emails you send, the events you attend, the meetings on your calendar, and other highly personal data about your life. Apple hopes to use that data to make your life a little easier: by helping you find a photo you took at a concert years ago, locate the right attachment in an email, or sort your notifications by priority and urgency. But while Apple Intelligence may know that you went on a hiking trip last year, it lacks what company executives call “world knowledge”: more general information about history, current events, and other things that don’t directly concern you. That’s where ChatGPT comes into play. On an opt-in basis, users can forward questions and prompts from Siri to ChatGPT and use ChatGPT to generate documents in Apple’s apps. Apple says it plans to eventually integrate other third-party AI models as well.

The integration essentially removes a step from accessing ChatGPT, making the platform even more seamless for Apple users.

What does this mean for my data?

Because Apple Intelligence and ChatGPT serve very different purposes, the amount and type of information users send to each may also differ. Apple Intelligence has access to a wide range of personal information, from your written communications to the photos and videos you take to records of events on your calendar. There appears to be no way to prevent Apple Intelligence from accessing this information other than not using the feature at all; an Apple spokesperson did not immediately respond to questions on the subject. ChatGPT, by contrast, does not automatically have access to your highly personal information, but if you choose to use ChatGPT through Apple, you can choose to share some of that data, and more, with OpenAI. In a demo on Monday, Apple showed Siri asking a user for permission before sending a prompt to ChatGPT.
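
To make that opt-in flow concrete, here is a minimal sketch of a consent gate in Swift. Apple has not published the API behind this behavior, so every name below (AssistantRequest, handle, and the stubbed callbacks) is a hypothetical illustration of the step shown in the demo, not Apple’s code.

```swift
import Foundation

// Hypothetical names throughout: this only mirrors the opt-in step
// Apple demonstrated, not a real Siri or ChatGPT API.
enum ExternalModelConsent {
    case allowOnce
    case deny
}

struct AssistantRequest {
    let prompt: String
}

// The prompt can only reach the external model through the consent gate.
func handle(_ request: AssistantRequest,
            askUser: (String) -> ExternalModelConsent,
            sendToChatGPT: (String) -> String) -> String? {
    let consent = askUser("Send this request to ChatGPT?")
    guard consent == .allowOnce else {
        return nil // Denied: the prompt never leaves the device.
    }
    return sendToChatGPT(request.prompt)
}

// Usage with stubbed callbacks, standing in for Siri's UI and the network.
let reply = handle(AssistantRequest(prompt: "Plan a week of dinners"),
                   askUser: { _ in .allowOnce },
                   sendToChatGPT: { _ in "A stubbed ChatGPT response." })
print(reply ?? "Request was not sent.")
```

The design point is simply that there is no code path by which a prompt reaches the external model without the user’s approval.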

As part of its agreement with Apple, OpenAI made an important concession: it agreed not to store Apple users’ prompts or to collect their IP addresses. That changes, however, if you deliberately sign in to an existing ChatGPT account, at which point OpenAI’s own data policies apply. Some users may do so to keep their ChatGPT history or to take advantage of ChatGPT’s paid plans.

Should you trust Apple with your data?

Now that we’ve sorted out what OpenAI will and won’t do with your data, what about Apple? Apple users who want to use ChatGPT must send their prompts, and any personal data attached to them, to OpenAI. But Apple says that in most cases, Apple Intelligence will not send user data anywhere at all: it tries to process AI requests directly on the device, using the smallest AI model capable of handling the task.

This is similar to how Apple already handles Face ID and other sensitive data. The idea is that processing data directly on the device limits the risk of exposure: as long as the data is never sent anywhere, it cannot be intercepted in transit or stolen from a central server. If an AI task requires more processing power than the device can supply, Apple Intelligence sends the request and the relevant data to an Apple-controlled cloud computing platform, where a more powerful AI model carries it out.

This is where Apple claims to have made significant strides in data protection. The announcement got relatively little airtime during Monday’s packed keynote, but the company is clearly proud of what appears to be an extensively planned effort. Apple said Monday that it has developed a new approach to cloud computing that can perform computations on sensitive data while preventing anyone, even Apple itself, from seeing what data is being processed or what computations are being performed.
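
As a rough illustration of this two-tier design, consider the Swift sketch below. Apple has not disclosed how requests are actually triaged between the on-device model and its cloud platform, so the complexity check and every name here are assumptions made for illustration only.

```swift
import Foundation

// Illustrative only: Apple has not published its triage heuristics.
enum ModelTier {
    case onDevice      // smallest capable model; data stays on the device
    case privateCloud  // larger model on Apple-controlled servers
}

struct AIRequest {
    let prompt: String
    let estimatedCost: Int // invented stand-in for a real capability check
}

// Prefer on-device processing; escalate only when the task exceeds what
// the local model can handle.
func route(_ request: AIRequest, onDeviceBudget: Int = 10) -> ModelTier {
    request.estimatedCost <= onDeviceBudget ? .onDevice : .privateCloud
}

print(route(AIRequest(prompt: "Summarize this note", estimatedCost: 3)))        // onDevice
print(route(AIRequest(prompt: "Rewrite my 40-page report", estimatedCost: 50))) // privateCloud
```

Whatever the real heuristic is, the privacy property rests on the default: data leaves the device only when the local model cannot do the job.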

Apple’s new architecture, known as Private Cloud Compute, inherits hardware and security concepts from the iPhone, such as the Secure Enclave that already protects sensitive user data on Apple’s mobile devices. With Private Cloud Compute, “your data is never stored or exposed to Apple,” Craig Federighi, Apple’s senior vice president of software engineering, said during Monday’s keynote. Once a user’s AI request is complete, Private Cloud Compute erases all user data involved in the process, according to Apple. Apple argues that Private Cloud Compute is “only possible” because the company tightly controls its entire technology stack, from its proprietary chips to the software that ties it all together. If it is true that Apple cannot see the personal data its larger AI models process (a claim Apple has opened up to security researchers, who are invited to scrutinize the system’s design), then its implementation differs from those of other companies.
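
That “erase when done” guarantee amounts to a stateless request handler. The sketch below illustrates the stated behavior, not Apple’s server code; the function name and the stub model are invented for the example.

```swift
import Foundation

// Illustration of the stateless guarantee Apple describes: the prompt is
// held only for the lifetime of the request, with no logging and no storage.
func processEphemerally(_ prompt: String, runModel: (String) -> String) -> String {
    // The prompt exists only in this scope and is deallocated as soon as
    // the response is returned; nothing is written to disk or to logs.
    return runModel(prompt)
}

let answer = processEphemerally("Summarize my hiking photos") { _ in
    "Stub response from the cloud model." // placeholder model for the sketch
}
print(answer)
```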

With ChatGPT, for example, OpenAI tells you that it uses your data to further train its AI models. With Private Cloud Compute, you theoretically don’t have to take Apple’s word for it that your data isn’t being used for AI training.

What about Apple’s training data?

Apple’s AI models didn’t come out of nowhere. Like other companies’ models, they had to be trained, and that raises questions about whose data Apple used, and how. In a technical document released this week, Apple said its models are “trained on licensed data, including data selected to improve certain features.” “We never use private personal information or user interactions when training our base models,” the company added. “We also apply filters to remove personally identifiable information, such as Social Security or credit card numbers, that are publicly available on the Internet.”
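
That kind of filtering can be pictured as pattern matching over crawled text. The Swift sketch below is a minimal, assumed example covering the two identifier formats Apple names; Apple has not published its actual filters, and a real pipeline would be far more thorough.

```swift
import Foundation

// Assumed, minimal PII redaction: these patterns are illustrative only
// and do not reflect Apple's actual training-data filters.
func redactPII(_ text: String) -> String {
    let patterns = [
        #"\b\d{3}-\d{2}-\d{4}\b"#,  // U.S. Social Security number format
        #"\b(?:\d[ -]?){13,16}\b"#  // common credit card number formats
    ]
    var result = text
    for pattern in patterns {
        result = result.replacingOccurrences(of: pattern,
                                             with: "[REDACTED]",
                                             options: .regularExpression)
    }
    return result
}

print(redactPII("SSN 123-45-6789, card 4111 1111 1111 1111."))
// SSN [REDACTED], card [REDACTED].
```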

But Apple has acknowledged that it scrapes the public Internet for data used to train its own models, much as other AI companies do, some of which have faced copyright lawsuits and sparked debate over whether AI firms are unfairly profiting from people’s work. Apple has not disclosed what web-based information is included, beyond saying that publishers can add code to their websites to prevent Apple’s web crawlers from collecting their data. What is clear, however, is that this arrangement puts the burden of protecting intellectual property on publishers, not on Apple.
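
In practice, Apple’s crawler documentation describes an “Applebot-Extended” user agent token for this purpose: disallowing it in a site’s robots.txt opts the site’s content out of training Apple’s foundation models while ordinary Applebot crawling for search features continues. A publisher’s robots.txt entry would look roughly like this:

```
# Opt this site's content out of training Apple's foundation models.
# Regular Applebot crawling for search features is unaffected.
User-agent: Applebot-Extended
Disallow: /
```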