Elon Musk claims Apple’s new AI tools are a privacy risk. How much of a risk are they?


On Monday, Apple revealed a suite of highly anticipated AI features, including ChatGPT integration, that it will soon build into its devices. But not everyone was thrilled by the news.

While some observers were excited at the prospect of, for example, drawing a math equation on an iPad and having AI solve it, billionaire tech mogul Elon Musk called Apple’s inclusion of ChatGPT — which is developed by OpenAI, not Apple — an “unacceptable security violation.”

“If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies,” he wrote in a post on X, formerly Twitter. Musk co-founded OpenAI but stepped down from its board in 2018 and has since launched a competing AI company, xAI.

He said visitors to his companies “will have to check their Apple devices at the door, where they will be stored in a Faraday cage,” which is a shield that blocks phones from sending or receiving signals.

“Apple has no clue what’s actually going on once they hand your data over to OpenAI,” he wrote in a separate post. “They’re selling you down the river.”

But Musk’s posts also contained inaccuracies — he claimed Apple was “not smart enough” to build its own AI models, when in fact it has — prompting a community fact-check on X. Still, his privacy concerns spread far and wide.

But are those concerns valid? When it comes to Apple’s AI, do you need to worry about your privacy?

How privacy is built into Apple’s AI approach

Apple emphasized during Monday’s announcement at its annual developer conference that its approach to AI is designed with privacy in mind.

Apple Intelligence is the company’s name for its own AI models, which run on the devices themselves and don’t send information over the internet to do things like generate images and predict text. 

But some tasks need beefier AI, meaning some information must be sent over the internet to Apple’s servers, where more powerful models exist. To make this process more private, Apple also introduced Private Cloud Compute.
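In rough terms, what Apple describes is an on-device-first policy: handle a request locally when the built-in model can, and fall back to a server only for heavier tasks. The Swift sketch below is purely illustrative; every name in it (AIRequest, AIRouter, OnDeviceModel, CloudModel) is a hypothetical placeholder, not a real Apple API.

```swift
// Minimal sketch of the on-device-first routing the article describes.
// Every type and method here is a hypothetical placeholder, not a real
// Apple Intelligence or Private Cloud Compute interface.
struct AIRequest {
    let prompt: String
    let estimatedCost: Int  // assumed complexity estimate for the task
}

enum AIRouter {
    static let onDeviceLimit = 10  // assumed threshold for local handling

    static func handle(_ request: AIRequest) -> String {
        if request.estimatedCost <= onDeviceLimit {
            // Light tasks run locally; nothing leaves the device.
            return OnDeviceModel.run(request.prompt)
        }
        // Heavier tasks go to a server over an encrypted connection.
        return CloudModel.run(request.prompt)
    }
}

// Hypothetical stand-ins for the local and server-side models.
enum OnDeviceModel { static func run(_ prompt: String) -> String { "on-device: \(prompt)" } }
enum CloudModel { static func run(_ prompt: String) -> String { "cloud: \(prompt)" } }
```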

WATCH | Calls to pause development of AI: 

Elon Musk, tech experts call for pause on AI development

In an open letter citing risks to society, Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4. Some experts in Canada are also putting their names on that list.

When a device connects to one of Apple’s AI servers, the connection will be encrypted — meaning nobody can listen in — and the server will delete any user data after the task is finished. The company says not even its own employees can see the data that is sent to its AI servers.
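Apple hasn’t published the full protocol here, but “encrypted in transit” is the same basic guarantee that ordinary HTTPS traffic gets. As a rough illustration, here is how a Swift client sends data over a TLS-encrypted connection using the standard URLSession API; the endpoint URL is a made-up placeholder, not Apple’s actual Private Cloud Compute service.

```swift
import Foundation

// Any https:// request made through URLSession is encrypted in transit
// with TLS. The endpoint below is a hypothetical placeholder, not Apple's
// real Private Cloud Compute service.
let url = URL(string: "https://example.com/ai-task")!
var request = URLRequest(url: url)
request.httpMethod = "POST"
request.httpBody = Data("summarize this note".utf8)

URLSession.shared.dataTask(with: request) { data, _, error in
    // On the wire, an eavesdropper sees only TLS ciphertext, not the body.
    if let data = data {
        print(String(decoding: data, as: UTF8.self))
    } else if let error = error {
        print("Request failed: \(error)")
    }
}.resume()
```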

The servers run on Apple’s own chips and use the Secure Enclave, an isolated subsystem that handles sensitive material such as encryption keys, alongside other in-house privacy technology.
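The Secure Enclave is not a new or hypothetical component: app developers can already create keys inside it on Apple devices through the CryptoKit framework. A minimal example, using real CryptoKit calls: the private key is generated inside the enclave and never leaves it, and the app holds only an opaque reference and asks the hardware to sign on its behalf.

```swift
import CryptoKit
import Foundation

// Create a signing key inside the Secure Enclave (real CryptoKit API).
// The raw private key material never leaves the enclave; the app keeps
// only an opaque handle and delegates signing to the hardware.
if SecureEnclave.isAvailable {
    do {
        let key = try SecureEnclave.P256.Signing.PrivateKey()
        let message = Data("hello".utf8)
        let signature = try key.signature(for: message)

        // The public key can be exported so others can verify signatures.
        let valid = key.publicKey.isValidSignature(signature, for: message)
        print("Signature valid: \(valid)")
    } catch {
        print("Secure Enclave operation failed: \(error)")
    }
}
```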

Anticipating that people might not take it at its word, Apple also announced that it will release some of the code powering its…


