Apple on Friday defended its new child protection features, which will check images uploaded to its cloud storage service and sent through its messaging platform.
Apple announced two new features for iPads and iPhones in the United States last week.
According to the company, one can identify child sexual abuse images uploaded to its iCloud storage, while the other uses machine learning to recognize sexually explicit photos that children receive or send on Apple’s texting app, Messages, and warn them and their parents.
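As Apple has described it, the iCloud feature compares a fingerprint of each photo against a database of fingerprints of known abuse imagery, rather than inspecting photo content directly. The following is a rough, hypothetical Python sketch of that general fingerprint-matching idea only; it is not Apple’s actual NeuralHash algorithm or its cryptographic matching protocol, and the KNOWN_HASHES values and distance threshold are invented for illustration.

```python
from PIL import Image

def average_hash(path, hash_size=8):
    """Compute a simple perceptual 'fingerprint' of an image:
    downscale to grayscale, threshold each pixel against the mean,
    and pack the resulting bits into an integer."""
    img = Image.open(path).convert("L").resize(
        (hash_size, hash_size), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical blocklist of fingerprints of known images; in the
# system Apple describes, these would come from child-safety groups.
KNOWN_HASHES = {0x8F373714ACFCF4D0}

def is_flagged(path, threshold=5):
    """Flag an image if its fingerprint is within a small bit
    distance of any known fingerprint (tolerates minor edits)."""
    h = average_hash(path)
    return any(hamming_distance(h, k) <= threshold for k in KNOWN_HASHES)
```

In the system Apple describes, this comparison is wrapped in a cryptographic protocol so that non-matching photos reveal nothing to either side; the sketch above omits all of that machinery.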
“We can see that it’s been widely misunderstood,” the US tech giant’s software chief Craig Federighi said of the update rollout in an interview with the Wall Street Journal published Friday.
“We wanted to be able to spot such photos in the cloud without looking at people’s photos,” he said, adding that Apple wanted to “offer this kind of capability… in a way that is much, much more private than anything that’s been done in this area before.” Federighi said the new tools do not make Apple’s systems and devices less secure or confidential.
In a Friday briefing, Apple stated that it would rely on image databases supplied by trusted groups across multiple organizations to determine which images to look for, in order to ensure that the searches could not be exploited for other purposes.