Apple will introduce a mechanism in the United States to detect images of child sexual abuse stored on the iPhone. According to an August 6 Reuters article, Apple will match photos against a database of known abuse images before people upload them to iCloud.
The software, reportedly called "neuralMatch," will flag photos by comparing images stored on an individual's iPhone with images registered in a US law enforcement database. When a match is found, a review process begins, and law enforcement agencies will be alerted if the photo is determined to be illegal.
According to Reuters, the system checks the photos stored on the iPhone before they are uploaded to the iCloud server.
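As an illustration only, the sketch below shows the general idea of matching on-device images against a database of known fingerprints before upload. The function names, the use of SHA-256, and the sample fingerprint set are assumptions made for this example; Apple's actual system is reported to rely on a perceptual hashing scheme rather than exact cryptographic hashes.

```swift
import Foundation
import CryptoKit

// Hypothetical database of fingerprints of known illegal images.
// In practice this would be a much larger, vetted database.
let knownFingerprints: Set<String> = [
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
]

// Compute a SHA-256 fingerprint of an image's raw bytes.
func fingerprint(of imageData: Data) -> String {
    let digest = SHA256.hash(data: imageData)
    return digest.map { String(format: "%02x", $0) }.joined()
}

// Flag an image before upload if its fingerprint matches the database.
func shouldFlag(_ imageData: Data) -> Bool {
    knownFingerprints.contains(fingerprint(of: imageData))
}
```

Exact hashing, as sketched here, only catches byte-identical copies; that is why systems of this kind are reported to use perceptual hashes, which tolerate resizing and re-encoding.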
The news was first reported by the Financial Times. Forbes has reached out to Apple for comment.
Last year, the U.S. Department of Justice announced "voluntary principles" for tech and social media companies aimed at strengthening measures against the sexual exploitation and abuse of children, and called on companies to report illegal content to the authorities immediately upon discovery.
Microsoft later created PhotoDNA, a tool businesses can use to identify images of child sexual abuse on the internet. Facebook and Google have already introduced systems to check and flag image content.
Matthew Green, a security researcher at Johns Hopkins University, said that building a system that lets Apple scan personal iPhones for "banned content" would "break the dam" and could lead the US government to demand surveillance of everyone's devices.
Green also expressed concern to the Associated Press that Apple could come under pressure from governments in other countries to scan for other kinds of information.
According to the National Center for Missing & Exploited Children, Facebook reported 20 million images of child sexual abuse to law enforcement agencies in 2020, up from 16 million in 2019. This figure includes images on both Facebook and Instagram.