Just a week after The New York Times published an article about how facial recognition services can endanger kids, PimEyes has decided to ban and block searches for minors over concerns of child endangerment. It is not the first time children’s welfare has been raised this month, as Australia’s eSafety Commissioner called out tech giants Google and Twitter in its second report.
PimEyes has reportedly banned over 200 accounts for inappropriate searches involving children. In one instance, writer Kashmir Hill recounted that a parent found photos of her children on PimEyes that she had never seen before, but could not trace where an image originated without paying the $29.99 monthly subscription fee. In this article, we will touch on why PimEyes threatens child safety and why banning inappropriate searches for minors is a positive first step.
Also Read: Australia to Require Google, Bing to Clamp Down on Artificial Intelligence Child Porn
PimEyes Struggles to Accurately Identify Children Photographed at Certain Angles
For those who have never heard of PimEyes, it is an online face search engine that scans the Internet for pictures containing a given face. Its new detection system uses age-detection AI to determine whether the person in a photo is a child. Even though it is still a work in progress, PimEyes is already one of the most influential publicly available facial recognition tools online.
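PimEyes has not published how its engine works, but face search engines of this kind typically convert each face into a numeric embedding and rank indexed images by similarity to the query. The sketch below is a hypothetical illustration of that general approach using cosine similarity over toy vectors; the index, URLs and 3-dimensional embeddings are made up for demonstration (real systems use deep face models producing hundreds of dimensions).

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors (1.0 = identical direction).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_faces(query_embedding, index, threshold=0.8):
    # Score every indexed face against the query and keep strong matches,
    # sorted from most to least similar.
    scored = [(url, cosine_similarity(query_embedding, emb))
              for url, emb in index.items()]
    return sorted([m for m in scored if m[1] >= threshold],
                  key=lambda m: m[1], reverse=True)

# Toy index: in a real engine these embeddings would come from a face model
# applied to crawled web images.
index = {
    "site-a.example/photo1.jpg": np.array([0.9, 0.1, 0.4]),
    "site-b.example/photo2.jpg": np.array([-0.5, 0.8, 0.1]),
}
query = np.array([0.85, 0.15, 0.45])  # embedding of the uploaded face
results = search_faces(query, index)
```

Here only the first photo clears the similarity threshold, so the second (a different face) is filtered out — the same mechanism that makes such engines powerful also makes them easy to misuse against anyone whose photo appears online.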
However, PimEyes has significant downsides. For example, The New York Times ran some tests and found that the search engine struggles to identify children photographed at certain angles and does not always accurately detect teenagers. Moreover, the $29.99 subscription fee to see where images originated feels outright wrong, as Kodye Elyse found when she discovered images of her 7-year-old son that she had never seen. Apparently, a sports news site had captured the photo when her ex-husband took their son to a soccer game, without her knowledge.
She also found a toddler-aged photo of her now 9-year-old daughter being used to promote a summer camp the girl had attended. Such outright infringements of privacy have drawn considerable public backlash towards PimEyes, whose owner, Giorgi Gobronidze, even facetiously admitted to The Intercept that the service was tailor-made for stalkers.
PimEyes Has Done Something Google and Meta Are Afraid Of
According to journalist Hill, Google and Meta developed facial recognition technology before PimEyes did but have been reluctant to release it to the public. In her own words: “Eric Schmidt, as far back as 2011, said this was the one technology that Google had developed and decided to hold back, that it was too dangerous in the wrong hands — if it was used by a dictator, for example.”
While PimEyes owner Giorgi Gobronidze acknowledged the risks of his platform, he maintained that the service has many legitimate uses: “PimEyes can be applied for many legitimate purposes. For instance, to protect someone from scams and notice if you or your family members are under close surveillance from identity thefts.”
He also reiterated that the platform alone does not establish people’s identities: “We do not identify people. We identify websites that identify images similar to the search material.” When questioned about the fact that nothing stops anyone from running a search on anyone else at any time, he replied, perhaps wishfully: “People are not as terrible as sometimes we like to imagine.”
Related: AI Copyright: Google’s Ambitious Promise to Protect Users in Lawsuits
Concluding Verdict
Despite his optimism, the PimEyes owner has finally, after the latest round of scrutiny, implemented the protection mechanism he promised in 2021. According to Gobronidze, human rights organisations working to help minors can continue to search for them, while all other searches will return images with children’s faces blocked.
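The gating logic described above can be pictured as a simple post-processing step: faces whose estimated age falls under a threshold are flagged for blocking unless the requester is a vetted organisation. This is a hypothetical sketch, not PimEyes’ actual implementation; the function name, the `estimated_age` field and the 18-year threshold are all assumptions for illustration.

```python
MINOR_AGE_THRESHOLD = 18  # assumed cutoff; the real system's threshold is unknown

def filter_results(detected_faces, requester_is_vetted=False):
    # detected_faces: list of dicts like {"url": ..., "estimated_age": ...},
    # where estimated_age comes from an age-detection model.
    # Vetted requesters (e.g. child-safety organisations) see unblocked results.
    results = []
    for face in detected_faces:
        is_minor = face["estimated_age"] < MINOR_AGE_THRESHOLD
        results.append({
            "url": face["url"],
            "blur_face": is_minor and not requester_is_vetted,
        })
    return results
```

Note that the whole scheme inherits the weakness the Times documented: if the age-detection model misjudges a teenager as an adult, the face is not blocked at all.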
While this is a small victory, other facial recognition engines online remain in the spotlight for privacy violations, and some law enforcement agencies blatantly use such engines with little oversight. According to Woodrow Hartzog, a Boston University School of Law professor, Silicon Valley watchers predict it is only a matter of time before facial recognition becomes as commonplace as AI chatbots, leaving larger tech companies no choice but to release their own advanced search engines to compete.
Hartzog says he hopes that future never comes to pass: if facial recognition is deployed widely, it would virtually end the ability to hide in plain sight, something we do all the time without ever thinking about it. What are your thoughts on the matter? Let us know on our Facebook, Twitter and TikTok pages.
Frequently Asked Questions
What Other Facial Recognition Engines Online Are Violating Personal Rights?
Besides PimEyes, Clearview AI is also under fire for violating personal rights. According to reporting from The New York Times, law enforcement agencies abuse the AI-powered face search engine with little oversight. Moreover, Clearview AI declined to make anyone available for an interview, which only deepens the suspicion.
What Benefits Could Come From Facial Recognition Engines?
In addition to what we mentioned above, facial recognition engines can help you quickly identify someone whose name you have forgotten and keep tabs on where your own images appear on the web.
Do the Risks of Facial Recognition Engines Outweigh the Benefits?
There are some legitimate uses for search engines like PimEyes, as its owner mentioned, but the risks severely outweigh the benefits. For instance, governments and private companies could deploy the technology to profile or surveil people in public. Bad actors can also exploit it for their own ends, and you may not even know about it until it is too late.