Meta is shutting down Facebook’s controversial face recognition feature and deleting the face data collected from users through the social media network, citing “growing societal concerns”. But privacy campaigners are concerned that the company hasn’t been clear on whether the algorithms trained on that data will be deleted.
Images uploaded to Facebook have been scanned by artificial intelligence (AI) tools since 2010, giving the uploader the option of “tagging” people in the image. Meta, then known simply as Facebook, attracted criticism when the feature first launched for failing to ask permission from users, and has since struggled to align it with local privacy laws.
In 2012, the company switched off face recognition for people in the EU after a German data protection commissioner said that it violated European Union law – it returned in 2018 with an explicit opt-in requirement. The firm also settled a class-action lawsuit last year in Illinois that claimed the feature violated state law, making a payment of $550 million.
Meta has now announced that it will shut down the system globally and delete the “faceprint” data – the digital representation of users’ faces – collected from Facebook users. The company says that more than a third of Facebook’s 2.8 billion users had opted in to face recognition.
“From a PR point of view, it seems positive,” says Ella Jakubowska at campaign group European Digital Rights, “but when you actually look under the hood it’s not doing anything to tackle the systemic issues.”
Although Meta says it will delete the faceprints of Facebook users, Jakubowska says there is no mention of deleting the AI algorithms that have been trained on the data, and which have the actual power to recognise people in images.
“They’ve had this database for over 10 years and they might now be thinking that they’ve got what they needed out of it, to train the algorithms… and actually, they can get rid of the database,” she says.
Jerome Pesenti, vice-president for artificial intelligence at Meta, said in a blog post: “The many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole.” Meta referred New Scientist to the blog post when asked to comment further.
Jake Hurfurt at campaign group Big Brother Watch says the move should be cautiously welcomed. “No company should hoard that amount of biometric data,” he says. “There is still a need for clear rules to restrict the use of this intrusive technology and prevent the collection of millions of people’s private, biometric data by unaccountable corporations.”
Lorna Woods at the University of Essex, UK, says Meta’s move may have been driven by the fact that opt-out approaches sit uneasily with recent data privacy legislation, such as the EU’s General Data Protection Regulation, and are increasingly unpopular with the general public.
“If you’re building it into everyday products, then it’s hard to see where informed consent is coming in, or how legitimate reasons for the processing would take priority over privacy concerns,” she says. “I think there may be recognition that if you’re using facial recognition at borders, or to protect from terrorism, that’s more justifiable than if you’re just doing it so you can target ads.”
Meta still sees a future for face recognition, whether to verify users’ identities or to prevent fraud and impersonation, and says it will keep working on developing such technologies. The company says that on-device face recognition, which doesn’t share data with external computer servers, is one possible way forward.