Last month, Facebook parent Meta announced that it would be testing new ways for its Instagram platform to verify the age of its users. The social media service currently requires that users be at least 13 years old to sign up, while in some countries, the minimum age requirement is even higher.
Meta has partnered with Yoti, a company that specializes in online age verification, and is now employing artificial intelligence (AI) as part of the efforts to determine which users are teens and which are adults.
“We’re testing new options for people on Instagram to verify their age, starting with people based in the US,” the company announced. “If someone attempts to edit their date of birth on Instagram from under the age of 18 to 18 or over, we’ll require them to verify their age using one of three options: upload their ID, record a video selfie or ask mutual friends to verify their age. We’re testing this so we can make sure teens and adults are in the right experience for their age group.”
A First Step
The fact that Meta is instituting these practices shows that it is taking the issue at least somewhat seriously, and it could be seen as an important first step.
“One of the obvious security pros for providing age verification is that the intended ‘Adult’ audience for the social media platform is just that: ‘Adults,’” said Dr. Brian Gant, assistant professor of cybersecurity at Maryville University.
“A site that can promote their process and provide statistics is exceptionally beneficial; it provides comfort to all stakeholders, potentially preventing crimes like sexual exploitation, cyberbullying, and even fraud,” Gant added. “With any level of security on applications or platforms, you have to have multiple layers to be most effective. Age verification is an example of one of those layers, but you can take it a step further with multi-factor authentication (MFA) and also peer identification, such as submitting videos to corroborate the age of users.”
However, even these efforts may not go far enough in helping protect minors.
“While it is an effort to protect youth, it is severely flawed. The age verification policy will not help protect minors, or will have a minimal effect,” explained Lois Ritter, teaching associate professor for the Master of Public Health program at the University of Nevada, Reno.
“New users can simply lie about their age, as the policy only refers to those who have an existing account. Existing users can create new accounts or skirt around the age verification processes, which include uploading their identification (ID), recording a video selfie, or asking mutual friends to verify their age, referred to as social vouching,” Ritter added. “Here are examples of how these approaches can be ineffective in terms of verifying age and protecting minors from issues such as sex trafficking and substance misuse.”
Enter Digital Fake IDs?
Using someone else’s ID online will be easier than using one to buy alcohol or enter a dance club, Ritter further noted, and she warned that video selfies are equally flawed.
“Can artificial intelligence really tell the difference between a 17-year-old and an 18-year-old?” she pondered. “Also, using make-up, wigs, and other props, one can look older. Using artificial intelligence to estimate age is error prone, and there are many youth, particularly those who are being trafficked, who are made to look older by the trafficker. Youth who are not trafficking victims can look older than they are, as they develop at different rates.”
There will certainly be similar problems with “social vouching,” as it could even put some children in harm’s way – “such as sex traffickers or adults purchasing their services,” said Ritter. “In addition, my guess is that a system will be developed for people to be paid for social vouching, and it will become another illegal revenue source – like people paying someone to go inside the liquor store and purchase alcohol for them.”
Moreover, these age verifications do not seem to address the fact that underage youth could still have an adult share a password.
“Sex trafficking victims can have their trafficker sign on using the trafficker’s account,” warned Ritter. “Traffickers often monitor or control trafficking victims’ online behavior, so this is feasible. For example, traffickers may require victims to engage with potential ‘clients’ online. It will not help with traffickers posting photos of their victims or with virtual sex trafficking.”
What About The Data?
In addition to the concerns that these measures might not actually help protect children, there is also the issue of how the age verification information will be used. As we’ve already seen, social media companies haven’t exactly proven to be entirely trustworthy with user data.
“Consumers will have to trust that the social media platforms implementing this type of technology will delete the data immediately once age verification is done,” said Gant. “Data is very important, and those who have it hold the keys to the kingdom. If companies use this data not only to conduct age verification but to confirm the identities of individuals, that is a very slippery slope.”
Ritter is even more cynical about these efforts, suggesting, “This is a way to appease Congress and concerned citizens about the media platforms’ negative effects on youth. Unfortunately, it is a public relations effort and not a solution to the problem.”