- Facebook and Instagram weren’t designed for people under the age of 13, so we’re creating new ways to stop those who are underage from signing up.
- We’re developing AI to find and remove underage accounts, and new solutions to verify people’s ages.
- We’re also building new experiences designed specifically for those under 13.
As per our terms, we require people to be at least 13 years old to sign up for Facebook or Instagram. In some countries, our minimum age is higher. When people open our apps to sign up for an account, we ask them for their birthday. This is called an age screen. Those who are underage are not allowed to sign up, and we restrict people who repeatedly try to enter different birthdays into the age screen. But verifying someone’s age is not as simple as it might sound. While age screens are common in our industry, young people can — and often do — get around them by misrepresenting their age. So how are we addressing this problem?
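The age screen described above amounts to a simple gate: compute a person’s age from the birthday they enter, compare it against the minimum for their market, and restrict signups that repeatedly retry with different birthdays. The sketch below illustrates that logic only; the country minimums, retry cap, and function names are assumptions for illustration, not Facebook’s actual values.

```python
from datetime import date
from typing import Optional

# Illustrative values only; real minimum ages and retry limits are assumptions.
MINIMUM_AGE_BY_COUNTRY = {"KR": 14}  # example of a market with a higher minimum
DEFAULT_MINIMUM_AGE = 13
MAX_BIRTHDAY_ATTEMPTS = 3  # hypothetical cap on retries with new birthdays


def age_on(birthday: date, today: date) -> int:
    """Whole years elapsed between birthday and today."""
    years = today.year - birthday.year
    if (today.month, today.day) < (birthday.month, birthday.day):
        years -= 1
    return years


def age_screen(birthday: date, country: str, prior_attempts: int,
               today: Optional[date] = None) -> str:
    """Return 'allow', 'deny', or 'restricted' for one signup attempt."""
    today = today or date.today()
    if prior_attempts >= MAX_BIRTHDAY_ATTEMPTS:
        # Repeatedly entering different birthdays blocks further attempts.
        return "restricted"
    minimum = MINIMUM_AGE_BY_COUNTRY.get(country, DEFAULT_MINIMUM_AGE)
    return "deny" if age_on(birthday, today) < minimum else "allow"
```

As the post notes, a gate like this is easy to get around by misrepresenting a birthday, which is why it is only the first layer of defense.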
Understanding people’s age on the internet is a complex challenge across our industry, and we already have various methods of finding and removing accounts used by people who misrepresent their age. For example, anyone can report an underage account to us. Our content reviewers are also trained to flag reported accounts that appear to be used by people who are underage. If these people are unable to prove they meet our minimum age requirements, we delete their accounts.
Many argue that collecting ID is the answer to this industry problem, but this approach has significant limitations: many young people don’t have an ID, collecting IDs isn’t a fair or equitable solution, and it isn’t foolproof. Access to government IDs varies depending on where you live in the world, as does the information an ID contains, such as a birthday. Some people don’t get an ID unless they choose to travel, and some simply can’t afford one. Lack of ID access disproportionately impacts underserved communities around the world, particularly young women. And even young people who do have an ID may be uncomfortable sharing it. A young member of the LGBTQ+ community, for example, might worry about having their identity attached to a pseudonymous account.
These are not new problems, and we will continue to invest in finding the right solutions. We need to keep people who are too young off of Facebook and Instagram, and we want to make sure that those who are old enough receive the appropriate experience for their age. Today, we’re sharing how we’re tackling this issue from multiple angles. Here are a few examples.
Using AI to Detect Age
Artificial intelligence is the cornerstone of our approach. We’ve developed technology that allows us to estimate people’s ages, such as whether someone is under or over 18. We train the technology using multiple signals. For example, we look at people wishing you a happy birthday and the age written in those messages, like “Happy 21st Bday!” or “Happy Quinceañera.” We also apply the age you shared with us on one of our apps to other apps where you have linked your accounts — so if you share your birthday with us on Facebook, we’ll use the same birthday for your linked account on Instagram, and vice versa. This technology isn’t perfect, and we’re always working to improve it, which is why we use it alongside many other signals to understand people’s ages.
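To make the birthday-message signal concrete, here is a toy extractor. The pattern, the function name and the Quinceañera rule are assumptions for illustration; a production system combines many weak signals like this rather than relying on any single message.

```python
import re
from typing import Optional

# Hypothetical pattern for one signal described above: an age written in a
# birthday wish such as "Happy 21st Bday!".
ORDINAL_AGE = re.compile(r"\bhappy\s+(\d{1,3})(?:st|nd|rd|th)\b", re.IGNORECASE)


def age_from_birthday_message(message: str) -> Optional[int]:
    """Return the age implied by a birthday wish, or None if there isn't one."""
    match = ORDINAL_AGE.search(message)
    if match:
        return int(match.group(1))
    # A quinceañera celebrates a 15th birthday, so the word itself implies an age.
    if "quinceañera" in message.lower():
        return 15
    return None
```

Each extracted age is just one noisy data point about the account that received the message, to be weighed against other signals such as the birthday shared on a linked account.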
This technology is also the basis of important changes we’re making to keep young people safe. We’re using it to stop adults from messaging young people who don’t follow them on Instagram. And we announced today that we will no longer show posts from young people’s accounts, or the accounts themselves, to adults who have shown potentially suspicious behavior. We plan to apply this technology across our apps to create more age-appropriate experiences and safety measures for young people. We’re also building similar technology to find and remove accounts belonging to people under the age of 13.
We’re focused on using existing data to inform our artificial intelligence technology. Where we do feel we need more information, we’re developing a menu of options for someone to prove their age. This is a work in progress and we’ll have more to share in time.
Working With Industry Partners
We’re also in discussions with the wider technology industry about how we can work together to share information in privacy-preserving ways that help apps establish whether people are over a specific age. One area we believe has real promise is working with operating system (OS) providers, internet browsers and other providers, so they can share signals that help apps make that determination.
This has the dual benefit of helping developers keep underage people off their apps while removing the need to go through differing and potentially cumbersome age verification processes across multiple apps and services. While it’s ultimately up to individual apps and websites to enforce their age policies and comply with their legal obligations, collaboration with OS providers, internet browsers and others would be a helpful addition to those efforts.
Building Experiences for People Under 13
We’re also looking at ways we can reduce the incentive for people under the age of 13 to lie about their age. The reality is that they’re already online, and with no foolproof way to stop people from misrepresenting their age, we want to build experiences designed specifically for them, managed by parents and guardians. This includes a new Instagram experience for tweens. We believe that encouraging them to use an experience that is age appropriate and managed by parents is the right path. It’s going to take a village to make this experience compelling enough so that this age group wants to use it, but we’re determined to get it right.
Working With Experts
We believe this comprehensive plan is the right one for Facebook and Instagram, but the natural question for readers is how we’re going to do all of this in a way that respects people’s privacy and prioritizes safety at every turn. We’re fortunate to be able to draw on multiple industry experts, organizations and bodies of research here.
First, to help us develop new products and features for young people, in 2017 we convened a group of experts in the fields of online safety, child development and children’s media to share their expertise, research and guidance. This group, known as the Youth Advisors, helps shape our work by providing feedback on the development of new products and policies for young people. We meet regularly with the group, which includes the Family Online Safety Institute, Digital Wellness Lab, MediaSmarts, Project Rockit and the Cyberbullying Research Center.
We recently expanded this group to add new experts in privacy, youth development, psychology, parenting and youth media, and will continue expanding to include a diverse range of global perspectives. Our new members include: Jutta Croll at Stiftung Digitale Chancen, Pattie Gonsalves at Sangath – It’s Okay To Talk, Vicki Shotbolt at ParentZone UK, Dr. Alfiee M. Breland-Noble at AAKOMA Project, Rachel Rodgers at Northeastern University, Janis Whitlock at Cornell University and Amelia Vance at the Future of Privacy Forum.
Next, we continue to welcome productive collaboration with lawmakers and elected officials to guide us. Age verification is a focal point of multiple new and proposed regulatory frameworks on data protection, online harm and child safety. In particular, the ICO Age Appropriate Design Code, the UN Convention on the Rights of the Child, the Irish DPC’s Children’s Fundamentals and the EU Audiovisual Media Services Directive, among others, underpin the work we’re doing to create privacy and safety standards for building youth products at Facebook. We plan to share these standards publicly in the coming months.
Finally, we’ll continue to take part in dialogues about age verification, developing industry best practices and forming new technical standards. For example, we recently joined the Advisory Board for the euCONSENT Consortium to help develop EU-wide infrastructure for online age verification and parental consent.
This is complex territory, with competing interests and considerations. We’re committed to working with experts and the broader industry to give young people a compelling and safe experience on our services.