Instagram launches ‘teen accounts’ as child safety laws loom
Instagram is launching a new account type for teenagers that will include a range of heightened parental control features, as well as stricter default settings that limit the type of messages teens can receive and the type of content they see.
While many of the teen account safeguards, including restrictions on sensitive content and messaging, already exist, teens younger than 16 will now need a parent’s permission to opt out of those settings. The new accounts also include a feature that will allow parents and guardians to monitor who their children are direct-messaging (without viewing the messages themselves). And Instagram will soon begin deploying artificial intelligence tools to identify teens who are lying about their age; those users will then be transitioned to teen accounts.
“Teen Accounts are designed to better support today’s parents and give them peace of mind that their teens have the right protections in place,” Instagram wrote Tuesday in a white paper explaining the new features.
The announcement comes as momentum for child safety laws builds at both the state and federal levels. This summer, the Senate passed the bipartisan Kids Online Safety Act (KOSA), which would require online platforms to “prevent and mitigate” potential harms to children, including by disabling addictive features and limiting other users’ communications with minors. (The House of Representatives has yet to take up KOSA.) In Meta’s home state, meanwhile, the California Age-Appropriate Design Code Act would, if it survives court challenges, similarly require stronger default data privacy protections for minors.
Both pieces of legislation have drawn fierce opposition, which suggests some aspects of Instagram’s new parental controls may do the same. Advocacy groups have repeatedly expressed concerns about the harms that heightened parental supervision and content restrictions pose, particularly to LGBTQ+ youths from unsupportive or abusive backgrounds.
“I do have concerns about parental controls that are really more about parental surveillance, versus restricting a company’s ability to collect minors’ personal data,” says Evan Greer, director of the advocacy group Fight for the Future. That said, Greer says she’d much rather companies implement safeguards themselves than have the government mandate them.
On the other hand, the risks of lax supervision are also grave. Instagram’s impact on teen mental health has been the subject of debate for years. In 2021, the platform paused development on a product called Instagram Kids, intended for children younger than 13, after Wall Street Journal reporters uncovered leaked internal documents suggesting Instagram was having a negative impact on teenage girls. (Instagram said the reporting was a “mischaracterization” of its research.)
Studies and law enforcement data, meanwhile, have shown that crimes like financial sextortion are on the rise on Instagram and other online platforms. These scams, which often begin with a stranger using a fake account to direct-message unwitting users and solicit compromising nude images, have targeted mainly teenage boys and have led to at least 20 teen suicides in just the past three years. Earlier this year, Meta said it had removed 63,000 accounts linked to these scams.
The company has already begun using automated tools to filter nude images out of teens’ direct messages and to prevent suspected scammers from messaging teens. But the new controls will give parents the opportunity to see exactly who their teens are talking to. It’s a feature that already exists on Snapchat and that Instagram said it developed based on feedback from parents and teens in its research. Notably, the new teen accounts are less strict for older teens, who can change default settings without parental consent.
Rachel Rodgers, an associate psychology professor at Northeastern University, applauded this distinction. “The capacity to critically understand social media content, the intent behind it, and the social skills to be able to interact effectively online continue to emerge during adolescence,” Rodgers wrote in a statement to Fast Company. “The new features being implemented by Instagram are responsive to all of these by offering different protections for young and older teens, and ways in which these can be customized by teens and their parents.”
Meta is testing teen accounts on Instagram to start and said it plans to expand these accounts globally and to other Meta platforms next year. For now, any teen who signs up for the app will be funneled into these accounts immediately, and teens in the U.S., U.K., Australia, and Canada who are already on Instagram will have these settings turned on within the next 60 days.
Beginning next year, AI tools will also start to flag suspected teen accounts based on data points like when an account was created and how it interacts with other accounts. Instagram will use that information to move those users into teen accounts.