Last year, the National Center for Missing & Exploited Children (NCMEC) released data showing that it received overwhelmingly more reports of child sexual abuse material (CSAM) from Facebook than from any other web service it tracks. Where other popular social media platforms like Twitter and TikTok had tens of thousands of reports, Facebook had 22 million.
Today Facebook announced new efforts to limit the spread of some of that CSAM on its platforms. In partnership with NCMEC, Facebook is building a “global platform” to prevent “sextortion” by helping to “stop the spread of intimate images of teens online.”
“We are working with the National Center for Missing & Exploited Children (NCMEC) to build a global platform for teens who are concerned that intimate images they have created could be shared on public online platforms without their consent,” Antigone Davis, Facebook’s global vice president of safety, said in a Facebook blog post on Monday.
This global youth platform will work similarly to the platform Meta created to help adults combat “revenge porn,” Davis said. Facebook said last year that the tool was the “first global initiative of its kind,” allowing users to create a hash of their photos to proactively stop them from being distributed on Facebook and Instagram.
According to Davis, Meta has found that more than 75 percent of the child exploitative content that goes viral on its platforms, at rates that outpace other social media, is posted by people with “no apparent intent to harm.” Instead, CSAM is shared to express anger, disgust, or “bad humor,” Davis said.
“Sharing this content violates our policies, regardless of intent,” Davis said. “We plan to launch a new PSA campaign that will encourage people to stop and think before re-sharing those images online and report them to us instead.”
She also said that there will be more news about the new platform for young adults in the coming weeks.
NCMEC did not immediately respond to Ars’ request for comment.
Meta looking at flagging ‘suspicious’ adults
In her blog post, Davis described several other updates Meta has made to better protect teens.
For new users under the age of 16 (or 18 in some countries), Meta will apply default privacy settings that prevent strangers from seeing their friends list, pages they follow, or posts they’ve been tagged in. Teens will also have default settings that limit who can comment on their posts and require them to review posts they are tagged in before those posts appear on their pages. For all teens already on the platforms, Meta said it will send out notifications recommending that they update their privacy settings.
Perhaps the biggest precaution Meta is testing now, though, is a step toward flagging adult users believed to be harassing teenage users as “suspicious” accounts.
“A ‘suspicious’ account is an account belonging to an adult that may have been recently blocked or reported by a young person, for example,” Davis wrote.
To identify who is “suspicious,” Meta plans to rely on reports from teenage users. When a teen blocks an account, the teen will also receive a notification prompting them to report the account to Meta and let the company know “if something is making them feel uncomfortable while using our apps.” To identify more suspicious users, Meta said it will review all blocks made by teens, regardless of whether they file a report.
Any account reported as suspicious will not be shown in People You May Know recommendations for teen users. Davis said Meta is also considering whether to remove the message button entirely when a “suspicious” account views the profile of a teen user.
Even this system is imperfect, of course. The main drawback of Meta’s solution here is that reporting “suspicious” users will only happen after some teens have already been harassed.