Instagram’s algorithm delivers toxic video mix to adults who follow children
Content served to WSJ test accounts included risqué footage of kids, overtly sexual adult videos and ads from major brands
Instagram’s Reels video service is designed to show users streams of short videos on topics the system decides will interest them, such as sports, fashion or humor.
The Meta Platforms-owned social app does the same thing for users its algorithm decides might have a prurient interest in children, testing by The Wall Street Journal showed.
The Journal sought to determine what Instagram’s Reels algorithm would recommend to test accounts set up to follow only young gymnasts, cheerleaders and other teen and preteen influencers active on the platform.
Instagram’s system served jarring doses of salacious content to those test accounts, including risqué footage of children as well as overtly sexual adult videos—and ads for some of the biggest U.S. brands.
The Journal set up the test accounts after observing that the thousands of followers of such young people’s accounts often include large numbers of adult men, and that many of the accounts that followed those children had also demonstrated interest in sexual content related to both children and adults. The Journal also tested what the algorithm would recommend after its accounts followed some of those users as well, which produced more-disturbing content interspersed with ads.
In a stream of videos recommended by Instagram, an ad for the dating app Bumble appeared between a video of someone stroking the face of a life-size latex doll and a video of a young girl with a digitally obscured face lifting up her shirt to expose her midriff. In another, a Pizza Hut commercial followed a video of a man lying on a bed with his arm around what the caption said was a 10-year-old girl.
The Canadian Centre for Child Protection, a child-protection group, separately ran similar tests, with similar results.
Meta said the Journal’s tests produced a manufactured experience that doesn’t represent what billions of users see. The company declined to comment on why the algorithms compiled streams of separate videos showing children, sex and advertisements, but a spokesman said that in October it introduced new brand safety tools that give advertisers greater control over where their ads appear, and that Instagram either removes or reduces the prominence of four million videos suspected of violating its standards each month.
The Journal reported in June that algorithms run by Meta, which owns both Facebook and Instagram, connect large communities of users interested in pedophilic content. The Meta spokesman said a task force set up after the Journal’s article has expanded its automated systems for detecting users who behave suspiciously, taking down tens of thousands of such accounts each month. The company also is participating in a new industry coalition to share signs of potential child exploitation.
Companies whose ads appeared beside inappropriate content in the Journal’s tests include Disney, Walmart, online dating company Match Group, Hims, which sells erectile-dysfunction drugs, and The Wall Street Journal itself. Most brand-name retailers require that their advertising not run next to sexual or explicit content.
"Our systems are effective at reducing harmful content, and we’ve invested billions in safety, security and brand suitability solutions," said Samantha Stetson, a Meta vice president who handles relations with the advertising industry. She said the prevalence of inappropriate content on Instagram is low, and that the company invests heavily in reducing it.
After the Journal contacted companies whose ads appeared in the testing next to inappropriate videos, several said that Meta told them it was investigating and would pay for brand-safety audits from an outside firm.
Following what it described as Meta’s unsatisfactory response to its complaints, Match began canceling Meta advertising for some of its apps, such as Tinder, in October. It has since halted all Reels advertising and stopped promoting its major brands on any of Meta’s platforms. "We have no desire to pay Meta to market our brands to predators or place our ads anywhere near this content," said Match spokeswoman Justine Sacco.
Robbie McKay, a spokesman for Bumble, said it "would never intentionally advertise adjacent to inappropriate content," and that the company is suspending its ads across Meta’s platforms.
Charlie Cain, Disney’s vice president of brand management, said the company has set strict limits on what social media content is acceptable for advertising and has pressed Meta and other platforms to improve brand-safety features. A company spokeswoman said that since the Journal presented its findings to Disney, the company had been working on addressing the issue at the "highest levels at Meta."
Walmart declined to comment, and Pizza Hut didn’t respond to requests for comment.
Hims said it would press Meta to prevent such ad placement, and that it considered Meta’s pledge to work on the problem encouraging.
The Journal said that it was alarmed that its ad appeared next to a video of an apparent adult sex act and that it would demand action from Meta.
Meta created Reels to compete with TikTok, the video-sharing platform owned by Beijing-based ByteDance. Both products feed users a nonstop succession of videos posted by others, and make money by inserting ads among them. Both companies’ algorithms show to a user videos the platforms calculate are most likely to keep that user engaged, based on his or her past viewing behavior.
The Journal’s reporters set up the Instagram test accounts as adults on newly purchased devices and followed the gymnasts, cheerleaders and other young influencers. The tests showed that following only the young girls triggered Instagram to begin serving videos from accounts promoting adult sex content alongside ads for major consumer brands, such as one for Walmart that ran after a video of a woman exposing her crotch.
When the test accounts then followed some users who followed those same young people’s accounts, they yielded even more disturbing recommendations. The platform served a mix of adult pornography and child-sexualizing material, such as a video of a clothed girl caressing her torso and another of a child pantomiming a sex act.
Experts on algorithmic recommendation systems said the Journal’s tests showed that while gymnastics might appear to be an innocuous topic, Meta’s behavioral tracking has discerned that some Instagram users following preteen girls will want to engage with videos sexualizing children, and then directs such content toward them.
"Niche content provides a much stronger signal than general interest content," said Jonathan Stray, senior scientist for the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley.
Current and former Meta employees said in interviews that the tendency of Instagram algorithms to aggregate child sexualization content from across its platform was known internally to be a problem. Once Instagram pigeonholes a user as interested in any particular subject matter, they said, its recommendation systems are trained to push more related content to them.
Preventing the system from pushing noxious content to users interested in it, they said, requires significant changes to the recommendation algorithms that also drive engagement for normal users. Company documents reviewed by the Journal show that the company’s safety staffers are broadly barred from making changes to the platform that might reduce daily active users by any measurable amount.
The test accounts showed that advertisements were regularly added to the problematic Reels streams. Ads encouraging users to visit Disneyland for the holidays ran next to a video of an adult acting out having sex with her father, and another of a young woman in lingerie with fake blood dripping from her mouth. An ad for Hims ran shortly after a video depicting an apparently anguished woman in a sexual situation along with a link to what was described as "the full video."
Even before the 2020 launch of Reels, Meta employees understood that the product posed safety concerns, according to former employees.
Part of the problem is that automated enforcement systems have a harder time parsing video content than text or still images. Another difficulty arises from how Reels works: Rather than showing content shared by users’ friends, the way other parts of Instagram and Facebook often do, Reels promotes videos from sources they don’t follow.
In an analysis conducted shortly before the introduction of Reels, Meta’s safety staff flagged the risk that the product would chain together videos of children and inappropriate content, according to two former staffers. Vaishnavi J, Meta’s former head of youth policy, described the safety review’s recommendation as: "Either we ramp up our content detection capabilities, or we don’t recommend any minor content," meaning any videos of children.
At the time, TikTok was growing rapidly, drawing the attention of Instagram’s young users and the advertisers targeting them. Meta didn’t adopt either of the safety analysis’s recommendations at that time, according to J.
Stetson, Meta’s liaison with digital-ad buyers, disputed that Meta had neglected child safety concerns ahead of the product’s launch. "We tested Reels for nearly a year before releasing it widely, with a robust set of safety controls and measures," she said.
Video-sharing platforms appeal to social-media companies because videos tend to hold user attention longer than text or still photos do, making them attractive for advertisers.
After initially struggling to maximize the revenue potential of its Reels product, Meta has improved how its algorithms recommend content and personalize video streams for users.
Social-media platforms and digital advertising agencies often describe inappropriate ad placements as unfortunate mistakes. But the test accounts run by the Journal and the Canadian Centre for Child Protection suggest that Meta’s platforms target some digital marketing at users interested in sex.
Among the ads that appeared regularly in the Journal’s test accounts were those for "dating" apps and livestreaming platforms featuring adult nudity, massage parlors offering "happy endings" and artificial-intelligence chatbots built for cybersex. Meta’s rules are supposed to prohibit such ads.
The Journal informed Meta in August about the results of its testing. In the months since then, tests by both the Journal and the Canadian Centre for Child Protection show that the platform continued to serve up a series of videos featuring young children, adult content and apparent promotions for child sex material hosted elsewhere.
As of mid-November, the center said, Instagram was continuing to steadily recommend what the nonprofit described as "adults and children doing sexual posing."
After the Journal began contacting advertisers about the placements, and those companies raised questions, Meta told them it was investigating the matter and would pay for brand-safety auditing services to determine how often a company’s ads appear beside content it considers unacceptable.
Meta hasn’t offered a timetable for resolving the problem or explained how in the future it would restrict the promotion of inappropriate content featuring children.
The Journal’s test accounts found that the problem even affected Meta-related brands. Ads for the company’s WhatsApp encrypted chat service and Meta’s Ray-Ban Stories glasses appeared next to adult pornography. An ad for Lean In Girls, the young women’s empowerment nonprofit run by former Meta Chief Operating Officer Sheryl Sandberg, ran directly before a promotion for an adult sex-content creator who often appears in schoolgirl attire. Sandberg declined to comment.
Through its own tests, the Canadian Centre for Child Protection concluded that Instagram was regularly serving videos and pictures of clothed children who also appear in the National Center for Missing and Exploited Children’s digital database of images and videos confirmed to be child sexual abuse material. The group said child abusers often use the images of the girls to advertise illegal content for sale in dark-web forums.
The nature of the content—sexualizing children without generally showing nudity—reflects the way that social media has changed online child sexual abuse, said Lianna McDonald, executive director for the Canadian center. The group has raised concerns about the ability of Meta’s algorithms to essentially recruit new members of online communities devoted to child sexual abuse, where links to illicit content in more private forums proliferate.
"Time and time again, we’ve seen recommendation algorithms drive users to discover and then spiral inside of these online child exploitation communities," McDonald said, calling it disturbing that ads from major companies were subsidizing that process.