More than a year after a child safety content moderation scandal hit YouTube, it still takes just a few clicks for the platform’s recommendation algorithms to redirect a search for “bikini haul” videos of adult women toward clips of scantily clad minors performing body-contorting gymnastics, taking an ice bath or sucking on an ice lolly as part of a “challenge.”
A YouTube creator called Matt Watson flagged the issue in a critical Reddit post, saying he found scores of videos of kids where YouTube users are trading inappropriate comments and timestamps below the fold, denouncing the company for failing to prevent what he describes as a “soft-core pedophilia ring” from operating in plain sight on its platform.
He has also posted a YouTube video demonstrating how the platform’s recommendation algorithm pushes users into what he dubs a pedophilia “wormhole,” accusing the company of facilitating and monetizing the sexual exploitation of children.
We were easily able to replicate the YouTube algorithm’s behavior that Watson describes in a history-cleared private browser session which, after clicking on two videos of adult women in bikinis, suggested we watch a video called “sweet sixteen pool party.”
Clicking on that led YouTube’s sidebar to serve up multiple videos of prepubescent girls in its “up next” section, where the algorithm tees up related content to encourage users to keep clicking.
Videos we got recommended in this side-bar included thumbnails showing young girls demonstrating gymnastics poses, showing off their “morning routines,” or licking popsicles or ice lollies.
Watson said it was easy for him to find videos containing inappropriate/predatory comments, including sexually suggestive emoji and timestamps that appear intended to highlight, shortcut and share the most compromising positions and/or moments in the videos of the minors.
We also found multiple examples of timestamps and inappropriate comments on videos of children that YouTube’s algorithm recommended we watch.
Some comments by other YouTube users denounced those making sexually suggestive remarks about the children in the videos.
Back in November 2017, several major advertisers froze spending on YouTube’s platform after an investigation by the BBC and the Times discovered similarly obscene comments on videos of children.
Earlier the same month YouTube was also criticized over low-quality content targeting kids as viewers on its platform.
The company went on to announce a number of policy changes related to kid-focused video, including saying it would aggressively police comments on videos of kids and that videos found to have inappropriate comments about the kids in them would have comments turned off altogether.
Some of the videos of young girls that YouTube recommended we watch already had comments disabled, which suggests its AI had previously identified a large number of inappropriate comments being shared (in line with its policy of switching off comments on clips containing kids when comments are deemed “inappropriate”). Yet the videos themselves were still being suggested for viewing in a test search that originated with the phrase “bikini haul.”
Watson also says he found ads being displayed on some videos of kids containing inappropriate comments, and claims that he found links to child pornography being shared in YouTube comments too.
We were unable to verify those findings in our brief tests.
We asked YouTube why its algorithms skew toward recommending videos of minors, even when the viewer starts by watching videos of adult women, and why inappropriate comments remain a problem on videos of minors more than a year after the same issue was highlighted via investigative journalism.
The company sent us the following statement in response to our questions:
Any content — including comments — that endangers minors is abhorrent and we have clear policies prohibiting this on YouTube. We enforce these policies aggressively, reporting it to the relevant authorities, removing it from our platform and terminating accounts. We continue to invest heavily in technology, teams and partnerships with charities to tackle this issue. We have strict policies that govern where we allow ads to appear and we enforce these policies vigorously. When we find content that is in violation of our policies, we immediately stop serving ads or remove it altogether.
A spokesman for YouTube also told us it’s reviewing its policies in light of what Watson has highlighted, adding that it’s in the process of reviewing the specific videos and comments featured in his video — specifying also that some content has been taken down as a result of the review.
However, the spokesman emphasized that the majority of the videos flagged by Watson are innocent recordings of children doing everyday things. (Though of course the problem is that innocent content is being repurposed and time-sliced for abusive gratification and exploitation.)
The spokesman added that YouTube works with the National Center for Missing and Exploited Children to report to law enforcement accounts found making inappropriate comments about kids.
In wider discussion about the issue the spokesman told us that determining context remains a challenge for its AI moderation systems.
On the human moderation front he said the platform now has around 10,000 human reviewers tasked with assessing content flagged for review.
The volume of video content uploaded to YouTube is around 400 hours per minute, he added.
There is still very clearly a massive asymmetry around content moderation on user-generated content platforms, with AI poorly suited to plug the gap given ongoing weakness in understanding context, even as platforms’ human moderation teams remain hopelessly under-resourced and outgunned versus the scale of the task.
Another key point YouTube failed to mention is the clear tension between advertising-based business models that monetize content based on viewer engagement (such as its own), and content safety issues that need to carefully consider the substance of the content and the context in which it has been consumed.
It’s certainly not the first time YouTube’s recommendation algorithms have been called out for negative impacts. In recent years the platform has been accused of automating radicalization by pushing viewers toward extremist and even terrorist content — which led YouTube to announce another policy change in 2017 related to how it handles content created by known extremists.
The wider societal impact of algorithmic suggestions that inflate conspiracy theories and/or promote bogus, anti-factual health or scientific content has also been repeatedly raised as a concern, including on YouTube.
And only last month YouTube said it would reduce recommendations of what it dubbed “borderline content” and content that “could misinform users in harmful ways,” citing examples such as videos promoting a fake miracle cure for a serious illness, or claiming the earth is flat, or making “blatantly false claims” about historic events such as the 9/11 terrorist attack in New York.
“While this shift will apply to less than one percent of the content on YouTube, we believe that limiting the recommendation of these types of videos will mean a better experience for the YouTube community,” it wrote then. “As always, people can still access all videos that comply with our Community Guidelines and, when relevant, these videos may appear in recommendations for channel subscribers and in search results. We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users.”
YouTube said that change of algorithmic recommendations around conspiracy videos would be gradual, and only initially affect recommendations on a small set of videos in the U.S.
It also noted that implementing the tweak to its recommendation engine would involve both machine learning tech and human evaluators and experts helping to train the AI systems.
“Over time, as our systems become more accurate, we’ll roll this change out to more countries. It’s just another step in an ongoing process, but it reflects our commitment and sense of responsibility to improve the recommendations experience on YouTube,” it added.
It remains to be seen whether YouTube will expand that policy shift and decide it must exercise greater responsibility in how its platform recommends and serves up videos of children for remote consumption in the future.
Political pressure may be one motivating force, with momentum building for regulation of online platforms — including calls for internet companies to face clear legal liabilities and even a legal duty of care toward users vis-à-vis the content they distribute and monetize.
For example, U.K. legislators have made internet and social media safety a policy priority — with the government due to publish this winter a white paper setting out its plans for regulating platforms.