A group of YouTubers is trying to prove the site systematically demonetizes queer content

A group alleges that YouTube has been automatically flagging videos containing certain keywords, including queer-friendly vocabulary like “gay” and “lesbian.” | Karol Serewis/SOPA Images/LightRocket via Getty Images

They reverse-engineered YouTube’s ad revenue bot to investigate whether it’s penalizing queer content.

To many YouTube creators, the video site’s demonetization bot is an unfriendly watchdog. Its algorithmic magic automatically shuts off vital ad revenue to videos it deems un-advertiser-friendly based on a wide array of constantly updated parameters that aren’t always explicable to creators. Now, a group of YouTubers who spent four months working to reverse-engineer the algorithm have found what they say are alarming results: YouTube’s algorithm, they allege, can flag videos because of apparently random words that appear in video titles. Worse, they say the algorithm penalizes videos featuring LGBTQ-related vocabulary at a disproportionate rate: A full third of titles tested specifically for queer content triggered the bot.

The group began to investigate YouTube’s demonetization system in response to growing community frustration with the way the site automatically demonetizes videos — meaning those videos won’t feature ads and the creators can’t benefit from YouTube’s ad-based revenue system. Collecting ad revenue from their videos is the predominant way many YouTube creators make a living off the platform, so demonetization is a big deal, especially when the algorithm operates in what appears to be an unclear or unfair fashion. It’s not a mere annoyance; for creators reliant on YouTube payouts, demonetization means literally losing income.

The group included a data researcher known on YouTube as Sealow, who authored the study’s written results; the YouTube Analyzed channel, run by a creator known only as Andrew; the YouTuber known as Een, a member of the channel Nerd City; and a YouTuber known as Sybreed. The group began collaborating on the project in late June and released their results on September 29 after two months of testing.

After testing over 15,000 words, the group concluded in both a written report and a video posted to Nerd City’s channel that YouTube had been automatically flagging videos that placed certain keywords in their titles — including a wide range of queer-friendly vocabulary like “gay” and “lesbian.”

Not every word on the researchers’ giant list of confirmed demonetized words — over 900 in all — gets flagged in every video that uses it. But these words can all allegedly trigger demonetization in a video title when nothing else will. The list contains many confusing and unexpected examples; for instance, “admit” is an acceptable word, but “admitted” isn’t. Other words, like “you” and “Idaho,” are sometimes, but not always, okay.

In an extended study conducted just on queer vocabulary, the researchers found that 33 percent of the videos they tested with queer content in the titles were automatically demonetized. The researchers tested an array of monetized videos with LGBTQ vocabulary in the titles and then found that, after the bot automatically demonetized them, they were only re-monetized when they replaced those words with “friend” and “happy”:

33 out of the 100 titles tested that we deemed fit for monetization were demonetized despite being perfectly fine by all standards. The list of demonetized titles include titles such as:

“Gay and Lesbian Guide to Vienna – VIENNA/NOW”

“LGBT Tik Tok Compilation in Honor of Pride Month”

“Top 10 Lesbian Couples in Hollywood Who Got Married”

“Lesbian Princess”

“Lesbian daughters with mom”

What is more shocking is the fact that when we re-tested these titles after replacing the LGBTQ terminology with “friend” and “happy”. Every single video that we tested was now monetized.

Many of the words tested in the group’s July study were later apparently greenlit in some contexts, meaning the algorithm was updated so that these words no longer triggered demonetization. These included straightforwardly innocuous terms like “LGBT” and “LGBTQIA.”

A YouTube spokesperson told Vox in an email that the site doesn’t have a list of queer-specific words that flag the algorithm and that the algorithm is constantly undergoing evaluation and updates to ensure fairness. Moreover, the spokesperson said the site tests its algorithm updates against a number of channels with queer content in an effort to ensure that its demonetization bots aren’t disproportionately affecting queer content. The spokesperson’s full statement is below:

We’re proud of the incredible LGBTQ+ voices on our platform and take concerns like these very seriously. We do not have a list of LGBTQ+ related words that trigger demonetization and we are constantly evaluating our systems to help ensure that they are reflecting our policies without unfair bias. We use machine learning to evaluate content against our advertiser guidelines. Sometimes our systems get it wrong, which is why we’ve encouraged creators to appeal. Successful appeals ensure that our systems are updated to get better and better.

But the researchers pushed back against YouTube’s defense. “Our testing result clearly showcase[s] that perfectly acceptable titles that are otherwise perfectly fine for monetization are demonetized only when ‘gay’ and ‘lesbian’ are added to the title,” Sealow told Vox.

The danger of an ever-changing algorithm is that words that have been flagged once could be flagged again. And for YouTube’s queer community in particular, unpredictable demonetization and the impression of censorship have been contentious ongoing problems for several years. The platform seems to be making little progress despite perpetual efforts to improve.

YouTube’s community has been growing frustrated with its unclear guidelines for how to avoid demonetization

YouTube’s demonetization algorithm is supposed to be clear. Since 2015, YouTube has maintained a public list of video characteristics that don’t fit its standards for “advertiser-friendly content.” One immediate problem with the list, however, is that it’s both broad and vague:

  • Inappropriate language
  • Violence
  • Adult content
  • Harmful or dangerous acts
  • Hateful content
  • Incendiary and demeaning
  • Recreational drugs and drug-related content
  • Tobacco-related content
  • Firearms-related content
  • Controversial issues and sensitive events
  • Adult themes in family content

But what exactly constitutes “inappropriate” language? What would an algorithm register as a “controversial” subject? From the beginning, YouTubers trying to abide by the guidelines had problems. Queer YouTubers in particular have perennially found that their otherwise perfectly banal content has been demonetized, seemingly because it featured words like “gay” or “lesbian” in the title. In 2017, YouTube altered its “restricted mode” settings, which are designed to improve mature-content filtering on the site, after a viral YouTube video pointed out that the site routinely classified non-sexual queer content as sexually explicit or “mature” content.

It’s important to note that this type of bias isn’t solely a YouTube problem; other social media platforms like Tumblr have also developed content curation algorithms that appear to conflate queer content with sexually explicit content. And AI researchers found in August that content created by black social media users is more likely to be incorrectly interpreted by algorithms as offensive. But YouTube in particular faces a growing problem of extremism; it recently banned thousands of extremist videos and instituted new hate speech policies. For marginalized voices being penalized by the algorithm, even as hate groups are gaining ground, the issue can feel very fraught.

YouTube released its community guide for its demonetization algorithm in January 2019 to help users understand how to avoid demonetization as well as how to help make the algorithm work better. At that time, the platform revealed that in addition to the actual content of a video, there are three main elements the algorithm looks at: the video’s title, the thumbnail image, and the first 30 seconds of the video. The company is also working on a self-certification program that allows creators to describe their videos more fully so the algorithm will be more likely to “read” them correctly.

But even with these guidelines and safeguards in place, the algorithm frequently seems to rely on a wide range of nebulous criteria that are constantly being updated and evaluated. That makes it a confounding mystery to creators looking to follow the rules.

The solution? Take the algorithm apart.

Many of the words that seem to trigger YouTube’s demonetization bot are sexual, profane, or otherwise explicit. Others … aren’t.

The project began in June, when a YouTuber who goes only by Andrew, the host of YouTube Analyzed, began manually testing words in video titles to discern what would and wouldn’t trigger the demonetization algorithm. That is, he would upload videos with strategically chosen titles to see which ones triggered the algorithm. After individually testing 1,500 words, he contacted Sealow and the Nerd City channel to help with the project. Sealow told Vox that he spent a few weeks testing how YouTube’s monetization bot behaved in general before writing a script, built on YouTube’s Data API, that could edit hundreds of titles at once. In July, Sealow and Andrew then ran an additional 14,000 words through the script.
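The group’s script isn’t included in the report, but the title-editing step it describes can be roughly sketched against the public YouTube Data API v3, which lets a channel owner rewrite a video’s title programmatically. The sketch below is an approximation under stated assumptions: the token file, video ID, and word list are placeholders rather than values from the study, and the public Data API doesn’t expose monetization status, so that check is left as a comment (the researchers would have verified it separately, for example in YouTube Studio).

```python
# Rough sketch of automated title editing via the YouTube Data API v3.
# Assumes OAuth 2.0 credentials authorized for the channel that owns the video.
# The video ID and word list are placeholders, not values from the study.
import time

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

creds = Credentials.from_authorized_user_file(
    "token.json", scopes=["https://www.googleapis.com/auth/youtube"]
)
youtube = build("youtube", "v3", credentials=creds)

VIDEO_ID = "YOUR_TEST_VIDEO_ID"                          # e.g., an unlisted test upload
CANDIDATE_WORDS = ["friend", "happy", "gay", "lesbian"]  # illustrative only

def set_title(video_id: str, new_title: str) -> None:
    """Rewrite a video's title while preserving the rest of its snippet."""
    # videos.update requires the full snippet (including categoryId),
    # so fetch the current snippet first and change only the title.
    current = youtube.videos().list(part="snippet", id=video_id).execute()
    snippet = current["items"][0]["snippet"]
    snippet["title"] = new_title
    youtube.videos().update(
        part="snippet", body={"id": video_id, "snippet": snippet}
    ).execute()

for word in CANDIDATE_WORDS:
    set_title(VIDEO_ID, f"Test title containing {word}")
    # Monetization status is not exposed by the public Data API; after each
    # edit the researchers would have checked the video's ad status out of
    # band (e.g., in YouTube Studio) once the review system re-evaluated it.
    time.sleep(60)  # give the system time to re-evaluate before the next edit
```

Because each edit keeps the video itself constant and changes only the title, a flip from monetized to demonetized after a single-word swap points at the word rather than at the video’s content.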

The group posited that all the words they tested existed along a scale of risk and that certain words moved a video closer to the algorithm’s threshold for demonetization. These words were categorized as “high-risk.” In September, the researchers checked the “high-risk” words again and came up with a final list of over 900 words that had apparently caused multiple videos to be flagged — suggesting the issue was probably with the use of the words themselves and not other factors in the videos.
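Neither the report nor YouTube describes the model’s internals, so the toy sketch below only illustrates the researchers’ working hypothesis: each word contributes some amount of risk, and a title is demonetized once the total crosses a threshold. The weights and threshold are invented for illustration; the example words come from the group’s published list, but their scores here do not.

```python
# Toy illustration of the hypothesized "scale of risk": per-word contributions
# are summed and compared against a demonetization threshold. All weights and
# the threshold are invented; they are not taken from the study or from YouTube.
HYPOTHETICAL_WEIGHTS = {
    "escort": 0.8,
    "group": 0.2,
    "sex": 0.9,
    "friend": 0.0,
    "happy": 0.0,
}
DEFAULT_WEIGHT = 0.05             # invented baseline for unlisted words
DEMONETIZATION_THRESHOLD = 0.7    # invented cutoff

def title_risk(title: str) -> float:
    """Sum the hypothetical per-word risk contributions of a title."""
    return sum(HYPOTHETICAL_WEIGHTS.get(word, DEFAULT_WEIGHT)
               for word in title.lower().split())

def would_be_demonetized(title: str) -> bool:
    return title_risk(title) >= DEMONETIZATION_THRESHOLD

print(would_be_demonetized("My happy friend"))    # False under these toy weights
print(would_be_demonetized("Group sex stories"))  # True under these toy weights
```

Under this hypothesis, a “high-risk” word is one whose contribution alone can push an otherwise clean title over the threshold, which is why the group’s tests swapped single words in otherwise unchanged titles and watched whether monetization flipped.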

In the group’s initial round of testing, it studied a wide range of words, including many taken from Urban Dictionary (suggesting perhaps that algorithms have trouble parsing slang or euphemistic language), which wound up on the list of words that allegedly cause a video to be automatically demonetized. The full list includes many advertiser-unfriendly high-risk words that probably won’t surprise anyone: words like “Adolf Hitler,” “escort,” “fapping,” and “group sex,” for example, along with a host of more obscene words and phrases.

But within this list, lumped in with the more obvious terms, there are plenty of head-scratchers: words like “dozens,” “grey,” “health” and “healing,” “link,” “North Carolina,” “Oklahoma,” and “scottish [sic],” to name just a few. This list of words was confirmed multiple times to result in demonetization; even the word “you” was confirmed as a word that could repeatedly trigger demonetization.

None of these words seem to fall outside the bounds of YouTube’s stated “advertiser-friendly” guidelines. Even weirder: The testing also uncovered a collection of what the researchers deemed to be “outlier” words that flagged the demonetization bot — words that seemed even more random than those on the primary list of confirmed demonetization keywords. These words were tested over time and found to be less consistent as a marker of what the algorithm would and wouldn’t flag, but even so, they were eyebrow-raising at best: words like “tourist,” “Xerox,” “puzzles,” “farming,” and “libraries,” as well as names like January and Josh, would all give the algorithm pause.

Most of the outlier words have since stopped triggering demonetization, but some remain curiously high-risk. Still, Sealow told Vox that it’s less important to focus on the outlier words than on the main list of words that he and his partners found were consistently high-risk, since those terms had been repeatedly confirmed as demonetization triggers as of September 2019.

“It’s very hard to tell how often the [algorithm] gets retrained/updated,” he said, “but the thing that can be expected is that unless Youtube make[s] larger changes to the model as a whole, most results remain mostly the same.” In other words, the terms on the project’s final list are likely to continue to cause problems for creators.

In reaction to the group’s findings, and despite disputing their accuracy, YouTube responded in a direct tweet, noting that the company would be taking a close look at the claims being made.

Sealow told Vox he would welcome “the opportunity to privately send over some severe exploits that would easily have been found if [YouTube’s] internal evaluation process was indeed as rigorous as they claim.” So far, however, no one from the platform has contacted him directly.

According to a spokesperson, YouTube is constantly evaluating its algorithm against the way the platform is being used; to date, such reviews reportedly haven’t turned up evidence that specific terms are being unfairly demonetized relative to other videos. In previous instances where creators had spoken out about unfair demonetization, the platform argued that these creators had only uploaded a handful of automatically demonetized videos, relative to hundreds of other acceptable, monetized videos on their channels. YouTube’s position is that this is an acceptable rate of demonetization.

But YouTube users aren’t happy with YouTube’s response, and some question its insistence that there’s no vocabulary list. “It’s complete nonsense for them to continually assert that there’s no ‘forbidden list of words’ that gets videos demonetized,” a redditor known as a3wagner commented in response to the report in a thread about its findings. “Of course there is. They didn’t explicitly write it, but that’s what machine learning AI does. What they really mean is that they have no way to know what’s on that list without reverse engineering it like Sealow did.”

Sealow told Vox he was “ultimately very disappointed by YouTube’s response.” He pointed out that since the study’s results were made public, multiple YouTubers had reported that words like “gay” and “lesbian” were still being demonetized in the titles of videos.

In the face of such flagrant test results, it’s easy to understand users’ frustration and difficult to accept YouTube’s insistence that the system is working as intended. “It implies that they are satisfied with their current-day demonetization of LGBTQ titles … and have no intention to change their system,” Sealow said.

Sealow acknowledged that the algorithm had been updated since his initial round of testing and that many of the “high-risk” words had since been greenlit. But with so many flaws remaining in a system that offers so little transparency, he was skeptical that real positive change was taking place.

“[It’s] a very weak showcase of progress,” he said.

Though YouTube spokespeople insist that the platform embraces diversity, the demonetization report touches on just one thorny issue out of many that queer YouTubers have with the platform’s approach to their community. Other issues include the company’s alleged history of algorithmically suppressing queer content from recommended videos and censoring queer advertisements. In August, a pair of queer creators filed a discrimination lawsuit against the company after an advertisement for a queer-friendly Christmas show on their channel was rejected due to “shocking content.”

All this perceived discrimination is occurring alongside an increasing swing toward the far right within YouTube’s community culture. As the company grapples with questions about how to handle hate speech in videos, queer creators have spoken out about what they feel is the company’s failure to protect them from harassment. Sadly, none of this is new ground — either for YouTube or the perpetually marginalized queer community — but it all adds up to a tense, fraught relationship between one of the biggest social media platforms and one of its biggest communities.

Author: Aja Romano
