YouTube just made sweeping positive changes to its harassment policy. So why all the backlash?

Olly Curtis/Future via Getty Images

A ban on “malicious insults” and a complicated FTC ruling mean drastic changes could be coming to YouTube.

On Wednesday, YouTube made a major change to its community guidelines, announcing that it will now penalize videos that “maliciously insult” users based on protected attributes like race, gender, or sexual orientation.

The new rules, effective immediately, prohibit “harassment that suggests violence” as well as “content that maliciously insults someone based on protected attributes.” The policy also takes a more stringent approach toward ongoing patterns of harassment, even including toxic commenters. The company emphasized: “All of these updates represent another step towards making sure we protect the YouTube community.”

But instead of celebrating, YouTubers reacted to the news with intense criticism. Not long after YouTube unveiled its policy changes, users responded by getting the hashtags “YouTubeisOver” and “YouTubeIsOverParty” trending on Twitter.

Why are YouTubers railing against what would appear to be a step in the right direction for YouTube, a platform that’s regularly come under fire for failing to protect its creators over the years? The answer is complicated.

Part of the fear is that the new policy, which also expands YouTube’s definition of threats and strengthens penalties for repeated patterns of harassment, might be unequally and unfairly applied across the site. But users are also angry at what they see as a larger trend of YouTube making tweaks that threaten the platform’s community expression, and that ultimately allow content from networks and corporations to flourish at the expense of smaller creators.

YouTube recently updated its policies to make the site safer for everyone, especially kids. But the changes actually threaten to drastically alter the community.

This new harassment policy, and the YouTube community’s negative reaction to it, is best understood in context: it’s the second of two major policy announcements to hit YouTube’s grassroots community in as many weeks, and both have the potential to drastically alter the platform.

In late November, YouTube announced a different but even more significant change to its infrastructure. The changes were made to comply with a settlement Google reached with the US government in June over charges that it had violated COPPA, the Children’s Online Privacy Protection Act, a 1998 law designed to keep children under 13 safe on the internet.

This new policy, which takes full effect starting on January 1, 2020, requires users to declare whether their content is explicitly for children; they also have the option to designate their whole channel as “always” for kids or “never” for kids.

This is ostensibly a positive change for a site that’s long been criticized for harboring strange and sinister pockets of unsafe content. But in practice, marking a video “for kids” has immediate and intense ramifications that drastically alter YouTube’s community infrastructure:

  • Videos marked for kids will no longer feature personalized ads, meaning the ad revenue from those videos will substantially drop.
  • Videos marked for kids will lose their comments sections; entire channels marked “for kids” will additionally lose their “community” tab, which fosters discussion of and around that channel and its content.
  • Videos marked for kids will also lose lots of customization options that help creators drive audiences to other videos they’ve made, like info cards and end screens.
  • Users subscribed to any channel that uploads a video designated for kids won’t be notified when that video is uploaded. Additionally, that video won’t appear in a YouTube search, and it won’t appear under algorithmic recommendations on other channels.

In other words, videos directed at children will effectively be quarantined away from the rest of the website’s community: The videos — or entire channels — will still exist, but no one will be able to find them in searches or recommendations, comment on them, or easily navigate to more related content from that channel.

What’s more, creators may not have any control over any of these restrictions — because according to YouTube’s explanatory video, the platform’s own algorithm will begin monitoring the platform for child safety as an additional backup measure, both “to find content that is clearly made for kids” and to “detect errors and abuse.” And once a video is automatically labeled by the algorithm as being for kids, creators will not be able to appeal this decision.

In addition, the Federal Trade Commission (FTC) pledged as part of the Google settlement to target individual channels and content creators who mislabel their content or otherwise violate the law, suing them for up to $42,530 per video. As The Verge noted in November, the sheer size of YouTube as a platform means the FTC would most likely go after the bigger, more popular channels, hoping to set examples and eliminate lots of content at once, if that content were deemed to be at issue.

YouTubers didn’t react much to these changes initially, but over the intervening weeks, they’ve increasingly realized that the consequences could be dire. In one video asking, “Will Your Favorite Channel Survive 2020?” popular YouTuber Matthew Patrick, host of the Game Theory channel, argued that the change “threatens to wipe out huge chunks of content and creators you’ve been watching since you were young.” Then there’s the question of how well or how fairly YouTube’s algorithm will work. Another YouTuber, Chad Bergström of Chadtronic, claimed that a huge variety of video content could be targeted just for mentioning words like “animation.”

“Cartoons are under attack, toys are under attack, gaming content is under attack,” Bergström argued in a November video. In fact, the rules are so confusing that on Tuesday, YouTube announced it has asked the FTC for additional clarification on how the COPPA ruling will be implemented.

None of these stark changes are really YouTube’s fault, but they’re still pretty jarring, considering how crucial things like ad revenue and engagement are to YouTube creators, and to the community as a whole. It’s also crucial background for understanding the harsh community reaction to YouTube’s next move.

YouTube users fear its new harassment policies will be unfairly applied

The changes YouTube unveiled Wednesday, meanwhile, were made to combat harassment. Under the updated community guidelines, users will be penalized for a number of harassment-related behaviors, including the aforementioned “content that maliciously insults someone based on protected attributes such as their race, gender expression, or sexual orientation,” as well as “veiled or implied threats” in addition to explicit threats.

The move appears to be a long-considered policy response to previous attempts from well-known creators like Vox Media journalist Carlos Maza to get YouTube to penalize a conservative YouTube commentator who allegedly targeted him repeatedly in videos based on his queer identity.

The new content policy will affect not only creators but also commenters: As part of a gradual expansion of comment moderation tools, creators will be able to extensively self-moderate their comments sections according to the new guidelines, supplemented by more aggressive algorithmic comment-flagging. And creators and commenters who show a repeated pattern of harassment can now be penalized or banned.

But many YouTubers say they’re still reeling from the other disruptive, partly algorithm-driven changes taking place on the site, and that YouTube is now just piling on. It didn’t help that YouTube purged a number of older, popular videos immediately after revealing the new content policy. Many were inflammatory videos by right-wing reactionaries, but some were satirical, including “Content Cop — Leafy,” a well-known 2016 comedy video made by Ian Carter, a.k.a. iDubbbz, a popular comedy vlogger with 8 million subscribers.

The video mocked another YouTuber, Calvin Lee Vail, a.k.a. Leafy, as well as the idea of bullying itself. (Vail later quit vlogging, citing burnout.) The removal of a three-year-old video immediately caused alarm and backlash among YouTubers wondering what the new one-two punch of policy implementations would mean for them.

These concerned users assumed that going forward, they would have to both carefully police their content for elements that could be flagged as kid-friendly and watch out for content that could be read as maliciously targeting another person. So basically, in the minds of many users, anyone who makes kids’ content could be penalized, while anyone who makes edgy humor for adults, or satirical commentary … could also be penalized.

Among YouTubers, particularly right-wing vloggers, a common complaint quickly emerged and spread through the community: If YouTube was really serious about applying its “malicious insults” rule equally, then it would have also banned mainstream content creators like talk-show hosts and celebrity comedians.

There’s a huge difference between a late-night talk show host calling President Donald Trump orange, for example, and the harsher insults of a video like iDubbbz’s, which contained language and coarse humor that would never be allowed on network television, and which, among other things, jokingly encouraged the audience to bully various groups of marginalized people.

But complicating YouTube’s rollout of the new policy on Wednesday were the actual mistakes the unwieldy algorithm made. Just as it has in the past, the algorithm appeared to be unfairly penalizing perfectly acceptable YouTube videos, including one news video from a popular true crime channel. “Do they support our community or not?” one true crime vlogger remarked in reaction to the deletion. “I’m exhausted.”

Many users made the point that YouTube has always had trouble implementing and enforcing its content policies without doing so at the expense of marginalized users, like queer people and YouTubers of color. To them, the whole platform is now finally getting a taste of what their experience of YouTube has been like for years.

And ironically, many YouTube users who the latest policy changes are designed to directly help are among its more vocal critics. Maza, whose original complaints to YouTube arguably sparked all of the harassment-related changes to begin with, was quick to challenge the idea that any of this would ultimately be effective. In a tweet, he argued that YouTube has historically been reluctant to take meaningful action against major creators regardless of their level of policy violations, and this time isn’t likely to be different.

As hashtags like #YouTubeisOver encouraged users to “cancel” YouTube permanently, the platform’s critics also took the opportunity to promote other video platforms. Prominent alternatives included Vlare, BitChute, TikTok, Twitch, Vimeo, and even PornHub. But all of these sites are wildly different in focus, scale, and target audience, and thanks to years of unregulated practices that have allowed giant companies like Google to monopolize the internet, there’s really no platform comparable to YouTube. And many YouTubers have observed, with a mix of wryness and panic, that there’s really nowhere else for them to go.

Regardless of alternatives, there’s a very real dread among users that, beginning on January 1, 2020, we might be facing a whole new YouTube. As we’ve learned from the rise and demise of internet platforms throughout history, from LiveJournal to Tumblr, the biggest platforms are only as mighty as their ability to keep their users happy, loyal, and logging in for more.

Author: Aja Romano
