How the coronavirus rumor mill can thrive in private group chats

A traveler wears a medical mask at Grand Central Station on March 5, 2020, in New York City. | David Dee Delgado/Getty Images

Social media platforms work to fight coronavirus myths, but they may not be able to win against your DMs.

With the Covid-19 coronavirus outbreak still so new, fear of the death and disruption it could cause rapidly mounting, and treatments for it still a big question mark, rumors and misinformation about the outbreak are starting to spread.

A wary public might be especially susceptible to believing unverified rumors about how the outbreak began (nope, the new coronavirus wasn’t created in a lab) or taking misguided measures to protect themselves (unless you’re sick or caring for someone who is, face masks aren’t the answer!). And buying into alarmist falsehoods can be dangerous.

The spread of misinformation is already commonplace on social media, where the sharing of content from biased sources frequently outpaces whatever fact-checking and moderation safeguards are in place. Which is why coronavirus myths are starting to crop up on both Facebook and Twitter, along with YouTube and TikTok and messaging apps like WhatsApp.

YouTube videos advertising the “truth” about the virus are showing up in people’s recommended video feeds, and there are millions of tweets alleging that the US government played a role in creating the virus. Your aunt’s best buddy, whom you’re friends with on Facebook, could very well be sharing a highly questionable article about how coronavirus will definitely kill you soon, right this moment. (Go check your Facebook feed. I’ll wait.)

There are a lot of things we genuinely don’t know yet about the coronavirus, and for the makers of those platforms, distinguishing the right details from the wrong ones is a difficult process. That difficulty multiplies when the misinformation is dispersed in private groups.

For example: Anti-vaxxers regularly gather in invite-only Facebook groups to share damaging rhetoric about vaccinations, which can later make its way to the feeds of Facebook users who aren’t in those groups. The contained anti-vaxx group is harder to police than the general newsfeed, allowing misinformation to proliferate. There are reports that similar groups are now circulating hoax-like content about the coronavirus, sending users into a panic behind closed doors over claims that may not be based in reality.

Facebook, Twitter, YouTube, and TikTok have all pledged to stop these posts from traveling widely, according to Recode. In the case of Facebook specifically, CEO Mark Zuckerberg published a lengthy statement on March 3, pledging that Facebook will flag any info that needs to be fact-checked and that it will also block ads from shady enterprises attempting to prey on people’s fears. The extent to which any of these companies can adequately prevent misinformation from spreading remains to be seen, but the commitment is encouraging.

But with private messages and group chats, where friends and family members might discuss everything from gossip to more serious matters, successful moderation of health crisis fear-mongering can seem impossible.

The rumors on messaging apps sometimes make their way offline like they’re part of a game of telephone. A link shared in one group can easily be passed along by one of its members to another group, and another, until numerous people who don’t even use WhatsApp have stopped going to that restaurant in Chinatown where they’re sure a cook had the coronavirus.

The sourcing for these stories is usually non-existent — that’s how rumors work — but their ramifications are heavy. East Asian-owned businesses are losing customers as a result, according to recent reports from the New York Times and the Los Angeles Times. That’s in large part due to xenophobia surrounding the coronavirus. But in some cases, it’s because patrons of these businesses are themselves spreading stories that aren’t based in fact.

In one bizarre but poignant case reported by Matthew Kang at Eater, several specific restaurants in Los Angeles’s Koreatown neighborhood have seen business drop off by over 50 percent, thanks to a rumor spread via KakaoTalk.

KakaoTalk is an app similar to WhatsApp that’s popular among Korean speakers in both South Korea and the US; users from both countries shared a screenshot of an Instagram post across the platform, an image that named the places an allegedly infected flight attendant had visited during a layover in Los Angeles. The post set off hysteria that eating at those restaurants could cause coronavirus infections, and suddenly, many Koreans in L.A. were avoiding them en masse. There was no truth to any part of the story, but hearsay overpowered reality; such is the power of group chat groupthink.

But therein lies the problem: Although it’s not encrypted to the same extent that WhatsApp is, KakaoTalk is similarly private and similarly unmoderated. Social media companies can control content on their public platforms to some extent, but when it comes to private messages, DMs, and group chats, the right to privacy works against any attempt to stop false information from spreading.

So should social media companies be stepping in to stop people from (sometimes unknowingly) spreading rumors that obscure the truth about an impending global crisis? Or is the onus on the people in the group chats?

These aren’t easy questions to answer. But in Thursday’s episode of Recode’s podcast Reset, I spoke to someone who tried his best. Russell Brandom is a policy editor at The Verge, where he covers regulatory practices in politics, technology, culture, and beyond. Below is a lightly edited segment of our conversation, in which we discussed who bears responsibility for keeping dangerous misinformation from spreading in private messages and group chats.


Allegra Frank

There are a lot of gray areas when it comes to enforcing freedom of speech on social media. And one place where I feel like it’s especially tricky is with insular communities, like encrypted and private channels and websites — such as Facebook groups and messaging apps.

In these places, there isn’t moderation to the same extent that there is on more public platforms like YouTube, Twitter, and Facebook. There can’t be, because it would be a violation of privacy to have companies surveil our messages. So could misinformation be more harmful and more easily spread in these encrypted messaging apps, where there isn’t the same scale of moderation as on more publicly visible timelines?

Russell Brandom

The way that I think about it is that censorship is kind of this dirty word, like, you say it and people straighten up. But in a very basic sense, if someone is trying to say something in a channel, and you’re like, maybe don’t say that, that’s censorship. That doesn’t have to be bad, though. I think that censorship is less harmful as you get to the bigger and bigger channels.

If you’re going on national broadcast news and everyone with an antenna can get it and you’re using public airwaves to share misinformation, then it makes sense for there to be this higher standard of what is acceptable speech. If someone is sharing incorrect information in a video that’s going on YouTube, and it’s going to be promoted algorithmically to millions of people, it actually does make sense that YouTube should be the one taking responsibility for what it’s promoting in that way.

[The situation gets trickier] when you get into these encrypted group chats. Say it’s a one-on-one thing — I’m just going to pick up my phone and send this link about the coronavirus to you, Allegra. What if the messaging app is then going to say, “No, you’re not allowed to send that link”? That seems messed up.

Allegra Frank

Right, because as private citizens, we’re not comfortable with the moderation options that are more analogous to surveillance. So is it on us to vet and stop the spread of mistruths? I mean, I think the answer Facebook would give us is yes. But what is the responsibility of users, specifically, to actually try tamping down on this misinformation — when it comes to something as broadly dangerous and impactful as global health crises, at least?

Russell Brandom

I don’t know that bystander intervention alone is really a good strategy. Is it enough if I sit down with my loopy aunt and have a real talk about the things she’s sharing on the internet? I mean, that’s fine. I’m less troubled by someone sharing something that’s false than by someone sharing something that could hurt people. And a lot of what we’re talking about when we talk about misinformation is that your child could get measles and get horribly sick. It’s not just that someone has a belief that I consider incorrect. There’s concrete harm that’s directly related to the thing they’re sharing.

That’s also [true] if someone shares something that could inflame existing societal tensions and result in mob violence. If someone is sharing something that’s going to make people freak out and panic about coronavirus and then do something that will hurt themselves or others — that, to me, always feels more urgent.

And even if the sender pushes back when the people they sent those coronavirus myths or other incendiary links to respond and debunk the claims, the sender has to understand where those people are coming from. They’re coming from a place of concern for others, rather than just saying, “You’re an idiot. I hold very different political beliefs than you.”

Allegra Frank

Ultimately, misinformation is such a big problem in this day and age, and we’re still trying to figure out the best way to combat it. So the most we can do, I think, is continue having these conversations.

To hear us talk more about the nature and damage of misinformation in the wake of the coronavirus — as well as more on the story of how KakaoTalk scared people away from Koreatown businesses, from Eater LA’s Matthew Kang — listen to the full episode.

Author: Allegra Frank
