Writer Chuck Wendig found out the hard way that fake internet outrage can have real consequences.
In the pre-social media era, “never feed the trolls” was a primary rule for navigating the internet. But in the more populated era of social media, the “trolls” have evolved in the extreme: Now they can be “fed” not only by your reactions to them, but by years-old tweets, collective performed outrage, hashtag movements, and ironic memes.
More crucially, the “trolls” are increasingly likely to be a mix of alt-right white supremacists interacting with you in very bad faith, and bots or fake accounts created by foreign organizations for nefarious political purposes.
But even though the ways and means of trolling are vastly more complicated than they used to be, most of us still don’t really have tactics to deal with or combat a troll attack when it hits. Sci-fi writer and noted Twitter user Chuck Wendig found this out the hard way recently, when he was fired from his prominent gig as a writer for Marvel’s Star Wars comic Shadow of Vader and a forthcoming Star Wars book.
Wendig was ostensibly fired because of the controversy his tweets were generating. But then a bot-savvy supporter of Wendig’s looked closer. What she found suggests that increasingly, we’re encountering situations where the appearance of controversy is being driven, or at the very least amplified, by bots, automated outrage, and an angry fringe mob.
In these cases, cooler heads and even colder hard data might be used to clarify what’s actually happening, and how many people are actually mad.
Chuck Wendig’s denunciation of “civility” toward extremists appeared to incite alt-right trolls — with damaging consequences
Wendig, well-known for his outspoken, stridently progressive social media stance, was never writing his Star Wars comics for alt-right fans, but for a more diverse, liberal readership. But he ran into heavy backlash from alt-right Twitter users anyway, after a viral tweet thread earlier this month in which he lambasted the call for “civility” from right-wing extremists, arguing, “Civility is for normalcy. When things are normal and working as intended, civility is part of maintaining balance. But when that balance is gone … your civility gives them cover for evil.”
Wendig’s tweets generated considerable outrage from the alt-right on social media, especially after former DC Comics artist and conservative Star Wars fan Ethan Van Sciver made a YouTube video denouncing Wendig. Van Sciver is a prominent voice in the Gamergate offshoot movement known as “Comicsgate,” a campaign to fight the spread of diversity in comics that has a history of harassing leftist and progressive comics creators. His video garnered more than 20,000 views and seemed to drive angry onlookers to Wendig’s Twitter.
According to Wendig in a blog post following his firing, he was subsequently dropped from his Marvel projects without warning by an editor who appeared to have abruptly grown tired of “the negativity and vulgarity that my tweets bring. Seriously, that’s what Mark, the editor, said,” wrote Wendig. “It was too much politics, too much vulgarity, too much negativity on my part.”
On the surface, this incident appears to be part of the recent trend of prominent progressives being targeted for harassment from the alt-right based on their social media content — and in some cases, suffering severe consequences. In other words, it was an unfortunate consequence for Wendig caused by the fact that his tweets made people mad.
But to Bethany Lacina, an associate professor of political science at the University of Rochester who researches civil conflicts, the trolling that got Wendig fired was suspicious.
Lacina suspected that Wendig’s case wasn’t a traditional incident where a lot of people got mad at his words and reacted with anger, but rather an example of a relatively minor dust-up that was escalated by automated bots and anonymous accounts performing synthesized outrage.
Breaking down a backlash
Lacina’s instincts were backed by plenty of data that bots are manipulating more of our social media behavior than we might realize. A recent Pew study found that bots generate two-thirds of all links on Twitter, while a recent data dump of 10 million bot-generated tweets showed the bot accounts doing everything from stoking political fears to tweeting about pop culture and circulating memes.
Lacina decided to analyze Wendig’s Twitter backlash using Botometer, an algorithm developed by scientists at Indiana University that closely examines Twitter accounts to determine the likelihood that they are automated accounts. She had previously used the Botometer to analyze the makeup of mobs of angry Star Wars fans.
Using the information provided by the Botometer, Lacina broke Wendig’s tweetstorms down into four categories: responses from accounts that most likely belonged to real people, those from fully automated accounts (classified as bot accounts), those from partially automated accounts (classified as sockpuppets), and those from accounts that couldn’t be confirmed because very little information could be gleaned about them, i.e., anonymous accounts that were likely to be bots.
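As a rough illustration, this kind of triage can be sketched in Python. The thresholds and field names below are assumptions made up for the example, not Lacina's exact methodology; the unverified-plus-heavy-tweeting sockpuppet rule and the empty-bio test for anonymity are described later in this piece.

```python
# Hypothetical sketch of the four-way account triage described above.
# The 0.8 bot-score cutoff and the field names are illustrative
# assumptions; the >70-tweets/day sockpuppet heuristic and the
# empty-bio anonymity test come from the interview in this article.

def classify_account(bot_score, tweets_per_day, is_verified, has_bio):
    """Sort a Twitter account into one of four rough buckets."""
    if bot_score is not None and bot_score >= 0.8:
        return "bot"            # Botometer flags it as fully automated
    if not has_bio:
        return "anonymous"      # no profile description: a weaker signal
    if not is_verified and tweets_per_day > 70:
        return "sockpuppet"     # partially automated, per the >70/day heuristic
    return "person"             # most likely a real human

replies = [
    {"bot_score": 0.95, "tweets_per_day": 200, "is_verified": False, "has_bio": True},
    {"bot_score": 0.30, "tweets_per_day": 90,  "is_verified": False, "has_bio": True},
    {"bot_score": 0.10, "tweets_per_day": 12,  "is_verified": False, "has_bio": False},
    {"bot_score": 0.05, "tweets_per_day": 8,   "is_verified": True,  "has_bio": True},
]
labels = [classify_account(**r) for r in replies]
print(labels)  # ['bot', 'sockpuppet', 'anonymous', 'person']
```

Run over every reply to a tweet, a function like this yields the per-category counts that Lacina's timeline charts plot.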
Lacina’s analysis, which she shared with Vox, charted the activity levels around Wendig’s “controversial” tweets before, during, and after his firing. She found that the metrics showed very little backlash around Wendig’s tweet thread until his account was linked from Van Sciver’s YouTube video. Once Van Sciver’s alt-right followers picked up the scent of Wendig’s “civility” tweet, the “outrage” suddenly increased dramatically — as indicated by the second vertical blue line in the timeline analysis below.
Lacina’s analysis reveals that very few people — fewer than 250, and far fewer than that when you subtract automated accounts — replied directly to Wendig’s tweets for the first 24 hours the thread was up between October 6 and 7. That changed dramatically directly after Van Sciver tweeted a link to his YouTube video in which he called out Wendig.
As the next two charts reveal, once that happened, the number of tweets from real people spiked — and so did the number of bots, substantially in comparison to the relatively inactive period before.
In the half-hour or so following Van Sciver’s tweet to his video denouncing Wendig, Wendig received about 600 tweets from normal accounts, i.e., real people — and about 400 from automated and anonymous accounts.
In other words, the backlash was most likely coming from alt-right followers of Van Sciver’s social media who had descended upon Wendig as a target, rather than Wendig’s normal audience and Marvel readership; and roughly one-third of that perceived outrage wasn’t even real — rather, it was being generated by alt-right bots and sockpuppets.
The horizontal black line in the following graph shows the average amount of overall tweet activity from the “bad actors” in this equation — the bots and anons.
The contrast is immediately noticeable when compared to the response to Wendig’s tweet-thread about his firing on October 12. That thread drew nearly 1,500 responses, most of them supportive, from real people — and the overall amount of activity around those tweets far outweighs the amount of activity around the “controversial” tweets.
In essence, Lacina’s analysis revealed that far more real people turned out to support Wendig than to revile him — and also highlighted the unexpected element of YouTube as a driver of alt-right mob mentality.
Lacina’s analysis presents a good argument for thinking critically about these mob harassment campaigns
It’s arguable that if Wendig’s editor had seen an analysis like Lacina’s, it might have quelled his anxiety over the backlash and saved Wendig’s job. And even if not, it’s possible that this approach to diagramming the anatomy of a Twitter mob can be helpful to many more potential troll targets in the future.
I sat down with Lacina to walk through how all of this fits together and what it says about how we interact with bots and trolls on social media. Our interview has been lightly edited for length and clarity.
Was this about what you expected when you were doing your analysis?
No. There are two things. One thing that really surprised me was how distinct the group of tweets around Van Sciver is. It’s just like a wall in the data. I did not think it would be that sharp.
But Van Sciver himself only got like 22 retweets. That seems like such a small amount. And he doesn’t even link Wendig’s Twitter in there, so how can we be sure—
So he links to his YouTube video and that gets more attention.
Yeah, that got like 24,000 views.
YouTube is a much more right-wing platform. So I think it’s the activity on YouTube that is the real driver of action for both the real and not real things that follow [Van Sciver’s tweet].
I don’t think that people understand how politically driven certain subsets of YouTube are, like what an impact YouTube had on Gamergate and has had on spreading the alt-right within all these corners of geekdom.
I guess I’ve been sort of blind to YouTube. Who watches a video? A whole video? Yeah, it’s incredible. I guess it’s just a platform that a lot of these guys feel works best for them.
This is a tradition that has been going on in this community since before Gamergate. There were all these vloggers — the Amazing Atheist springs to mind — who would just rant and rant about Anita Sarkeesian.
And because she was on YouTube, I think maybe that was seen as a natural counterpoint? Like a way to have a counter-voice on that platform. And gaming bloggers are already prone to be in this community because gamers are already huge on YouTube.
I think there’s a lot of natural growth that happened on that platform that you wouldn’t really be aware of if you were only following Twitter and so forth. But as we can see, obviously it’s driving this sort of reaction.
Yeah, I mean other people have mentioned Van Sciver doesn’t have that many followers. How can his Twitter matter that much? And I think the secret is YouTube.
And again, [these subgroups of Twitter users] are never the majority; they’re more like 20 to 30 percent. And that’s the second surprise in all this for me. I [wrote an article] about the [Star Wars] character Rose Tico, and true bots, fully automated accounts, were just not that big a part of the story. They were something like 3 percent of all activity. So I produced this initial figure, sort of thinking, “Okay, once I add the true bots it’s not going to look that different, because a real bot is rare.”
I expected the bot category would be essentially invisible because most [automated] accounts are in this semi-human category, not in the fully automated category. So I am shocked how many bots were on Chuck Wendig’s Twitter.
Can you walk me through how you analyze the difference between semi-automated and fully automated?
I used the same tool pretty much everybody uses, which is called Botometer. It has a graphical user interface version so you can go in and find your own Twitter account if you want.
And the gist of what these studies do is, there’s certain features computer scientists have flagged as somewhat bot-like, and then bots tend to follow each other, so the program looks at both the way the account posts and who follows the account, and who does the account follow, because bots kind of cluster.
All of the things we know about the characteristics of bots basically come from data sets of known bot accounts that get released periodically. There was a new one just released by Twitter of Russian and Iranian automated and semi-automated accounts.
The difference between fully automated and semi-automated is not a gray area. There’s a fully automated program that is reading your Twitter at all times to pick ads for you. For example, you could easily write a bot that would just read Twitter at all times and “like” every time your name came up.
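A toy version of the name-watching bot she describes might look like this. The tweet data and the `like_tweet` stub are invented for illustration; a real bot would call the Twitter API instead of appending to a list.

```python
# Toy illustration of a fully automated bot: scan incoming tweets
# and "like" every one that mentions a target name.
# like_tweet is a stand-in for a real Twitter API call.

liked = []

def like_tweet(tweet_id):
    liked.append(tweet_id)  # a real bot would hit the API here

def run_bot(stream, name):
    for tweet in stream:
        if name.lower() in tweet["text"].lower():
            like_tweet(tweet["id"])

stream = [
    {"id": 1, "text": "Chuck Wendig's new book is out"},
    {"id": 2, "text": "nothing to see here"},
    {"id": 3, "text": "everyone go yell at chuck wendig"},
]
run_bot(stream, "Chuck Wendig")
print(liked)  # [1, 3]
```

The point is how little logic full automation requires: a loop, a substring match, and an API call, with no human in the loop at any step.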
What semi-automated accounts offer is that a person at a farm for these things can step into the account and make its behavior a little more life-like when it actually wants to, say, engage with something.
If I was an entrepreneurial young thing, I may set up a bot network of hundreds of thousands of accounts that do only totally automated tasks. But I might, at some point, convert some of those to semi-automated because I have something I want them to appear more lifelike for.
Like the studies of Brexit suggest that the puppet accounts were much more nuanced and proactive in that last couple of days before the referendum, and before that it just wasn’t worth the time to be in there every day and making sure they—
Yeah. Basically, you’re distinguishing puppets as semi-automated and bots as fully automated. An anonymous account could be a puppet, but it’s a weaker test, so I see it as two different levels of sketchiness.
Anonymous in this case means someone who has no information in their user description, so it’s not about whether you’re using your real name or your real picture. It’s that gray text underneath somebody’s egg. An anonymous account hasn’t put anything in there. And so if you were a bot farmer and just needed hundreds of thousands of accounts, that’s a kind of personalization that you probably skipped.
Anonymous is an accepted word for it, but it’s not somebody who’s going incognito. It’s that this account is so generic that I don’t think anybody’s actually using it except when they need to troll. So it’s probably part of a bot farm, but you can’t really tell because there’s so few characteristics.
The puppet accounts are accounts that aren’t verified and tweet more than 70 times a day, which is another characteristic that was flagged by researchers as something sockpuppets tend to do. Although when I mentioned this on Twitter, I had people respond to me and say, “I tweet that much in a day.”
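That per-day rate is easy to estimate from two numbers any public Twitter profile exposes: the total tweet count and the account creation date. A minimal sketch, with made-up example accounts:

```python
# Hedged sketch: estimate tweets-per-day from a profile's lifetime
# tweet count and creation date, then apply the >70/day sockpuppet
# heuristic mentioned above. The two example accounts are invented.
from datetime import datetime

SOCKPUPPET_TWEETS_PER_DAY = 70  # threshold flagged by researchers

def tweets_per_day(statuses_count, created_at, now):
    age_days = max((now - created_at).days, 1)  # avoid dividing by zero
    return statuses_count / age_days

now = datetime(2018, 10, 20)
heavy_user = tweets_per_day(80_000, datetime(2017, 10, 20), now)  # ~219/day
casual_user = tweets_per_day(3_000, datetime(2012, 10, 20), now)  # ~1.4/day
print(heavy_user > SOCKPUPPET_TWEETS_PER_DAY)   # True
print(casual_user > SOCKPUPPET_TWEETS_PER_DAY)  # False
```

A lifetime average like this smooths over bursts, which is exactly why some prolific real users trip the threshold, as Lacina notes next.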
Well, the people talking to you could be sockpuppets, so—
Exactly. Although I tend not to assume that. The majority of people on Twitter are, in fact, people. People whip out that accusation so quickly; they over-correct for it, yet don’t correct for it in the right way.
But in this case, it seems like they didn’t over-correct enough, because in Wendig’s characterization, his editor firing him for “the negativity and vulgarity that my tweets bring” seems to presume that these were all real people.
Right. I mean, as you already noted, we only know the reasons Chuck Wendig says he was fired.
Well, if we take Wendig at his word, it seemed like his editor was really mad. And editors do release freelancers all the time.
The distinction that I see is that there have been multiple people who’ve been fired because they’re supposedly in this public face of the company role and something about them becomes a scandal in the media in general. And so the ax falls.
What seems more localized, to me, about Wendig’s case is even if all the fighting on Twitter were real, which I don’t think it was, it never jumped off Twitter. And he’s also not that famous. It wasn’t that much attention being paid to it. If they hadn’t fired him, it wouldn’t have been a story in the New York Times week after week. He’s much closer to just a normal person than James Gunn or Roseanne Barr.
Right, but also I feel like the comics industry itself is really small and really niche. Do you think these trolls are primarily related to comics or to Star Wars fandom, or both?
One of the lessons of my research is that it’s really hard to make any inferences about the people who follow prominent accounts. Van Sciver’s thing is Comicsgate plus his hatred of Rose Tico.
There’s this other set of people Chuck Wendig got into a dustup with a while back, a group called Rebel Force Radio, and their brand is very much about Star Wars. They worked symbiotically [with Van Sciver]; after they closed their Twitter account, they announced that via Van Sciver’s Twitter. There’s cooperation to amplify each other’s messages.
So they’re clearly all tied, in the sense that these guys have linked their careers and livelihoods to each other. But it is really hard to know which of the people in that pond are actually people who would be consuming comics and Star Wars.
But also if you’re in Star Wars fandom, there’s a good chance that you’re in other corners of geek culture just by the nature of being a sci-fi nerd.
Sure. And certainly at this time of year in the Star Wars cycle, the people following Star Wars a year and a half out from the next movie really are a pretty select group. So probably everybody who follows Comicsgate follows Star Wars. The reverse is not at all true because Star Wars is just so, so much bigger. The hardcore Star Wars people, yeah, I bet there’s a lot of overlap.
But the casual Star Wars fan might have no idea any of this is going on.
I mean, I don’t know. Star Wars-gate, such as it is, has been making a lot of headlines this year. There’s been a lot of massive backlash that I think is larger than all this, but also I think is driven by all this.
So I guess my question then is: What do you think can be done? What do you think we can do in terms of remedy, or just making people aware of how these mobs are working?
Well, in principle, Twitter could be doing more to solve this. It would be totally possible for Twitter to be telling you in real time what kind of accounts were arriving in your mentions. There’s something like Botometer where you can tweet them the name of an account, and they will tweet back the [bot] scores of the account to you.
You could imagine a little widget installed on your Twitter page that would display the bot score of accounts right next to that blue check mark. I think Twitter is reluctant to go down that road, and understandably reluctant to antagonize people by calling them bots erroneously.
When you talk about Twitter taking action and being able to identify bots, you’re talking about the individual level. I think when you have what looks like a mass mob of people descending upon an account, unless you have somebody like you actually taking the time to run analysis on every single tweet and do a mass plotted analysis of all these tweets at once, you’re not going to be able to really see how much of it is bot-generated noise and how much of it is sincere.
Well, if they wanted to, Twitter could build a little thing that was a graph that was just continuously updating. “These are the kind of accounts that have visited you in the last week.”
A graph like I did could be just sitting on your page being updated continuously. The information you need to calculate that graph is really small relative to the information they’re already using to, say, target ads to you.
That’s true. And they did that sort of thing when they developed their shadow-banning algorithm.
Right, yes. You can definitely see why they might decide they wouldn’t want people knowing that.
Well, I think even in the aggregate, that data would be more useful. Even if you don’t go out and pinpoint specific Twitter accounts that are identified as bots—
No, that’s right. Right. If they gave it to you in aggregate, they wouldn’t necessarily be telling you who are the bad people that you could go block or whatever.
Right. To jump back to the shadow-banning idea, I think it would be really good for a lot of lawmakers to actually be able to see in the aggregate how many bots are following them and how many Russian trolls are following them. But the way that Twitter went about that became an issue.
But I think there’s a lesson for employers. We need to have a policy about this, and if Chuck Wendig is correct that no one had ever mentioned this to him, that’s incredible. He gets into it with Star Wars fans on Twitter periodically.
It’s such a known part of his online persona.
Right, that’s exactly right. This is part of what you’re buying when you’re buying Chuck Wendig.
And the Comicsgate guys aren’t wrong to say that a bunch of normal people found Chuck Wendig via Van Sciver and went in there and gave him a piece of their mind, but just look at the normal people for a second. A lot more people freaked out after he was fired.
And I think the real question for companies is: How do you [rate] people who surf on over to Chuck Wendig’s Twitter because they want to get in a fight? How many of them didn’t have an opinion about Chuck Wendig before? Twitter skews super activist-y.
And how many Comicsgate supporters were actually buying Wendig’s comics before?
Certainly they weren’t buying Chuck Wendig’s comics. He’s been on the outs with that group for a long time. … The idea that this dustup revealed that he’s pissing off people who would have otherwise have been [Marvel] customers seems unlikely to me.
Yeah, I think that’s a really good point, especially for people who know Chuck Wendig and know the subject matter that he writes about. I mean, he introduced queer characters into the Star Wars canon. He’s not going to be the type of person that Comicsgate people are reading, but he is going to be the type of person Comicsgate people would want to get fired.
So was this basically just a giant win for Comicsgate? Was this just a giant win for the alt-right?
Yeah, I guess. Troll farms can take cues from what the Comicsgate guys are doing without their permission or help. I have no idea if any of them, the Comicsgate people, actually do any work with bots themselves. But they don’t have to do anything extra for a bot to take cues about who to troll from their feed. So it’s possible that it was kind of a passive win for them, but it definitely was a win.
Author: Aja Romano