Andrew Marantz on Facebook, tech ideology, and the spread of hate on the internet.
As my colleague Jane Coaston has explained, the Proud Boys are a hodgepodge of men’s rights groups and pro-Trump street-fighting clubs that emerged in 2016 as a counterweight to antifa and other lefty protesters. But for most people who don’t follow politics all that closely, the debate was likely the first time they’d heard anything about them.
It was, for that reason, one of those strange moments in which the weird world of the online berserk spilled into the political mainstream. Indeed, given all the coverage the Proud Boys received after the debate, it may not be accurate to call them “fringe” any longer.
Which is why I reached out to Andrew Marantz, a writer at the New Yorker who spent years immersed in the world of online extremists. In his 2019 book, Antisocial, Marantz interviewed conspiracy theorists, alt-right trolls, and various media gate crashers who have flooded the virtual space and, in his words, “hijacked the American conversation.” The book is a fascinating guide through the digital wilderness, and it’s even more relevant now that several US intelligence agencies have warned of increased election-related threats from domestic extremist groups.
Marantz and I discussed how online extremism has evolved since his book was released, if he thinks the threat is being overstated, and the role of the tech companies in perpetuating all these problems. I also asked him if we just have to accept that American politics in the years to come is going to look increasingly like the darkest corners of the web.
A lightly edited transcript of our conversation follows.
The phrase “online extremism” is fuzzy. I’d like to know how you define it.
You’re right, and a lot depends on how we define it. You could define it so narrowly that it’s just a handful of people, most of whom have already been banned from most of the social networks. In that case, it’s not that big a problem.
You can also define online extremism as Donald Trump getting on the internet and telling people they don’t need to worry about getting coronavirus, that it’ll actually make them feel awesome and they’ll get to take a lot of steroids, so they shouldn’t wear their masks. I’d consider that not just loony but extremist and dangerous.
So a lot depends on where you set your limits. And despite the subtitle of my book, I’ve never really loved the word “extremist.” I just couldn’t think of a good substitute. I don’t like it because it implies “fringe” or “marginal” when honestly some of the most extreme opinions come from some of the most powerful mainstream voices in our society.
The thing I don’t want people to take away from my book is that we’re just talking about a small group of people who I tracked closely because it worked for the kind of narrative reporting I like to do, and that once they’ve been contained we’re all good. It’s the whole online ecosystem that props them up [that’s the problem].
Conversations about online extremism imply that it’s basically a right-wing problem, and that seems mostly true, but is it a little misleading?
If by right-wing you mean people who read Edmund Burke and care a lot about curtailing the overreach of the federal government, then no. There’s nothing intrinsically harmful or extremist about that. But if by right-wing or conservative we mean people who don’t think climate change is real, who don’t think epidemiology is real, and who don’t like people who aren’t white, then yeah, that’s harmful and extreme and it’s the kind of stuff that thrives online.
Are you surprised at how pervasive and influential QAnon has become? Did you see this coming?
Yes and no. I think it would be professional malpractice at this point for me to ever be surprised at how bad and stupid American politics can be, but when I was writing in my book, for example, about Pizzagate, there was this metastasizing version of it that people were starting to call QAnon. If you had pinned me down and asked me to predict whether there would be QAnon members of Congress in the next congressional cycle, I probably would have said that’s a little nuts, but here we are.
It’s so tempting to dismiss something like QAnon, but you can’t do that when it’s spilling into the real world —
Nope, not when QAnon members are winning congressional races. You can’t get much more mainstream than that.
Part of the issue is we have this dual intuition with this stuff where we say, on the one hand, the actual content in question is so stupid that it’s almost beneath contempt, and so the brain just wants to ignore it because that’s what you do with things that are incomprehensibly stupid. But there’s another set of intuitions that says it doesn’t matter how dumb something is; if it has actual power in the world, we have to deal with it.
It’s the same thing with climate denial. It’s also unspeakably stupid and dangerous to not believe in climate change, but we’ve known for a long time that that view has enough political power in the world that we can’t afford to ignore it. But it’s a hard line to walk. I kept colliding with this when I was reporting on all this stuff. People would say, “You can’t spend all day worrying about what weirdo, huckster, loser misogynists on the internet are doing because those people are contemptible.” And my response was always, “Yes, that’s true, but they’re also a model for how the worst things in our society can take over.”
Does the reality of the internet make all of this a basically intractable problem?
It’s certainly a really, really difficult problem. I guess we won’t know whether it’s truly intractable until the simulation ends and we see how it all unfolded. But I keep going back to the climate denialism. We have these big systemic failures like the climate crisis or the information crisis and we can’t just throw our hands up and we also can’t expect it to work itself out. All we can really do is try to unbuild the system we’ve built and replace it with a new one.
I agree, but I really do wonder if this is just what politics is going to look like moving forward. Both of us think that societies are shaped by the tools they use to communicate, and since the internet is now the dominant form of communication and this is the kind of shit that flourishes there, shouldn’t we expect this to be the new normal?
It’s an interesting point, and to some extent it’s true, but I also think that’s why we have to change how this stuff works. One of the frames that I keep coming back to in the book is the pragmatist philosopher Richard Rorty’s idea that to change how we talk is to change who we are. I think that is actually really, deeply true. And I think everyone can see that the way we talk to each other right now is fundamentally broken.
So, yeah, I think you’re right that we’re in for a tough slog, but again, these things are not static. Tech companies like Facebook and Twitter and Google are some of the newest, fastest-growing entrants to global corporate behemoth status that we’ve ever seen. It’s not like we’ve been doing things this way for 100 years. These things were barely thinkable 15 years ago. Which is to say, things can change. We can change. And we’re figuring out how best to change them.
You mentioned the tech companies and you just published a big New Yorker piece about Facebook. Before I ask you what they can or should do, let me ask: Do you think they’re actively complicit in this problem?
When you say actively complicit, I think the image that that conjures up is of evil, villainous men twisting their mustaches in a Bond villain cave somewhere. I don’t think it’s that, but I do think my reporting in that piece and also in the book showed that the ideology that has become the house ideology at a lot of these companies is blinding and misleading. Right after the phrase “online extremists” in the subtitle of my book comes the phrase “techno-utopians.”
It’s not very sexy to talk about an obscure ideology that most people have never heard of on the cover of a trade book, but the reason I wanted to do it is I think the ideology is really the fundamental problem. Of course profit is a problem. Of course the structure of late capitalism is a problem. Of course delivering maximum shareholder value is a problem. That’s kind of obvious to most people.
But I think what may be less obvious is that it’s not purely people looking at a spreadsheet and going, “Okay, we can make 1 percent more profit if we club more baby seals over the head,” or whatever. I think it’s that these people really believe themselves to be harbingers of good in the world. The more cognitive dissonance that shows up between your belief in yourself as an agent for positive change and the countervailing evidence that you’re not, the more that cognitive dissonance starts to make you a worse and worse decision-maker.
So let’s take a specific case: I think that the corporate logic of a company like Facebook means that they have to arrive at a certain conclusion when it comes to a strongman like Donald Trump or Jair Bolsonaro or Rodrigo Duterte. Now, this is speculation, so I don’t know what’s in anybody’s heart, but my educated guess based on reporting is that, although they might go through a process of trying to decide what to do when someone like Trump or Duterte breaks the rules of their platform, deep down they know that the logic of their business requires that they keep that person on the platform.
Facebook will say they allow someone like Trump to spew dangerous nonsense because it’s inherently “newsworthy,” which is exactly what the mainstream media does, so in that sense they’re not doing anything different from CNN or Fox News or whatever.
I agree with the first half of that. I think it’s a very similar trap that the mainstream finds itself in. If we lived in a world where the national approval rating for social media were flipped with the national approval rating of the mainstream media, then maybe I would have written a book that’s critical of the mainstream media. In other words, if I thought people were aware of the problem of social media to the extent that they’re aware of the inherent problems with TV or newspaper media, then I’d be more interested in highlighting that problem.
But I really think that, if anything, we’re still underrating the problem of social media and probably overrating the problem of mainstream media. And here’s the biggest difference: There are people at any given news network who you can appeal to to try to have them make a different set of decisions. So there was a whole movement to get someone like Jeff Zucker [president of CNN] to stop covering Trump’s rallies wall to wall in 2016. It was good for ratings but bad for democracy. The pressure campaign worked.
There are obviously other problems with CNN, but there was at least a human being who could make the change, whereas what the tech platforms will tell you is that we don’t have any human beings sitting in that chair by design. We have outsourced all of those decisions to algorithms, and that is our attempt to make a better machine. But it’s not working.
A contrarian take on this, and I know people won’t like it, is to say, “Look, this is what a truly free and open information space looks like. The media gatekeeping age is dead. And these social media platforms are a cultural mirror, whether we like the reflection or not, so is it really reasonable to ask them to clean up a mess they didn’t create but have certainly amplified?”
Yeah, the platforms will talk about themselves as a mirror, or they’ll say we’re the tail and society is the dog. If they are a mirror, they’re a funhouse mirror. They are not a photorealistic depiction. You know that because as soon as you introduce any algorithmic distortion into the picture, the reflection gets altered. As soon as it’s not just a chronological feed of everything that every person in the world said, you are introducing distortion.
Part of the problem is that we see this stuff, we see what’s trending, and we think that’s just an objective heat map of the American conversation right now, when in fact it’s proprietary, it’s microtargeted, it’s individually tailored, it’s based on some secret sauce that nobody’s allowed to know.
The other part of the problem is that we somehow are aware of the fallacy of blaming the consumer in every other industry except for this one. So if you talk to a casino company and they say, “Well, what would you have us do? We’re just honestly reflecting the preferences of the consumer,” I think everybody would know that’s not true because they’re pumping the casino full of oxygen, and they’re blotting out all of the windows, and they’re giving people free drinks, and they’re playing to everybody’s worst, most addictive behaviors. So people are still pulling the levers on the slot machines, but they’re being manipulated. But for some reason, we have a harder time connecting the dots when it comes to speech or ideas or media.
We all agree that these platforms are the main vectors for spreading malicious content. What are the most practical steps these companies can take to at least mitigate the problem?
Well, the steps the companies can take would be much more powerful and have a much wider multiplier effect. Leaving it up to individual consumer choice isn’t going to get us where we need to go. One thing that the government can do would be to look very seriously, as they already are, at antitrust solutions, at breaking up companies when they’re too big.
Another thing that the companies could do voluntarily, although I’m not holding my breath, would be to really look at the roots of their algorithms and imagine what it would look like if they did not prize emotional manipulation above all else. On some basic level, all of these platforms are built on what social scientists call “activating emotions.” It’s an emotion that makes you take a measurable behavior that the platform can quantify and monetize. As long as that is the fundamental currency of virality on the internet, these things will always be subject to manipulation. They will always be able to be gamed in either positive or destructive ways.
Author: Sean Illing