How to regulate Facebook and Twitter

Facebook CEO Mark Zuckerberg testifies before the House Judiciary Subcommittee on Antitrust, Commercial, and Administrative Law on July 29. | Graeme Jennings/AFP via Getty Images

We could treat them like phone companies or like TV networks, but not both.

Rudy Giuliani’s efforts to gin up a scandal about Hunter Biden and his work in Ukraine already got President Donald Trump impeached. Now it could turn into a watershed moment for the debate over regulating social media companies.

When the New York Post published stories based on questionably sourced emails purportedly found on Hunter’s laptop in a Delaware computer repair shop, Facebook and Twitter both reacted by throttling traffic to the stories. Twitter temporarily blocked links altogether.

That prompted a backlash from Republicans, who of course would like the story to gain widespread attention in the weeks before the election. That’s the hope even though the emails — even if completely accurate — don’t do anything to alter the reality that Hunter had nothing to do with the firing of Ukrainian prosecutor Viktor Shokin or that Trump’s children have their own massive financial conflicts of interest.

Much of the debate has focused on Section 230 of the Communications Decency Act and, most recently, a Trump administration initiative to get the Federal Communications Commission to modify the law. As my colleague Sara Morrison explains, they can’t really do that. The Section 230 debate, however, has been awash in misunderstanding and misinformation, with lots of hectoring about an alleged (but fake) legal distinction between a “platform” and a “publisher.”

What the law actually says is that if I run a computer service and you use it to libel someone, then you are legally responsible for the libel but I am not. In the early days of the internet, there was concern that if a company edited or moderated its comments sections in any way, it would be exposed to legal liability for anything it failed to remove. Section 230 sought to encourage moderation (there’s no porn on Instagram, for example) by clarifying that this is not the case.

Meanwhile, Democrats in the House have been pushing for antitrust scrutiny of big tech companies, something Republicans have at least rhetorically backed from time to time.

But while there are antitrust issues in the technology world, the question posed by the Hunter Biden email story is not really a question of competition policy. Forcing Facebook to divest Instagram and WhatsApp, for example, would not really eliminate anyone’s concern about social networks being used to algorithmically supercharge misinformation or becoming a vector for foreign intelligence operations.

Nor would it alleviate conservatives’ concerns that tech companies run by mostly left-of-center coastal professionals will try to selectively censor conservative speech, or progressives’ concerns that algorithms are being rigged against them to placate congressional Republicans.

The argument, instead, is about how and whether social media companies should be regulated as critical communications infrastructure with important implications for the health of American democracy. And there’s precedent for regulation in that domain. The whole reason there is an FCC in the first place is because in past generations, Congress thought the communications industry — first telegraph and radio, then phones and television — needed its own regulatory framework.

But this classic era of communication regulation provides two distinct regulatory models with roughly opposite implications, and proponents of cracking down on Big Tech need to think harder about which one they are asking for.

The unregulated newsstand

Beneath the hullaballoo about platforms and publishers and special regulations, the best way to think about the regulatory status of social media platforms is with an analog analogy: They are treated like newsstands.

  • A newsstand carries many magazines but certainly not all magazines.
  • The newsstand’s owners and managers decide for themselves which magazines they carry and which magazines receive more favorable placement on the racks.
  • The placement decisions are made primarily on the basis of business considerations, rather than editorial ones, but there’s no governing framework; a newsstand doesn’t need to have a set standard or apply it fairly.
  • If something libelous or otherwise illegal (for reasons of copyright, national security, privacy law, or whatever else) appears in an issue of a magazine that lands on a newsstand rack, the newsstand is not legally liable for that content.

These various features of the newsstand business are not just compatible with the principle of free speech, they are required by it — both in the narrow sense of freedom from government regulation and in the broader sense of promoting a healthy and diverse discourse. If you made newsstands legally liable for errors made by magazines, the newsstands would need to become incredibly conservative in what they sell (they, after all, can’t review the content of every article before placing the issues on the racks) and the discourse would suffer.

But if you deprived newsstands of the ability to decide for themselves which magazines to stock and which to promote, you’d be trampling on their freedom. Many people value the circulation and dissemination of articles they don’t always agree with but also would not want to own — or shop in — a store whose walls were full of neo-Nazi propaganda.

Trying to set a hard and fast universal rule about what stores should carry and what they shouldn’t would be censorship. Letting shops decide for themselves what they want to do is the opposite of censorship, and requiring them to carry everything would be absurd.

The specter of monopoly

To US Sen. Josh Hawley, a Missouri Republican, this may be fine for retail stores. But Facebook, as he tweeted on October 15, “is [a] lot like a supermarket … except there’s only ONE supermarket in town, and they decide who can and can’t shop. That’s what we call a monopoly.”

This is obviously not quite right. If you’re mad at Facebook, for example, you can tweet about it. If Twitter blocks a New York Post story, you can still read about it on the Post’s website or on any of dozens of other websites that choose to cover it. It’s not like being crushed under the heel of a true monopolist.

At the same time, Hawley is correct that Facebook’s decision-making isn’t on par with that of a neighborhood retailer or even a large retail chain. Facebook is simply a really, really big company — big enough that its decisions are a matter of public concern. In 2017 and 2018, for example, it tweaked its newsfeed algorithm to reduce the quantity and prominence of political news. As Slate’s Will Oremus reported at the time, this had a huge impact on the media business: “Traffic from Facebook plummeted a staggering 87 percent, from a January 2017 peak of 28 million to less than 4 million in May 2018. It’s down more than 55 percent in 2018 alone.”

When the Great Throttling happened, there were suspicions that certain conservative sites were receiving favorable treatment.

Those suspicions were confirmed in an October 16 Wall Street Journal story by Deepa Seetharaman and Emily Glazer, which revealed that Mark Zuckerberg personally approved a tweak-within-a-tweak to benefit the Daily Wire and other conservative publishers:

In late 2017, when Facebook tweaked its newsfeed algorithm to minimize the presence of political news, policy executives were concerned about the outsize impact of the changes on the right, including the Daily Wire, people familiar with the matter said. Engineers redesigned their intended changes so that left-leaning sites like Mother Jones were affected more than previously planned, the people said. Mr. Zuckerberg approved the plans.

Perhaps the most telling part of the story is the non-denial denial they received from a Facebook spokesperson, who said “we did not make changes with the intent of impacting individual publishers.”

The charge, of course, is not that they made the changes with the intent of impacting individual publishers. It’s that they made changes intended to reduce the prominence of political content, and then made changes to that change that were intended to minimize the impact on conservative publishers writ large.

Facebook doesn’t really deny that this happened. And Facebook is a big enough company that its decision to disproportionately distribute right-leaning content has a real impact on the world — as changing course would, too. Even without Facebook being a monopoly, the decisions the company makes are obviously a big deal for American society, and public policy thus might try to shape them. The United States has traditionally subjected communications technology to regulatory standards that go beyond market efficiency because it is seen as having particular social importance.

But the question for those who’d regulate social media is: What are they trying to achieve?

Social media as Ma Bell

If you go back to the “classic” era of American communication technology in the third quarter of the 20th century, you see two very different types of regulatory standards applied to two different technologies. There’s the model used to regulate telephone companies, most of all AT&T, and there’s the model used to regulate the big three broadcast television networks.

Both were cases of industries with sharply limited competition and great social importance that led to a widespread sense that you can’t just “leave it up to the market” the way you would with a newsstand.

Phone companies were (and, to the extent that they are operating as phone companies, still are) required to act as “dumb pipes.” They carry audio from one phone to another, no questions asked.

  • You can curse on the phone, engage in lewd or pornographic talk, slander people, harass them, shout racial slurs, or otherwise do whatever you want and the phone company has zero liability for your actions.
  • Not only can phone companies get away with letting you do that stuff on the phone, they are legally required to do so. The phone company does not listen in on calls or disfavor bad or undesirable transmissions.
  • If a mafia boss orders a dozen murders via coded messages delivered over the phone, that’s not the phone company’s problem. The government can, with a warrant, bug his phone. And if they catch him, he goes to jail. But the phone company is fine.
  • This extends beyond government regulation to the sphere of social convention. Journalists don’t write stories about how “extremist groups are using phone calls to recruit members and organize events.” It would be like blaming paper companies for letting extremists use paper to take notes.

The entire “net neutrality” debate is in large part about whether broadband internet service providers (which include classic phone companies like AT&T and Verizon as well as what you traditionally would have called cable companies like Comcast) should be required to act as dumb pipes.

Under FCC Chair Ajit Pai, net neutrality rules are not in place. Companies have thus far taken advantage of non-neutrality mainly to do things like sell you “unlimited” data and then throttle streaming video unless you pay extra, or to make special deals with particular services (Verizon has one with Disney right now, and T-Mobile with Netflix) to give you certain things at a discount. In debates on the issue, net neutrality proponents often told scare stories about ISPs censoring or throttling disfavored websites, and so far that hasn’t happened. But under the regulatory framework Republicans have created, it could.

Once upon a time, both Facebook and Twitter did more or less work as dumb pipes. You picked whom you followed, and the services then displayed whatever those people posted, in reverse chronological order. But that is no longer the case — algorithms on the services determine what you see — and turning social media into dumb pipes would have far-reaching implications.
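To make the difference concrete, here is a minimal sketch in Python. It is purely illustrative — the Post fields, and especially the engagement_score stand-in for whatever ranking signals a real platform uses, are invented for this example:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        author: str
        text: str
        posted_at: datetime
        engagement_score: float  # hypothetical stand-in for a platform's ranking signals

    def dumb_pipe_feed(posts: list[Post], following: set[str]) -> list[Post]:
        # The old model: every post from accounts you follow, newest first.
        return sorted(
            (p for p in posts if p.author in following),
            key=lambda p: p.posted_at,
            reverse=True,
        )

    def algorithmic_feed(posts: list[Post], following: set[str]) -> list[Post]:
        # The current model: the platform's scoring, not the clock, decides the order.
        return sorted(
            (p for p in posts if p.author in following),
            key=lambda p: p.engagement_score,
            reverse=True,
        )

The dumb-pipe version involves no editorial judgment at all; the algorithmic version makes the platform’s own scoring function the thing that decides what each user sees, which is exactly the decision-making the regulatory debate is about.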

E.J. Fagan, a political scientist at the University of Illinois Chicago, argues the federal government should incentivize the dumb-pipe approach by changing the liability rule so that “when a platform makes decisions about what user-generated content a user sees,” the platform itself is legally responsible for the content, just as a newspaper or magazine would be if it printed it.

That would, in effect, provide a massive financial incentive for social networks to return to the older, non-algorithmic means of displaying content. That, in turn, would make the networks far less engaging and far less lucrative — cutting Facebook down to size and hurting Twitter as well.

In this view, misinformation, hate speech, and whatever other content one might deem undesirable wouldn’t spread as far. Of course, misinformation did go viral before the age of the algorithm. Even before social media existed, there was a dense ecosystem of email forward nonsense. But the perverse dynamics whereby algorithms can supercharge the spread of erroneous or inflammatory material would be curbed. And by design, there would be no centralized authority in place to make sure that wholesome content prevails instead.

An alternate regulatory concept might take inspiration from the other major communications framework: broadcast television.

Social media as broadcast television

Television antennas can’t get a clear signal if more than one person is trying to broadcast on the same frequency in a given geographical area. Consequently, the existence of the television industry before the rise of cable was predicated on government-granted monopoly rights to the use of certain frequencies in certain areas.

This created a rationale for regulating the airwaves in a much more stringent way than the First Amendment would permit for print periodicals or movies where there’s no natural scarcity in the distribution channel.

And through a variety of formal and informal means, that regulatory framework led the Big Three television networks to make programming decisions that were overwhelmingly bland and inoffensive. There were no opinionated news shows and no edgy ideas; most of the content was lighthearted, deliberately vanilla entertainment.

The notion of television as a potentially prestige or highbrow medium that could feature dark antiheroes or mass atrocities is entirely a product of the more modern landscape of audience fragmentation to cable and streaming services. In the Big Three era, lots of interesting video content was happening in movie theaters, and print publications carried the big debates about the issues of the day, but television was — by design — boring.

Economist and Bloomberg columnist Noah Smith sees the throttling of Hunter Biden news as a trend toward big technology platforms voluntarily taking up that mantle. Facebook, YouTube, and Twitter, he writes, “have grudgingly accepted that they are the CBS/NBC/ABC of the modern age, and have begun to act accordingly.”

Smith envisions this as a voluntary evolution. And the responsible blandness of the Big Three was itself largely a product of social norms and self-regulation by standards and practices departments rather than a set of formal rules. But there were regulations, too, such as the FCC’s Fairness Doctrine, which prevailed from 1949 to 1987 and formally required political issues to be treated in an even-handed way. In practice, the formal and informal regulatory standards worked in tandem. Broadcast television networks and the owners of local TV affiliates recognized that their oligopolistic position in the economy was lucrative and not worth jeopardizing by tempting the regulators to adopt a more heavy-handed approach.

Eschewing certain kinds of content did mean leaving a bit of market share and engagement on the table, but that was a choice worth making. Free speech was protected in the sense that all ideas could flow freely in books, magazines, pamphlets, and elsewhere, but the single most efficient means of distributing information to a mass audience was pretty buttoned down. This philosophy had some real costs — it was the heyday of manufacturing consent and cultural conformism — but it also put a damper on extremist politics and ensured that elections didn’t turn on the caprice of network CEOs or the vagaries of algorithms.

Either regulatory future, or even both, is certainly possible for social media. But to get there, policymakers would need to be clearer and more consistent about what they’re asking for — and speak in terms of principles rather than just yelling about individual cases.


Author: Matthew Yglesias
