
The Economic Case for Regulating Social Media - The New York Times

The core business model of platforms like Facebook and Twitter poses a threat to society and requires retooling, an economist says.

Social media platforms like Facebook, YouTube and Twitter generate revenue by using detailed behavioral information to direct ads to individual users.

That sounds straightforward enough. But this bland description of their business model fails to convey even a hint of its profound threat to the nation’s political and social stability.

Rising concern about social media abuses has already prompted legislators in Congress to propose the breakup of some tech firms, along with other traditional antitrust measures. But the main hazard posed by these platforms is not aggressive pricing, abusive service or other ills often associated with monopoly. Instead, it is their contribution to the spread of misinformation, hate speech and conspiracy theories.

Because the economic incentives of companies in digital markets differ so sharply from those of other businesses, traditional antitrust measures won’t curb those abuses.

Consider what basic economic theory tells us.

In the market for widgets beloved by economists (substitute your own imaginary item, if you like), producers expand output until the additional cost of the last widget produced is equal to what the last buyer is willing to pay for it. Stopping short of that level would leave cash on the table, since an additional widget could be sold at a price greater than its marginal cost. Exceeding that level would also be wasteful, since the last buyer would then value the purchase at less than its marginal cost.

The upshot is the economist’s celebrated efficiency criterion: Goods and services should be sold for the marginal cost of producing them.
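To make the logic concrete, here is a minimal sketch in Python with made-up demand and cost figures (none of them drawn from the article); total surplus peaks exactly where the last buyer’s willingness to pay meets marginal cost.

```python
# Hypothetical widget market; every number here is illustrative.
# Successive buyers value a widget a little less; successive units cost a little more.
willingness_to_pay = [10, 9, 8, 7, 6, 5, 4, 3]  # dollars, buyer by buyer
marginal_cost = [2, 3, 4, 5, 6, 7, 8, 9]        # dollars, unit by unit

best_quantity, best_surplus = 0, 0
for q in range(1, len(willingness_to_pay) + 1):
    # Total surplus = sum over all units sold of (buyer's value - production cost).
    surplus = sum(w - c for w, c in zip(willingness_to_pay[:q], marginal_cost[:q]))
    if surplus > best_surplus:
        best_quantity, best_surplus = q, surplus

print(best_quantity, best_surplus)  # 4 20: surplus stops rising once value falls to cost
# Stopping before unit 4 leaves profitable trades unmade; pushing past unit 5
# adds widgets that cost more to make than buyers are willing to pay.
```

The same arithmetic is what breaks down for digital platforms: if the marginal cost of serving another user is essentially zero, the "price equals marginal cost" rule points to a price of zero.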

But this criterion simply can’t be met by digital platforms, since the marginal cost of serving additional consumers is essentially zero. Because the initial costs of producing a platform’s content are substantial, and because any company’s first goal is to remain solvent, it cannot just give stuff away. Even so, when price exceeds marginal cost, competition relentlessly pressures rival publishers to cut prices — eventually all the way to zero. This, in a nutshell, is the publisher’s dilemma in the digital age.

It helps explain why published content has been migrating to digital aggregators like Facebook. These firms make money not by charging for access to content but by displaying it with finely targeted ads based on the specific types of things people have already chosen to view. If the conscious intent were to undermine social and political stability, this business model could hardly be a more effective weapon.

Merriam-Webster defines clickbait as “something (such as a headline) designed to make readers want to click on a hyperlink, especially when the link leads to content of dubious value or interest.” The targeted-ad business model is clickbait on steroids.

The algorithms that choose individual-specific content are crafted to maximize the time people spend on a platform. As Facebook’s own developers have conceded, its algorithms are addictive by design and exploit negative emotional triggers. Platform addiction drives earnings, and hate speech, lies and conspiracy theories reliably boost addiction.

Careful studies have shown that Facebook’s algorithms have increased political polarization significantly. Researchers have identified a small group of right-wing personalities — Dan Bongino prominent among them — whose influence on social media played an outsize role in promoting false beliefs about the 2020 presidential election. And witness testimony leaves little doubt that posts on a variety of social media platforms helped provoke the Jan. 6 assault on the nation’s Capitol.

Some people object to reining in social media on libertarian grounds. John Samples, vice president of the Cato Institute, a libertarian think tank, says, for example, that government has no business second-guessing people’s judgments about what to post or read on social media. That position would be easier to defend in a world where individual choices had no adverse impact on others. But negative spillover effects are in fact quite common.

When an accident blocks the southbound lanes of a freeway, for example, it also causes long delays in the northbound lanes, because many northbound drivers judge a closer look at the scene to be worth a 10-second slowdown. Yet the cumulative impact of those decisions may be several hours of additional delay for the drivers behind them. If drivers could decide collectively, most would surely reject that trade-off. But drivers make such decisions individually, not collectively.
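A back-of-the-envelope calculation, with numbers invented purely for illustration, shows how far the private and collective ledgers can diverge.

```python
# Hypothetical rubbernecking arithmetic; the figures are invented for illustration.
lookers_ahead = 2000      # drivers ahead of you who each slow down for a look
delay_per_look_sec = 10   # extra seconds each look adds at the bottleneck

# The private trade-off: a glance costs each driver only about 10 seconds of their own time.
private_cost_sec = delay_per_look_sec

# The collective trade-off: those 10-second slowdowns stack up for everyone queued behind.
delay_near_back_hours = lookers_ahead * delay_per_look_sec / 3600
print(f"Private cost of one look: {private_cost_sec} seconds")
print(f"Delay for a driver near the back of the queue: {delay_near_back_hours:.1f} hours")  # ~5.6 hours

# No driver would accept hours of delay in exchange for a 10-second look, yet that is
# roughly the bargain the group strikes when each driver decides individually.
```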

For parallel reasons, individual and collective incentives about what to post or read on social media often diverge sharply. There is simply no presumption that what spreads on these platforms best serves even the individual’s own narrow interests, much less those of society as a whole.

In short, the antitrust remedies under consideration in Congress and the courts won’t stem the abuses that flow from the targeted-ad business model. But a simpler step may hold greater promise: Platforms could be required to abandon that model in favor of one relying on subscriptions, whereby members gain access to content in return for a modest recurring fee.

For those willing to pay the fee, this model satisfies the economist’s efficiency criterion, since they can enjoy unlimited quantities of a platform’s offerings at a zero marginal charge. Major newspapers have done well under this model, which is also making inroads in book publishing. The subscription model greatly weakens the incentive to push algorithmically driven, addictive content, whether it is provided by individuals, editorial boards or other sources.

But since platforms incur no additional costs when they make content available to new members, the subscription model isn’t fully efficient: Any positive fee would inevitably exclude at least some who would value access but not enough to pay the fee. More worrisome, those excluded would come disproportionately from low-income groups. Such objections might be addressed specifically — perhaps with a modest tax credit to offset subscription fees — or in a more general way, by making the social safety net more generous.
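A stylized calculation, again with hypothetical figures, makes the exclusion cost concrete: when serving one more member costs nothing, each would-be member priced out by the fee is value lost without any offsetting saving.

```python
# Stylized subscription arithmetic; every figure is hypothetical.
# Suppose would-be members value access at $0.01, $0.02, ..., $10.00 per month,
# and the platform's marginal cost of serving one more member is zero.
valuations = [cents / 100 for cents in range(1, 1001)]  # $0.01 through $10.00
fee = 5.00                                              # monthly subscription fee

subscribers = [v for v in valuations if v >= fee]
excluded = [v for v in valuations if v < fee]

revenue = fee * len(subscribers)
# Because an extra member costs nothing to serve, each excluded valuation is
# value society forgoes outright rather than a cost anyone avoids.
forgone_value = sum(excluded)

print(f"Subscribers: {len(subscribers)}, revenue: ${revenue:,.2f}")   # 501, $2,505.00
print(f"Value forgone through exclusion: ${forgone_value:,.2f}")      # $1,247.50
```

Offsetting the fee for low-income users, as suggested above, would shrink that forgone value without reviving the targeted-ad incentive.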

Adam Smith, the 18th-century Scottish philosopher widely considered the father of economics, is celebrated for his “invisible hand” theory, which describes conditions under which market incentives promote socially benign outcomes. Many of his most ardent admirers may view steps to constrain the behavior of social media platforms as regulatory overreach.

But Smith’s remarkable insight was actually more nuanced: Market forces often promote society’s welfare, but not always. Indeed, as he saw clearly, individual interests are often squarely at odds with collective aspirations, and in many such instances it is in society’s interest to intervene. The current information crisis is a case in point.

Proposals for regulating social media merit rigorous public scrutiny. But what recent events have demonstrated is that policymakers’ traditional hands-off posture is no longer defensible.

Robert H. Frank is an emeritus professor of economics at Cornell University. Follow him on Twitter: @econnaturalist

