How online platform transparency can improve content moderation and algorithmic performance

During a 2019 session on platform transparency at the Transatlantic Working Group on Content Moderation and Free Expression, one participant said in exasperation, “Well what are we trying to accomplish here? What harms are we trying to prevent by imposing all these disclosure obligations on platforms?”

The questions deserve a thoughtful answer. Transparency’s value often seems self-evident to those who advocate for it, but it can be deeply complicated. In the case of social media companies, however, transparency is an essential and timely part of an overall regulatory system.

WHY TRANSPARENCY?

To see why transparency can be problematic, consider Jeremy Bentham’s panopticon. In his proposed prison reform, each prisoner would be assigned to a transparent cell visible from a central observation tower in the middle of a circular ring of cells. Disclosure to the unobserved guards in the central tower would be permanent and complete.

This example is often used in privacy courses to illustrate how surveillance exerts quasi-coercive pressure on those surveilled to conform to external values and expectations.

Transparency in Bentham’s panopticon is a privacy violation. But sometimes we want transparency: If companies have to reveal substantial information about their operations to regulators and to the public, they are more likely to conform to public values and expectations. It is a way to increase public trust. As is often said, sunlight is the best disinfectant.

In the U.S., transparency was the initial way in which regulators like Charles Francis Adams Jr. tried to establish control over railroads. It grew into a broader system of pervasive public utility regulation that persists to this day. Banks and other financial institutions, for instance, are heavily regulated for safety and soundness and to protect financial consumers from unfair, deceptive, or abusive practices. Disclosure of information to bank supervisors and to the public is an essential element in that regulatory structure.

It is only natural that transparency requirements should be a first step in, and an essential part of, a new regulatory system for online platforms. The EU Digital Services Act (DSA), for instance, makes its transparency requirements part of a pervasive regulatory structure that also includes (at Article 54) on-site inspections, much like bank supervisory visits, designed to provide public oversight.

WHICH TRANSPARENCY REQUIREMENTS?

But that general answer does not address why specific disclosure requirements for social media are needed and what harms they seek to remedy. Disclosure is costly and time-consuming, distracting companies from their real business of providing service to the public. Disclosure requirements therefore demand a careful and measured articulation of the government interests they serve.

Due Process Protections

The first category of disclosures concerns due process protections for users in connection with the operation of platform content moderation programs. Disclosures must include the content rules in terms of service or community standards; the enforcement techniques used, such as deleting, demoting, or delaying content; procedures for the public to complain about possible rules violations; procedures for platforms to explain their decisions to affected parties; and procedures for individual appeals of enforcement actions.

Why require standards disclosures and explanations for content moderation decisions? After all, policymakers do not require newspaper editors to reveal their editorial standards or explain their decisions to run a story or a column.

Part of the answer is that social media have acted like common carriers, providing service in a nondiscriminatory way to all comers, or like places of public accommodation such as restaurants, bars, and amusement parks that are generally open to the public. People have legitimately come to expect that they will have access to social media provided they do not make a nuisance of themselves and do not disrupt others.

Traditional media never held themselves out as publicly available places, in part because they did not, and do not now, have the technical capacity to provide service to all comers the way platforms do. So, they are not subject to disclosure and due process requirements the way platforms should be.

A second reason for disclosure is that participation in the larger platforms has become an essential way to participate in contemporary social, political, and economic life. When a platform becomes this important, it has to provide its users with appropriate due process protections.

To some degree, platforms have established these due process protections, but they need to be reinforced with binding legal obligations. In the U.S., Sens. Brian Schatz (D-Hawaii) and John Thune (R-S.D.) introduced the Platform Accountability and Consumer Transparency (PACT) Act, and Rep. Jan Schakowsky (D-Ill.) circulated a draft bill in the House to require transparency and accountability in online content moderation programs. In Europe, the proposed Digital Services Act also imposes due process obligations.

Reports on the Operation of Content Moderation Programs

A second transparency requirement is for companies to report to government agencies and to the public aggregate statistics that accurately reflect the operation of their content moderation programs. What material has been taken down, and how much of it? Is the information disorder getting better or worse? Are the moderation decisions politically biased, as some have alleged? Only public reports can begin to answer these questions.

Why do we need this? Information disorder—hate speech, racist and misogynistic cesspools, political and medical misinformation, terrorist conspiracies, and campaigns to undermine election integrity—seems to flourish easily on social media. These online disorders lead to substantial harms, including, most recently, the storming of the U.S. Capitol, provoked by months of misinformation about the results of the U.S. presidential election.

The public and policymakers need to understand how well content moderation programs are controlling this information disorder. The platforms say they are working on it, but the problems do not seem to be going away. The need to account for a platform’s actions in a public report might provide an incentive to get better at content moderation.

Algorithms

The third transparency area is access to algorithms. Vetted researchers and regulators should have access to enough information about the algorithms used in content moderation, prioritization, advertising, and recommendation, and enough data about how these algorithms affect platform content to allow an independent assessment.

Why? It is widely thought that these algorithms make it too easy for like-minded people—even racists and terrorists—to find each other on these platforms. Outsiders suspect that the algorithms are tuned to maximize user engagement—regardless of content—in the service of the advertising business model the platforms have chosen. These two forces have created risks of “real and foreseeable negative impact on public health, public security, civil discourse, political participation and equality,” as the DSA puts it.

But only further research will more fully inform policymakers and the public of how great these risks are and help guide policies that might mitigate the harms.

Advertising

Beyond algorithmic disclosure to vetted researchers and regulators, it would also be important to disclose some information about online ads at the time an ad is displayed to its recipient. This would include sponsorship identification, especially for election ads, because it helps viewers evaluate the ad’s message. The DSA also wants to let viewers know in the moment about the main parameters used to target a specific ad. This strikes me as information overload, clutter that will only distract viewers.

It would make sense, however, to include that targeting information in a public repository of advertising information. Such a repository would help researchers, advocates, and regulators better understand the workings of the online ad industry, which could use a little sunlight, and would guide future decisions by regulatory agencies. Platforms have already begun building campaign ad archives, but they should expand them to other areas: commercial ads, for instance, can be used to discriminate and to spread hate speech and disinformation. Unlike the details of algorithms, there is no need to confine this information to vetted researchers.
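To make the idea concrete, here is a minimal sketch of what a single entry in such a public ad repository might look like. The field names, value formats, and the example ad are illustrative assumptions on my part, not any platform’s actual archive schema.

```python
# Illustrative sketch only: the fields below are assumptions about what a public
# ad repository entry could disclose, not any platform's actual archive schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AdArchiveEntry:
    ad_id: str                      # platform-assigned identifier
    sponsor: str                    # disclosed sponsor, e.g., a campaign or company
    ad_text: str                    # creative content as shown to users
    first_shown: str                # ISO date the ad began running
    last_shown: str                 # ISO date the ad stopped running
    spend_range_usd: str            # platforms typically report ranges, e.g., "1000-4999"
    impressions_range: str          # e.g., "10000-49999"
    targeting_parameters: List[str] = field(default_factory=list)  # main targeting criteria

# Hypothetical entry a researcher or regulator might retrieve from the repository
example = AdArchiveEntry(
    ad_id="ad-0001",
    sponsor="Example Housing Co.",
    ad_text="New apartments available downtown.",
    first_shown="2021-01-05",
    last_shown="2021-01-20",
    spend_range_usd="1000-4999",
    impressions_range="10000-49999",
    targeting_parameters=["age 25-54", "location: metro area"],
)
```

A researcher could then aggregate entries like this across sponsors or targeting parameters to study how particular categories of ads are bought and delivered.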

Audits

To improve trust, outside audits by independent companies should verify the accuracy and completeness of the reports on the operation of content moderation programs and the operation of algorithms. Audits differ from independent research in that the platform hires an independent auditor to verify its own compliance with outside requirements.

However, it is crucial for regulators to tie each audit requirement to a particular purpose and to the disclosure of specific information. Audits of fairness, for instance, should describe which outcomes are subject to a fairness requirement and disclose disparate impact information for those outcomes. Platform ads, for example, should not discriminate against protected classes in housing or employment.
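To illustrate the kind of disparate impact information such an audit might disclose, the following sketch computes a simple delivery-rate ratio of the sort used under the familiar four-fifths rule of thumb from employment law. The groups and counts are hypothetical; a real audit would use the outcome categories and protected classes the regulator defines.

```python
# Hypothetical illustration: computing a disparate impact ratio for ad delivery.
# The groups and counts are invented for the example.

def delivery_rate(shown: int, eligible: int) -> float:
    """Share of eligible users in a group who were actually shown the ad."""
    return shown / eligible

# Hypothetical ad-delivery counts for a housing ad
group_a = delivery_rate(shown=4_000, eligible=10_000)   # 0.40
group_b = delivery_rate(shown=2_400, eligible=10_000)   # 0.24

# Ratio of the lower delivery rate to the higher one. Under the four-fifths
# rule of thumb, a ratio below 0.8 flags possible disparate impact.
impact_ratio = min(group_a, group_b) / max(group_a, group_b)
print(f"Disparate impact ratio: {impact_ratio:.2f}")     # 0.60 -> would warrant scrutiny
```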

The DSA gets at this need for specificity by requiring outside audits to assess the systemic risks that might affect a range of public policy objectives, including “safeguarding public order, protecting privacy and fighting fraudulent and deceptive commercial practices.”

ALIGN GOVERNMENT EFFORTS

Responsibility for enforcing transparency rules should be assigned to a single government agency that would also administer a broad regulatory program. The agency would supervise platform due process obligations, public reports, and audits, and would vet researchers and auditing firms. It would also determine which data must be made available for audits and research, and in what format. It is impossible to specify all the details of data disclosure in advance, but they can be the subject of an ongoing dialogue among regulators, platforms, researchers, and auditors.

A social media regulator should have additional regulatory tasks beyond content moderation, including privacy and the promotion of competition. These goals interact and sometimes conflict. So, a single regulator would be the best place to house the different responsibilities.

Other regulatory responsibilities are very specialized and might need to remain with different specialist regulators. For example, the U.S. Department of Housing and Urban Development brought a discrimination case against Facebook. It might be best to leave the enforcement of housing discrimination laws with such specialized agencies and require only close collaboration with the social media regulator.

HOW TO PREVENT ABUSE?

A social media regulator would have the power to manipulate the oversight system to favor the interests of the political party that happens to be in power. How should this power be constrained?

One safeguard is to ensure that the supervisory agency is independent of the rest of the government. European data protection authorities, for instance, are independent in this way, as are, to some degree, U.S. independent agencies such as the Federal Trade Commission. This independence would prevent the agency from simply following the orders of the current administration. In addition, its regulatory decisions would have to be subject to normal notice-and-comment rules and judicial review.

The second safeguard is to prevent agencies from second-guessing the content decisions of social media companies. Banking regulators do not make loans, and social media regulators should not make content decisions. The DSA requires an out-of-court dispute resolution system in which the regulator vets and approves the settlement body but does not make the decision itself. I suggest an industry body such as the one that has developed under FINRA in the United States, which separates the industry regulatory body from the dispute resolution system.

Transparency is not always a good thing, but in the case of social media platforms it is an idea whose time has come. It is not just nice to have but serves concrete, tangible public policy interests. It is an essential part of a comprehensive regulatory scheme for social media companies.
