Throughout American history, Americans have insisted that free speech protections are essential to the defence of democracy. But as the cyber-extremist ecosystem spreads across social media, free speech claims also protect actors who threaten democratic civil society. Just in the past two years, social media platforms have been used to organise seditious conspiracies, promote white supremacist ideology, and spread disinformation that weakens civil society and national security.

Given the profound challenges posed by social media, corrective measures need to go beyond “de-platforming” bad apples. But how can the United States make such structural changes without undermining the democratic tradition of free speech?

The main policy proposals fall into three categories: regulation by a federal agency, liability for platform content in the civil courts, and data transparency and reporting requirements.

None of these options is easy. All would require new legislation capable of withstanding Supreme Court scrutiny. Without some form of government intervention, however, social media companies are unlikely to regulate themselves effectively.

Option one – regulation – could take the form of Congress enacting a new law that delegates authority over social media to an agency within the executive branch. Social media might seem to fall naturally under the purview of the Federal Communications Commission (FCC). However, the U.S. Supreme Court’s decision in Reno v. ACLU (1997) made clear, among other things, that Internet companies are not broadcasters. Internet companies are also exempt from the provisions of the Telecommunications Act of 1996 under which the FCC may regulate extremist content.

Alternatively, regulatory authority could be granted to the Federal Trade Commission’s Bureau of Consumer Protection (BCP). The logic here is that social media is a consumer product and might fall under the BCP’s jurisdiction over product safety. However, that jurisdiction may be too narrow for the issue at hand. The need to regulate social media extends beyond physical safety, financial abuse, and even mental health to the protection of civil society and national security. If Congress wishes to oversee social media through a federal agency, the most direct but politically complicated route may be to create an entirely new agency under a new statute.

A second frequently discussed option would be to amend Section 230 of the Communications Decency Act (CDA) so that social media companies could be held liable in court for damages caused by content on their platforms. Proponents argue that this would push the industry toward stronger self-regulation – either proactively, to avoid the risk of litigation, or reactively, in response to lawsuits and judicial decisions.

But this option would run up against another aspect of the Reno v. ACLU decision, which held that Internet companies are fundamentally different from traditional publishers. Because Internet companies did not pick and choose what users posted on their pages or typed in their chat rooms (think early blogs and AOL), the court reasoned, they are not legally liable for that content in the way The New York Times or The Washington Post is for each article it decides to publish (or not).

Perhaps it’s time to revisit this. Today’s Internet is no longer the Internet of the 1990s. Individual users still create content on social media, but today the companies’ algorithms largely determine what is and isn’t amplified on their platforms. At what point should the companies be held accountable? Individuals can sue traditional media companies for damages over defamatory content they publish. Should social media platforms be any less responsible for the content they promote and make viral?

Finally, Congress could pass legislation requiring social media platforms to provide data to third-party researchers and evaluators, thereby mandating greater transparency. In theory, scrutiny by independent researchers might push social media companies to do a better job of curbing the spread of malicious information. One specific bill is the Platform Accountability and Transparency Act (PATA), introduced in 2021 by Senators Chris Coons (Democrat, Delaware), Rob Portman (Republican, Ohio), and Amy Klobuchar (Democrat, Minnesota). PATA would require the National Science Foundation to establish a review process for approving social media researchers, who would have to be affiliated with an academic institution. Once approved, those researchers would gain access to de-identified, aggregated data from social media companies with more than 50 million unique users per month. Companies that failed to comply would lose their protection under Section 230 of the CDA and could be held liable as publishers; PATA would impose no new requirements on platforms with fewer than 50 million users.

While perhaps useful as a first step, PATA by itself does not appear sufficient to reduce misinformation, cyber-extremism, and other threats to civil society. As a policy tool, transparency ultimately depends on companies’ willingness to self-regulate in order to avoid embarrassment, comply with ethical norms, or serve the public interest. That has not always worked in other contexts, such as with financial institutions.

However, these types of transparency requirements could complement other policy options. It would be useful to ensure that social media data is discoverable by federal regulators or by private parties in civil litigation. Just as banks are required to report currency transactions over $10,000, social media companies could be required to report weekly on the content most amplified by their algorithms and to provide the underlying raw data.

Those opposed to regulating social media companies make a variety of arguments: it could hamper free speech; it would place an undue burden on small providers; it would incentivise either unnecessary, aggressive content removal or no removal at all; and repealing Section 230 could cost the U.S. economy revenue and jobs. Many of these objections have potential rebuttals. For example, regulatory requirements could vary with the size of the platform, or companies could be held accountable for the content promoted by their algorithms but not for everything users post.

An objective analysis of the costs and benefits of these regulatory options is long overdue in the United States. The teenagers who set up Myspace accounts are now middle-aged; social media companies that were once featherweights are now owned by billionaires; and the Internet is now older than the youngest member of Congress. For evidence that the status quo isn’t working and that social media itself isn’t getting better, just look online.
