by Engine

Online Content Moderation in the Hot Seat

Originally Posted On: Engine

TLDR: Amid the coronavirus pandemic and critical ongoing conversations about race-based inequalities and injustices, Americans are increasingly turning to the Internet to learn, share, and stay informed. That’s shining a brighter light on the ways in which Internet platforms handle all kinds of content—including misinformation, violent speech, and alleged infringement from the country’s highest office.

What’s Happening This Week: President Donald Trump is continuing to criticize Internet platforms after Twitter, Facebook, and Instagram disabled a video from his reelection campaign in response to copyright infringement claims. The video featured photos and clips of protests over the killing of George Floyd, the Black man whose death at the hands of a Minneapolis police officer sparked nationwide protests.

President Trump misattributed the video’s removal to the platforms themselves, calling it an “illegal” decision, but Twitter CEO Jack Dorsey clarified that the video was removed because Twitter received a DMCA complaint from a copyright holder. A spokesman for Facebook, which also owns Instagram, said that the video was removed from both platforms after the company also received a DMCA complaint. The Digital Millennium Copyright Act (DMCA) is a landmark law that provides Internet platforms, rightsholders, and online users with a framework for addressing allegations of online infringement. The DMCA is particularly crucial to startups, since it shields online firms from being automatically liable when their users are accused of infringement. And pursuant to the DMCA, Twitter and the other platforms were effectively required to remove the president’s campaign video in response to the complaints.

The president’s frustration with Twitter’s handling of copyright infringement claims comes on the heels of his recent criticism of Internet companies’ content moderation practices. While DMCA complaints originate from third parties—and companies are compelled by federal law to respond and remove content after receiving allegations of infringement—President Trump has been especially critical of a separate law that shields online platforms from abusive litigation: Section 230 of the Communications Decency Act, which allows online platforms to host and moderate user content without being held legally liable for that content.

The president signed an executive order last month aimed at dismantling Section 230. While the law has created an environment where anyone can host and create communities online without worrying about ruinous moderation and litigation costs, President Trump and other policymakers have accused platforms of using Section 230 to stifle the speech of online users through their content moderation efforts.

Why it Matters to Startups: No matter the size of the company, Internet platforms rely on the current intermediary liability regime created by the DMCA and Section 230—and moderating users’ content is an inherently difficult, time-consuming, and expensive task. These moderation decisions often become more complicated when users, particularly government officials, turn routine decisions into campaign and political talking points. And policymakers are already pushing competing demands on Internet firms, calling for platforms of all sizes to crack down on violent, extremist, and hateful content, while at the same time criticizing companies that remove content when the removal is perceived to impact one political party over another.

As we’ve seen, platforms that try to be more aggressive in their efforts to combat hate speech, copyright infringement, and misinformation can be criticized for supposedly exhibiting bias against certain users. Conversely, platforms that do too little to combat and remove offensive content can be attacked for allowing hate speech and fake news to proliferate. And an Internet firm that fails to remove allegedly infringing content can face legal repercussions under the DMCA.

But changing the current intermediary liability system would harm both users and Internet firms. Current laws—such as the DMCA and Section 230—protect Internet companies that grapple with these content moderation decisions. Without legal protections like Section 230, a startup would be unwilling to host user-generated content that might result in damaging lawsuits. And without legal protections like the DMCA—which creates a balanced and collaborative framework for rightsholders and platforms to work together to identify and remove instances of infringement—platforms would be in the position of removing even more creative content than they already do, and would still risk being sued for making incorrect decisions about what is and is not infringing.

This intermediary liability system is what allows startups and large Internet firms alike to host user-generated content by shielding them from potentially disastrous lawsuits brought on by the actions of their users. As we’ve pointed out, statutory damages in copyright infringement cases can reach as much as $150,000 per infringing work. And it can already cost a startup as much as $80,000 to fight a meritless suit over its content moderation decisions, even with Section 230 in place. Since the average startup launches with approximately $78,000 in outside funding, the protections afforded by the DMCA and Section 230 ensure that early-stage firms will not go bankrupt before they even have the chance to grow.

There are no technological solutions that allow platforms to identify and remove all problematic content, nor are there any effective filtering tools for correctly identifying and removing all instances of copyright infringement. As the debate over the removal of the Trump campaign video highlights, these decisions are often difficult even for teams of people to make. If companies had to use mandatory filtering technology to flag and remove infringing content, the Trump campaign’s video—and, likely, any mash-up or montage video—would be automatically removed. Even if this type of sophisticated filtering technology existed (it does not), there are questions that machines cannot answer, such as whether content is protected by fair use. Mandating the use of this type of software, or relying too heavily on technology to handle all content moderation, would mean more improper takedowns.

The current intermediary liability system established by the DMCA and by Section 230 is working for startups and Internet firms. Any changes to this system, however small, would disproportionately impact early-stage startups and would harm the Internet ecosystem. Policymakers must ensure that this system—which protects user speech, promotes competition, and encourages innovation—continues to work for companies of all sizes.

On the Horizon.

  • The Senate Small Business Committee is holding a hearing tomorrow at 10 a.m. to discuss the implementation of Title I of the CARES Act. The hearing will feature Treasury Secretary Steven Mnuchin and U.S. Small Business Administration Administrator Jovita Carranza.

  • The House Financial Services Committee is holding a hearing at noon on Thursday to discuss inclusive banking during the COVID-19 pandemic, particularly the use of “FedAccounts and Digital Tools to Improve Delivery of Stimulus Payments.”
