
Senator Mark Warner Lays Out Ideas For Regulating Internet Platforms

by Mike Masnick from Techdirt on (#3VY7B)

For over a year now, Senator Mark Warner has been among the most vocal in saying that it's looking like Congress may need to regulate internet platforms. So it came as little surprise on Monday when he released a draft white paper listing out "potential policy proposals for [the] regulation of social media and technology firms." Unlike much of what comes out of Congress, it does appear that whoever put together this paper spent a fair bit of time thinking through a wide variety of ideas, recognizing that every option has potential consequences -- both positive and negative. That is, while there's a lot in the paper I don't agree with, it is (mostly) not written with the hysterical moral panic found in debates such as FOSTA/SESTA.

The paper lays out three major issues that it hopes to deal with:

  1. Disinformation that undermines trust in our institutions, democracy, free press, and markets.
  2. Consumer protection in the digital age.
  3. Antitrust issues around large platforms and the impact they may have on competition and innovation.

All of these are issues worth discussing and thinking about carefully, though I fear that bad policy-making around any of them could actually serve to make other problems even worse. Indeed, it seems that most ideas around solving the first problem might create problems for the other two. Or solving the third problem could create problems for the first one. And so on. That is not to say that we should throw up our hands and automatically say "do nothing." But, we should tread carefully, because there are also an awful lot of special interests (a la FOSTA, and Articles 11 and 13 in the EU) who are looking at any regulation of the internet as an opportunity to remake the internet in a way that brings back gatekeeper power.

On a related note, we should also think carefully about how much of a problem each of the three items listed above actually is. I know that there are good reasons to be concerned about all three, and there are clear examples of how each one is a problem. But just how big those problems are, and whether or not that will remain the case, is important to examine. Mike Godwin has been writing an important series for us over the last few months (part 1, part 2 and part 3) which makes a compelling case that many of the problems everyone is focused on may be the result of a bit of a moral panic: overreacting to a problem that is smaller than it appears.

We'll likely analyze the various policy proposals in the white paper over time, but let's focus in on the big one that everyone is talking about: the idea of opening up Section 230 again.

Make platforms liable for state-law torts (defamation, false light, public disclosure of private facts) for failure to take down deep fake or other manipulated audio/video content -- Due to Section 230 of the Communications Decency Act, internet intermediaries like social media platforms are immunized from state tort and criminal liability. However, the rise of technology like DeepFakes -- sophisticated image and audio tools that can generate fake audio or video files falsely depicting someone saying or doing something -- is poised to usher in an unprecedented wave of false and defamatory content, with state law-based torts (dignitary torts) potentially offering the only effective redress to victims. Dignitary torts such as defamation, invasion of privacy, false light, and public disclosure of private facts represent key mechanisms for victims to enjoin and deter sharing of this kind of content.

Currently the onus is on victims to exhaustively search for, and report, this content to platforms who frequently take months to respond and who are under no obligation thereafter to proactively prevent the same content from being re-uploaded in the future. Many victims describe a "whack-a-mole" situation. Even if a victim has successfully secured a judgment against the user who created the offending content, the content in question in many cases will be re-uploaded by other users. In economic terms, platforms represent "least-cost avoiders" of these harms; they are in the best place to identify and prevent this kind of content from being propagated on their platforms. Thus, a revision to Section 230 could provide the ability for users who have successfully proved that sharing of particular content by another user constituted a dignitary tort to give notice of this judgement to a platform; with this notice, platforms would be liable in instances where they did not prevent the content in question from being re-uploaded in the future -- a process made possible by existing perceptual hashing technology (e.g. the technology they use to identify and automatically take down child pornography). Any effort on this front would need to address the challenge of distinguishing true DeepFakes aimed at spreading disinformation from satire or other legitimate forms of entertainment and parody.

So this seems very carefully worded and structured. Specifically, it would appear to require first a judicial ruling on the legality of the content itself, and then would require platforms to avoid having that content re-uploaded, or face liability if it were. The good part of this proposal is the requirement that the content go through a full legal adjudication before a takedown would actually happen.

That said, there are some serious concerns about this. First of all, as we've documented many times here on Techdirt, there have been many, many examples of sketchy lawsuits filed solely to get a ruling on the books in order to take down perfectly legitimate content. If you don't remember the details, there were a few different variants on this, but the standard one was to file a John Doe lawsuit, then (almost immediately) claim to have identified the "John Doe," who admits to everything and agrees to a "settlement" admitting defamation. The "plaintiff" then sends this to the platforms as "proof" that the content should be taken down. If Warner's proposal goes through as is, you could see how that trick could become a lot more common, along with a series of similar ones. Separately, it could increase the number of sketchy and problematic defamation lawsuits filed in the hopes of getting content deleted.

One would hope that if Warner did push down this road, he would only do so in combination with a very strong federal anti-SLAPP law that would help deal with the inevitable flood of questionable defamation lawsuits that would come with it.

To his credit, Warner's white paper acknowledges at least some of the concerns that would come with this proposal:

Reforms to Section 230 are bound to elicit vigorous opposition, including from digital liberties groups and online technology providers. Opponents of revisions to Section 230 have claimed that the threat of liability will encourage online service providers to err on the side of content takedown, even in non-meritorious instances. Attempting to distinguish between true disinformation and legitimate satire could prove difficult. However, the requirement that plaintiffs successfully obtain court judgements that the content in question constitutes a dignitary tort -- which provides significantly more process than something like the Digital Millennium Copyright Act (DMCA) notice and takedown regime for copyright-infringing works -- may limit the potential for frivolous or adversarial reporting. Further, courts already must make distinctions between satire and defamation/libel.

This is all true, but it does not take into account how these bogus defamation cases may come into play. It also fails to recognize that some of this stuff is extremely context specific. The paper points to hashing technology like that used to spot child pornography. But such content involves strict liability: there are no circumstances under which it is considered legal. Broader speech is not like that. As the paper acknowledges in discussing how to determine whether or not a "deepfake" is satire, much of this is likely to be context specific. And so, even if certain content may represent a tort in one context, it might not in others. Yet under this hashing proposal, the content would be barred in all contexts.
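To make that concern concrete, here is a minimal sketch of the kind of perceptual hashing the paper gestures at, shown for a single image frame for simplicity. The "average hash" below is a real, if very simple, technique (production systems such as PhotoDNA are far more robust), and the blocklist workflow and function names are purely illustrative assumptions, not any platform's actual implementation. The key point is that the hash sees only the pixels: the same content matches the blocklist whether it is posted as disinformation, quoted in a news report debunking it, or included in a research dataset.

```python
# Minimal illustrative sketch of perceptual hashing plus a blocklist check.
# Assumes Pillow is installed; average_hash, register_judgment and is_blocked
# are hypothetical names, not any platform's real API.
from PIL import Image

def average_hash(path, hash_size=8):
    """Shrink the image, convert to grayscale, then threshold each pixel on the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return int("".join("1" if p > mean else "0" for p in pixels), 2)

def hamming(a, b):
    """Count differing bits between two hashes; a small distance means near-duplicate."""
    return bin(a ^ b).count("1")

# Hashes of content a court has already found tortious (hypothetical store).
adjudicated_hashes = set()

def register_judgment(path):
    """Record the hash of adjudicated content after the platform receives notice."""
    adjudicated_hashes.add(average_hash(path))

def is_blocked(upload_path, threshold=5):
    """Flag an upload that is a near-duplicate of previously adjudicated content,
    regardless of the caption, commentary or context attached to it."""
    h = average_hash(upload_path)
    return any(hamming(h, known) <= threshold for known in adjudicated_hashes)
```

The distance threshold is what lets a match survive re-encoding or minor edits, but nothing in that check can distinguish a malicious re-upload from a clip shared to debunk or report on the fake; that is exactly the context problem described above.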

As a separate concern, this might also make it that much harder to study content like deepfakes in ways that might prove useful in recognizing and identifying faked content.

Again, this paper is not presented in the hysterical manner found in other attempts to regulate internet platforms, but it also does very little beyond a perfunctory "digital liberties groups might not like it" to explore the potential harms, risks and downsides of this kind of approach. One hopes that if Warner and others continue to pursue such a regulatory path, much more caution will go into the process.


