In late 2016, we wrote about the positively silly case that lawyer Harry Jordan filed on behalf of his client, Dawn Bennett. She sued Google because a guy she had once hired to do some search engine optimization work, Scott Pierson, with whom she later had a falling out, wrote a mean blog about her and her company. As we noted, Bennett did not sue Pierson. Instead, she and Harry Jordan went the Steve Dallas lawsuit route of filing against some tangential third-party company, because that company is big and has lots of money. In this case, that meant suing Google, because Pierson's blog was hosted by Google.
As we noted, this would be an easy CDA 230 win, because Google is not at all liable for what bloggers using its blog hosting do (we also noted that the lawsuit botched the legal meaning of "defamation" -- which is generally not a good thing to do in a defamation lawsuit). And thus it was of little surprise to see the lawsuit dismissed last summer. It was an easy ruling to make given the status of CDA 230 (which, yes, is now under threat). But Bennett appealed. And... the result of the appeal is exactly the same as in the district court. Case dismissed, quick and easy (in just 10 pages), because CDA 230 makes it obvious that Google is not liable.
Still, as law professor Eric Goldman notes in his post about this ruling, the DC Circuit makes some useful statements about CDA 230 and how it works.
Bennett argues that by establishing and enforcing its Blogger Content Policy, Google is influencing— and thus creating—the content it publishes. This argument ignores the core of CDA immunity, that is, “the very essence of publishing is making the decision whether to print or retract a given piece of content.” Klayman, 753 F.3d at 1359. In other words, there is a sharp dividing line between input and output in the CDA context. Here, the input is the content of Pierson’s negative blog about Bennett’s business; that blog was created exclusively by Pierson. Google’s role was strictly one of output control; it had the choice of leaving Pierson’s post on its website or retracting it. It did not edit Pierson’s post nor did it dictate what Pierson should write. Because Google’s choice was limited to a “yes” or “no” decision whether to remove the post, its action constituted “the very essence of publishing.”
I think it's also worth highlighting another point that the court makes -- one that is frequently ignored or misunderstood by people who are now attacking CDA 230 (including members of Congress). CDA 230 is designed to let sites monitor and make moderation decisions without those decisions creating liability. In other words, it actually creates a scenario where platforms are more likely to monitor, rather than putting their heads in the sand to avoid "knowing" anything. Indeed, the court uses the "heads in the sand" language:
The intent of the CDA is thus to promote rather than chill internet speech.... By the same token, however, the CDA “encourage[s] service providers to self-regulate the dissemination of offensive material over their services.”... In that respect, the CDA corrected the trajectory of earlier state court decisions that had held computer service providers liable when they removed some—but not all—offensive material from their websites.... Put differently, section 230 incentivized companies to neither restrict content nor bury their heads in the sand in order to avoid liability.
This is such an important point -- and it's good to have it clearly stated in a court ruling. One hopes it still matters after Congress is done mucking with CDA 230.