
How does "Anderson v TikTok/ByteDance" relate to UpTrust?

(continued from the UpTrust Slack)

Links:

  • https://www.thebignewsletter.com/p/judges-rule-big-techs-free-ride-on
  • Hacker News discussion

This stuff was hard for me to make sense of. I think writing about it here has helped.

The Moody v NetChoice case mentioned in The Big Newsletter is about NetChoice (Facebook, Google, etc.) reacting to recent laws passed by Florida and Texas. NetChoice claims that content moderation is their right under the US Constitution’s First Amendment (https://www.reuters.com/technology/us-supreme-court-weigh-state-laws-constraining-social-media-companies-2023-09-29/), and that it’s therefore unconstitutional to pass a law restricting a platform’s ability to moderate content. Appealing to the First Amendment seems weird, because in my mind it’s associated with freedom of speech, and content moderation doesn’t seem like a speech-like thing. At the same time, it seems reasonable in general for businesses to have some control over the conduct of their patrons, both off- and online. The court opinion indicates support for this intuition.

The Florida law calls for consistent censorship standards on a platform, an opt-out from recommendation systems in favour of objective sorting methods, equal opportunity for political candidates and news sources, disclosure of censorship methods and events, and offboarding for censored users. The Texas law prohibits social media companies from censoring users based on their viewpoint (while allowing removal of illegal content), and requires disclosure of how the platform’s recommendation and moderation systems work, as well as reasoning and an appeals process for censored users.

Next is Anderson v TikTok/ByteDance, the main subject of The Big Newsletter article. The case was appealed to the US Court of Appeals for the Third Circuit after being dismissed by the District Court, which held that Section 230 of the Communications Decency Act immunised TikTok. The actual appeals court opinion was surprisingly easy to follow (given that I have no background in law), and it felt like a good overview of the history of this kind of problem.

Here’s my understanding of what they wrote. The ideas behind Section 230 are:

  • A company isn’t considered the author of some content merely due to having hosted it.

    For example, if someone uploads a copyrighted work to a file-sharing site, then only the uploader has violated copyright, not the file-sharing company.

    Here’s a case where I think this played out reasonably: someone tried to sue some bookstores (Amazon, Barnes & Noble, etc.) for defamation because the bookstores had defamatory content as part of the book’s description on their store pages. The defamation claim against the bookstores was dismissed because they were not the ones making the claim; it was copy provided to them by the book’s publisher. Only the publisher and the author remained liable.

  • A company with some kind of content moderation policy or screening process isn’t thereby liable for all the content it hosts.

    For example, a family-friendly bulletin board that proactively excludes obscene submissions shouldn’t be liable for defamatory content on the grounds that it has some screening process for submissions.

TikTok tried to argue that its recommendation system is merely an extension of hosting videos, so it shouldn’t be liable for the harm caused by those videos.
The court rejected this argument:

I would affirm the District Court’s judgment as it relates to any of Anderson’s claims that seek to hold TikTok liable for the Blackout Challenge videos’ mere existence on TikTok’s platform. But I would reverse the District Court’s judgment as it relates to any of Anderson’s claims that seek to hold TikTok liable for its knowing distribution and targeted recommendation of the Blackout Challenge videos.

I think it’s reasonable to hold TikTok accountable for the impact of its highly personalised, virality-biased recommendation system.

Seeing all of this, I’m optimistic for how UpTrust could fit into the legal landscape.
Giving users agency over their content curation means that UpTrust doesn’t need to censor based on viewpoint.
I think transparency regarding recommendation systems and moderation decisions fits with our collective values, and to me it seems like a good thing regardless of whether it’s legislated.
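
To make the “user agency” and “transparency” ideas a bit more concrete for myself, here’s a purely hypothetical sketch in TypeScript. None of these names (FeedPreferences, ModerationRecord, orderFeed) exist anywhere in UpTrust; it’s just one way that user-controlled feed ordering and a disclosed moderation decision could be represented.

```typescript
// Hypothetical sketch only — not real UpTrust code.

// The user, not the platform, picks how their feed is ordered.
type FeedMode = "chronological" | "followed-only" | "recommended";

interface FeedPreferences {
  mode: FeedMode;
  mutedTopics: string[]; // topics the user has chosen to hide for themselves
}

// A disclosed moderation decision, in the spirit of the transparency and
// appeals requirements described in the Florida/Texas laws above.
interface ModerationRecord {
  contentId: string;
  action: "removed" | "demoted" | "labelled";
  ruleCited: string;   // which published rule the decision relied on
  explanation: string; // human-readable reasoning shown to the author
  appealUrl: string;   // where the affected user can contest the decision
  decidedAt: Date;
}

interface Post {
  id: string;
  authorId: string;
  createdAt: Date;
  topics: string[];
  recommendationScore: number; // only consulted in "recommended" mode
}

// Order a feed according to the user's own preferences rather than a single
// platform-chosen ranking.
function orderFeed(
  posts: Post[],
  prefs: FeedPreferences,
  followed: Set<string>
): Post[] {
  // Drop anything in a topic the user has muted for themselves.
  let visible = posts.filter(
    (p) => !p.topics.some((t) => prefs.mutedTopics.includes(t))
  );
  if (prefs.mode === "followed-only") {
    visible = visible.filter((p) => followed.has(p.authorId));
  }
  const byTime = (a: Post, b: Post) =>
    b.createdAt.getTime() - a.createdAt.getTime();
  const byScore = (a: Post, b: Post) =>
    b.recommendationScore - a.recommendationScore;
  return [...visible].sort(prefs.mode === "recommended" ? byScore : byTime);
}

// Example: a user who opts out of algorithmic recommendations entirely.
const prefs: FeedPreferences = { mode: "chronological", mutedTopics: [] };
```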

And when it comes to the potential for harm facilitated by the platform, like a recommendation system spreading a self-asphyxiation meme that kids copy and die from, I think it’s reasonable for such a risk to exist. We don’t need to be immune from liability in order to build the thing, and perhaps the risk will help us think about how to make a safer system. Like, if UpTrust did well because 80% of the activity was a child-abuse ring, then it’s kind of okay if the company got nuked or had to make significant reparations. So let’s do our best to make sure we create something that’s good for people, and be willing to kill it if we find out it’s not.
