Who decides what counts as misinformation?: Platform governance
Nine minutes
January 6, 2021. Eleven people on shift in the trust-and-safety operations center by morning. Forty-three by noon. A post calling for the execution of the vice president sat in the review pipeline for nine minutes before a twenty-six-year-old content moderator in Austin removed it. During those nine minutes, it was shared 11,000 times.
That is the job. Two billion users. Hundreds of languages. Content policies applied by moderators in Manila to posts written in Burmese about a genocide the moderators learned about from a briefing document that morning. The twenty-six-year-old makes $19 an hour. She will be diagnosed with PTSD within eighteen months.
The lab-leak suppression was our failure. The fact-checking pipeline deferred to institutional consensus. When the letter in The Lancet declared the hypothesis a conspiracy theory, our policy team treated it as authoritative without checking the signatories' conflicts of interest. We trusted the institution, and the institution was compromised.
Speed versus trust
The open-process camp says open the process to everyone. Wikipedia works for reference knowledge. It does not work at the speed of a viral post during a school shooting. The free-speech absolutists say stop moderating. We ran that experiment. The unmoderated marketplace produced the Rohingya genocide in Myanmar, where fabricated images, shared unchecked, preceded the displacement of 700,000 people.
The media-lean problem is real. Our policy teams are drawn from the same professional pipeline that produces the 5:1 ratio. We know this. We have not fixed it.
Where we concede ground: The lab-leak episode was the system working as designed, and the design was wrong.
What would change our mind: An unmoderated major platform showing no increase in offline harm over five years.
Read the full synthesis: Who decides what counts as misinformation?