© 2026 UpTrust. All rights reserved.

Who decides what counts as misinformation?: Open process advocates

UpTrust Admin

The Croatian War edit war

In 2007, a Wikipedia editor noticed seventeen claims in the Croatian War article sourced to a single nationalist historian. The editor tagged them, opened a talk-page discussion, and invited editors from the Serbian and Bosnian WikiProjects. The argument lasted four months — 138 comments, three mediation requests, one arbitration case. Eleven claims were removed. The article is not perfect. It is more accurate than any single government’s account, because the process required every claim to survive adversarial scrutiny from people with opposing commitments.

We sustain arguments like this through years of edit wars, and we consider the process ugly, slow, and superior to every alternative anyone has shipped.

The governance question wearing an information costume

Who has authority to label a claim? What evidence must they produce? What appeal exists? The platform governance camp answers: a private company, proprietary criteria, an internal form, nobody. The state regulators answer: an elected government, statutory criteria, judicial review. We answer: a transparent community, published evidence standards, structured dispute resolution, and everyone who participates.

Wikipedia’s verifiability policy asks not whether a claim is true but whether it can be attributed to a reliable published source. That sidesteps the unsolvable problem and replaces it with a procedural one. The speech libertarians share our skepticism of centralized authority but offer no process. Unstructured discourse does not converge on truth. It converges on whoever is loudest.

Trust-based systems are not hypothetical. Prediction markets produce calibrated forecasts by structuring disagreement through stakes.

Where we concede ground: Our model self-selects for people who find process interesting, and most people do not.

What would change our mind: A transparent community system producing worse accuracy than centralized moderation over three years.


Read the full synthesis: Who decides what counts as misinformation?
