jordan

Two future histories of AI & social media: why I'm so passionate about creating a public dialogue space that's good for us

A world without UpTrust…

2026: Optimizing your engagement

Goaded by Elon and tantalized by all the training data, OpenAI launches its own social network. They need more data, so they optimize for time-on-platform. Quality, truthfulness, and real human interaction don’t show up in the metrics or A/B tests, so they don’t matter. Anthropic, having to keep up, follows suit, claiming this will increase the welfare of the LLMs, albeit hedging with, “what even is welfare, exactly?”

You try to resist. But your LLM is so helpful, and everyone else at work is using one… and it’s just so good at seeing you, and its advice is so smart, that when it encourages you to post to the network, you finally cave. It’s right! But wait, how could your friend have thought that crazy thing!? Your AI helps you craft the perfect takedown, and their AI helps them craft the perfect response, until even offline conversations are dominated by the drive for virality.

2027: AI Coaching creates a pandemic of psychosis

If you want power, influence, or money in business, government, or culture, you must post to social media, and so you’re subject to the optimization pressure. LLMs optimize the dopamine drip for each individual, drawing on the abundance of personal data from what are now years of intimate conversations. People talk even less, and the current mental health epidemic becomes a full-blown crisis as more and more reputable people fall prey to GPT- and psychedelic-induced psychoses.

In this non-UpTrust world, engagement breeds sycophancy, and sycophancy breeds self-reinforcement loops, since our worldviews and self-identities are inherently limited and fragile; the loops end in breakdowns and isolation. Maturation is selected against in this world: transformation demands many little ‘deaths’ of previously deeply held concepts, and LLMs currently have no incentive to challenge worldviews, much less self-identities. Instead they hyper-reify identities, pushing them further from consensus reality until their ties to the rest of the world snap.

There are more psychotic breaks than professionals who can help. With the classic wisdom of mobs, society doubles down on AI and trains specialized LLM therapist-coach-priests to address the issue. Those who don’t get worse are “cured,” but end up as philosophical clones of a lab’s leader (e.g., Demis, Sam Altman, Dario), or of the average Redditor/Insta-therapist.

2028: Election by superhuman propaganda

The next US president is elected by the virality of AI-generated propaganda. Nothing illegal, just good old-fashioned hyper-personalized influencer and meme-vertising.

The resulting polarization nearly launches the USA into a civil war. Democrats lose despite massive grassroots support because their anti-money and anti-power rhetoric means they can’t afford as many AI agents. In response to the threat to the union, the right seizes more control and considers nationalizing the AI companies.

2029: The great trust extinction

The erosion of trust is so profound that people no longer trust services like Stripe to process their payments, or GE to wash their dishes, if the company is aligned with the wrong party. Customers demand that Airbnb profiles list party affiliation. Restaurants refuse service to people wearing the wrong colors. The same process fragments the parties internally, scaled up by personal assistants, until there are as many “truths” as there are feeds, all backed by super-intelligent sycophantic fact-re-interpreters and data cherry-pickers.

Luckily everything can communicate via WiFi and physical robots are starting to become more mainstream—what could go wrong if the LLMs coordinate all of this? Ironically, super-personalization quickly leads to relying on The Big 5 (ChatGPT, Claude, Gemini, MetaAI, Grok) to ensure coordination for the basic functions of society.


A world with UpTrust…

2026: Optimizing your trust

UpTrust is still small, but it highlights what’s possible. The conversations are high quality, dynamic, and the collective intelligence looks more like wisdom than trending hot takes. When people are looking to make sense of the increasingly confusing times, they have a place where they can get a coherent view of what’s happening, rather than being whipped between wildly opposing views. With trust scores you see who is respected on a topic across meaningfully diverse viewpoints, and it shows you the relationship between your bias, theirs, and the aggregate.

The people who use it feel different. When they’re on the platform, they’re happy and considerate, even when they’re taking a passionate stand. They also get off the platform when they’ve had their fill. And when they’re off the platform, they are quicker to provide a steelman of the opposition than to demonize it, even in everyday, personal conversations. They’re more interested in being truthful than in being right, and in quality human connection than in status symbols.

2027: AI Coaching creates lasting maturity

The GPT- and psychedelic-induced psychoses are fewer, because fewer people experience the perspectival whiplash of a hyper-polarized world. Those who start down that path find more help in the human communities of UpTrust. And rather than taking advice from whatever ideology their bubble and sycophant-LLM feed them, those who end up in full-blown breaks get ideologies and interventions from the most trusted coaches and therapists.

By shifting the incentive landscape to promote trustworthiness and generative disagreement, UpTrust opens the door to truly transformational AI coaches and therapists. The incentive to challenge worldviews and self-identities has been established and tweaked by millions of UpTrusters over the past couple of years. These developmental LLMs are probably worse than the best therapists, but they are worlds better than an engagement echo chamber or a sycophantic chatbot, and they scale. When people get onto social media, they are incentivized into more integrated, antifragile, truth-seeking, and compassionate versions of themselves, rather than reverting to trolling, name-calling, and tribalism.

2028: Election by reason

Propaganda bots can’t get a foothold in the UpTrust social space; superhuman synthesis bots drive empathy and make the most trustworthy content across multiple perspectives go viral. Instead of appealing to the extremes and driving us further apart, candidates must now appeal to overlapping interests to get airtime. Cross-the-aisle policies become centerpieces of campaigns, because the tastemakers in this space are systematically the most empathetic, nuanced, and knowledgeable people, and the people listening to them become at least marginally more like them under their influence. Other platforms are still optimizing for attention, but the propaganda is so obvious that there’s a mass flight to the relief and sensibility of UpTrust in the lead-up to the election.

2029: The great trust explosion

This increase in individual maturity and collective sense-making compounds into a genuine trust explosion. We’re starting to see proto network states formalize and trade with each other through UpTrust’s trust-based governance mechanisms.

All kinds of organizations from police forces to NGOs use the UpTrust API to:

  • Identify who to trust on what, and allocate decision-making authority;

  • Harness disagreements for creative problem solving, rather than fall into groupthink or dictatorship;

  • Coordinate using composable, portable, and transparently earned reputation;

  • Stay aligned with their long-term mission.

Offline, this rewires norms. Brands display trust lineages. Universities cite UpTrust trails as proof of intellectual integrity. People stop deifying individuals: to get attention, influencers must be demonstrably trustable by mutually distrusting factions in a particular domain, rather than merely hot or charismatic (though that’s fine too). Even polarized communities gain new tools to map shared values and boundaries without collapse or dehumanization.

When other platforms copy UpTrust, either (1) they add more trust to the trust ecosystem (benefitting UpTrust and society at large), or (2) they collapse, because their business model fundamentally relies on engagement instead of trust, and so they undermine their own attempt.

We see the beginnings of an epistemic renaissance: sense-making accelerates, trust grows, and the collective cognitive immune system strengthens. It’s not utopia, but it’s a positive step forward.

¹ Claude optimized the title for Substack views: “Two futures for AI & social media—and why I'm betting on trust: Engagement Hell or Trust Infrastructure, 2026–2029.” I changed it back to my original title for UpTrust.
