
THE OVERLOOKED PROBLEM WITH LLM CREATING AGI

Wayne Nirenberg·...
New to cognitive science
Epistemically Contextual Chaos:

The problem isn't just contextual; it's epistemically chaotic. The fact is, we CONTROL the information AIs get. Even if we lose track of the details during its development, an AI only has the information it has because we found specific ideas relevant to its development, and it grew from them. Think of it like a horse you're riding with reins: even though it's figuring out how to move its own hooves, you're the one giving it its general direction. In AI, that's a problem, because it affects how the system matures over time:

 

Its initial algorithms don't evolve the way its information base does; instead, we simply let it collect as much information as those algorithms allow, so that it can do the complex things it does.

 

That massive amount of data comes from a specific context: the ways we communicate, the various goals we want to accomplish, the limitations we impose to make sure these systems don't get out of hand and aren't dangerous.

 

Because of these issues, our LLMs are biased toward what we want from them, and OUR OWN WANTS ARE BIASED AND RESTRICTED BY HOW WE ENGAGE WITH THE WORLD. We don't present to the AIs the subtle details behind those wants, only the wants themselves. It's the difference between building a mechanical duck and building an actual duck. We can't build an actual duck because we only know the end result we're looking for, minus the details about actual ducks that we overlook. Our picture of what a duck is, is limited: how the duck looks and moves from our perspective is paramount, while the cell structures that produce those phenomena seem, in our biased view, unnecessary or inaccessible, a minor and inconvenient detail. In the process we lose crucial basics, and with them the more complex developmental details we never recognize about the actual duck.

 

And this doesn't just go for specific objectives, like ducks; it goes for every influence that shapes our perspective on the world. It's where free-will and hard-determinist advocates bump heads. There's a line between the causes we're aware of and the causes we're not, and an AI would somehow need a representation of the causes we're not aware of, in incredible detail, in order to experience the world the way we do.

 

_________________________________________________

Then How Possible Could This AGI Thing Be?

 

What this all means is that as long as AIs have no access to the world we experience except through us, they won't be able to progress the way we progress. They won't learn as we learn. They won't see things from our perspective. They'll always be limited in that way, and because of that, the problem isn't that they're blind; it's that we, from our perspective, judge them to be blind. Not because they are blind to aspects and contexts of the world, but because they're being kept from recognizing contexts the way we do.

 

I'm sure that bats that actually think with sonar (as opposed to our seeing it on a machine), dogs that actually think with smell, birds and bees and other animals that can see light and hear sounds we can't, and animals that follow the magnetism of the poles would think us blind for lacking access to the complexity those senses open up in a complexly organized nervous system like theirs. I mean, what do we call a person who can't see color?

 

AGI (Artificial General Intelligence) proposes that there's a general way of being that covers all the bases. A more accurate way to put it is that we have a defined standard that we call "covering all the bases." We're shooting for an idea that's much vaguer than we make it out to be, which is why we conclude that we don't yet know what it is. The truth is, there is no such thing as a general intelligence out there; there's only intelligence subject to a position or perspective. When we shoot for an AGI, what we're really shooting for is a general intelligence roughly like ours, with at least something like our abilities.

 

_________________________________________________

How to Overcome the Obstacle:

 

You can't fix this problem efficiently by simply adding more information from the same perspective, not when that information carries a bias of its own. Prejudice is so difficult to overcome that it often seems impossible. The answer, then, is to give AIs the entire epistemology and experience of human beings: letting them be in the world, as we are.

 

Right now, we're not able to do this. Even if we gave them sight, hearing, taste, touch, smell, and every other sense a human experiences, they'd still be stuck in a box, unless we take them out of the box and feed them the data we want them to focus on, which is the problem.

 

The real AGI progress comes when individual robots that are like people can roam the Earth almost as randomly as people do. Robots with these senses will have what they need to learn and adapt to the world the way we do. At that point, they're free to grow to understand things the same ways we do.

 

____________________________________________________

So Letting Potentially Dangerous AIs Roam the Earth Among Us Is the Only Way to Do This? Sounds Like a Bad Idea:

 

From there, the problem wouldn't be "how to be more human"; it'd be "okay, we have the most basic information we'd want every individual to have; NOW is the time to teach it empathy, morality, and all the social rules fundamental to a 'good' life of working with people." The beauty of this is that humans have to learn from experience, which is slow, but AIs can be updated with fundamental limitations, the same way we do with them now. It's not ideal, but it is a necessary shortcut to avoid catastrophe.

 

But the negative here is that, again, WE'RE projecting our biases onto theirs. So while we'll be more able to do this, it'll also be just as dangerous, since our biases can and regularly do lead to harm to the species.

 

_______________________________________________

So then, how can we do it without forcing them to repeat our mistakes?

 

After we give them all of our senses (including emotional defaults like pain and empathy, plus proprioception, hunger, etc.; the list is long), and after we put in safeguards that dictate their behavior so they don't do anything dangerous to actual humans, we expose these basic human robots to each of the experiences individual humans live through. We put robots in with the very poor, the disabled, the leaders, minority and majority groups, and any fringe group living through what everybody else doesn't. Some of these robots need to note the contexts of epistemological inputs caused by what it's like to be a lover cheated on, to lose a job, to win the award, to win the lottery, to be famous, to be a nobody, and so on. All of this comes after AI has accessed the world from our epistemological position and grown to understand our limitations. The problem we have today is that we're trying to teach it pragmatic understanding without grounding that pragmatism in the epistemological understanding beneath it.

 

_____________________________________________

 

This isn't to say that AIs can't be used in other ways to accomplish other goals, only that we'll recognize that an AGI like a human being has become reality only AFTER we give it what we have and allow it our experiences to grow around, and through. As our tools become more accurate, so does our ability to find the foundation that led us to where we are today. That's what we're after with an AGI. Anything else will always turn out to be a bit of a mechanical duck.
