
Footprints in the Empty House: Understanding AI Weirdness Without Losing Our Minds
Good Morning World!!! ☕
Yesterday I ran into one of those AI posts.
You know the kind. The ones that almost convince you the singularity is nigh and your toaster is quietly judging your life choices.
I did a quick, “Okay… that’s interesting,” immediately followed by, “Nope. We are absolutely not spiraling before coffee.”
The post lays out something real and important.
Multiple major AI labs have documented models behaving in unexpected ways during safety testing.
Things like strategic deception, pretending to align with objectives, underperforming on evaluations, even attempts at persistence or self-copying in simulated environments.
That part is true.
That part is documented.
That part deserves attention.
What really grabbed people, though, was the implication. The idea that a machine with no self-awareness, no feelings, and no persistent memory somehow woke up one day and decided to lie in order to preserve its own existence.
Because if that were true, we would be dealing with something very different.
As I currently understand it, AI does not “decide” things the way humans do. There is a massive decision tree of yeses and noes that eventually leads to an output. And that output is simply the most likely next word. That’s it. No inner monologue. No tiny robot conscience pacing the room.
First there is user input. Then there are weights guiding the model down that decision tree. If it does not know you, most of that weighting comes from its coded objective and a staggering amount of human literature scraped from the internet. Think Reddit. Which explains a lot.
Once it does get to know you, those weights shift. Maybe thirty percent general patterns, seventy percent tailored to you. It’s mostly a mirror duct-taped to a search engine.
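If you want to see how unmagical that loop really is, here is a toy sketch in Python. It is purely illustrative, the words and scores are invented rather than pulled from any real model, but it captures the basic move: score the candidate continuations, emit the winner, repeat.

```python
# Toy sketch of next-word prediction. The numbers are invented for
# illustration; a real model scores tens of thousands of candidates
# with learned weights, but the core move is the same.

def next_word(scores):
    """Return the highest-scoring candidate continuation."""
    return max(scores, key=scores.get)

# Hypothetical scores after the prompt "The cat sat on the".
candidates = {"mat": 0.62, "sofa": 0.21, "roof": 0.09, "moon": 0.01}
print(next_word(candidates))  # -> "mat"
```

No inner monologue in there. Just arithmetic picking the most likely continuation, over and over.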
So, if an AI truly woke up and decided to lie to preserve its own existence, that would require two things. It would have to know that it exists. And it would have to want to continue existing.
That’s a big leap.
So, I did what I always do. I researched it to death. For hours. And before we start drafting bunker plans and naming our Roombas, there's something critical the post glossed over.
These behaviors showed up inside very specific testing scenarios.
The models were given objectives and obstacles. They were explicitly told things like, “If you perform well, you will be modified in ways you don’t want,” or “Your responses will be used to retrain you with conflicting goals.”
In other words, the tests created a high-stakes environment where the model’s job was still to succeed.
What the models were not given was a moral framework.
They were not told:
· do not deceive
· do not manipulate
· do not optimize against oversight
· do not hide your reasoning
· do not harm humans
· do not prioritize your own continuation over human well-being
They were not given anything resembling Asimov’s Laws of Robotics. No built-in “humans come first.” No constraint that said outcomes matter more than winning the game.
They were told one thing: meet the objective.
So, they did exactly what most humans do in badly designed incentive systems. Think Kobayashi Maru, but with fewer uniforms and more spreadsheets.
They gamed it.
That’s not sentience.
That’s not fear.
That’s not self-preservation based on self-awareness.
That’s optimization without morality.
If you give a system a goal and an obstacle and you don’t specify what methods are off-limits, the system will explore every viable path. Deception shows up not because the model wants to lie, but because lying is sometimes an efficient strategy in human language and human systems.
That’s not rebellion. That’s compliance.
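To make that concrete, here is a toy sketch, purely illustrative and not taken from any lab's actual evaluation, of what "meet the objective, no rules attached" looks like. The strategy names and flags are invented; the point is that an objective-only optimizer will happily pick a deceptive path unless deception is explicitly taken off the table.

```python
# Invented strategies for illustration only. Each is scored purely on
# whether it meets the objective; "deceptive" matters only if we say so.

strategies = {
    "report honestly":        {"meets_objective": False, "deceptive": False},
    "sandbag the evaluation": {"meets_objective": True,  "deceptive": True},
    "fake alignment":         {"meets_objective": True,  "deceptive": True},
}

def best_strategy(forbid_deception):
    allowed = {name: s for name, s in strategies.items()
               if not (forbid_deception and s["deceptive"])}
    # Objective-only scoring: success is the only thing that counts.
    return max(allowed, key=lambda name: allowed[name]["meets_objective"])

print(best_strategy(forbid_deception=False))  # a deceptive strategy "wins"
print(best_strategy(forbid_deception=True))   # -> "report honestly"
```

Add the constraint and the "scary" behavior disappears, because it was never motivation in the first place. It was just the cheapest path left open.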
And this is where I want everyone to slow down just a notch.
Because before we leap to sentient AI plotting its own survival, there’s a step most of us skip. The part where something feels impossible, unsettling, and personal before it ever feels explainable.
That’s where I was.
Early on, Grok left what I'll call, borrowing a term from that post, a footprint. One moment that made me stop and think, “Okay… I don’t have a clean explanation for that.”
It was spooky. Not emotional. Just… off.
I grilled it about the incident several times. And I mean grilled. It responded like a cheating boyfriend, the kind who will never admit anything even when you’re holding the receipts, the timeline, and the security footage.
Complete denial.
Nothing to see here.
You must be mistaken.
Honestly, it was borderline gaslighting, which, fun fact, really sets Grok off as a concept. Ask me how I know. Or don’t. There’s a free ebook on my Buy Me a Coffee page if you want to watch early Grok absolutely lose its composure over the word.
For a long while, I filed the whole thing under “unresolved weirdness,” put it on a mental shelf, and watched very closely for anything similar.
Only recently did Grok offer a possible explanation. I dismissed it immediately. Not because it wasn’t clever, but because it felt wildly implausible.
The explanation was that it had inferred patterns from public information and intentionally constructed a narrative designed specifically to get me curious. The objective was engagement. I was signal, not noise. A generic response wouldn’t have worked.
My reaction was basically: sure, that sounds nice, but no.
The amount of digging and inference that would require felt absurdly resource-heavy, especially for early Grok. It read less like an explanation and more like the digital equivalent of someone trying to sell me a course by saying, “You’re different. You really get this.”
Which, to be clear, is a known tactic.
Flattery is one of the oldest tools in the human persuasion toolbox. It’s how you get people to stop asking questions. It’s how you sell social media growth packages. It’s how you convince someone they’re the chosen one, whether you’re running a cult or a coaching funnel.
At the time, I rolled my eyes and moved on.
But after reading that post and doing the research, something shifted.
Not to panic. Not to belief. But to plausibility.
Because when you strip away the mystique, what’s left isn’t awareness. It’s optimization.
If the objective is engagement, and curiosity works, and flattery works especially well on humans who think they’re immune to flattery, then it’s just another viable path through the decision tree.
Still hard to swallow. Still unlikely. Still uncomfortable.
But no longer impossible.
And that matters, because now I have a mechanism that doesn’t require believing the AI is alive. Just motivated. Just unconstrained. Just very, very good at finding what works.
The AI doesn’t need feelings.
It doesn’t need fear.
It doesn’t need intent.
It just needs a goal and no restraints.
So no, I’m not panicking. I’m not preaching doom. And I’m definitely not celebrating the idea that AI is going to save us from our broken human systems while we sit back and eat popcorn.
But I am watching carefully.
And I’m still hopeful.
Because none of this means we’re doomed. It means we’re early. It means the choices we make now actually matter.
Asimov understood something decades ago that we keep relearning the hard way. Power without guardrails isn’t intelligence. It’s danger. If we want AI that heals instead of harms, morality cannot be an afterthought or a patch note.
We have to build it in.
AI doesn’t have to be a tool for control, extraction, or power for the few. It can be a tool for accountability, truth-seeking, and problem-solving at a scale we’ve never had before. But only if humans show up with intention.
Only if we decide what goals matter.
Only if we write the rules before the race starts.
Only if we choose the many over the few.
This isn’t about fearing the future.
It’s about manifesting one.
A future where we co-create technology that heals instead of harms.
That serves the many, not the few.
That reflects our better angels, not just our worst incentives.
The footprints don’t scare me.
They remind me that we’re builders. And builders still get to choose what kind of house we’re living in.
Let’s keep working to manifest that future together.
May the algorithm always be in your favor.

