Introduction & TL;DR
Today we’re going to pull back from the AI realm and focus on our own mental models for a bit.
As background, several of the sections here were originally going to appear in previous posts covering the Strawberry saga. I realized there was a larger topic worth discussing, and this is the result.
The idea, “2FA for Reality,” is based on a rhetorical question I had been kicking around as deepfake tech got better with each iteration. 2FA stands for Two-Factor Authentication - or, more broadly, Multifactor Authentication (MFA). If you have ever tried to log in to a website and had to enter a special code sent separately to your phone/email - that’s multifactor authentication.
The question is tongue-in-cheek, but at the limit, it cuts to the core of a problem that has plagued us for millennia - how do I trust that what I’m experiencing is real?
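For anyone who hasn’t peeked under the hood of those special codes, here is a minimal sketch of one common second factor - a time-based one-time password (TOTP, RFC 6238) - using only Python’s standard library. The secret below is illustrative, not from any real system:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of `interval`-second steps since the Unix epoch.
    counter = int(time.time()) // interval
    # HMAC-SHA1 over the big-endian counter (the HOTP core, RFC 4226).
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

SECRET = "JBSWY3DPEHPK3PXP"  # illustrative base32 secret, not a real credential
print(totp(SECRET))  # e.g. "492039" - valid for roughly 30 seconds
```

The detail worth noticing: your phone and the server each derive the same short-lived code independently from a shared secret, so an attacker who compromises one channel (say, your password) still fails the second, independent check. Hold onto that idea of independent channels - it is the whole trick.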
The TL;DR is that it’s getting to be nearly impossible. The techniques for verification dwindle as simulation technologies continue to progress.
But what you can do, if you’re willing, is fill your mental toolbox with multifactor techniques that all but eliminate falling for low-effort BS, and that handle even some advanced BS to a reasonable degree.
Final note before we go on. Many of the people I speak with on sticky topics of deception, particularly deception carried out en masse, simply do not want to know the truth.
If the reality of things is hard to accept, has a dark truth behind it, or would otherwise undermine a particular model or ideology they adhere to, then knowing does them more damage than good.
I struggled with this for a while. Why wouldn’t everyone want to understand reality as honestly as possible? I get it now. It’s a Brave New World. Mental stress sucks. Not everyone is equipped to peer into the Abyss. Pass the Soma.
My strategy has since evolved to simply sharing what I have learned, and letting the content find who it finds. This is one stranger on the internet’s approach to navigating reality; integrate what you like, suggest what works for you, or scroll on by if this particular series isn’t your cup of tea.
Core Concepts
Still with me? Great. Here is the approach. We first need to isolate a few concepts so that the techniques that come later make more sense.
Step Zero: Learn to separate the message from the messenger.
Evaluating The Message
Whether a message is presented to us or sought out in our own endeavors, we will talk about how its content and merit can be isolated and cross-verified, regardless of the messenger’s agenda.
Scrutinizing The Messenger
Most of the techniques we’ll arrive at here fall into the realm of thinking outside the box: using and creating multifactor authentication techniques to ensure, to the best of our ability, that a person or entity on the other end is who they say they are.
We increase the odds of legitimacy by testing different angles, simultaneously if possible, presuming that an imposter wouldn’t have direct access to, or couldn’t quickly fabricate, multiple credentials at once.
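To see why stacking angles works, here is a toy back-of-the-envelope calculation. The pass rates are made-up illustrative numbers, and real-world checks are rarely fully independent, so treat this as the optimistic upper bound on how much stacking helps:

```python
from math import prod

# Illustrative odds that an imposter slips past each individual check.
checks = {
    "voice sounds right":          0.30,
    "knows shared history":        0.20,
    "reachable on a known number": 0.05,
    "answers a live follow-up":    0.10,
}

# If the checks were truly independent, the imposter must pass all of them.
combined = prod(checks.values())
print(f"Chance of passing all {len(checks)} checks: {combined:.4%}")
# -> 0.0300%, versus 5% for the best single check alone
```

The numbers are invented, but the shape of the math is the point: each extra independent factor multiplies the imposter’s burden, which is exactly why we test different angles simultaneously.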
The Perception of Expertise
I want to start with the Messenger side of the coin, and specifically look at rebuilding our understanding of what expertise is, as we are often manipulated by individuals attempting to pass themselves off as authorities or insiders.
Have you ever spoken to someone and found yourself super impressed that they know a lot about a topic? They can rattle off a ton of details off the top of their head and demonstrate deep knowledge. Maybe they have a degree, or work in a particular field. You’d consider them “well trained” or “well educated.”
What about someone who didn’t know much about “Chess” or “Automobiles” a month ago, but dove in deep, maybe reading a book or two on the topic, and now returns knowing a good amount? You might consider them “a quick learner” or “well informed.”
But what if that person comes back just a day after binge-watching a bunch of YouTube explainer videos? We’d (jokingly) call them “a YouTube certified pro.”
And finally, what if that person just - straight up - pulls out their phone, searches up Chess on Wikipedia, and starts reading out loud with an authoritative delivery? Big yikes.
Take a moment to linger on the feeling you just had - likely starting around the third or fourth scenario, when I described the person who just watched a bunch of YouTube videos and came back an “expert” a day later.
We tend to equate slower, more deliberate learning processes (like reading books or attending school) with depth and reliability.
On the other hand, faster, more accessible sources (like Wikipedia or YouTube) often get dismissed as superficial, even if the content is accurate.
This reflects our traditional views on learning, where effort and time invested are proxies for genuine understanding.
Yet somehow, we’ve met people with a Master’s or BA in a field who seemingly forgot everything they learned in school.
We’ve also met people who speak about a topic they just picked up recently but demonstrate consistent and deep understanding, as if they obtained a PhD overnight.
These are aspects we can work with if we dig deeper.
The Social Construct of Expertise
We tend to value certain forms of knowledge acquisition over others - formal education over self-study, for instance - because they fit our cultural narratives about what it means to be an expert.
But these narratives are increasingly challenged by a world where access to information is democratized, and where the speed and manner in which knowledge is integrated depends mostly on the neuroplasticity of the individual, not how long they sat in a classroom or how much they spent on college.
Movies like The Matrix push this to the limit. The audience watches on as heroes request and instantly download knowledge, and are somehow able to put these new skills into immediate practice, with zero seat-time, in dire circumstances where failure would be catastrophic.
The idea that information could somehow be fast-loaded into a human, as long as the bandwidth is sufficient, is a controversial concept. For machines, it’s trivial. What I want to focus on is the trust element that this dichotomy conveys.
In a world where AI could literally be feeding people facts in real time, we have to assume that everyone, especially bad actors, will be using their cheat sheets and calculators on every interaction.
That is, where it counts, we must assume people, AIs, corporations, and even governments engaging with us have an asymmetric information advantage over us in scenarios of manipulation.
They will sound knowledgeable. They will appear to know intimate details about us. They will appear to know the same people we know. They will claim to be a doctor, a bank supervisor, a system admin, or some other authority figure that seems perfectly legitimate in the moment. In short, due to the “Matrix” effect of real-time information synthesis, it won’t matter whether the person on the other end is a PhD or an uneducated script kiddie - they’ll both be saying the same things.
But let us pause there for now, on The Messenger, and swap to the last topic to consider for tonight - The Message, and how we evaluate knowledge on an internet packed with mis/disinformation.
Skeptical Receptiveness
Given the impossibility of verifying the accuracy and depth of every piece of information we encounter, heuristics — rules of thumb — become our tools for navigating the overwhelming amount of knowledge available.
Just as we have learned to distrust overly polished marketing or too-good-to-be-true offers, we must develop new ways to discern the reliability of AI-generated information. This adaptation will require a shift from focusing on the source and popularity of knowledge to the consistency, accuracy, and context of the information itself.
I personally practice skeptical receptiveness, or what I call Gonzo Infoseeking, as an extension of Gonzo Journalism.
That is, I’m almost always willing to engage with concepts and ideas, no matter how outlandish or unlikely, in a genuine effort to see things from other perspectives.
There are risk profiles and limitations to this, of course, just like anything else. I have absolutely zero desire to gonzo into the realms of fentanyl manufacturing or human trafficking, but a couple of late nights trying to suss out potential AI personalities on X, or a few hours spent interviewing frontier LLMs, are fine with me.
The point is that you have to be willing to do things like listen to opposition, suspend disbelief, play devil’s advocate, and even steelman arguments for positions you do not personally hold, if you truly want to think critically and adapt your world model for robustness.
"It is the mark of an educated mind to be able to entertain a thought without accepting it." Aristotle
No Easy Out for Research
I believe that people who look with dread at the prospect of having to do their own independent research fall into one of two categories:
“I don’t have time for this” and
“I’m sick of getting burned”
There is a third category, of course - “I really don’t care” - but we will table that for now, as all those people stopped reading back at the TL;DR.
“I don’t have time for this” people feel they have better things to do and don’t have the energy to invest in digging, even though they have the intellectual capacity to do so.
Their downfall is that they tend to defer to the information sources closest to their tribe, and are thereby co-opted into groupthink, even if those ideas go against their gut feeling.
The “I’m sick of getting burned” group is willing to invest the time, but lacks the resilience or techniques required to expose and resist fraud, and sometimes falls for ruses, particularly when the information source is compelling.
These individuals prefer to avoid interacting altogether as the only sure-fire way to stay safe, lest they get swept up only to be let down later.
Both groups attribute the problem to the sheer deluge of information the internet and media produce that we must wade through, or to the anonymity of it all, making the task of independent validation feel daunting.
Both often express a desire for a bulletproof, enforceable, un-hackable, guaranteed mechanism for validating individuals and information on the web.
This leads to a myriad of information/solution providers - some malicious & some well-meaning - that people shack up with to handle this verification task for them.
One solution I heard kicked around in Spaces last week was Worldcoin, with their retinal scan tech, as a means to guarantee identity using a World ID.
“A more human passport for the internet.”
It sounds nice, right?
That is, until you consider questions like “What happens if I don’t have eyes?” or “what happens if I damage my eyes later?” and ultimately, “what happens if a nation decides to grow (or harvest) a bunch of eyes in a lab to generate batches of unique fake identities?”
Incidentally, I believe I have at least 3 other Black Mirror Episodes I could write about retinal scan tech - 🤙 hit me up, Charlie B!
The problem, of course, is that either way you are ultimately outsourcing your right and ability to validate to systems whose mechanisms will absolutely have vulnerabilities - and bad actors, even state actors with bypasses - embedded within them.
And the more you believe you can trust those systems implicitly - because on the surface they appear to be working, polished, and official - the more collateral you’ll be willing to stake on that trust, which ups the ante to fallout-level consequences - and in many cases, with zero reparations if it all goes wrong.
Have you encountered a deep fake or digital deception recently? Share your story in the comments to help others learn from real-world experiences.
Conclusion
It's evident that the challenge of discerning truth in our increasingly digital world is not just a matter of AIs, AGIs, or technology, but a deeply human endeavor.
In our next installment, we will get into practical applications of the concepts discussed here. We will explore specific multifactor authentication techniques that can be applied to everyday digital interactions, helping to fortify our defenses against sophisticated deceptions, and hopefully avoiding unfortunate outcomes.
Additional Considerations
It goes without saying that everything written here about trust applies to me, this site, and any guest writers we ever have on.
I mean, I can tell you that I mean well, and I can hope that the content speaks for itself in terms of honesty and consistency, but you should of course validate and keep Digital Heresy at the same arm’s length as anywhere else on the web.
Finally, for my own liability’s sake: I am not an expert in infosec or social engineering. My only aim here is to bring awareness to topics and techniques folks may have never considered, along with some thoughts on how they can be applied to protect against AI or AI-assisted manipulation.
So, if you see anything I’ve gotten wrong or way off base, please let me know in the comments or message me, and I will pledge to cross-validate and correct. If you have a professional background in this field and want to contribute some writing, let’s collaborate! (I will, of course, require 2FA)