Introduction & TL;DR
In part 1 of our 2FA series, we discussed the core idea of multifactor authentication in a world of increasingly convincing deep fake technologies.
We explored the idea of objectively separating the Message from the Messenger, the diminishing returns of trying to judge expertise in a world of instant deep-knowledge availability, and practicing Skeptical Receptiveness in our information-seeking.
In part 2, we took a timely stroll down the streets of Vegas, where we were reminded that scams and deep fakes have existed since time immemorial, so vigilance is mostly a matter of risk management, limiting exposure of data, and creating space to make informed decisions by not being pressured into immediate action.
In our 3rd and final installment (for now) in the 2FA for Reality series, we’ll get into some additional techniques and approaches for getting a handle on the authenticity of the individual on the other end of the wire.
Authenticating People
You’ve probably heard the phrase “Trust, but Verify”, from the Russian proverb doveryay, no proveryay (Доверяй, но проверяй).
These days, and especially on the internet, I live by a different saying, “nikogda ne doveryay, vsegda proveryay” - “Never Trust, Always Verify” - a principle borrowed from Zero Trust security strategy.
Typically, we judge info based on a source's reputation (or at least its perceived reputation), how well we know the source personally, and the source's perceived authority.
However, these shortcuts can also lead to biases, where even traditionally "trusted" sources can be spoofed or manipulated by outside forces precisely to avoid suspicion.
Ok, ok, for the ones we love, sometimes we just gotta eat the apple to keep the peace - but for everyone else - how do we know when the person on the other end is who they say they are, if the message appears to be actionable?
I have my preferences, but I posed this question to the new GPT-o1 reasoning model. I gave it the most basic of parameters - give me out of the box thinking for multifactor authentication techniques that are also elegant and convenient enough that they aren’t a massive chore to implement casually in a conversation.
Surprisingly, it contemplated the scenario for a good 28 seconds before answering with 15 options, some of which were on my bingo card of dependable techniques.
Here were its suggestions:
GPT-o1:
In an era where digital deception is increasingly sophisticated, authenticating people in a seamless yet effective manner is crucial. Here are some out-of-the-box, elegant solutions to "2FA Reality" when interacting with people, ensuring their authenticity without causing inconvenience:
Real-Time Interaction Challenges:
Spontaneous Actions: During a live video call, casually ask the person to perform a simple, unscripted action that's hard to fake in real-time, such as picking up a nearby object, showing something in their immediate environment, or making a specific hand gesture.
Environmental Requests: Ask them to tilt their camera to show a part of their surroundings, like a favorite picture on the wall or a view out the window. This leverages the difficulty of deepfakes to adapt to spontaneous environmental changes.
Behavioral Biometrics:
Unique Communication Patterns: Everyone has distinctive ways of typing, speaking, or structuring messages. Subconsciously monitor for anomalies in their usual vocabulary, emoji usage, or sentence structure.
Response Timing: Pay attention to how quickly they respond. Significant delays might indicate someone fabricating responses or using translation tools.
Dynamic Code Phrases:
Contextual Keywords: Instead of static shared secrets, agree on a system where certain words or phrases woven naturally into conversation serve as authentication. Update these regularly to stay ahead of potential interceptors.
Subtle Inquiries: Casually reference recent shared experiences or inside jokes that only the real person would understand and respond to appropriately.
Device Authentication Apps:
Trusted Devices: Utilize apps that authenticate devices, ensuring messages or calls originate from a known device. For instance, messaging platforms like Signal provide device verification features.
Digital Certificates: Implement digital certificates on personal devices that automatically authenticate interactions without extra steps for the user.
Shared Digital Activities:
Collaborative Tasks: Engage in activities that require real-time input, such as co-editing a document, drawing on a shared digital canvas, or playing an online game together.
Live Polls or Quizzes: Use spontaneous, fun polls or quizzes during conversations that require immediate, personalized responses.
Cross-Channel Verification:
Multi-Platform Confirmation: If you're interacting on one platform (like email), send a confirmation message on another trusted platform (like a text message or a secure chat app) to verify consistency.
Simultaneous Communication: During a call, send a code or image via text and ask them to reference or describe it during the conversation.
Environmental Audio Cues:
Background Sounds: Familiar ambient noises like a pet's bark, a distinctive clock chime, or street sounds can serve as informal authentication.
Music or Media: If they often have a particular type of music playing, casually ask what's currently on in the background.
Personalized Non-Verbal Signals:
Consistent Gestures: Develop a habit of using specific gestures at the start or end of video calls, like a unique wave or a sign.
Visual Tokens: Agree to display a particular object or wear a specific accessory during video interactions, like a certain hat or a piece of jewelry.
Adaptive AI Filters:
Interactive Filters: Use video call platforms that offer filters or effects requiring real-time facial movement and interaction, which are difficult for deepfakes to replicate smoothly.
AR Enhancements: Incorporate augmented reality elements that require precise facial recognition and movement, adding a layer of authenticity.
Proximity-Based Authentication:
Location Check-Ins: Use location-sharing features to verify they're where they should be, especially useful for close contacts.
NFC or Bluetooth Confirmation: When in close physical proximity, devices can exchange authentication tokens automatically.
Mutual Network Verification:
Third-Party Confirmation: If in doubt, involve a trusted mutual friend in the conversation to validate identity.
Social Media Cues: Cross-reference recent activity or posts on their verified social media accounts for consistency.
Passive Voice Recognition:
Voice Biometrics: Employ apps or services that can analyze and recognize voice patterns in real-time, alerting you to potential anomalies without interrupting the flow of conversation.
Background Authentication Tools: Use software that runs passively during calls to detect signs of audio manipulation or deepfake technology.
Encrypted Communication Platforms with Visual Indicators:
Verification Badges: Use platforms that provide visual confirmation of a user's verified identity, such as verified badges or encryption keys.
Session Authentication: Some apps offer session-specific codes or images that both parties can verify at the start of a conversation.
Time-Sensitive Prompts:
Immediate Responses: Send a prompt or question that requires a quick reply, leveraging the time lag that artificial systems might need to generate responses.
Live Shared Activities: Engage in activities like watching a live stream together, where synchrony and immediate reactions are key.
Customized Emojis or Stickers:
Personal Emoji Sets: Create and use custom emojis or stickers that only you and your close contacts have access to, incorporating them into messages for subtle authentication.
Inside Visual Jokes: Use images or memes that have personal significance, which would be challenging for an impostor to replicate convincingly.
There are some nuggets in there, and some ideas that seem good but that we have to assume won't work long-term in the realm of real-time video and audio generation.
Let’s look at a few of the duds that may not make the cut…
Real Time Interaction and Behavioral Biometrics
It used to be that if you wanted proof of life, or proof that someone was who they said they were, you'd ask them to hold up a current newspaper. These days, I'm sure an AI could find an online screenshot of any current publication and simply fabricate or superimpose the correct results on the screen.
Even asking the user to turn the camera around and show the rest of the environment could be doctored. Maybe for a short while longer you'll be able to spot the uncanny valley of AI-produced photos and video, but I wouldn't bank on this being the go-to move.
The same is likely true for the biometrics approach, especially if enough content exists on the internet to train on.
Environmental Audio Cues & Personalized Non-Verbal Signals
These are risky. Again, you might get away with this IF there is limited data on you out there, but these days it's almost impossible not to have lots of visual and audio recordings of you that exist somewhere. If you work in a corporate environment, a simple 30-minute Teams or Zoom meeting with cameras on could give any model more than enough to deep fake with, down to mannerisms and, yes, even background noises, if they happen to come up at all during all that footage (dogs barking, street sounds, etc.).
Customized Emojis or Stickers, Passive Voice Recognition
Again, risky if any amount of history exists. As fidelity increases in audio and video generation - down to generating even the perfect imperfections - just bank on none of it being conclusive enough.
Don’t think all hope is lost. A few ideas had legs, and at the end of the day, until we must cross the bridge of Replicants and Skin Job Cylons, there’s always the option of ‘do nothing, say nothing, commit to nothing’ until you can meet up with a potentially faked individual in person.
Let’s look at some of the options that might just survive some deep fake shenanigans.
Dynamic Code Phrases or Device Authentication Apps
This line of thinking has some legitimate legs. The current wisdom for trusted devices usually goes something like ‘hang up and call the person back on the number you know you have for them’, but man-in-the-middle attacks are becoming increasingly common, where spoofing of even entire financial websites is possible and traffic can be routed to what looks like a “known good” website that’s not, in fact, good.
Therefore, it pays to have code phrases or other means to generate and confirm mutually shared pairs of “secret data” that exist only between two legitimate parties. That way, even if a person’s device is spoofed, or even completely confiscated and accessed, a bad actor will still be missing the secondary mechanism (the secret) needed to make the handshake work.
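To make that concrete, here's a minimal sketch (in Python, with hypothetical function names) of what such a pairwise-secret handshake could look like under the hood, assuming the two parties exchanged a secret in person well in advance:

```python
import hashlib
import hmac
import secrets

def make_challenge() -> str:
    # A random nonce the verifier sends over the (possibly untrusted) channel.
    return secrets.token_hex(8)

def respond(shared_secret: bytes, challenge: str) -> str:
    # Only someone holding the pre-shared secret can compute this response.
    return hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(shared_secret: bytes, challenge: str, response: str) -> bool:
    # Constant-time comparison avoids leaking how close a guess was.
    return hmac.compare_digest(respond(shared_secret, challenge), response)

# Demo: the real contact computes `resp` on their end; you check it on yours.
secret = b"pre-shared in person"
chal = make_challenge()
resp = respond(secret, chal)
assert verify(secret, chal, resp)            # genuine contact passes
assert not verify(secret, chal, "spoofed!")  # impostor without the secret fails
```

The specific code isn't the point; the point is that the challenge travels over the untrusted channel while the secret never does, so a spoofed number or intercepted call still can't produce a valid response.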
I was quite thrilled GPT-o1 mentioned this potential, as it aligns to an open source project I’m working on called SafePair which will give people an app mechanism for authenticating, with some unique ideas I don’t want to spoil just yet. More on this soon.
But I agree that a technique like “casually reference recent shared experiences” is a good move. An even better move is to casually mention something that never happened as if it did, which can sometimes trap people who might be pretending to be agreeable and “in the know”.
My wife came up with a rather good misattribution approach that she shares with her friends and family, wherein if she ever calls asking for <redacted>, they should know SOMETHING is up, because <redacted> would never normally happen.
Devising your own trigger phrases or shibboleths like this and coordinating them with the people you trust in private is an excellent way to pre-plant a layer of security into your life.
Cross-Channel Verification, Mutual Network Verification
Some of these ideas could also work and are related to the multi-factor authentication ideas we’ve discussed before.
Cross-Channel Verification is how some systems (like Google) authenticate by having you respond to a challenge in one application (Gmail) by pinging you in another (YouTube).
For people this might be something like - if the sus conversation is happening in Instagram, pause there and ping them on Discord, Telegram, or SMS or some other unrelated application where you also know them.
Obviously if they reply to you on platform B not knowing what you’re talking about, their account on platform A might be hacked. A double or triple compromise or complete confiscation of their device would be necessary to pull off a fake otherwise.
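If you want the cross-check to be unambiguous rather than just "ping them elsewhere", a throwaway one-time code works well: send it on channel B and ask them to repeat it back on channel A. A tiny sketch, stdlib only, with hypothetical names:

```python
import secrets

def new_crosscheck_code() -> str:
    # A short, human-readable one-time code; send it over a second
    # channel you also know them on (SMS, Discord, Telegram, ...).
    return "-".join(secrets.token_hex(2) for _ in range(2))

def crosscheck(sent_code: str, echoed_code: str) -> bool:
    # True only if what they repeat back on the first channel matches exactly.
    return secrets.compare_digest(sent_code, echoed_code)

code = new_crosscheck_code()  # a random code, e.g. of the shape "1f3a-9c02"
assert crosscheck(code, code)
assert not crosscheck(code, "0000-0000")
```

An attacker who controls only the first platform never sees the code, so they can't echo it back.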
As always, the main thing to remember is to first do nothing until you can independently gather information about the situation or the individual before taking any actions.
Unknown or Anon Internet Denizens
Multi factor authentication might work for people and systems you have a prior relationship with, but what about the world of DMs, emails or other unsolicited communications from strangers?
No perfect answer here, to put it bluntly.
The problem with strangers on the internet is that if they turn out to be imaginary/AI, there’s likely no throat to choke, as it were - no one to sue or report to the authorities.
This has been an issue since long before AI. Our already-dead internet makes it tough enough to chase down bad actors before you introduce LLM bots.
So, what happens when the “person” on the other end of the wire is just an LLM with a fake generated history and profile running off some GPU?
In short, it’s just not worth sussing out the individual - but rather - what they want.
For me, internet outreach, no matter which medium (advertisements, emails, tweets, DMs, blogs, any of it) - boils down to four basic checks that have kept me out of trouble:
Are they talking to me because they want something from me directly? (Money, information, or an action like “log on to this website”)
Are they talking to me because they’re selling an idea that sounds good, but indirectly leads to needing something from me? (e.g. - Crypto/NFT offers, fake job postings that require you to buy setup materials)
Are they asking me to take any actions that I cannot follow up on with a real person or institution through proper channels? (e.g. - ambiguous bank or overdue payment notifications)
Can I verify the legitimacy of this offer, product, or service through independent and reputable sources? (e.g. - fake product websites, too good to be true prices)
If the answer is “Yes” to one or more of those questions, it might be a scam. If the answer is Yes and there appears to be a time-crunch or pressure to act quickly - a scam.
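For illustration, those four checks plus the time-pressure tell could be written down as a tiny scoring function. Every name here is hypothetical, and the thresholds are just the rule of thumb above:

```python
from dataclasses import dataclass

@dataclass
class OutreachChecks:
    wants_from_me_directly: bool  # money, info, "log on to this website"
    indirect_sell: bool           # crypto/NFT offers, pay-to-start "jobs"
    no_official_followup: bool    # can't confirm via a real person/institution
    unverifiable_offer: bool      # no independent, reputable source backs it
    time_pressure: bool           # urgency to act right now

def assess(c: OutreachChecks) -> str:
    red_flags = sum([c.wants_from_me_directly, c.indirect_sell,
                     c.no_official_followup, c.unverifiable_offer])
    if red_flags and c.time_pressure:
        return "scam"
    if red_flags:
        return "might be a scam"
    return "probably fine"

# "Wire us money in the next hour" trips a check plus the pressure tell:
assert assess(OutreachChecks(True, False, False, False, True)) == "scam"
assert assess(OutreachChecks(False, True, False, False, False)) == "might be a scam"
```

Nobody runs code on their DMs, of course; the value is that the checks are few enough to run in your head every time.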
Do you have any other sure-fire checks for avoiding fake people and disingenuous interactions? Let other readers know in the comments.
Overcorrections?
Ok, let's flip the script a bit given these checks - are there any false-negatives we could be overly filtering out? That is, if we’re overly skeptical, could we in some cases face negative consequences on calls to action that were actually legit?
Not many situations come to mind, but a few.
Amber/Silver Alerts or other Emergency Broadcast notifications come to mind. We know what to expect from these, though - our phones involuntarily vibrate and beep, and so will everyone else’s in the vicinity.
Text messages coming from known-good notification numbers tied to institutions you’ve already opted in to receiving could also be legit. (Tip: Create an actual Contact for automated message systems in your phone the moment you sign up for a service that sends them, so you know what to expect. I generally ignore messages on a personal device that don’t come up with an official Contact name I created.)
Another situation might be fraud or institution alerts immediately following an attempt to purchase something. If your credit card gets denied and 20 seconds later your phone vibrates with a phone number to call to get things resolved, it’s probably legit.
You’ll notice the theme here, though - in most legit cases of internet outreach, it will come by way of a previously sanctioned app you set up well in advance, arrive through an official communication channel on a device, or immediately follow an action you took and be directly related to the outcome.
Nothing’s Bulletproof
Despite our best efforts to be vigilant - there’s always something. Here, I tried my best to focus on situations and solutions that you have direct control over with the aim of avoiding the sophisticated but lazy attempts made to defraud people.
There are, of course, a myriad of other vectors you have no control over, such as hackers obtaining login information by directly compromising a site, hijacking sessions or tokens, or other malicious high-end techniques to hijack otherwise legitimate functioning systems.
Earlier, I mentioned a zero trust principle - never trust, always verify - that’s worth living by online. There are other principles to Zero Trust that I will leave you with, that you can adopt to taste.
Note that zero trust usually comes up in the context of computer network security, but squint your eyes a bit, and realize that people interacting with each other is, itself, another kind of network, and many of the principles covered in the video could be adapted to apply to people, too.
If you listened to the seven or so principles and thought, “Well, that’s exhausting”, I get it. And one of the core principles is, literally, “Assume breach”, which in our context means “assume everyone will betray you online” - quite the downer to realize that, despite best efforts, we’re still likely to get scammed at some point.
Vigilance and security are ultimately a balance between how paranoid and meticulous you want to live your life, and what is at risk if a breach occurs.
Final Thoughts
Navigating the digital landscape today is akin to walking through a hall of mirrors — reflections and illusions are everywhere, and distinguishing reality from deception requires keen awareness and adaptability. Throughout this series, we have peered into the evolving challenges posed by deep fakes, social engineering, and digital scams. We've also explored practical strategies to separate and authenticate the message and the messenger.
The core takeaway is that as technology advances, so too must our approaches to security and verification. "Never trust, always verify" isn’t about fostering paranoia; it's about empowering ourselves to take control of our digital interactions. Whether by employing dynamic code phrases, cross-channel verification, or being mindful of unsolicited requests, we create layers of defense that make it significantly harder for malicious actors to succeed.
Remember, the goal is not to eliminate all risks — that's neither practical nor possible. Instead, it is about finding a balance that allows us to live confidently in a connected world. Stay curious, stay informed, and don't hesitate to leverage both old wisdom and new tools to protect yourself and those you care about.