The Authenticity Rebellion
I was a New Yorker once. Briefly. A chapter, not a lifetime.
But God, I love that city.
The attitude has stuck with me since I was a kid… that particular brand of New York grit you need just to survive there. New Yorkers are rude, yes. But they'll welcome you anyway. An old man once spent an entire morning playing chess with my son in a Starbucks on Columbus Avenue. He treated him rough, cursed at his moves, but taught him more about strategy in three hours than any polite instructor ever could.
Maybe it’s the East Coast skepticism. That built-in resistance to bubbles and techno-utopian promises. Or maybe it’s something deeper. That New York embodies liberty itself. The city’s namesake statue stands in the harbor, the first glimpse immigrants saw after crossing an ocean on hope alone. A promise made visible on the horizon.
Liberty means something specific there. It means the freedom to push back.
And New Yorkers are pushing back.
My first day as a New Yorker, 2018
SIGNAL
In the last few weeks, Friend, the latest AI companion necklace, plastered over 11,000 ads across New York City subway stations. Within days, riders covered the stark white posters with messages: "AI is not your friend," "AI would not care if you lived or died," "get real friends."
The CEO knew this would happen. Apparently, he wanted it. From a visibility perspective, the alleged stunt worked. But I have two hot takes on this: one, he's headed toward the same doom as the AI pin. Two, and more importantly, he's missing the point. People weren't just rejecting his product. They were rejecting something deeper.
Three other signals crossed my feed last week:
DC Comics president Jim Lee stood in front of thousands at New York Comic Con and made a promise: “DC Comics will not support AI-generated storytelling or artwork. Not now. Not ever—as long as [SVP, general manager] Anne DePies and I are in charge.” The room erupted. Lee’s reasoning cuts through the noise: “AI doesn’t dream. It doesn’t feel. It doesn’t make art. It aggregates it.”
Recruiters are spotting deepfake candidates in job interviews. According to a survey by Resume Genius, 17% of hiring managers have encountered applicants using AI to mask their real faces during video calls (CNBC). One talent sourcer at Warner Bros. Discovery told Wellfound she now asks situational questions tied directly to a candidate’s stated experience. When answers don’t connect to lived reality, “something feels off.”
The IMF and Bank of England joined the chorus warning about an AI bubble forming. IMF Managing Director Kristalina Georgieva said valuations are approaching levels last seen during the dot-com boom 25 years ago. The Bank of England flagged “stretched” valuations for AI-focused tech firms and warned of a potential “sharp market correction.”
Four different domains. One pattern: reality pushing back against synthetic everything.
STORY
These stories brought me back to my PhD thesis from the pre-ChatGPT era (a weird feeling: it seems ages ago, but it's not).
I built a system that lets people have conversations with pre-recorded videos of someone. Think Superman’s parents in his Fortress of Solitude, except accessible through a web app. The inspiration came from a collaboration between USC and a museum, where they filmed 2,000 video segments of Pinchas Gutter, a Holocaust survivor. Visitors could ask questions, and the system would play the most relevant recorded response.
My version democratized this technology. Anyone could upload videos and create their own interactive experience. I studied different human-computer interaction scenarios to understand how people react when talking to recorded humans.
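To make the mechanics concrete, here's a minimal sketch of the core retrieval step such a system needs: matching a visitor's question to the most relevant recorded clip. This is not my thesis code; it assumes each clip comes with a transcript, uses off-the-shelf TF-IDF similarity from scikit-learn, and the clip names and transcripts are purely illustrative.

```python
# A minimal sketch of question-to-clip retrieval, not the actual thesis system.
# Assumption: each pre-recorded clip has a transcript. We return the clip whose
# transcript is most similar to the visitor's question, then play that video.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Purely illustrative clip names and transcripts.
clips = {
    "childhood.mp4": "I grew up in a small town and spent summers on my grandfather's farm.",
    "career.mp4": "My first job was repairing radios; that's how I fell in love with engineering.",
    "advice.mp4": "If I could tell young people one thing, it would be to stay curious.",
}

clip_ids = list(clips)
vectorizer = TfidfVectorizer()
transcript_matrix = vectorizer.fit_transform(clips.values())

def best_clip(question: str) -> str:
    """Return the filename of the clip whose transcript best matches the question."""
    scores = cosine_similarity(vectorizer.transform([question]), transcript_matrix)[0]
    return clip_ids[scores.argmax()]

print(best_clip("Tell me about your first job"))  # -> career.mp4
```

The retrieval math is the mundane part. What the study cared about was the human side: what happens when the clip that plays back is a real person.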
One anecdotal finding surprised me: people preferred the imperfect real over the polished synthetic.
The system wasn’t perfect. Sometimes you’d ask a question and get an answer on the same topic, but not quite the right nuance. The transitions between video clips were abrupt. Nothing like today’s seamless AI-generated avatars that are nearly indistinguishable from reality.
But people didn’t care about the technical imperfections.
They cared that a real human being sat in that chair and recorded those answers. They valued the authentic hesitation, the genuine emotion, the real person behind the pixels. Even when the experience was clunkier than the perfectly fluid responses we're used to from ChatGPT today, users connected more deeply with the recorded human.
Jim Lee from DC Comics captured this perfectly:
“People have an instinctive reaction to what feels authentic. We recoil from what feels fake. That’s why human creativity matters.”
The deepfake candidate problem reveals the same truth from a different angle. Arielle, the recruiting lead who spotted a fake candidate, developed a simple filter: ask situational questions tied to actual lived experience. AI can aggregate patterns from millions of resumes and rehearse perfect answers to generic questions. But it can’t fake the specific texture of having actually done something.
Where were you when that system failed at 2 AM? What did the room feel like when you pitched that idea to the board? What detail from that project still makes you laugh?
Lived experience has a signature that no amount of training data can reproduce.
THE HUMAN OVERRIDE
How to stay real when everything else is fake
We are entering the age of the great confusion. The simulated feels real, and the real feels inadequate.
Here are five practices I’m using to safeguard my authenticity and filter out the synthetic:
1. Double down on curiosity.
Curiosity is the antidote to automation. It's the moment you pause and ask why something works, not just how. Use AI tools, yes. You have to learn them. But explore them like an engineer examining a new species. Break them open. Look at their biases, their limits. Curiosity restores agency in systems that would rather you consume passively.
2. See beauty beyond the prompt.
A few nights ago, I watched the stars from a dark spot through a very powerful telescope. Looking at nebulae and galaxies whose light left them billions of years ago, I couldn't help but think: this is the same sky that will outlast every algorithm. Beauty in its raw, natural form resets your mental model of what's possible. Reality is far greater than anything a large language model can hallucinate. When the world feels like a closed feedback loop of AI hype, go outside. Recalibrate.
3. Build things (even badly).
The fastest way to understand a system is to play with it. Write code. Sketch an idea. Launch a small product. You'll feel the edges: the places where AI excels and the corners where it collapses. Competence starts as something tactile before it settles in your mind. You build, you fail, you learn.
4. Sanitize your feeds.
The AI echo chamber is full of performative experts. Ask your smartest friends who they follow, and why. Curate your intellectual diet the way you'd curate your health. Misinformation distorts your worldview as much as it erodes your judgment. Clean your inputs, and your reasoning sharpens.
5. Apply the “Munger filters.”
Charlie Munger, Warren Buffett's longtime partner, thought differently because he filtered differently. I first studied his mental models to become a better investor, but now I find myself running even my information sources through three filters:
Strategic Depth over Hype. If their thinking is shallow but their narrative is big, they're noise. Ask why someone is saying or writing what they do. As Munger said, "Show me the incentive and I'll show you the outcome."
Pattern Recognition from Pain. I trust people who’ve lost money, taken hits, and bounced back. Look for battle scars, not résumés. AI can summarize every business book ever written. It can’t survive a failed startup. Failure is the tuition of wisdom.
Relentless Execution Velocity. Ever been at a corporate board meeting or event where the big talkers stall and gate-keep, and nothing moves? Execs paralyzed because they need 100% of the data before acting (and sometimes they don't act even after hearing the evidence). Smart operators act at 70% clarity and course-correct fast. Munger admired people who decide fast and course-correct faster. Humans with conviction execute. Watch them, stay close to them.
SPARK
New Yorkers defacing AI companion ads. Comic book publishers rejecting synthetic art. Recruiters detecting deepfake candidates. Financial institutions warning about bubble valuations.
They’re symptoms of the same diagnosis: human instinct recognizing when something essential is missing.
We can measure AI’s technical performance. We can track adoption curves and investment flows. But we can’t fake the lived experience that creates trust, connection, and meaning.
The Friend CEO spent $1 million to start a conversation about AI companionship. He got his conversation. Just not the one he expected.
Maybe the rebellion has already begun. Not as a march or a manifesto, but as a quiet refusal.
People don’t want synthetic friends. They want real ones.
People don’t want AI-generated art. They want human creativity with all its messy imperfections.
People don’t want perfectly rehearsed interview answers. They want the specific texture of actual lived experience.
The technology will keep improving. The datasets will keep expanding. The outputs will become more polished, more seamless, more convincing.
The next decade won't be defined by how advanced the machines become, but by how courageously we protect what's unreproducible. The texture of real experience. The instinct for truth. The capacity to dream.
So here’s a question for you:
In your work, what would "authenticity at scale" look like, and what are you willing to refuse in order to preserve it?