
Most of our biggest arguments – about technology, politics, work, belief systems – aren’t just arguments about facts. They’re collisions between convictions: beliefs that have hardened beyond the evidence and reasoning that led to them.
Many of these convictions go unchecked. They form under uncertainty, get reinforced by social incentives, and harden into identity. Over time, new information stops being information and starts feeling like an attack. That isn’t a failure of intelligence; it’s a rational response to the cost of updating.
By “unchecked convictions,” I don’t mean bad intentions or dishonesty. I mean something more ordinary and more dangerous: beliefs we stop testing against reality, counter-evidence, tradeoffs, and time.
I’m sure I’m doing this somewhere right now; I just can’t see where.
Objectivity is an aspiration, not a personality trait
I don’t mean this cynically, and I’m not arguing that reality doesn’t exist or that evidence doesn’t matter. The opposite, actually. Objectivity is not something you “have.” It’s something you practice, and it requires work: noticing when your emotions are steering your conclusions, staying alert to the incentives shaping what you see, and updating your belief system even when updating is inconvenient.
Updating is hard for two reasons: we often don’t know how to do it well, and even when we do, it often threatens belonging.
That’s where the problem starts. The social world rarely rewards calibration. It rewards confidence: fluent certainty, early certainty, tribal certainty. It rewards clean stories more than careful ones, and it rewards the comforting feeling of coherence even when the world refuses to fit your neat narratives.
Certainty feels like clarity, but most of the time it’s just a signal that updating has become socially expensive.
Reality keeps offering mixed signals. A person you admire does something indefensible. A person you dislike does something competent. A policy “works” in a pilot and then fails at scale. A metric improves while something important quietly degrades.
So we stop updating. Not because we’re stupid, but because updating has a cost.
A visible example: the AI conversation
The AI conversation makes this especially visible. The technology is moving fast, the evidence is uneven, and even experts disagree on timelines and failure scenarios. Yet certainty keeps hardening into tribal positions: “AI will transform everything for the better” versus “AI is dangerously out of control.”
Hedge and you’re “naïve” in one room and a “doomer” in the next. “Safety” becomes a badge on one side, “move fast and deploy” on the other, and the space for honest calibration shrinks precisely when it’s most needed.
The irony is that uncertainty is real, but the incentives punish anyone who admits it.
When smart people stop updating
If you’ve worked in business, you’ve seen quieter versions of this dynamic. A team falls in love with a strategy and begins interpreting every data point as confirmation. A founder becomes emotionally attached to a narrative and stops hearing the market. A well-designed program launches with the right vocabulary (governance, incentives, infrastructure) and then acts surprised when behavior doesn’t move on schedule.
I’ve done my share of it too, and one example still sticks with me because there was no scandal, no villain, no dramatic failure – just the slow, stubborn reality that people do not move at the speed of our spreadsheets.
Several years ago, I worked on a program to build startup infrastructure in Central Asia. The theory was sound and the design was, on paper, elegant: align incentives, strengthen support infrastructure, build the right governance, and founder activity will follow.
We believed that if we put the right system in place, founder activity would meaningfully increase within twelve to eighteen months.
And then eighteen months came and went, and only a fraction of what was expected had materialized. Incubators launched slowly or not at all, an angel network met twice and quietly disappeared, seed capital went almost entirely undeployed, and founder activity didn’t meaningfully change. When I learned this later, my reaction was: so much work, money, and coordination for so little result – once again.
The lesson wasn’t that planning is useless or that ambition is naïve. The lesson was that our assumptions had been incomplete. We underestimated how slowly trust, adoption, and behavior move, and we overestimated commitment and coordination capacity across partners.
In hindsight, a piecemeal approach (testing smaller hypotheses sequentially, with clearer attribution and targets) would likely have produced more learning and more real impact. But that isn’t a story about Central Asia; it’s a story about how easy it is for smart, well-intentioned people to mistake a coherent model for a true one, and then to stop updating because the model feels so internally consistent.
Now scale that tendency into an environment designed to exploit it.
Social media didn’t invent bias. It industrialized it.
Social media is not a neutral channel. It’s an incentive system that rewards attention, and attention is captured far more reliably by outrage and certainty than by careful calibration. It also rewards reflexive contrarianism, and it’s increasingly vulnerable to organized propaganda – both of which push people away from calibration and toward tribal certainty.
But the deeper shift isn’t just that the loudest voices travel farther; it’s that the long tail changed the social cost of holding extreme views.
In the old world, if you held a fringe or intense position, you might be lonely, and loneliness had a moderating effect. You had to keep living among people who disagreed with you, which forced friction between your certainty and the real complexity of the world.
In the long-tail world, you can always find your micro-tribe. Within minutes you can find a community that turns your hottest take into a personality, hands you a script, and rewards you for repeating it fluently. No matter how simplified your picture of reality becomes, no matter how moralized your interpretation feels, there is an audience ready to applaud it.
That changes the payoff structure. The cost of being extreme goes down. The rewards go up. And the disciplined work of updating starts to feel less like growth and more like betrayal.
Politics as a loyalty test
This is why so much of our political discourse now feels less like debate and more like loyalty theater. Many people choose a side and then stand behind everything that side stands for, not because they’ve carefully evaluated each issue, but because the point shifts from solving problems to signaling belonging.
Once a position becomes a badge, evidence becomes secondary. Incentives and outcomes get ignored in favor of cheerleading, contradictions become easy to defend, nuance becomes suspicious, and compromise is framed as weakness or treason.
And the social feed amplifies it further, because the content that performs best is rarely the content that helps you think; it’s the content that helps you feel certain. The result is predictable: we become certain about everything, while understanding less and less.
Not all disagreements are the same
At this point, the smartest skeptical reader will object: “That’s all true, but you’re still implying that disagreement is mostly a misunderstanding, and it’s not.” I agree. One reason conversations go nowhere is that we collapse fundamentally different kinds of disagreement into one bucket, and then we use the wrong tool.
Before you argue, ask what kind of disagreement you’re actually in. Sometimes you’re in a map disagreement: two people share a goal but see reality differently because they have different priors, assumptions, or causal stories – and those priors often hide untested convictions about how the world works.
Sometimes you’re in a disagreement about objectives: you may share the facts, but you are optimizing for different outcomes, which makes the conflict about tradeoffs rather than truth.
And sometimes you’re in a value conflict: the values themselves collide in ways that aren’t tradable, so “finding common ground” may not be possible and the best outcome is stable disagreement within a shared commitment to process and basic rights.
The point of the taxonomy isn’t to sound clever; it’s to stop wasting energy. If you treat a value conflict like a data problem, you’ll end up in endless fact wars. If you treat a disagreement about objectives like a character defect, you’ll end up in contempt. And if you treat a map disagreement like tribal identity, you’ll end up in escalation.
A small scene from a WhatsApp group
I watched a small version of this play out in a WhatsApp thread among Brazilian friends. Someone posted a cheer for a brutal, anti-democratic regime abroad. It wasn’t framed as “I love oppression.” It was framed in the language that tends to seduce reasonable people when they are exhausted by dysfunction: at least they get things done, at least they restore order, at least they’re not hypocrites.
If you’ve lived with injustice, inequality, institutional decay, or the paralysis of performative moralizing, you can understand why that story can feel coherent in the moment. It offers emotional relief. It turns complexity into a single clean conclusion. It makes chaos feel controllable, and it makes doubt feel like weakness.
What shifted the conversation, slightly, was not a perfect argument or a devastating dunk. It was a mirror. Someone introduced a symmetry test that made the logic harder to hold without forcing anyone to admit defeat: the reasoning you’re using to excuse authoritarianism abroad could be used at home too.
Brazil has deep injustices and governance failures of its own; if someone offered the same authoritarian, violent “solution” here, would you accept that trade? If not, what non-authoritarian solutions should we be debating instead?
Nobody instantly changed their mind, but the conversation moved from performance toward coherence, from moral certainty toward at least a moment of reality testing. That shift is rare, and when it happens, it reveals something important: the real problem often isn’t that we disagree. It’s that we disagree in ways that make updating socially impossible.
You can’t fix the internet. You can change the local incentives.
You can’t redesign the social feed by yourself. But you can change the incentive structure in your own life. And that matters more than it sounds, because most of life is a repeated game.
Right now, one of the most damaging payoff structures in public discourse is that updating your beliefs has become socially expensive. Nuance is treated as disloyalty, partial concession as surrender, and compromise as betrayal. So we get stuck in a bad equilibrium where people perform certainty even when they privately doubt it, because the social cost of revising a view feels higher than the intellectual cost of being wrong.
If you want to push back against that, try this – not as a grand self-improvement project, but as a small experiment.
A 48-hour calibration experiment
First: change the inputs (15 minutes).
· Pick the topic where your feed is most one-note: AI, politics, economics, whatever generates the most certainty and the least nuance.
· Mute three repeat offenders who always push the same emotional temperature.
· Add two sources that complicate your current view rather than confirm it.
· Read or watch one long-form conversation where people who disagree actually test each other’s assumptions.
Your goal isn’t to “balance” your politics. It’s to stop consuming only one emotional temperature. If the only thing you ever feel is dread, outrage, or vindication, your brain will start treating that feeling as evidence.
Then: change the move (one real conversation).
The next time you’re in a heated disagreement (say, a WhatsApp thread about a protest where someone argues: “When our side blocks roads or breaks windows, it’s justified; when their side does it, it’s criminal.”), don’t debate the event.
Use a mirror test: “If the exact same tactics were used by the group you dislike, would you defend them the same way?”
Then make it practical: “What specific fact would make you revise your view here? A video of who started it, what the police orders were, injuries, property damage, or an independent report?”
In one move, you either get a clear threshold (and the conversation becomes about evidence) or you learn that no amount of evidence will matter because the disagreement is really about identity or values. Either way, you can stop treating it like a fact fight.
A final rule of thumb
If there’s one line I hope sticks, it’s this: treat disagreement as a data problem before you treat it as a character judgment. Respect the person. Interrogate the convictions – including your own.