There is a question I keep coming back to when I read AI commentary, watch conference panels, or scroll through the professional discourse on LinkedIn: are these people actually trying to figure out what is true, or are they trying to signal membership in the right group?
I don't think the two are the same. And I think the gap between them is growing.
AI is one of the most consequential technological developments in human history — or at least that's a possibility worth taking seriously on its own merits. Given the stakes, you'd expect the discourse around it to be unusually rigorous, unusually honest, unusually willing to sit with uncertainty. What I find instead, in many corners of the AI conversation, is something closer to the dynamics I'd recognize from any other tribal community: insider vocabulary, enforced consensus, social penalties for dissent, and a remarkable consistency of opinion that has very little to do with evidence and a lot to do with belonging.
This isn't unique to AI. Every field develops its orthodoxies. But AI is different in one important way: the orthodoxies are forming right now, while the technology is still taking shape. What people decide to believe — and more importantly, how they decide what to believe — will influence the decisions made by institutions, policymakers, and individuals for decades. The epistemic habits we build now are the ones we'll carry forward.
So I want to think seriously about what it means to actually seek truth in AI spaces, why tribe protection is so tempting, and what the difference looks like in practice.
What Tribe Protection Actually Looks Like
I should be careful here, because "tribal thinking" gets thrown around as an insult more than as a description. I'm using it as a description.
A tribe, in the social sense, is a group that holds certain beliefs in common and enforces those beliefs partly through social reward and punishment. Agreeing earns inclusion; disagreeing earns suspicion. The beliefs themselves become identity markers — you hold them not just because you've thought about them, but because they tell people who you are and whose side you're on.
In AI spaces, tribe protection shows up in a few recognizable patterns.
The consensus before the evidence. A new AI capability is released, or a new study lands, and within hours there is a remarkably unified read on what it means. Everyone on one side of the AI debate has found the same confirming angle; everyone on the other side has found the same disconfirming angle. The odds that two groups of independently reasoning people would arrive at opposite conclusions mapping perfectly onto their prior commitments are vanishingly small. What's actually happening is that the conclusion came first and the reasoning followed.
The vocabulary of in-group membership. Every community develops shorthand, and that's not inherently a problem. But in AI discourse, certain terms have become loyalty signals as much as descriptive categories. Whether you say "AGI" or "advanced AI systems," whether you invoke "alignment" or "safety" or "responsible AI," whether you lead with "capabilities" or with "risks" — these word choices mark you as belonging to a camp before you've said anything substantive. And once people have placed you in a camp, they stop listening to what you're actually saying.
Social penalties for genuine uncertainty. This one is subtle but I think it's the most corrosive. In healthy epistemic communities, saying "I'm genuinely not sure about this" is a sign of intellectual honesty. In tribal communities, it's a vulnerability. If you express real uncertainty about whether AI systems pose existential risk, or real uncertainty about whether current AI models are meaningfully "intelligent," you'll be claimed by whichever camp finds your uncertainty convenient. Uncertainty gets weaponized. So people stop expressing it.
The rhetorical arms race. Because each tribe is primarily arguing against the other rather than toward truth, the discourse drifts toward positions that are maximally useful for tribal combat rather than maximally accurate. Doomers overstate risk because understating it feels like a concession. Boosters overstate capability because acknowledging limitations feels like surrender. Neither side is doing what a scientist or an honest analyst would do, which is try to be right.
Why This Happens in AI Specifically
AI attracts high-stakes tribal dynamics for reasons that go beyond the usual professional incentives.
The stakes are genuinely enormous — or might be. When people believe the future of humanity is on the line, they don't default to patient, open-ended inquiry. They default to urgency and coalition-building. Urgency and coalition-building are tribal instincts, not epistemic ones.
There's also enormous money involved. According to PitchBook, global AI investment exceeded $110 billion in 2023, and the organizations competing for that capital have strong incentives to shape the narrative around their particular approach, their particular risks, their particular vision of the future. The people doing the public thinking about AI are often funded by, affiliated with, or employed by the same organizations that have a stake in how AI is understood. That's not a conspiracy — it's just the structure of incentives, and it's worth naming clearly.
A 2024 survey by the AI Policy Institute found that public trust in AI researchers dropped 12 percentage points among respondents who became aware of industry funding ties — a number that suggests the public is picking up on something real even when they can't articulate it precisely. People sense when experts are advocating rather than analyzing.
There is also a deeper structural problem, which is that many of the most important questions in AI are not yet empirically settled. We don't actually know whether current large language models are capable of anything that deserves to be called reasoning. We don't know whether scaling will continue to produce meaningful capability gains indefinitely or whether it will plateau. We don't know what the labor market effects of AI deployment at scale will look like over a ten-year horizon. Given that we don't know these things, the range of defensible beliefs is genuinely wide. But a wide range of defensible beliefs is uncomfortable for tribe formation, so the tribes tend to narrow the range artificially and punish anyone who wanders outside it.
What Truth-Seeking Actually Requires
I want to be specific about this, because "just be more open-minded" is not useful advice. Genuine truth-seeking in a domain as contested as AI requires a few concrete commitments that are harder than they sound.
Holding priors loosely and updating publicly. This is the hardest one. It means being willing to say, out loud and in public, "I thought X and now I think Y, because of Z." In tribal discourse, changing your mind is weakness. In honest inquiry, it's the whole point. A 2023 meta-analysis in Nature Human Behaviour found that people who updated their beliefs publicly in response to new evidence were rated as significantly more credible by neutral observers — but significantly less credible by members of their own prior-aligned group. The social cost of updating is real, which is exactly why it matters to pay it anyway.
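To make "updating" a little more concrete, here is a minimal sketch of the underlying arithmetic: a Bayesian update with entirely made-up numbers. Nothing here is a claim about any particular AI question; the point is only that the revised belief is a number you can state, defend, and revisit.

```python
def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Revised belief in a hypothesis H after seeing evidence E.

    prior               -- P(H), belief before the evidence
    p_evidence_if_true  -- P(E | H), how likely the evidence is if H is true
    p_evidence_if_false -- P(E | not H), how likely the evidence is if H is false
    """
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Hypothetical: I gave a claim 30% odds, then saw evidence three times
# likelier if the claim is true than if it is false.
print(round(bayes_update(prior=0.30, p_evidence_if_true=0.60, p_evidence_if_false=0.20), 2))  # 0.56
```

Saying "I thought X and now I think Y, because of Z" is the verbal version of exactly that calculation.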
Distinguishing what you know from what you believe from what you hope. These three categories collapse easily, especially when the topic is emotionally loaded. I know, with reasonable confidence, that transformer-based LLMs can produce coherent text that often closely resembles expert output in a given domain. I believe, with moderate confidence, that this capability has some real limits that are not yet fully understood. I hope that the labor disruption from AI deployment will be manageable and that societies will develop institutions capable of handling it. All three of those are honest positions, but they're different in kind, and conflating them produces exactly the sort of muddy discourse that tribal dynamics thrive on.
Engaging the strongest version of the opposing view. Not the strawman, not the most extreme spokesperson for a position you reject — the strongest, most carefully argued version of a view you currently don't hold. In my experience, the people doing the most honest thinking about AI are the ones who can articulate the steelman case for a position they disagree with, often better than the people who actually hold it. That kind of intellectual generosity is rare in tribal discourse because it feels like giving ground, but it's actually just what careful thinking looks like.
Separating questions of fact from questions of value. A lot of AI discourse collapses empirical questions and normative ones in ways that obscure both. Whether AI systems will automate a significant share of white-collar work over the next decade is a factual question with an uncertain answer. Whether that automation would be good or bad for human flourishing is a values question that depends on a whole set of commitments about what makes life meaningful and what societies owe their members. Tribal discourse tends to treat both as if they're the same kind of question and bundle them together, which means you can't actually engage either one honestly.
A Comparison: Two Ways of Engaging a Hard Question
To make this concrete, consider the question of whether current AI systems pose a near-term existential risk to humanity. Here's what tribe-protective engagement looks like versus what truth-seeking engagement looks like.
| Dimension | Tribe-Protective Engagement | Truth-Seeking Engagement |
|---|---|---|
| Starting point | The conclusion I need to defend | The question I actually don't know the answer to |
| Evidence relationship | Seek confirming evidence, dismiss disconfirming | Weigh all evidence, update toward disconfirming when strong |
| Uncertainty expression | Rare, treated as vulnerability | Frequent, treated as accuracy |
| Opposing views | Addressed at weakest form | Addressed at strongest form |
| Social cost tolerance | Low — protect group standing | Higher — accuracy matters more than approval |
| Goal | Persuade others and signal membership | Get closer to what is actually true |
| Public updating | Rare or framed as refinement | Explicit and willing to acknowledge error |
Looking at that comparison, I think most participants in AI discourse — including people who consider themselves rigorous — are operating primarily in the left column, at least some of the time. I include myself in that observation. The pull toward tribe protection is not a character flaw. It's a feature of how human social cognition works. But naming it is the first step to doing something about it.
The Communities That Get This Right
There are people doing honest, rigorous, genuinely uncertain thinking about AI in public. They tend to share a few characteristics.
They are willing to make predictions and be wrong. Not hedged pseudo-predictions with enough qualifications to survive any outcome — actual commitments to a specific view, accompanied by a willingness to track the record and acknowledge misses. Forecasting communities like Metaculus have produced more honest AI discourse than most academic or professional forums precisely because the culture requires keeping score.
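For what "keeping score" can look like in practice, here is a small illustration of my own. The claims, probabilities, and outcomes below are invented, and this isn't a description of how Metaculus itself computes anything; it's just the standard Brier score, where 0 is a perfect record and lower is better.

```python
# Each record pairs a stated probability with what actually happened (1 = yes, 0 = no).
track_record = [
    {"claim": "Model X ships a public API this quarter", "forecast": 0.80, "outcome": 1},
    {"claim": "Benchmark Y is saturated within a year",  "forecast": 0.60, "outcome": 0},
    {"claim": "Regulation Z passes this session",        "forecast": 0.25, "outcome": 0},
]

def brier_score(records) -> float:
    """Mean squared gap between stated probabilities and outcomes (0 is perfect)."""
    return sum((r["forecast"] - r["outcome"]) ** 2 for r in records) / len(records)

print(round(brier_score(track_record), 3))  # 0.154 -- a number you have to own, miss by miss
```

A hedged pseudo-prediction never produces a score; the scored prediction is the one that can actually teach you something.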
They follow the reasoning rather than the team. Eliezer Yudkowsky and Yann LeCun are both technically sophisticated thinkers who disagree sharply about AI risk. A truth-seeker reads both of them and tries to understand what each is actually arguing, where the disagreement is factual versus philosophical, and what would change their mind. A tribe member picks the one whose conclusions match their priors and uses the other as a foil.
They are honest about what they don't know. Geoffrey Hinton, after leaving Google in 2023, was notable for saying explicitly that he had changed his mind — that he had spent years not worrying about AI risk and had updated significantly. Whether you agree with his current view or not, the willingness to publicly revise a long-held position at personal and professional cost is a marker of someone who cares more about being right than about being consistent.
According to a 2022 study by the Center for AI Safety, fewer than 30% of AI researchers surveyed said they felt "free to share genuinely uncertain views" without social or professional consequences. That's a damning number. It means the majority of the people who know the most about this technology are self-censoring in the direction of tribal consensus rather than saying what they actually think.
What the Stakes Actually Are
I want to close on why this matters beyond the sociology of a professional community.
The decisions being made about AI deployment right now — by companies, by governments, by educational institutions, by individual workers — are being made in an epistemic environment that is significantly distorted by tribal dynamics. People are forming opinions about AI based on narratives that are shaped at least as much by group identity as by honest assessment. Policymakers are receiving expert testimony from people whose public positions may not reflect their genuine uncertainty. Organizations are making hiring and investment decisions based on a discourse that rewards confident prediction over honest doubt.
A 2024 Pew Research Center report found that 52% of American adults feel "confused and uncertain" about AI's impact on their lives, while simultaneously 61% feel that most public commentary about AI is "exaggerated or agenda-driven." They are not wrong on either count. The uncertainty is real and appropriate. And a significant portion of the confident, clear-voiced commentary they're receiving is shaped by tribal incentives rather than careful analysis.
This matters because the actual effects of AI on work, on institutions, on the distribution of power and resources in society, will not wait for the tribal debate to resolve itself. Things are happening. The people who will navigate those things best are the ones who have built habits of honest inquiry — who can look at a new development and say "I don't know yet what this means, but here's how I'll think about it" rather than reaching for the nearest tribal frame.
That's not a comfortable place to live, epistemically. Tribal membership offers something real: clarity, community, a sense of being on the right side. Truth-seeking offers something different — something more like the feeling of actually seeing, even when what you see is complicated.
In my view, the trade is worth it. And I think building communities where the trade is genuinely possible is one of the more important cultural projects of this particular moment in history.
Last updated: 2026-04-25
Jared Clark
Founder, Prepare for AI
Jared Clark is the founder of Prepare for AI, a thought leadership platform exploring how AI transforms institutions, work, and society.