Mindwars: Open, Preregistered… and Wrong Question — Pathologising Dissent in Plain Sight
How a methods-heavy psychology paper became a public verdict—and how to reopen the evidence lane.
1. Introduction—The Method-as-Verdict Choreography
A new clinical-psychology paper, published in Clinical Psychological Science on 29 September 2025, promises clarity on a heated question: Do Stress, Depression, and Anxiety Lead to Beliefs in Conspiracy Theories? The authors, from Massey University, declare their longitudinal study shows “little to no effect.”
This is the same paper whose press-friendly version—featured on New Zealand state media channel 1News—was examined in “Mindwars: Fluoridating the Rabbit Hole”. But as with my previous analyses of Marianna Spring’s “Conspiracyland” and Hornsey et al.’s work, the real story lies beneath the headline. Look deeper and you’ll see a now-familiar choreography. A specific statistical model—a within-person panel design—is positioned as the sole arbiter of causal talk. Negation phrases like “no effect” and “not predictive” function as closure devices. Transparency rituals like preregistration and open data operate as a shield against substantive critique. And a psychometric label—“conspiracy mentality”—effortlessly drifts into a reified population category, pathologising dissenters.
The result is what we might call governance by method. A respectable technique and open-science optics launder a narrow, model-bound finding into a broad social verdict. This is not an information operation in the classic sense; it is a structural one that systematically pathologises dissent while leaving institutional credulity—the unquestioning acceptance of official narratives—completely unexamined.
Whether or not the study reports its minimal detectable effect is less important than what the omission represents: a procedural culture that treats statistical completeness as epistemic closure. The finding of “no longitudinal effect” may be perfectly valid within its model — the issue is that the model itself defines what counts as reality. It steers public and policy debate away from the only question that matters for genuine risk assessment: what does the evidence actually say about the claim itself, at the relevant dose and time window?
This Mindwars investigation is a deeper dive into the source study. It maps how this epistemic seal is manufactured, exposes the costs of a science that profiles people instead of testing propositions, and—critically—upgrades the framework with a symmetric, claim-first process that real scientists can stand behind. We don’t need to prove harm from any particular substance here; we need to show how the current system replaces truth-testing with believer-profiling. In fairness, it should be acknowledged that, within their own window, these authors actually de-pathologise one classic story, finding no “vicious cycle” of distress causing belief. Yet, as we will see, the larger category economy survives this nuance, stronger than ever.
2. The Authors and Their Institutional Footing
To locate a paper’s epistemic orientation, we must first situate its authors within the institutional lattice that confers authority. Nick D. Fox, Stephen R. Hill, and Matt N. Williams are all based in the School of Psychology at Massey University, New Zealand—a team representing a standard academic ladder from doctoral researcher (Fox) → Associate Professor & Doctoral Supervisor (Hill) → Professor (Williams). This placement is not a mere footnote; it signals the methodological incentive stack that governs their niche: publish within the paradigms of psychometrics and longitudinal structural equation modelling; privilege procedural precision; and translate results through the channels of university public relations and academic key performance indicators.
As corresponding author, Matt N. Williams is a senior figure in the Australasian subfield focused on public belief and conspiracist attribution. His prior work includes papers like “Australasian public awareness and belief in conspiracy theories: Motivational correlates,” “People do change their beliefs about conspiracy theories—But not often,” and “Why are beliefs in different conspiracy theories positively correlated across individuals?” The author lists of many of these papers also feature Hill’s name. Given the topic, Fox, as a doctoral candidate, almost certainly anchors his PhD work under Williams’ supervision. But this is a story of methodological conditioning, not academic relationships. The system they operate within rewards the conversion of dissent into measurable constructs through instruments like the Conspiracy Mentality Questionnaire (CMQ), incentivising research that asks who believes rather than whether the exemplar claim could be valid.
As stated above, it is notable in this instance that the authors arrive at a de-pathologising finding—showing no vicious cycle of distress causing belief. However, the same architectural choices that yield a clean null also actively maintain the pathologising category: instrument labels are allowed to drift into public discourse as population labels; the “no effect” conclusion is packaged without its Minimal Detectable Effect (MDE = [unknown/not reported]; window = ~7 months; lens = within-person panel); and no mirror lane is run to scrutinise institutional credulity.
Hazarding a guess, it is likely that the paper represents the outcome of Fox’s doctoral research project, and the accompanying articles signal an aspiration for an academic career based on this work. This impression is reinforced by the relay choreography:
coordinated science comms (journal online 29 Sept 2025 → The Conversation 13 Oct 2025 → MedicalXpress syndication 13 Oct 2025 → university PR 14 Oct 2025), with each piece in the chain repeating the conspiracy-theorist frame and linking it to beliefs about water fluoridation.
The bottom line: these are method-faithful operators working inside a structure that makes psychology about profiling people rather than adjudicating propositions. If we read their conclusions as strictly model-bounded statements, the imperative shifts: we can demand the counter-specification and the two-lane ledger, refocusing the entire research orientation from the flaws of people to the substance of claims, or to the nature of belief in general—conforming or non-conforming.
3. The Model as Gatekeeper
The pretence of neutrality is the most potent part of the mechanism. The paper and its accompanying media coverage present themselves as objective, method-driven science. But by uncritically adopting and operationalising the “conspiracy theorist” frame, they build a profound bias directly into their foundation.
This isn’t a minor oversight; it’s a foundational category error that dictates every step that follows. Here’s how that bias works:
The Frame Pre-Selects the Object of Study: The moment you decide to study “conspiracy theorists,” you have already made a choice. You are not studying “people who dispute official narratives” or “holders of heterodox beliefs.” You are studying a group already defined by a term (conspiracy theorist) that, in modern parlance, is loaded with connotations of irrationality, pathology, and deviance. The question is no longer “Are these claims valid?” but “What is wrong with the people who believe them?”
The Frame Dictates the Methodology: Because the object of study is a “type of person,” the entire toolkit of psychometrics and individual-differences psychology becomes the default, “neutral” approach. You create scales (CMQ), you measure personality traits, you correlate them with anxiety. A truly neutral approach to understanding why people believe certain things would also include:
Political Science: Studying institutional trustworthiness and corruption.
History: Analysing documented instances of actual conspiracies or institutional deception.
Sociology: Examining group dynamics and network effects.
Philosophy of Science: Interrogating the processes of official knowledge production.
By not doing this, the “neutral” science is, in fact, highly selective. It focuses the investigative lens exclusively on the individual psyche, implicitly letting the institutions and their claims off the hook.
The Frame Creates a False Asymmetry: Notably, there is no equivalent field studying “Official Narrative Believers” or the “Institutionally Credulous.” There is no “Authority Deference Scale” pathologising people who trusted the WMD intelligence in Iraq, or corporate statements from tobacco companies, or any other official claim that was later proven false (like the official narrative on fluoride safety). The “conspiracy theorist” frame automatically and asymmetrically defines one side of a debate as the “problem” worthy of scientific scrutiny, while the other side is the unexamined, rational baseline.
A Narrow, Model-Bound Finding Travels as a Broad Verdict: The study’s analysis shows how the rigorous, “neutral” method (RI-CLPM) produces a narrow finding (“no within-person effect in this panel”). But because the frame is already biased, this narrow finding travels into a broad public conclusion via the frame + relay: “See? Science says distress doesn’t cause these irrational beliefs.” The procedural rigor of the method lends false credibility to the biased frame. It makes the pathologising conclusion seem like a neutral, data-driven outcome, when the pathologising was embedded in the premise.
In short, the “conspiracy theorist” frame is a pre-emptive ad hominem disguised as a scientific category.
A genuinely neutral approach would be claim-agnostic. It would start with a disputed proposition (e.g., “Fluoridation at dose X causes harm Y”) and then:
- Audit the evidence for and against the claim.
- Study the sociological, political, and psychological factors that lead all people (both supporters and detractors) to their position.
- Examine the institutional and historical context that makes suspicion more or less rational.
By failing to do this, the “science” shows itself to be not just slightly biased—it is performing a specific social and political function: to naturalise official narratives and medicalise dissent. The pretence of neutrality is what allows it to do this so effectively. As pointed out in the prior article on this particular research, the political aspects of discussions about water fluoridation in New Zealand cannot be ignored.
4. Manufacturing Closure—Sealing the Epistemic Vacuum
The most potent achievement of the Fox et al. (2025) paper is not its null finding, but its success in manufacturing a definitive end to a conversation that never actually happened. It creates a sense of finality, not by engaging with the substance of conspiratorial claims, but by constructing a hermetically sealed discursive arena where the question of a claim’s truth or reasonableness is rendered methodologically invisible. This “manufactured closure” is engineered through three sophisticated mechanisms that transform a narrow, model-bound non-finding into a broad verdict on belief itself.
- The Negation Cadence: Sealing the Vacuum with “No Effect”
The study’s core results are communicated through a rhythmic, repeated use of negation: “no evidence,” “little effect,” “not predictive.” This “negation cadence” is a powerful closure device precisely because it is deployed within a fact-free vacuum. The study never establishes whether the selected “conspiracy theories” are, in fact, irrational; it simply assumes this by fiat through its item selection. Therefore, when it declares “no effect” of distress on belief, the unstated but powerful implication is: ”Since these beliefs are not caused by distress, they must emerge from a stable, inherent irrationality.”
Stripped of the essential context of the Minimal Detectable Effect (MDE) and the study’s inherent blindness to real-world evidence, this cadence mutates from a statistical observation (“Our model detected nothing”) into a substantive conclusion (“There is no rational cause for these beliefs”). It travels from the paper into press releases and media headlines, landing with the force of a scientific verdict. The public hears that “science has proven” these beliefs are baseless, not that a specific model, looking only at internal psychological states, found no link to another internal psychological state. The cadence performs the closure, foreclosing the very possibility that beliefs could be responsive to evidence, history, or institutional behaviour.
- Transparency as a Shield: Immunising the Frame from Scrutiny
A second, more insidious mechanism is the weaponisation of open-science norms as an argumentative shield. The paper’s adherence to preregistration and open data is leveraged as a performance of unassailable rigor. This creates a “transparency immuniser” that protects the study’s foundational bias from challenge.
Any critique of the study’s core premise—that it pathologises dissent by studying only pre-designated “conspiracy theories” while ignoring institutional credulity—can be deflected by appealing to these procedural virtues. To question the framing becomes framed as being against scientific transparency and rigor itself. The underlying, unstated equation becomes: Preregistered + Open Data = Valid Framing of the Research Question.
This brilliantly immunises the entire enterprise. The conversation is redirected from the substance of the critique—“You are studying the wrong thing in the wrong way”—to the hygiene of the process—“But we followed the protocol!” It transforms tools for accountability into tools for foreclosing a more fundamental debate about power, evidence, and what counts as a legitimate object of study.
- Category Drift: Reifying the “Fact-Free” Believer
Finally, closure is achieved through the crucial drift of a psychometric instrument into a social identity. The study uses the “Conspiracy Mentality” questionnaire (CMQ) to measure a disposition. Throughout the paper and its dissemination, this instrument-bound term (“scores on the CMQ”) effortlessly drifts into a reified social label: “conspiracy theorists.”
This is the master-stroke. By framing the issue around a fixed identity (“conspiracy theorist”) rather than a set of evaluable propositions, the debate is permanently sealed away from the truth or falsity of specific claims. The “conspiracy theorist” is constructed as a fact-free actor, whose beliefs are products of a “mentality” entirely detached from the objective world. The question is no longer “What is the evidence for this claim about fluoride?” but “What is the psychological profile of a ‘conspiracy-minded’ person?”
This final move completes the epistemic containment. It pathologises the dissenter not as clinically ill, but as epistemically broken—a person whose beliefs are generated by a flawed cognitive engine, not by a rational engagement with a complex, and often duplicitous, world. It manufactures a durable closure that protects institutional narratives by ensuring the conversation remains forever focused on the psychology of the believer, and never on the behaviour of the powerful.
5. The Reference Spine—Canon Metrics & Author Orientation
A paper’s reference list is its intellectual orientation map, revealing the boundaries of what it considers legitimate knowledge. The spine of this study reads like a tight, inward-facing ecosystem, and the canon metrics tell a clear story.
The paper’s reference spine of 81 items features a significant core of lead authors: van Prooijen (7 items; ~5 as first author), Douglas (6; 3 first), Swami (3), Jost (3), Uscinski (2), van der Linden (2), Brotherton (2), Goertzel (2). That makes the top-three first-author share ≈ 12–13%. The mix: ~74% academic, ~26% grey/media/government. In these terms, the paper exhibits a solid cadre-anchored orientation toward belief segmentation, media-portable lexicon, and policy-ready inoculation—not toward claim testing.
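For readers who want to run this kind of audit on other papers, here is a minimal sketch of how such canon metrics can be computed. The entries are illustrative placeholders, not the paper’s actual bibliography; only the method is the point.

```python
from collections import Counter

# Hypothetical reference entries; a real audit would parse the paper's
# bibliography into (first_author, source_type) pairs, 81 in this case.
refs = [
    ("van Prooijen", "academic"), ("Douglas", "academic"),
    ("Swami", "academic"), ("van Prooijen", "academic"),
    ("Uscinski", "academic"), ("1News", "grey"),
    # ... one tuple per reference
]

first_authors = Counter(author for author, _ in refs)
top3_share = sum(n for _, n in first_authors.most_common(3)) / len(refs)

source_mix = Counter(kind for _, kind in refs)
academic_share = source_mix["academic"] / len(refs)

print(f"Top-3 first-author share: {top3_share:.1%}")
print(f"Academic share of spine: {academic_share:.1%}")
```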
The study (and apparently much of the field) runs on three workhorse instruments: Conspiracy Mentality Questionnaire (CMQ) (Bruder et al., 2013), a brief “content-free” index of conspiracist disposition; Generic Conspiracist Beliefs Scale (GCB) (Brotherton et al., 2013), a multi-item battery that aggregates endorsement of stylised conspiracy claims into a single “worldview” score; and the Cognitive Reflection Test (CRT) (Frederick, 2005; Thomson & Oppenheimer, 2016 variants), a quick gauge of “cognitive reflection” often used to predict CMQ/GCB scores. In practice, CMQ is used to abstract suspicion away from any particular claim, GCB supplies a portable trait label for correlational and intervention studies, and CRT provides a cognition-virtue yardstick that neatly orders “rational” vs “credulous” minds. The catch is baked in: CMQ and GCB presuppose that persistent suspicion is a person-level property (not a response to evidence), GCB’s items presuppose the underlying claims are false (accuracy can’t raise your score), and CRT presupposes that analytic style is the normative standard. Together they stabilise a people-first, proposition-second research economy—useful for profiling and “prebunking,” weak for adjudicating whether any specific claim is true at dose/time.
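The “baked-in catch” can be made concrete. The sketch below is a schematic of the scoring logic as characterised above, not the published scoring key; the point is structural: the function that yields a “conspiracist” score takes agreement as its only input and has no parameter for whether an underlying claim is true.

```python
def generic_conspiracist_score(agreements: list[int]) -> float:
    """Schematic GCB-style scoring: mean agreement across items (1-5).

    Note what is absent: there is no `claim_is_true` argument anywhere.
    Agreeing with an item raises the score whether the claim is later
    vindicated or refuted, so accuracy can never move the needle.
    """
    return sum(agreements) / len(agreements)

# A respondent who correctly anticipated a real, later-documented
# conspiracy scores identically to one endorsing a refuted claim.
print(generic_conspiracist_score([5, 1, 1, 1, 1]))
```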
The presuppositions of the field are not accidental; they are codified in the very definition that opens the paper. The authors write that “a conspiracy theory is an explanation of an event or observation as the result of a conspiracy—multiple actors secretly plotting to do something harmful or unlawful” (Swami et al., 2016), and proceed to note that “although conspiracies do happen, a non-trivial minority of the public express belief in conspiracy theories that are unwarranted or even strongly contradicted by evidence.”
From that hinge sentence forward, every instrument—CMQ, GCB, CRT—has its assumptions pre-loaded. “Conspiracy theory” is defined as prima facie false or unwarranted, so any measured belief is, by construction, an error signal. The scales can only register credulity, never detection. Once the chemtrail example is invoked (“of course, they are simply contrails”), the epistemic asymmetry is complete: institutional explanations are declared correct in the definition, public doubt becomes pathology in the data. The tools then operationalise that presupposition—CMQ generalises it, GCB quantifies it, CRT moralises it—so the outcome is already determined before a participant answers the first question.
The bibliography bends toward who believes and how to “prebunk”—and away from whether the exemplar claims are true or subject to any valid debate (scientific or otherwise). To be precise: we’re diagnosing orientation, not intent. The spine bends toward the tools and labels that the field already rewards.
Title archetypes (the templates you see over and over):
- “Belief in conspiracy theories” / “Beliefs in conspiracies”
- “Measuring belief in conspiracy theories: [scale/validation]”
- “[Trait/ability] predicts belief in conspiracy theories”
- “The psychology of conspiracy theories” / “Understanding conspiracy theories”
- “[Cognitive process] reduces belief in conspiracy theories”
- “[Social/ideological factor] predicts conspiratorial thinking”
- “[Intervention/prebunking] inoculates against misinformation/conspiracy beliefs”
- “[Control/threat/loss] and belief in conspiracy theories”
- “[Education/analytic thinking] and decreased belief in conspiracy theories”
- “[Political extremism/ideology] predicts belief in conspiracy theories”
- “[Media exposure] increases/decreases distrust or conspiracism”
- “Paranoid style / historical or rhetorical frame revisited”
These titles operationalise a population lens that builds/validates scales, links traits and ideologies to “conspiracy belief,” and tests inoculation—profiling believers rather than adjudicating whether the exemplar claims are true at dose/time.
This is not inherently suspect, but when combined with a high Method Monoculture Ratio—where the RI-CLPM/CLPM/SEM family of models dominates the empirical citations—it signals a field orthodoxy in which the methodological lens itself becomes the ultimate arbiter of truth. When most intellectual waypoints route back to the same people using the same tools, replication can masquerade as accumulation, while the research question itself quietly shrinks to fit only what that specific lens can see.
This self-reinforcing loop is compounded by the Labelling Load evident in the cited works—titles that heavily feature “conspiracy,” “mentality,” and “misinformation.” This reveals a critical lexical drift: the reification of psychometric instruments into social identities. The name of a scale (“conspiracy mentality”) creeps into use as a population category (“conspiracy-minded”), a label that travels with ease into media and policy briefs. This is not fraud; it is a taxonomic convenience. But labels simplify at a cost: they naturalise a population frame that pre-decides what counts as an explanation, prioritising traits over truth and ensuring the conversation remains focused on the believer’s psychology.
This orientation is quantified by the Proposition Testing Ratio, which tilts overwhelmingly toward person-profiling over claim-adjudication. The reference spine shows a scarcity of cohort studies with biological exposure markers or quasi-experimental designs that would directly test an exemplar claim (e.g., “fluoride at dose X causes harm Y”). Instead, the canon privileges studies of the correlates of belief. This is perfectly consistent with the paper’s procedural centre of gravity: a focus on within-person longitudinal variation in mood and belief scores, a closed system that is methodologically blind to the external evidence for or against a claim.
Taken together, this reference spine reveals authors who are method-faithful, category-comfortable, and symmetry-blind. They are faithful to a methodological orthodoxy where causal legitimacy is bounded by the longitudinal SEM toolkit. They are comfortable allowing psychometric constructs to serve as explanatory shorthand for complex beliefs. And they are blind to the critical asymmetry of their field: there is no parallel literature pathologising “institutional credulity”—the unquestioning acceptance of official narratives despite documented institutional failures—nor any demand for a two-sided evidence ledger that tests official claims with the same rigor applied to heterodox ones.
This is not a matter of bad faith but of a deeply embedded field orientation. To generate different answers, one must change the inputs: broaden the canon to include direct claim-tests and quasi-experiments, pair every belief-correlate citation with a balanced evidence ledger, and retire reifying labels in favour of instrument-bound phrasing. Without this, the reference spine will perpetually bend any analysis back into the same self-justifying category economy that produced it, ensuring the epistemic seal remains intact.
6. The Closed-World Frame: What the Items Themselves Assume
Table 3 of the paper quietly reveals the entire epistemic structure. The “Belief-in-Conspiracy-Theories” measure consists of eleven statements, each describing a claim already defined as false or discredited in its citation trail. Every item is either “adapted from” a prior paper that treated the statement as baseless (Swami et al., 2011; Lewandowsky et al., 2013; Jolley & Douglas, 2014) or “constructed for this study” to mirror that tone. Participants are invited to “indicate the extent to which you agree,” but not whether the claim might be empirically true in part, conditionally, or at a different time.
The result is a closed-world instrument: belief is defined as error, disbelief as correctness. A respondent who marks agreement with any of the eleven statements is, by design, scoring against evidence—because the references embedded in the table (e.g., “of course, contrails are simply frozen water vapour”) function as prior adjudications. The instrument therefore doesn’t measure openness to conspiratorial explanations; it measures distance from the sanctioned narrative corpus.
Look at the lineup:
- Items 1 & 4 (COVID origin and government motives) tie respondent belief to political legitimacy, not to empirical uncertainty or to evidence of political malfeasance and profit-driven corporate behaviour.
- Item 2 (New World Order) invokes elite-cabal macro-control, a grand-agency meta-plot that sits outside single-claim adjudication.
- Item 5 (Chemtrails) and Item 6 (Fluoride) target environmental-health dissent long before dosing or exposure levels are specified.
- Item 7 (Climate) and Item 8 (Vaccines) conflate ongoing policy and scientific disputes with flat falsity.
- Item 9 (Election theft) alleges procedural electoral fraud, an institutional-process claim that requires audit and tracing, not psychometric pathologising.
- Items 10 & 11 (Cancer cure, GMOs) moralise corporate scepticism as pathology.
Across the set, there is no parallel lane that asks about over-trust in official claims later proven wrong, no “mirror” items such as “Government statements about safety are always reliable.” A neutral instrument would include such institutional-credulity items, measuring over-trust with the same zeal used to measure dissent. The asymmetry is built into the scoring key: all doubt points toward dysfunction; no credulity points toward naïveté.
This is what “governance by method” looks like in miniature. The questionnaire isn’t neutral—it’s an enforcement device for epistemic alignment, canonising an assumed and utterly unquestioned fixed list of “false beliefs” as diagnostic material. Once that table enters the psychometric bloodstream, the very act of scepticism becomes a variable to be explained away, and “truth testing” exits the design.
7. The Cost—The Strategic Deferral of Truth and Debate
The most damaging cost of this architectural system is its active and strategic deferral of truth-testing and substantive debate. The entire apparatus—from the pre-selection of stigmatised claims to the translation of null findings into public narrative—is engineered to avoid ever having to engage with the evidence for or against a contested proposition. It substitutes a debate about what is true with a diagnosis of who is flawed.
This deferral operates on multiple levels:
- Methodological Deferral: The RI-CLPM model, for all its sophistication, is a perfect tool for ignoring the real world. It is a closed system that tracks internal psychological co-variation, deliberately blind to external evidentiary inputs. A person could change their belief because a hidden document was revealed, and the model would simply record it as an “autoregressive effect,” utterly obscuring the reason for the change. The method, by design, defers the question of truth to the simpler question of psychological correlation. (A schematic of this blindness follows this list.)
- Lexical Deferral: By adopting and reifying the “conspiracy theorist” label, the field pre-emptively decides that the salient fact about a person holding a heterodox belief is their psychology, not the content of their belief. This frames the entire research endeavour. The question is never “Is this claim valid?” but always “What motivates people who believe this invalid claim?” The lexicon itself defers truth-testing in favour of person-profiling.
- Narrative Deferral (The Public Seal): The ultimate deferral occurs in the public translation. The complex finding is simplified into a story that declares the motivating question—“Does distress cause these beliefs?”—effectively closed. This creates a powerful social cue that the time for debate is over. Why invest resources in testing the claim about fluoride or vaccine ingredients when “science” has shown that the beliefs are not driven by distress and are therefore a stable feature of a pathological class? The narrative defers public and policy engagement indefinitely.
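To see the closed world on the page, here is the standard RI-CLPM decomposition in schematic form, in our own notation rather than the paper’s. Each observed score (x for distress, y for the conspiracy-belief measure) is split into a stable between-person intercept and a within-person deviation, and only the deviations carry the cross-lagged “causal” paths:

$$x_{it} = \mu^{(x)}_t + RI^{(x)}_i + w^{(x)}_{it}, \qquad y_{it} = \mu^{(y)}_t + RI^{(y)}_i + w^{(y)}_{it}$$

$$w^{(x)}_{it} = \alpha_x\, w^{(x)}_{i,t-1} + \beta_{yx}\, w^{(y)}_{i,t-1} + \varepsilon^{(x)}_{it}$$

Note what the system lacks: any exogenous evidence term. A belief revision triggered by a document release between waves is simply absorbed into the residual ε or the autoregressive path α, which is precisely the blindness described in the bullet above.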
The Consequences of Perpetual Deferral:
- Policy Becomes Management: Resources shift from auditing claims to managing populations. Funding flows to “prebunking” campaigns and “resilience” training—efforts to inoculate the public against wrongthink—rather than to rigorous, independent science that tests the actual propositions in dispute.
- Accountability Evaporates: By pathologising dissent, the system launders institutional failure. When an official narrative collapses, there is no mechanism for accountability, because the focus was never on the evidence for the narrative, but on the psychology of those who doubted it. The question “Why did we believe that?” is deferred in favour of “Why did they disbelieve it?”
- The Public Sphere Atrophies: A healthy democracy relies on the contest of ideas based on evidence. This system makes that contest impossible by dismissing one side as epistemically broken before the debate can even begin. It teaches the public that certain topics are not for reasoning, but for diagnosis.
In the end, this is not a system that fails to find the truth. It is a system designed to avoid the inconvenient pursuit of truth altogether. It is a science of the footnote, endlessly debating the psychological margins while deferring the substantive text—the actual truth of the claims that move millions and shape public trust—forever. The “no-effect” finding is not a conclusion; it is a sophisticated mechanism for ensuring the most important questions are never seriously asked.
8. The Symmetry Upgrade—A Framework for a Non-Pathologising Psychology of Belief
A rigorous psychology of belief would not pre-judge its subjects by studying only those who dissent from institutional narratives. It would instead adopt a symmetrical framework, treating belief formation as a universal process, whether the belief in question is later proven true or false. The following architecture replaces moralising with methodology, forcing the science to clarify the mechanisms of belief, rather than to pathologise the believer.
1) The Two-Lane Protocol (The Core of Symmetry)
The preregistration mandates the study of two mirrored claims:
- Lane A (Off-Narrative): A specific, currently stigmatised claim (e.g., “The safety profile of this new pharmaceutical is not fully known”).
- Lane B (On-Narrative): A specific, institutionally endorsed claim. Crucially, this should include historical examples of institutionally-approved claims that were later disproven, such as: “Tobacco is safe,” “Thalidomide is ok for pregnant women,” “Iraq possesses Weapons of Mass Destruction,” “The primary motive for the war is humanitarian concern (and not control of oil resources).”
Each lane receives identical methodological scaffolding: the same population, the same measurement of belief strength and trust, and the same covariates. The hypotheses are symmetrical, testing the psychosocial correlates of belief in both directions.
2) The Evidence Ledger (Re-introducing Epistemology)
For each claim, the study maintains a live, one-screen ledger. This forces an engagement with the evidentiary context and historical precedent. The ledger includes:
- The strongest evidence for and against the claim at the time of the study (for current claims) or at the historical peak of belief (for past claims).
- A pre-registered “Falsifier Menu”: specific findings that would objectively change the claim’s credibility.
- For historical claims, the ledger shows the ultimate outcome, demonstrating that institutional credulity has real-world costs.
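As a minimal sketch of what such a preregistration artefact could look like, here is a hypothetical data structure for the two lanes and their ledgers. All field names are illustrative assumptions, not an existing standard.

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceLedger:
    """One-screen ledger attached to a single claim (hypothetical schema)."""
    strongest_for: list[str] = field(default_factory=list)
    strongest_against: list[str] = field(default_factory=list)
    falsifier_menu: list[str] = field(default_factory=list)  # preregistered
    historical_outcome: str | None = None  # filled only for past claims

@dataclass
class Lane:
    label: str   # "A (off-narrative)" or "B (on-narrative)"
    claim: str   # the evaluable proposition, stated at dose/time
    ledger: EvidenceLedger

lane_a = Lane("A (off-narrative)",
              "The safety profile of this new pharmaceutical is not fully known",
              EvidenceLedger(falsifier_menu=["Completed long-term RCT at dose X"]))
lane_b = Lane("B (on-narrative)",
              "Iraq possesses Weapons of Mass Destruction",
              EvidenceLedger(historical_outcome="Refuted (Iraq Survey Group, 2004)"))

# Symmetry check: both lanes carry identical methodological scaffolding.
assert type(lane_a.ledger) is type(lane_b.ledger)
```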
3) Counter-Specification (Rival Explanatory Lenses)
The preregistration commits to testing the same hypotheses through at least one rival lens.
If the primary lens is a within-person panel model (RI-CLPM) tracking internal psychological co-variation, the counter-spec must be a contextual or institutional lens. This would test if belief change is predicted by exposure to real-world data like:
- The publication of the Surgeon General’s report on smoking (for Tobacco).
- The emergence of birth-defect data (for Thalidomide).
- The findings of the Iraq Survey Group (for WMDs).
4) Symmetry Metrics: Update Elasticity & Authority Weighting
The study directly tests the core asymmetry by measuring (an illustrative sketch follows this list):
- Update Elasticity: The rate at which individuals revise their beliefs in both Lanes A and B when presented with pre-specified, high-quality contradictory evidence. For example, how did belief in “Iraq has WMDs” change after the Duelfer Report?
- Authority Weighting Coefficient: The degree to which belief in a claim is driven by the perceived credentials of the source versus the internal evidence for the claim itself. This measures the “trust fallacy” for both on- and off-narrative claims.
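Below is a minimal sketch of how the two metrics could be operationalised, under simplifying assumptions of our own (a difference-based elasticity; ordinary least squares for the weighting coefficient). Nothing here is an established instrument.

```python
import numpy as np

def update_elasticity(pre: np.ndarray, post: np.ndarray, shock: float = 1.0) -> float:
    """Mean belief revision per unit of (pre-rated) evidence strength.

    pre/post: belief scores before and after a preregistered disclosure
    (e.g., the Duelfer Report for "Iraq has WMDs"); shock: the rated
    strength of that evidence. Higher values = more responsive believers.
    """
    return float(np.mean(post - pre) / shock)

def authority_weighting(belief, source_credential, evidence_rating):
    """OLS coefficients: how much belief tracks credentials vs evidence."""
    X = np.column_stack([np.ones_like(belief), source_credential, evidence_rating])
    coef, *_ = np.linalg.lstsq(X, belief, rcond=None)
    return {"authority_beta": coef[1], "evidence_beta": coef[2]}
```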
5) Power as a First-Class Citizen & De-Reification
- The study pre-computes the Minimal Detectable Effect (MDE). Any “no effect” finding must be accompanied by its MDE. If policy-relevant effect sizes are smaller than the MDE, the conclusion is framed as “Underdetection Risk.” (A back-of-envelope sketch of this computation follows this list.)
- The study uses only instrument-bound language (“scores on the Trust Scale”), explicitly banning identity labels (“conspiracy theorists,” “the credulous”).
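What “pre-computing the MDE” means in practice, as a back-of-envelope sketch: we assume a standardized cross-lagged coefficient whose standard error is roughly 1/√n, which ignores covariate overlap and attrition; a real preregistration would simulate from the full model. The sample size below is a placeholder, not the study’s N.

```python
import math
from scipy.stats import norm

def approximate_mde(n: int, alpha: float = 0.05, power: float = 0.80) -> float:
    """Smallest standardized coefficient detectable with the given power,
    assuming SE ~ 1/sqrt(n) (a deliberate simplification)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z / math.sqrt(n)

# Placeholder n; the study's actual sample size would go here.
print(f"MDE at n=500: beta = {approximate_mde(500):.3f}")
# Any "no effect" claim should then read: "no effect larger than beta
# was detectable in this window", not "no effect exists".
```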
Net Effect: Studying Belief, Not Sanctioning Believers
This upgrade transforms the psychology of belief from a discipline that diagnoses deviance into one that illuminates how all people—citizens, scientists, and policymakers—navigate complex information environments where institutional claims can be, and have been, catastrophically wrong. By installing symmetry and historical context, it forces the science to ask the critical question: What are the psychological processes that allow both justified scepticism and unjustified credulity to persist, and how can we tell the difference? This is the foundation of a credible and socially responsible science of belief.
9. Why They Won’t Do This—The Vested Interests in the Category Economy
The Symmetry Upgrade presents one clear and, arguably, methodologically superior path forward. Yet the current system is structurally designed to avoid any such path. The resistance is not primarily about intellectual disagreement; it is about the powerful incentives and vested interests served by the status quo—a network that now extends from academic psychology departments directly into the halls of government.
- The Institutional Captivity of Academic Psychology: The authors of this study, like most in this field, are behavioural scientists located in psychology schools. Their entire disciplinary training, publication pipeline, and career advancement are built on a paradigm that quantifies individual-level traits and cognitive processes. Introducing a “Two-Lane Protocol” that forces engagement with history, political science, and institutional analysis falls outside their subject-based methodological mandate. The system rewards narrow, procedurally “clean” studies in high-impact psychology journals, not interdisciplinary work that challenges the field’s foundational categories.
- The Behavioural Insights Industrial Complex & The Career Incentive
This academic niche is not an isolated ivory tower; it is the research and development wing for a global policy movement. The rise of Behavioural Insights Teams (or “Nudge Units”)—pioneered by theorists like Cass Sunstein and Susan Michie, and implemented by governments worldwide—has created a direct pipeline for this research. These teams apply the very logic we see in this paper: framing complex societal problems as issues of individual “cognitive bias” to be corrected through state-led psychological interventions.
- The Funding and Career Pipeline: Research is funded and careers are built on stable constructs. “Conspiracy mentality” is a grantable, researchable, and citable construct. Deconstructing it threatens the intellectual capital of an entire subfield. This is not an abstract economy; it is funded by government agencies (health, defence, innovation), corporate and non-profit bodies whose mandates align with the “nudge” paradigm and with maintaining public trust in official institutions. These funders have a vested interest in understanding and managing public dissent, not in auditing the truth status of beliefs concerning institutional credibility or malfeasance, whether that be public scepticism of the biopharma industrial complex, or the validity and reliability of information stemming from politicians, bureaucrats, or state intelligence agencies. The money flows toward categorising the dissenter, not toward auditing the powerful.
- The Allure of the “Usable” Finding: Policymakers and media demand simple takeaways. A finding that identifies a “type” of person (the “conspiracy-minded”) provides a clear target for manageable interventions. In contrast, a symmetric finding that reveals high “institutional credulity” or rational public scepticism offers no simple policy lever. It is complex, uncomfortable, and implicates the very institutions that fund and rely on this research.
- The Unruly Subject: When the “Nudge” Meets a Shove: Ultimately, the figure of the “conspiracy theorist” represents a crisis on several fronts for the behaviouralist paradigm. They are a challenge to its authority, refusing the “correct” narrative. Although a fertile field for research and a seemingly endless source of data points that fuel the category economy, they are also a living testament to its failure, proving that human belief cannot be easily “nudged” into compliance. But to understand the true vested interest, we must state the underlying mission plainly: incentives favour certainty optics and programmable interventions over open-ended claim audits.
The behaviouralist, embedded in this institutional network, operates as a secular priest of the official narrative. Their tools are not falsification and open inquiry, but psychometrics and messaging. The programme’s comparative advantage is profiling and prebunking; symmetric claim adjudication sits outside its tooling and risk budgets. The Symmetry Upgrade is therefore heretical; it suggests that the official narrative itself could be subject to the same rigorous, sceptical scrutiny as the heterodox one. This is why it is structurally impossible for the current system to adopt it. To do so would be to abandon its core function: the management of public cognition in service of institutional stability, a task that requires a pre-designated deviant class to define the boundaries of acceptable thought.
In short, the current system is not broken; it is functioning perfectly for its stakeholders. It produces publications for academics, provides a “scientific” justification for Behavioural Insights governance, and offers a managerial solution to political dissent. The Symmetry Upgrade and its variations don’t just propose a new method; they challenge a thriving industry and the core logic of a modern, psychologically enabled state. Until these incentives change, the category economy will continue to prosper, and the science of belief will remain a science of sanctioning non-believers.
10. Fracture & Conclusion—The Unsealed Door
This analysis is not a verdict—it is a map of the stress lines. If diagnostic labels stand in for instrument scores, and if a single methodological lens writes the headline while the evidence ledger for both heterodox and official claims remains blank, then the question is not resolved. It is sealed. The problem isn’t the null; it’s that only nulls are permitted when claims are filtered through a design that cannot see the relevant causal window.
The path forward is not comfort, but testable risk. The architecture buckles the moment its seals are broken:
- Remove the label: Does the effect hold without the pathologising frame?
- Add the counter-specification: Does the inference survive a rival lens?
- Publish the two-lane ledger: Who updates, and how quickly, when a mirror claim fails?
- State the transfer conditions: Can this narrow window legitimately speak for the entire field?
This is not about defending “conspiracy theorists.” It is about defending science from easy closures and one-sided scrutiny. A field that meticulously profiles dissent while naturalising compliance is not neutral; it is an agent of epistemic asymmetry. When negation phrasing and transparency rituals are laundered into public verdicts, substantive debate is foreclosed, and identity politics replaces evidence.
The fix is not sentimental; it is procedural and ruthlessly symmetric. The next move is to scope every null, run rival models, publish two-lane ledgers, and speak only in instrument-bound language. Until these switches are thrown, a study does not close a debate—it contains it. A panel design will keep deciding a question that an exposure biology docket—and a rigorous audit of official claims—never got to test.
The fracture is now visible: it is the gap between a science that profiles people and a science that adjudicates claims. The choice of which side funds the next paper will determine whether psychology remains a tool for managing populations, or becomes a discipline for illuminating truth.
Published via Journeys by the Styx.
Mindwars: Exposing the engineers of thought and consent.
—
Author’s Note
Produced using the Geopolitika analysis system—an integrated framework for structural interrogation, elite systems mapping, and narrative deconstruction. Assistance from Deepseek for composition and editing.