Mindwars: Managers of Mistrust – Why Conspiracy Theory Theorists Won’t Turn Their Lens on Power
They say they’re protecting truth. But if you look closely at what they measure—and what they never touch—another story emerges.
Conspiracy theories have become the official psychological pathology of the twenty-first century, and Conspiracy Theory Theorists (CTTs) are the ones writing the diagnostic manual.
The Mindwars series has so far unpacked how this architecture was built: the CMQ that quietly turned distrust into a measurable defect; CONSPIRACY_FX, the ERC “consequences factory” that industrialised the idea that conspiracy beliefs are a kind of social toxin; and the global trust surveys that repackaged obedience to institutional narratives as a psychological health metric. Together they sketched a worldview in which “science” and “institutions” sit above the game, while citizens are profiled, segmented and corrected.

This article takes that frame and asks the next question: what are CTTs actually doing inside the wider choreography of power? It follows their work across climate, vaccines and “trust in science,” not to nit-pick their statistics, but to map their political role. It examines what they scrutinise and, more importantly, what they systematically leave offstage: liability shields, corporate capture, backroom committees, perverse incentives.
The claim is simple: CTTs don’t just study conspiracy believers. They help govern them. And understanding that function is the first step in deciding whether to treat their “science of conspiracy” as analysis—or as methodologically laundered narrative management.
Conspiracy Theory Theorists (CTTs) are not an abstract category here; this article looks at a specific, interlinked set of papers and treats them as a concrete object of study.
The core nodes, as listed in the references at the foot of this article, are:
- Hornsey et al.’s meta-analysis of the determinants and outcomes of belief in anthropogenic climate change, which establishes political identity as the dominant predictor and foregrounds the “consensus” and “source trust” heuristics.
- Hornsey’s review on reasons for COVID-19 vaccine refusal and recommended, non-stigmatising strategies for shifting hesitant individuals toward uptake.
- Douglas et al.’s global study of trust in scientists across 68 countries, whose headline finding is that “in most countries, most people trust scientists,” and which decomposes trust into competence, benevolence, integrity and openness.
- Douglas and colleagues’ work on science-related conspiracy beliefs, which ties conspiracist thinking directly to reduced vaccination, weaker climate action and lower institutional trust.
Taken together, these papers operationalise “conspiracy beliefs,” “trust in science” and “denial/scepticism” in precisely the domains highlighted by the ERC CONSPIRACY_FX mandate: the consequences of conspiracy theories for politics, vaccination and climate policy.
This article therefore continues and extends the CTT frame developed in the Mindwars series by analysing how these papers connect conceptually and institutionally to that mandate—what they choose to measure, which harms they centre, and which structural sources of distrust they systematically leave outside the model.
1. What They Say They’re Doing
In their own terms, the climate, vaccine and trust-in-science papers describe a tightly aligned mission.
- First, they argue that “facts are not enough”. In Hornsey’s COVID-19 vaccination piece, the puzzle is that simply repeating safety and efficacy data does little to shift those who identify as anti-vaccination. Instead, people “behave more like cognitive lawyers” than scientists, selectively attending to information that fits pre-existing conclusions; successful communication “requires deep listening” to fears, worldviews and ideologies, and persuasion that targets underlying “attitude roots” rather than “an exclusive focus on facts and data.” The climate scepticism toolkit makes the same move: classic deficit-model communication has “limited effects,” so scholars must understand motivated reasoning, identity and disinformation infrastructure to answer “why so many people are sceptical of climate science, and what can be done about it.”
- Second, they frame themselves as responding to challenges to the epistemic authority of science. The 68-country trust survey opens by noting that public trust in scientists underpins evidence-based policymaking and crisis management, but that this authority has been “challenged by misinformation and disinformation, a ‘reproducibility crisis’, conspiracy theories and science-related populist attitudes.” Anti-science attitudes, even among a minority, are said to raise concerns about a “potential crisis of trust in science” that could undermine the role of scientists in supporting policy.
- Third, they explicitly position their work as providing tools for non-stigmatising communication. Hornsey’s vaccination article emphasises “respectful and inclusive communication” and warns that stigma can drive hesitant people into echo chambers; non-stigmatising engagement is presented as both ethically desirable and “a pre-requisite for enabling positive change.” The trust survey similarly recommends avoiding “top-down communication” and encouraging “genuine dialogue” to maintain and increase trust.
Taken together, these authors present themselves as neutral psychologists and survey scientists whose role is to help rational, evidence-based policy navigate an environment distorted by misinformation, conspiracies, and identity-driven reasoning. The question for the rest of this article is simple: if we accept their own framing for a moment and then turn the psychologist’s lens back onto them—onto what they choose to model and what they bracket out—what do we see?
2. Asymmetric Scepticism: Who Gets Studied, Who Gets Trusted
Across this literature there is a consistent pattern: scepticism is directed downward at publics, deference upward at institutions. That is the basic asymmetry.
On the public side, citizens are described as:
- Identity-protective reasoners. In both the climate and COVID vaccination work, people are said to behave “more like cognitive lawyers than scientists,” selectively endorsing information that fits their political or worldview commitments.
- Susceptible to conspiracist thinking. Douglas’s review of science-related conspiracy beliefs treats a general conspiracist mindset as a stable individual difference that predicts reduced vaccination, weaker support for climate action and lower trust in institutions.
- Driven by ideology more than facts. Hornsey’s meta-analysis reports that political ideology and party identification are the strongest correlates of belief in anthropogenic climate change, with education and knowledge playing a smaller, sometimes conditional role.
On the institutional side, science and its surrounding bodies are treated quite differently. In the 68-country trust study, “scientists” and “the scientific method” are positioned as the benchmark of epistemic reliability; the concern is that their “epistemic authority” has been challenged by misinformation, a reproducibility crisis, conspiracy theories and science-related populism. Scientists are generally more trusted than governments and business, and the recommended response is to improve openness and communication to maintain that trust.
Crucially, the same analytic tools used on publics are almost never turned back on these institutions. When the trust paper mentions a reproducibility crisis, historical injustices or corporate influence, these appear as factors that may undermine trust rather than as independent objects of investigation into whether specific expert claims and systems are, in fact, systematically unreliable or captured. In the vaccine and climate papers, liability shields, funding structures, conflicts of interest, revolving doors or back-room policy processes are absent as variables. The emphasis stays on the perception of corruption or capture, not on measuring the behaviour of the institutions themselves.
This is what we mean by asymmetric scepticism. The CTT literature applies sceptical, fine-grained psychological analysis to ordinary people: their identities, biases and conspiracist styles. It applies comparatively little sceptical scrutiny to the organisations whose narratives it takes as baseline—scientific institutions, regulators, expert panels, and the broader political–economic structures in which they operate.
The result is a one-way critical lens: publics are modelled as motivated reasoners, institutions as motivated only by truth plus occasional communication shortcomings. Yet the core theoretical claim in this work is that humans in general are identity-driven, motivated reasoners. Which raises the question that will run through the rest of this article:
If you believe humans are identity-driven, motivated reasoners, why would that suddenly stop at the laboratory door, the journal office, or the ministry entrance?
3. Internal Critiques of Science vs How CTT Papers Treat Them
The asymmetry becomes clearer if we line up major internal critiques of science against how they appear in the CTT corpus. Below, each critique is a heading; underneath is how it is handled in the papers we are analysing.
- Anthropogenic Climate Change Itself (is ACC real / how certain?):
ACC is treated as fully settled by “climate science.” None of the texts treat “is ACC happening / are humans driving it?” as a live scientific question. Climate scepticism is analysed as a psychological / political outcome—driven by values, ideology, identity, conspiracist mindset and elite cues—rather than as a competing evidential position. Scientific consensus is the reference point; disagreement is an obstacle to mitigation, not a potential source of correction.
- Replication Crisis / Ioannidis-style “Most Findings Are False”:
The reproducibility and integrity crises in science (see John Ioannidis’s famous paper “Why Most Published Research Findings Are False”) are mentioned mainly in the global trust paper as one item in a list of things that “challenge the epistemic authority of science” in the eyes of the public, alongside misinformation, conspiracy theories and science-related populism. The crisis is not integrated as a reason to downweight confidence in the literatures the authors themselves rely on. In practice it is treated as a trust problem, not as a structural reason to be more cautious about their own evidential inputs.
- Perverse Incentives in Pharma / Vaccine and Drug Manufacture:
Concerns about big pharma incentives appear primarily as content of public mistrust: people suspect companies of hiding harms, chasing profit and manipulating science. That suspicion is then modelled as a driver of conspiracy belief and hesitancy, not as a hypothesis to investigate on its own. Concrete incentive structures—blockbuster economics, trial ownership, selective publication—are not analysed as parallel objects of inquiry; they are folded into “reasons people distrust” and then used as inputs to persuasion/trust models.
- Corporate Influence Over What Gets Studied and How:
In climate, fossil-fuel funding and think-tanks are acknowledged as organised disinformation infrastructures that distort public understanding and policy. That is the one area where corporate capture is treated as real, ongoing power rather than just a belief. In vaccines/biomedicine, corporate influence mostly appears as something people believe, not something structurally modelled. There is no systematic attempt to quantify how corporate capture biases the evidence base the authors treat as “science.”
- Politicisation of Science (Parties, Ideology, Cultural Warping):
Politicisation is heavily foregrounded—but almost always outside the lab. Political identity, partisan cues and culture-war media are treated as the main forces shaping public attitudes towards climate and vaccines. The idea that internal scientific agendas or outcomes might be politicised by funders, agencies or states is barely touched. Politicisation is something that happens around the production of science, not inside its epistemic machinery.
- Peer Review Integrity, Publication Bias, Journal Incentives:
Peer review, editorial practice and publication incentives are essentially absent as objects of analysis. Peer review functions as a background mechanism that generates the literatures they meta-analyse. There is no serious examination of how reviewer incentives, editorial capture, prestige dynamics or “publish or perish” pressure might systematically distort what counts as “the evidence” in climate psychology, vaccine attitudes or trust research.
- Ghostwriting, Pharma Marketing, Key Opinion Leaders:
Ghostwriting, marketing departments and KOL branding—well-documented features of pharma publishing—do not appear as categories. Where analogous practices are alluded to at all, they are absorbed into generic phrases like “people think corporations manipulate science,” which are then analysed as belief content rather than structural conditions to be modelled.
- Meta-scientific Critiques of How Consensus Is Produced:
Consensus is treated instrumentally: as something that (a) indicates what is true and (b) can be signalled to publics to increase acceptance. There is no engagement with the possibility of consensus cartels (coordinated gatekeeping, funding alignment, narrow networks) producing stable but distorted consensuses. The only “bad” consensus actors examined structurally are outside science proper (e.g. fossil-funded denial networks), not inside mainstream institutions.
- Science as an Industry (Careers, Prestige, Institutional Survival):
The human side of science appears mainly via trust facets (competence, benevolence, integrity, openness) and “scientists” as a social elite. The deeper industrial structure—careers depending on positive results, journals needing attention, universities chasing rankings—is not modelled. The psychology and trust literatures are treated as if they sit above these dynamics, even though they are produced inside the same incentive field.
- Legitimate Lay Critique vs “Conspiracy”:
Grievances that overlap with real structural problems—historical abuses, racism in medicine, corporate capture, skewed funding—are acknowledged mostly as inputs into conspiracist thinking and low trust, not as prompts to re-open the underlying cases. Analytic effort goes into understanding “why people don’t trust” and “how that affects compliance,” rather than asking “when are they right not to trust, and what does that imply for what we prescribe?”
What This Pattern Implies:
Across these headings, the treatment is consistent:
- Core institutional narratives (climate consensus, vaccine efficacy, science-as-authority) are held fixed as background.
- Structural critiques (replication, capture, incentives) are registered mainly as reasons for distrust, not as reasons to recalibrate those narratives.
- Where corporate or political power is admitted (fossil disinformation), it is largely externalised; inward-facing critique of mainstream scientific and regulatory structures is absent.
This is another expression of asymmetric scepticism: the deep debates about how trustworthy the system is are routed around, while the management of public doubt becomes the primary object of “conspiracy” research.
4. Case Studies — Climate, Vaccines and Trust in Science
If the CTTs are cartographers of mistrust, their maps are notable for their systematic omissions. They chart the swamps of public delusion with exquisite detail but treat the palaces of institutional power as blank spaces—neutral, unproblematic, and beyond the need for survey. The following case studies—on climate, vaccines, and the very science of “trust”—hold their work to its own standard.
Case Study 1: Climate – Denial, Consensus, and the Half-Seen System
What They Map Well (Albeit In Their Own Terms)
Hornsey’s meta-analysis of 25 polls and 171 academic studies across 56 nations shows clearly that values, ideologies and political orientation overshadow classic “knowledge” variables as predictors of belief in anthropogenic climate change. Political affiliation is the largest demographic correlate of climate belief; its effect is roughly double that of any other demographic variable, and education, income, sex, race and age all have only trivial relationships with climate belief. In the psychological domain, belief correlates most strongly with source and consensus heuristics—trust that “scientists are trustworthy so the scientific orthodoxy must be true” and the belief that “there is scientific consensus around climate change, and consensus implies correctness.”
The later toolkit piece with Lewandowsky develops this picture into a narrative: climate scepticism is best understood through motivated reasoning, political followership, and disinformation infrastructures. Meta-analytic evidence is used to show that climate acceptance is more closely tied to ideologies around free markets, individualism and hierarchy than to education or self-reported knowledge. Climate beliefs are situated within culture-war faultlines, where party cues and group loyalties drive attitudes; once an issue is sucked into this loop, “facts become rubbery and subjective.”
On the structural side, the toolkit acknowledges fossil-funded disinformation systems: think-tank networks, media channels and lobbying that systematically distort the public conversation and slow the transition. Newspaper content analyses are cited showing that only in a handful of countries—most notably the US, UK and Australia—has media coverage turned scientific consensus into an ideologically charged “debate,” with intensity highest where fossil-fuel interests and Kyoto commitments make decarbonisation particularly costly.
The newer Gourville et al. paper adds a finer-grained analysis of climate conspiracy beliefs. It distinguishes “denialist” conspiracies (climate is a hoax, scientists and politicians exaggerate or fabricate for gain) from “warmist” conspiracies (corporate and industrial interests suppress or downplay climate risks and block mitigation technologies), showing that denialist beliefs are associated with higher conservatism and greater denial of anthropogenic climate change, while warmist beliefs show the opposite pattern and correlate positively with climate engagement.
Taken together, this cluster of work convincingly demonstrates three points:
- Political identity and ideology dominate education and knowledge as predictors of climate belief.
- Consensus and trust in scientists are powerful heuristics that shape acceptance.
- Fossil-funded networks and partisan media ecosystems act as disinformation infrastructures that amplify denial and delay.
This is the part of the system the CTTs are willing to map, and they map it well.
What They Refuse to Map
The same tools are not applied to the other half of the climate system: the way consensus is produced, curated and coupled to policy and capital.
Nowhere in these papers do we see:
- Internal gatekeeping in climate science: who sits on key editorial boards; how IPCC lead authors are selected; how outlier views or methodological critiques are handled inside the field; what happens to dissenters who question mainstream impact models or preferred solutions.
- Political editing of “Summaries for Policymakers.” The toolkit assumes that “decades ago, scientists reached consensus” and that this message has been a prominent, persistent feature of global conversation; it does not ask how much of what policymakers and publics actually see is the result of negotiation and compromise between scientists and governments in the final summaries.
- The role of “green capital” and consultants in steering which climate solutions are “scientifically respectable”: market-based mechanisms vs direct regulation, CCS and offsets vs rapid phase-out, financialised “net zero” pathways vs structural degrowth. These interests appear, if at all, as part of the solution space, not as potential sources of capture or agenda-setting.
In the conspiracy frame, denial networks are treated as the visible conspiracy-like structure: industry-funded think-tanks, PR campaigns and partisan media are modelled as coordinated efforts to undermine science. But consensus formation itself is effectively off-limits to the conspiracy lens. There is no attempt to model whether funding alignment, institutional hierarchies and epistemic cartels might also coordinate narratives—less crudely than oil-funded denialists, but no less consequentially for which futures are thinkable.
Even in the Gourville paper, where “warmist” conspiracies explicitly reference documented financial ties and partnerships between fossil-fuel companies and universities used to facilitate climate disinformation, that content remains on the “belief” side of the ledger. The real-world networks named in passing (trade associations, strategic partnerships, think-tank ecosystems) are not turned into structural variables; they serve as illustrative anchors for scales that classify people according to how strongly they endorse denialist or warmist narratives. The system is still: conspiracy as belief, not conspiracy as institutional practice.
So we end up with a simple asymmetry:
- Exxon and the classic denial complex are legitimate targets of structural suspicion.
- The “97% consensus,” IPCC process, and broader climate-science–policy machinery are accepted as neutral baselines that only need to be better communicated and defended.
The question practically asks itself:
Why is it legitimate, inside this literature, to treat fossil-funded denial as a structural manipulator of belief, but not to ask whether the consensus itself, and the solution set it privileges, might be shaped by subtler forms of capture, gatekeeping and capital alignment?
You don’t have to think the consensus is wrong to see the problem. What’s at issue is the refusal to even look—the decision to aim the conspiracy lens at citizens and denial networks, but not at the institutions and coalitions that define, package and deploy “the science” that everyone else is judged against.
Case Study 2: Vaccines – A System That Looks Rigged, a Literature That Won’t Touch It
The Loaded Language of Deviance
The central term in this space, “vaccine hesitancy,” is not a neutral descriptor; it is a framing device. It collapses a wide spectrum of positions—from cautious questioning through conditional acceptance to outright, principled refusal—into a single psychological category defined against a presumed norm of smooth compliance. “Hesitancy” is a state of mind: it implies unwarranted delay, anxiety, indecision. It subtly shifts the issue from a possible political, ethical or epistemic disagreement (“I reject this product or this programme”) to a personal shortcoming (“I can’t bring myself to go along, yet”).
In doing so, it constructs a subject whose primary characteristic is a deficit of alignment. In reality, many of those labelled “hesitant” do not see themselves as undecided at all; they have already reached a clear conclusion—sometimes about particular vaccines, sometimes about a class of products, sometimes about the institutional complex delivering them. The vocabulary cannot accommodate this; it has no positive language for informed, principled refusal, only for departure from the uptake norm.
Within this loaded frame, CTT work centres squarely on the psychology of non-compliance rather than on the structure of the system being refused.
Hornsey’s COVID-19 vaccination paper is a good anchor. It classifies “reasons why people may refuse COVID-19 vaccination” into a set of attitude roots: low perceived risk from the virus, heightened concerns about side-effects, conspiracy beliefs about pharma and government, distrust in institutions, and broader ideological commitments around liberty and purity. The key claim is that people “behave as cognitive lawyers”, marshalling information to defend these prior conclusions. The proposed remedy is not more data but better targeting of these roots, via tailored, non-stigmatising engagement that acknowledges concerns and nudges people toward acceptance.
Douglas’s review of science-related conspiracy beliefs extends this logic. A general conspiracist style is treated as a relatively stable individual difference that predicts a pattern of “anti-science” attitudes and behaviours: lower intention to vaccinate, preference for alternative treatments, reduced compliance with public-health guidance, and greater endorsement of other science-targeted conspiracies (climate hoax, GMO cover-ups, etc.). Conspiracy beliefs are portrayed as maladaptive meaning-making tools that rarely satisfy the needs (for certainty, control, belonging) that draw people to them.
Across these papers, three themes are consistent:
- Refusal is explained via mistrust, fear, identity and conspiracist style, not via any explicit evaluation of the system’s performance.
- The recommended response is empathetic, non-stigmatising communication that validates feelings but gently steers people toward institutional recommendations.
- Non-compliance is modelled as a public-goods problem—a behavioural externality that institutions are justified in mitigating.
On their own terms, this is coherent. But it pathologises a range of responses—from critical engagement to principled rejection—by leaving out what the system looks like from the bottom up.
What the System Actually Looks Like From Below
From a citizen’s perspective, the vaccine/pharma/regulatory complex has several features that do not require any speculative narrative to look structurally skewed or epistemically uncertain. The valid reasons for scepticism, which the CTT literature systematically overlooks or re-categorises as irrational, include:
- Epistemic shortcuts. Novel products, especially under Emergency Use or conditional authorisation, have been approved on the basis of relatively short-term, often underpowered trials, with explicit acknowledgement that long-term safety and effectiveness profiles are unknown at the time of rollout. That is not a “theory”; it is how the authorisation regimes are designed. To question the long-term risk–benefit profile under those conditions is a straightforward epistemic concern, not evidence of a cognitive defect.
- A proven record of misconduct. Major pharmaceutical companies have a well-documented history of fraud and misconduct: criminal and civil settlements for misleading marketing, hiding harmful side-effects, manipulating or fabricating trial data, and illegal promotion. This is a matter of public legal record, not internet rumour. It provides a perfectly rational, evidence-based foundation for generalised caution about claims and assurances coming from the same sector.
- Justified political distrust. Many citizens bring to the vaccination context a long memory of political deception, geopolitical blunders, and spin-heavy crisis communication. Public health messaging is often tightly coupled to political leadership and electoral cycles. Asking people who do not trust governments to trust medical interventions aggressively championed by those same governments is not a neutral request; it is a psychological and political demand.
- The censorship of legitimate debate. During COVID, highly credentialed virologists, epidemiologists and cardiologists who questioned elements of the dominant narrative—on masking, school closures, natural immunity, or rare adverse events—were deplatformed, demonetised, or informally blacklisted. Whether each of those positions was ultimately correct is separate from the point: when authorities and platforms narrow the “marketplace of ideas” by fiat, whatever remains looks less like organic consensus and more like a managed line, fuelling the very suspicions the CTTs claim to study.
- Structural asymmetries. On top of this are the underlying asymmetries: liability shields and indemnities that socialise risk and privatise profit; revolving doors between regulators, advisory bodies and industry; ghostwriting and KOL-driven marketing that blur the distinction between independent science and promotion; and, in crises, abrupt or poorly explained shifts in guidance that make earlier confident statements look unreliable or opportunistic.
Even if one believes vaccination programmes are net beneficial, these features collectively describe a system that looks structurally tilted and epistemically fragile. It is not irrational to infer that such a system may sometimes privilege speed, optics and profit over full transparency or maximal safety. You do not need a baroque story about depopulation to arrive at that conclusion.
How the Theorists Treat This
The CTT literature does not completely ignore these realities, but it does something specific with them.
They appear, when they appear at all, as soft background context. Hornsey notes that some hesitancy is grounded in understandable concern about past abuses or corporate behaviour, before pivoting quickly to the claim that robust regulation and ethics frameworks are now in place and that general mistrust is therefore excessive. Douglas acknowledges that believers often see science as elitist, corrupt or dependent on corporate interests, but treats that perception primarily as fuel for conspiracy thinking, not as a claim to be investigated in its own right.
Critically, these issues are not treated as independent objects of empirical scrutiny. There is no systematic modelling of how emergency authorisation criteria, liability design, funding dependence, revolving-door appointments, or censorship practices shape the trustworthiness of the system and, by extension, the rationality of scepticism. Instead, they are folded into the input side of psychological models: reasons why some groups mistrust, which explain higher scores on conspiracy and hesitancy scales, which in turn predict lower uptake.
The central research question becomes:
- not “Is this system, as currently organised, producing patterns that reasonably undermine trust?”
- but “Why do some people form conspiracy beliefs and mistrust, and how does that harm compliance?”
That inversion is decisive. Distrust is recoded as symptom—a deviation from the ideal trusting subject—rather than as a potentially accurate reading of how power, incentives and information flow actually work in this domain. The underlying configuration is treated as baseline reality, perhaps in need of better explanation and more “openness,” but not of sustained suspicion or redesign.
In short: instead of asking “Is the system rigged?”, the CTT literature asks “Why do some people think it is?” It then builds its measures, models and “solutions” around that second question. Conspiracy theories and so-called “vaccine hesitancy” become psychological contaminants to be managed; the structural features that make conspiratorial interpretations plausible remain largely offstage.
Case Study 3: Trust in Scientists – The Legitimacy Dashboard
The soothing headline vs messy reality
The 68-country “trust in scientists” survey is the keystone of the CTT trust narrative. Its abstract opens by directly challenging “the idea that there is a widespread lack of public trust in scientists,” reporting that “in most countries, most people trust scientists” and that people generally believe scientists should be more involved in society and policymaking.
The study finds that average trust is moderate to high almost everywhere; “no country has low trust in scientists on average.” Scientists are described as trusted actors in most societies, and trust in scientific methods is even stronger: prior work cited shows that less educated people often trust scientific methods more than scientific institutions.
The authors explicitly frame the project as a rebuttal to a “dominant narrative” of a crisis of trust. Their discussion states that the survey “challenges the idea that there is a widespread lack of public trust in scientists” and “refutes the narrative of a wide-ranging crisis of trust,” while confirming that trust in both scientists and methods is generally high. The main statistical figure (Fig. 1 in the paper) visualises national mean trust scores comfortably above the scale midpoint across most countries.
So the headline takeaway is clear:
- There is no generalised trust collapse
- Scientists are more trusted than governments in many countries, especially where corruption is high
- Methods enjoy even more confidence than the people and institutions who use them.
This is exactly the kind of global reassurance policymakers and science funders were looking for after COVID.
The caveats they bury
Underneath the reassuring narrative, several technical and substantive caveats make the picture less tidy.
- First, measurement invariance. The authors are explicit that their 12-item scale (competence, benevolence, integrity, openness) achieves only configural invariance across countries—no metric or scalar invariance. In plain terms: the basic four-factor structure holds, but the meaning and weight of the items differ enough across contexts that direct comparisons of absolute levels are not strictly justified. Despite this, the paper still talks in global terms about “high trust” and uses cross-country comparisons in narrative form.
- Second, openness is systematically the weakest trust dimension. Across countries, perceptions of competence and integrity are relatively high; what lags is the sense that scientists are open—receptive to feedback, transparent about funding and data, and willing to communicate with the public. The authors explicitly recommend that scientists wishing to gain trust should “work on being more receptive to feedback and more transparent about their funding and data sources, and invest more effort into communicating about science with the public.” That is a non-trivial signal of a legitimacy gap, but it is framed as an opportunity for improvement rather than a structural failing.
- Third, tail risk is acknowledged but not explored. The paper plainly notes that, although national averages are high, “lack of trust in scientists by even a small minority needs to be taken seriously,” because distrustful minorities can influence policymaking and public behaviour if they are organised, receive media coverage, or occupy positions of power. This is an important admission: it concedes that a relatively small group can veto or distort policy translation of science. But beyond this paragraph, the tail is analytically backgrounded; the focus remains on characterising average trust and its correlates, not on mapping how those minorities form, how their grievances differ, or whether some of their distrust is substantively warranted.
The paper also briefly acknowledges that low trust may sometimes be warranted, citing science’s historical role in racism and unethical experimentation as examples that have reduced research participation in some populations. However, these examples appear as reasons why some people distrust science, not as starting points for assessing whether current institutions have corrected those problems or still reproduce them.
Functionally, what this gives institutions
Taken together, the trust survey functions as a legitimacy dashboard for institutions—a mixed comfort-and-controls instrument.
It provides a reassurance script:
- “Most people trust scientists”
- “There is no widespread crisis of trust”
- “Trust is even higher in places where governments are seen as corrupt, suggesting science is a relatively clean alternative.”
For anxious elites, this is a direct answer to a politically salient fear: the science system is not losing the public wholesale.
It offers a technical fix:
- Trust is decomposed into competence, integrity, benevolence and openness, and openness is flagged as the weak link.
- The recommended interventions are largely communicative and procedural: be more transparent about funding and data, listen more, engage in “genuine dialogue,” tailor communication to different demographic groups, and involve trusted non-scientific communicators.
In other words: adjust the optics and contact surface, not the underlying distribution of power, incentives or accountability.
And it supplies a tail-management frame:
- The main analytical energy goes into explaining variance in trust by demographic and ideological variables (gender, age, education, social dominance orientation (SDO), science-related populism, political orientation).
- Distrusting minorities are framed as a manageable risk: small but potentially disruptive blocs whose attitudes must be monitored and addressed so they do not obstruct evidence-based policymaking.
What does not appear is a symmetric analysis of institutional untrustworthiness: no mapping of where scientists or scientific institutions have systematically failed, misled, or been captured; no attempt to correlate trust with concrete indicators of institutional performance, transparency, or independence. Those factors enter mainly as context for attitudes, not as objects of diagnosis.
So functionally, the message to institutions is:
- Relax: there is no generalised collapse; trust is mostly fine
- Signal openness: tweak communication and transparency to shore up any weak spots
- Watch the tails: focus attention on small but noisy distrust groups whose political impact could be outsized.
From the perspective of asymmetric scepticism, this is telling. There is a full, preregistered, Many Labs–style project devoted to understanding who distrusts scientists and why. There is no parallel Many Labs project devoted to asking, in a similarly systematic way, when and where science and its institutions have been untrustworthy, and what structural reforms that might imply.
Which leads naturally to the question this case study is meant to foreground:
If a system tells you “most people trust us, and where they don’t it’s largely a perception issue,” what are the odds it will genuinely interrogate its own failures—rather than doubling down on better messaging and more finely tuned monitoring of the doubters?
5. Conspiracy as a One-Way Mirror – The Theocracy of Scientism
Across the climate, vaccine and “trust in science” strands, the CTT worldview can be summed up in one picture: conspiracy as a one-way mirror inside a theocracy of scientism.
On one side of the glass are citizens. They are the explicit object of study. In the climate work, those who doubt or downplay anthropogenic climate change are classified as “denialists,” their positions explained via political identity, values and conspiracist mindset. In the vaccine papers, those who refuse or delay are mapped in terms of fear, mistrust and “conspiracy beliefs” about pharma and government. In the science-related conspiracy literature, a general “conspiracy mentality” is treated as a trait that predicts lower vaccination, weaker climate action and diminished trust in institutions.
On the other side of the glass are institutions: scientific bodies, regulators, expert committees, global surveys of “trust in scientists.” These are rarely objects of symmetrical scrutiny. They appear mainly as:
- victims of populism, denial, misinformation and conspiracy theories
- targets whose “epistemic authority” must be protected
- heroes who generate the facts that rational citizens are supposed to align with.
The mirror is one-way in three consistent ways.
- In what gets labelled “conspiracy”: Conspiracies about climate (IPCC hoax, exaggerated warming) are studied. Conspiracies about vaccines (pharma hiding harms, governments colluding with industry) are studied. Conspiracy talk about “corrupt elites” and “captured science” is studied. But actual coordination in lobbying, regulatory capture, ghostwriting, closed-door negotiations, or cartel-like agenda-setting is mostly left to investigative journalists, NGOs and political scientists. For CTTs, these dynamics enter the model primarily as belief content, not as co-equal empirical phenomena.
- In how moral status is assigned: Those who accept climate consensus, follow vaccine schedules and express high trust in science occupy the “good subject” category: rational, pro-social, psychologically healthy. Those who doubt or resist cluster in the “problem” category: high in conspiracy mentality, science-related populism, or anti-scientific attitudes. The language is technical, but the structure is familiar: orthodoxy vs heresy. There is no corresponding psychometric apparatus for institutions themselves—no capture-index for regulators, no “scientism scale” for expert networks whose confidence outstrips their error-correction record.
- In what counts as an explanation: When citizens mistrust institutions, the explanation is sought in psychology: identity protection, unmet epistemic/existential/social needs, exposure to misinformation. When institutions disappoint or mislead—through non-replication, conflicts of interest, secrecy—that is typically coded as a challenge to trust or a communication problem, not as a reason to question whether the official narratives being defended are always deserved.
Seen this way, the CTT position is less “science” than a form of scientism: a state-backed epistemic religion. Institutional science functions as the Church; consensus statements are dogma; public scepticism is heresy. CTTs play the role of theologians and inquisitors. Their job is not to interrogate the dogma, but to diagnose the psychological and political pathologies that lead to heresy, and to develop more effective, “non-stigmatising” pastoral techniques to bring the doubters back into alignment.
In that framework:
- The Church is effectively infallible at the level that matters; questions about consensus formation, capture and incentives are doctrinal issues, not proper research targets.
- Heresy is pathology: distrust becomes a symptom of a flawed mind (“conspiracist mindset,” “science-related populism”), not a potentially rational response to institutional failure.
- Managing the flock is the end goal: the research programme optimises narrative control and compliance, not structural reform.
This is the core asymmetry the article traces: we now have a detailed science of why the powerless imagine conspiracies, and almost no science of when the powerful behave in ways that make those imaginations plausible. In that sense, the “science of conspiracy” functions as methodologically laundered apologetics—a tool of narrative management in a struggle over who gets to define reality, rather than a symmetric investigation of how trust and power actually work.
6. Follow the Incentives: Why the Blind Spots Make Sense
The asymmetry in CTT work is not best explained by “bad people with bad motives.” It is better explained by where they sit in the system and what that system pays them to do.
At the surface level, three forces are obvious:
- Role alignment. These researchers are funded, cited and invited into policy spaces to help governments, health agencies, climate institutions and platforms manage mistrust and non-compliance. Their value proposition is: we can tell you who doesn’t believe you, why, and how to get them onside. There is no parallel demand for “please map how captured and untrustworthy we have become.”
- Disciplinary framing. The “psychology of conspiracy” starts from a tacit axiom: science is basically right; people vary in how they respond to it. Within that frame, turning the tools onto “science + institutions” themselves—asking whether they are structurally biased, captured or systematically misleading—is coded as “political,” “STS,” or someone else’s job.
- Career incentives. Critiquing publics (their biases, heuristics, conspiracist styles) yields grants, media calls and policy impact. Sustained critique of institutional corruption and capture, at scale, is a reliable way to trigger reputational risk, grant reviews that say “outside scope,” and subtle exclusion.
The Mindwars series shows how these forces are concretely implemented in the CTT architecture.
From CMQ Original Sin to the Consequences Factory
The Original Sin showed how the Conspiracy Mentality Questionnaire (CMQ) did the hinge move: instead of adjudicating specific claims, it pre-sorted a set of state- and elite-critical propositions into a “conspiracy theory” bin, then validated that bin by its adjacency to schizotypy, paranoia and fringe beliefs. Generalised political distrust became a measurable pathology. Distrust itself turned into a diagnosis.
That move created the certification racket: the very institutions being scrutinised get to decide which suspicions are “premature” (conspiracist) and which eventually become “justified” (scandal, failure, capture) — always after the fact, never feeding back into the metric.
The Consequences Factory then mapped how the ERC CONSPIRACY_FX grant industrialised this logic. The pipeline is straightforward:
- Antecedents: CMQ/GCB segment the “conspiracy-minded” subject.
- FX (Harms): those scores are correlated with a curated list of “negative consequences” — lower vaccination, climate inaction, reduced institutional trust, social conflict.
- Interventions: “prebunking,” inoculation and engagement protocols are designed to manage this problem population in politics, vaccines and climate.
It is not open exploration; it is a service contract: build psychological infrastructure that helps states and platforms deal with inconvenient scepticism in three priority domains.
You can almost write the mantra: Pathologise → Correlate → Inoculate → Dismiss.
CMQ manufactures the subject, CONSPIRACY_FX catalogues the harms, and the “science of conspiracy” supplies the cure.
From Consequences to Perceptions: The Governance Stack
From CONSPIRACY_FX to Perceptions of Science showed how the big Trust in Scientists (TISP) project plugs into this machinery as the second tower: a Many Labs legitimacy dashboard.
- Tower A (CONSPIRACY_FX): “conspiracy beliefs” are operationalised as a measurable pathogen with negative consequences for policy, cooperation and trust.
- Tower B (TISP): “trust & deference” to science and scientists are operationalised as health; “science-related populism” and boundary-setting are operationalised as pathology.
Dissent is thus defined as consequence-bearing deviance, and TISP upgrades this into a global KPI of deference: who trusts, who doesn’t, and how to adjust “openness” signals to keep the averages high.
You don’t need a smoke-filled backroom for this. A €2.5m ERC hub and a Harvard-branded Many Labs consortium, both premised on “public scepticism as the problem,” will naturally generate work that protects institutional authority and keeps structural critique off the table.
The Self-Sealing Paradigm
The Self-Sealing Science of CTTs reconstructed the CTT worldview as a cognitive firewall that keeps these incentives safe. Six axioms dominate:
- Primacy of the psychological: scepticism is a mindset, not an evidence stance.
- Generality of the trait: distrust in one domain implies a global “conspiracy mentality.”
- Content insignificance: experiments that fail to move high-CMQ participants with extra information are read as “content doesn’t matter; predisposition is everything,” not as “our sources lack credibility.”
- Institutional benignity: official narratives are the baseline reality.
- Pathologisation of distrust: persistent dissent is coded as a person-level defect.
- Locus of the problem: the problem is always in the individual, never in institutional conduct.
The toolchain embodies this: CMQ as trait scaffold (“move away from concrete theories”), latent profile analysis (LPA) to “confirm” uniformity in a pattern the tool built in, experiments where failure to persuade becomes evidence of pathology. All of this sits atop a field — social psychology — that itself has serious replication fragility. A rational public is right to down-weight single-study claims; the CTT frame has no slot for that possibility.
As argued throughout the Mindwars series, speculative narratives are often a distorted response to real structural conditions: secrecy, unaccountable elites, behavioural ops, opaque treaties, captured regulators, courts that rarely sanction their own. The CTT / CONSPIRACY_FX / TISP stack is exactly what such a system builds when it wants to manage the symptom (conspiracy talk) without addressing the disease (opacity, capture, impunity).
None of this requires cartoon villains. It requires smart, well-intentioned researchers, working inside an architecture that rewards one kind of scepticism (downward) and quietly punishes the other (upward). Seen through the Mindwars lens, Conspiracy Theory Theorists aren’t just describing a social problem; they’re staffing a consequences factory that protects a fragile, captured system from its most reasonable critics.
7. What a Symmetric Conspiracy Theory Theory Would Look Like
If we really wanted a science of conspiracy and distrust, it would have to run in both directions: towards citizens (who believes what, and why), and towards institutions and organised actors (who creates and weaponises which narratives, and why).
Right now, CTT work sits almost entirely on the demand side. A symmetric programme would keep that, but add an equally serious supply-side analysis.
a. Demand side: keep the useful psychology
While one can, and should, be critical of the inherent pathologisation of dissent and institutional bias in CTT work, a serious programme would still model key psychological and social factors:
- Identities and worldviews – how political identity, religion, and ideology shape which narratives “fit”
- Conspiracist style – the tendency toward global, all-explains-all narratives, “white knight” expectations, and pattern over-detection
- Media diets and information environments – the role of partisan media, alternative platforms, and algorithmic echo chambers
- Life-course and trauma – how prior experiences of betrayal or harm make certain suspicions feel earned.
This demand-side analysis has value. The fundamental problem, however, is that in isolation, it treats conspiracy talk as a free-floating cognitive pathology, systematically severed from the structural realities that provoke it. A symmetric theory doesn’t just add a missing half—it provides the essential context without which the demand side is fundamentally misunderstood.
b. Supply side: put institutions and operators on the map
On the supply side, the same analytic seriousness would be applied to the structures and actors that shape what people have to be conspiratorial about.
At minimum, that means:
Liability and indemnity structures:
- How are risk and responsibility distributed between corporations, regulators, professionals and citizens (e.g. vaccine courts, PREP-like shields, ISDS clauses)?
- Do stronger shields predict higher distrust, controlling for conspiracist mindset?
Corporate funding and agenda-setting in science and policy:
- Who funds what, under what terms?
- How do outcomes and framings differ between industry-funded and independent research?
- Which topics (e.g. CCS, geoengineering, on-patent drugs) get disproportionate attention relative to low-margin alternatives?
Gatekeeping and editorial bias:
- Who controls top journals, guideline panels, IPCC chapters, regulatory committees?
- What happens to high-quality dissent (on methods, risks, solutions) in review and publication?
- Are there “consensus cartels” where small, interconnected groups effectively set the line?
Revolving doors and corruption indices:
- How dense are personnel and financial ties between ministries, regulators, firms and lobby groups?
- Do higher capture indices track higher distrust and higher endorsement of specific conspiracies?
Openness and auditability:
- How easy is it for outsiders to get data, code, protocols, minutes and contracts?
- How visible are corrections and reversals?
- Do genuine openness reforms reduce conspiracy endorsement in sceptical groups?
But to fully map the supply side, we must confront the most potent actor: the professional influence industry. This is the literal conspiracy-theory supply chain, comprising PR firms, political consultancies, “strategic comms” units, influence shops, platform partners and in-house corporate/state comms teams whose business is narrative operations, astroturfing, perception management and crisis spin for powerful clients. A symmetric theory would treat these actors as primary supply-side variables, and ask:
- How do states, parties, intelligence services and corporate comms teams seed, amplify or weaponise conspiratorial content?
- For QAnon-style ecosystems, pro- and anti-climate-change campaigns, pharma- or state-aligned astroturf: which pieces are clearly bottom-up, and which carry the fingerprints of strategic operators?
- How does engineered disinformation from above interact with pre-existing conspiracist style from below to produce the observed belief patterns?
In other words, QAnon, Russia-gate, climate conspiracies, vaccine narratives and “globalist” talk should be treated not only as citizen cognition, but as hybrid products of grassroots meaning-making and deliberate information operations. A serious Conspiracy Theory Theory would want to know, very concretely, who is paying whom to produce which frames, with what distribution and targeting, and would treat that as at least as important as whether citizens score “high CMQ” or not (assuming, of course, you accept this as any sort of valid measure at all).
Lastly, elite forums and policy-planning networks: alongside formal PR and strategic communications, a symmetric theory would also treat these bodies as part of the supply side of conspiracy and distrust.
These include a spectrum of bodies:
- Open but high-level forums – e.g. World Economic Forum–style gatherings, major security and economic conferences, industry–government taskforces.
- Semi-closed policy clubs – e.g. long-standing think-tank networks, trilateral or transatlantic commissions, big-ticket foundation roundtables, where business, political and academic elites coordinate analysis and “consensus” on policy directions.
- Highly closed meetings – invitation-only groups such as Bilderberg-type conferences or similar off-record gatherings, where agendas, participants and outputs are only partly visible.
A symmetric Conspiracy Theory Theory wouldn’t start by labelling talk about these networks “conspiracist”. It would ask empirical questions:
- What kinds of actors (states, firms, sectors) are over-represented in which forums?
- Which policy themes (trade, security, health, climate, tech governance) recur, and how do ideas and language from these spaces later appear in public policy and media?
- How much of the “consensus” that CTTs treat as neutral background actually passes through these elite filters first?
- How do secrecy levels (closed-door vs livestreamed, no minutes vs published communiqués) affect public trust and the shape of conspiracy narratives about them?
In other words, instead of treating references to “global elites” or named organisations (Fabian-style networks, Trilateral Commission style bodies, WEF, Bilderberg, CFR, etc.) as self-evidently pathological, a symmetric programme would treat those networks themselves as objects of study: mapping composition, agendas, information flows and policy impact.
Only then can we say, with any seriousness, which conspiracy stories about “hidden agendas” are fantasy, which are exaggerations, and which are simply crude descriptions of how high-level coordination actually works.
c. Concrete research questions
A symmetric Conspiracy Theory Theory would ask things like:
- Capture and distrust: Do documented levels of regulatory capture (revolving doors, lobbying intensity, enforcement failures) predict distrust in specific agencies above and beyond conspiracist mindset and ideology?
- Liability and perceived rigging: Does the presence of special liability protections (for vaccines, financial products, etc.) correlate with the belief that “the system is rigged in favour of corporations,” when controlling for media diet and education?
- Openness reforms and belief change: When a field introduces strong openness reforms (mandatory data release, pre-registration, visible corrections), does endorsement of relevant conspiracy theories fall among previously sceptical groups?
- Engineered vs organic conspiracies: To what extent can the growth of particular conspiracies (e.g. QAnon, specific climate or vaccine narratives) be traced to identifiable state, party or corporate operations versus organic community dynamics?
- Sorting fantasy from distorted signal: For major conspiracy themes (“pharma hides harms”, “climate risks are manipulated”, “global elites coordinate policy in secret”), which elements are false, which exaggerated, and which accurate descriptions of real structures and events?
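The “above and beyond” logic in the first two questions is essentially hierarchical regression: add a structural predictor to a model that already contains the psychological one, and see whether it still carries weight. A minimal sketch of that shape, using entirely synthetic data (the variables `capture`, `mindset` and `distrust`, and all effect sizes, are hypothetical illustrations, not empirical findings from any study cited here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical, standardised variables: a structural "capture index" for an
# agency, and an individual-level "conspiracist mindset" score.
capture = rng.normal(size=n)
mindset = rng.normal(size=n)

# Simulated distrust driven by BOTH structure and psychology (coefficients
# 0.5 and 0.4 are arbitrary choices for illustration).
distrust = 0.5 * capture + 0.4 * mindset + rng.normal(size=n)


def ols(predictors, y):
    """Ordinary least squares via lstsq; returns [intercept, slopes...]."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta


# Step 1: psychology-only model.
b_mindset_only = ols([mindset], distrust)
# Step 2: add the structural predictor. If its coefficient survives the
# control, capture explains distrust "above and beyond" mindset.
b_full = ols([mindset, capture], distrust)

print(b_full[1], b_full[2])  # roughly 0.4 and 0.5 by construction
```

The design point is the asymmetry the article describes: the CTT literature routinely runs step 1, rarely step 2. A real study would of course need measured capture indices and proper inference, not simulated draws.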
d. What changes if we do this?
A symmetric programme wouldn’t junk psychology; it would embed it in a real political economy of information and power.
It would:
- Stop treating distrust as automatically pathological and instead model when it is epistemically appropriate given the structure of the system.
- Generate prescriptions that target structures as well as stories: reform liability, reduce capture, widen participation, constrain state/corporate disinformation, harden transparency—not just “better messaging.”
- Distinguish between:
  - conspiracy beliefs that are untethered, and
  - conspiracy beliefs that are crooked reflections of genuine abuses and manipulations.
That is not a rejection of science. It’s a demand for better, more reflexive science—a science that turns its tools on the institutions and operators who shape the informational environment, not just on the citizens trying to make sense of it.
8. Who Watches the Watchers of Conspiracy?
If we pull back from the detail, the role of Conspiracy Theory Theorists in this landscape is fairly clear.
They are:
- Firstly, cartographers of dissent. They map who disbelieves climate consensus, who resists vaccination, who mistrusts scientists, and how those clusters line up with ideology, identity, media use and “conspiracy mentality.” They give institutions a segmentation map of the doubting public.
- Secondly, choreographers of acceptable belief. Through concepts like “vaccine hesitancy,” “science-related populism,” “conspiracy mentality” and “trust in scientists,” they sketch the boundaries of normality and deviance. To trust official science is health; to distrust is a risk factor. Their scales and models define what counts as a reasonable citizen.
- Thirdly, soft shields for institutional authority. The climate work defends consensus and highlights fossil denial, but does not interrogate how consensus is produced or steered. The vaccine work pathologises refusal, but leaves liability shields, capture and censorship offstage. The trust work reassures there is “no general crisis,” turning a fragile, contested authority into a global KPI of deference.
This does not mean the whole enterprise is worthless. CTTs have produced genuine insight into how fear, identity, trauma and belonging shape our attraction to simple, totalising stories. They have shown how some conspiratorial frames can corrode cooperation and make genuine problem-solving harder.
But as long as they treat institutional narratives and structures as beyond systematic critique, and treat citizen distrust primarily as a pathology to be managed, they function less as neutral scientists and more as technocrats of consent—supplying methods and language that help existing power manage its doubters without seriously examining itself.
The fracture point is simple and unavoidable:
- If citizens are sometimes right to distrust, who will build a science that starts from that possibility, rather than coding it out of the model?
- Who will study the conspiracies of power—capture, coordination, disinformation, manufactured consensus—with the same rigour now applied to the conspiracies of the powerless?
Until someone does, the “science of conspiracy” will remain what this article has traced it to be: a one-way mirror that reflects endlessly on the public, and almost never on the institutions in whose name it speaks.
References
Cologna, V., Mede, N. G., Berger, S., Törnwall, O., Mede, E., Mede, T., … Douglas, K. M. (2025). Trust in scientists and their role in society across 68 countries. Nature Human Behaviour. Advance online publication. https://doi.org/10.1038/s41562-024-02090-5
de Gourville, D., Douglas, K. M., & Sutton, R. M. (2025). Denialist vs warmist climate change conspiracy beliefs: Ideological roots, psychological correlates and environmental implications. British Journal of Psychology. Advance online publication. https://doi.org/10.1111/bjop.70035
Douglas, K. M. (2026). Antecedents and consequences of science-related conspiracy beliefs. Current Opinion in Psychology, 67, 102191. https://doi.org/10.1016/j.copsyc.2025.102191
Hornsey, M. J. (2022). Reasons why people may refuse COVID-19 vaccination (and what can be done about it). World Psychiatry, 21(2), 217–218. https://doi.org/10.1002/wps.20990
Hornsey, M. J., Harris, E. A., Bain, P. G., & Fielding, K. S. (2016). Meta-analyses of the determinants and outcomes of belief in climate change. Nature Climate Change, 6(6), 622–626. https://doi.org/10.1038/nclimate2943
Hornsey, M. J., Imuta, K., & Bierwiaczonek, K. (in press). Social and cognitive factors shaping conspiracy theorizing across the life course. Journal of Experimental Psychology: General. Advance online publication. https://doi.org/10.1037/xge0001847
Hornsey, M. J., & Lewandowsky, S. (2022). A toolkit for understanding and addressing climate scepticism. Nature Human Behaviour, 6(11), 1454–1464. https://doi.org/10.1038/s41562-022-01463-y
Published via Journeys by the Styx.
Mindwars: Exposing the engineers of thought and consent.
—
Author’s Note
Produced using the Geopolitika analysis system—an integrated framework for structural interrogation, elite systems mapping, and narrative deconstruction.