Mindwars: How ‘Are Conspiracy Theories Harmless?’ Turns Suspicion of Power into a Cognitive Crime
How a landmark review turns public suspicion into a diagnosable thought crime.
This sequence in the Mindwars series began with Fluoridating the Rabbit Hole and the academic paper out of Massey University in New Zealand that pulled fluoridation into the “conspiracy theory” frame. Then The Original Sin — How the CMQ Turned Distrust into a Diagnosis showed how a five-item scale quietly turned distrust of power into a measurable defect, detached from evidence or context. The Consequences Factory – Inside CONSPIRACY_FX followed that defect up the chain into the €2.5m ERC machinery built to industrialise “consequences” across politics, vaccination and climate. The Self-Sealing Science of Conspiracy Theory Theorists exposed how “conspiracy mentality” as a trait is built and defended, and Managers of Mistrust mapped the political role of this stack: scepticism downwards, deference upwards.

Building on our previous analysis, which flipped the script by asking Are System-Trusting Beliefs Harmless?, this instalment rewinds to the script’s origin point. We examine Karen Douglas’s 2021 review not as a simple literature review, but as a definitive “harms brief”—a document that turned a loose diagnostic vocabulary into a mandate, making the large-scale, ERC-funded management of public suspicion a legible and fundable enterprise.
How the Article is Positioned
In 2020–2021, COVID-19 and “infodemic” language moved into the centre of public debate. Governments, public-health bodies, platforms and researchers converged on a shared problem frame: there is a virus, and there is also a surplus of “misinformation” and “conspiracy theories” that threaten compliance and trust. It is in this context that Karen Douglas’s 2021 review, Are Conspiracy Theories Harmless?, appears. Formally, it is a synthesis of existing research. Functionally, it is a structured argument for treating conspiracy beliefs as a risk factor for political, social and health systems rather than as a normal part of public contestation. The analysis that follows reads the paper as a Phase-2 move in a broader elite script cycle—following Mills on the power elite and Gramsci on hegemony—in which deviant belief is detected, pathologised and then used to justify new layers of informational control.
Within a year of the article’s publication, Douglas became principal investigator on the ERC Advanced Grant CONSPIRACY_FX – “Consequences of conspiracy theories”, hosted at the University of Kent and commencing on 1 January 2022. The project is explicitly framed around identifying when and how conspiracy theories affect politics (including populist and extremist currents), vaccination behaviour, climate engagement, and the reputations of elites who promote such narratives. The overlap in focus is direct: the 2021 paper maps “harmful consequences” across these domains; CONSPIRACY_FX is funded to expand and systematise that line of work. The review is best read as a harms brief that makes such a large-scale “consequences” programme intelligible and attractive to funders.
None of this requires Douglas to be a conscious architect in the war for minds mapped in the MINDSPACE framework or in the elite script cycle described here. The focus is institutional function, not intent. A review like this can be entirely sincere and still operate, in Mills’s sense, as an instrument of the power elite. In that role, it is not a simple extension of the MINDSPACE playbook but an evolution of it. Where traditional nudges hit their limit, her framework offers a systemic response: a cognitive antibody that diagnoses public suspicion as pathology and prescribes managed belief as the cure, allowing the system to immunise itself against the failure of its own persuasion.
Taking that as the starting point, this article treats Are Conspiracy Theories Harmless? not as a neutral literature review but as an active participant in reclassifying broad suspicion of powerful actors as a harmful cognitive syndrome that justifies population-level belief management. We will track how it (1) frames conspiracy thinking as psychological pathology, (2) defines “harm” using system-centric metrics such as turnout, trust and compliance, (3) promotes soft control tools like inoculation, collectivist framing and “trusted messengers,” and (4) fits into a wider elite script cycle that tends to culminate in harder regulatory and enforcement measures.
From Crisis to Category: How the Paper Weaponises “Conspiracy Theory”
Douglas opens on COVID, not JFK. The first examples are familiar pandemic claims: the virus is a hoax, a bioweapon, part of a control agenda; 5G is implicated; vaccines are unsafe or sinister. These are immediately linked to “significant and damaging consequences” – people ignoring guidance, refusing vaccines, and in some cases attacking 5G masts. Crisis comes first, consequences second, conceptual work third. The pandemic becomes a timing window in which a broad class of beliefs can be redefined as a specific kind of problem: not just disagreement, but a threat to the informational order required for coordinated response.
On that base, the paper stabilises a definition: conspiracy theories are explanations that attribute important events to secret and malicious acts of powerful groups. Once that is accepted, any explanation that invokes elite collusion is routed into a suspect category by design. The central question is no longer “is this true in this case?” but “what happens to turnout, trust, and compliance when people think like this?” When the article talks about “damaging consequences,” the damage is almost always measured as stress on institutional goals – lower voting, lower trust, lower adherence, weaker organisational stability – rather than harm as experienced or defined by citizens. If people stop voting because they judge available options illegitimate or meaningless, that is still coded as a negative outcome; abstention is treated as pathology, not as a form of political speech.
From there the review ranges widely: COVID, climate, vaccination, AIDS, birth control, anti-Semitic tropes, workplace conspiracies, celebrity deaths. Very different histories and power dynamics are treated as instances of a single cognitive style. The focus shifts away from the truth or falsity of particular allegations and toward “conspiracism” as a generalised risk category – a way of thinking about power that can be measured, monitored, and, in later phases of the script, targeted for management.
Harm Metrics as System Health Indicators
Once the category is set, the review moves through a catalogue of “harms.” In the political sphere, exposure to conspiracy theories is linked to lower intention to vote, less engagement, reduced trust in politics, and higher willingness to support illegal or non-normative actions. In intergroup relations, anti-Jewish conspiracy narratives are tied to prejudice and discrimination against Jews, with spillover hostility toward other, unrelated groups. For climate and science, climate-related conspiracies correlate with weaker intentions to reduce one’s carbon footprint, sign petitions, or support mitigation policies, and with broader “science denial.” In health, vaccine conspiracies are associated with lower vaccination intention and greater attraction to alternative treatments; AIDS and contraception conspiracies are linked to riskier sexual and reproductive behaviours. In the workplace, organisational conspiracy beliefs track with lower commitment, higher turnover intention, and negative assessments of work outcomes. In the COVID context, pandemic-specific conspiracy theories correlate with lower compliance with guidelines, lower perceived threat, and support for vandalism and violence against 5G masts.
On the surface this looks like a neutral evidence review. In practice, every “harm” is defined in terms of system performance metrics: participation in official politics, adherence to public-health and climate directives, workplace productivity and stability, and the absence of protest, norm-breaking, or violence. Less voting, less compliance, less commitment, more disruption: these are taken as indicators that something has gone wrong in the citizen’s cognition, not in the system they are responding to.
The pattern is consistent. “Good” is equated with trust, turnout, compliance and productivity. “Bad” is equated with withdrawal, non-compliance, protest and radicalisation. There is no systematic discussion of when distrust, abstention, obstruction of projects (including 5G), or even disruptive action might be rational or morally defensible responses to institutional failure, corruption or unaccountable power. Those possibilities sit outside the frame.
None of this is to deny that some conspiracy-laden beliefs can lead to tangible harms: harassment of individuals, targeted prejudice, occasional violence, or increased health risks during a pandemic. Those outcomes matter. The question is not whether such harms exist, but how they are conceptualised and counted. Douglas’s review consistently measures harm in terms of stress on institutional projects—turnout, trust, compliance, continuity of infrastructure—without a framework for distinguishing between cases where suspicion-driven non-compliance is dangerous and cases where it may be a proportionate response to institutional failure. The problem is the metric and the asymmetry, not the claim that harmful outcomes sometimes occur.
Viewed through this lens, the harm calculus is less a neutral assessment of costs and more a system health dashboard: a status report on how well institutions are securing deference and cooperation, presented as social science about the dangers of conspiracy belief.
From Harms Dossier to ERC Flagship: How the Paper Scales into CONSPIRACY_FX
By 2021, Douglas has a full deck of “consequences” to play. Are Conspiracy Theories Harmless? pulls a decade of studies into one place and presents them as a coherent picture: conspiracy beliefs are associated with political apathy and extremism, prejudice and outgroup hostility, climate inaction, health-risk behaviour and vaccine refusal, workplace disengagement, and COVID non-compliance. The paper doesn’t just list correlations; it frames them as a broad evidence base that conspiracy thinking undermines key systems across multiple domains. In grant language, it is a classic “state of the art plus harms” statement: here is what we know about the damage, here is how much work remains to be done.
The match with CONSPIRACY_FX is direct. The article’s harms in politics, vaccination, and climate map almost one-to-one onto the ERC project’s advertised focus on conspiracy theories’ role in populist and extremist politics, vaccine hesitancy, and reluctance to engage with climate change. Even the stock line that “little is known about their consequences for individuals, groups, and societies” in the grant texts echoes the review’s own justification for more research and intervention. What appears in 2021 as an argument—that conspiracy beliefs have serious consequences in precisely these domains—reappears in 2022 as a funded mandate to investigate those consequences in depth.
ERC Advanced Grants are not marginal top-ups; they are designed to underwrite large, independent programmes: teams, PhD studentships, conferences, multi-method pipelines. In Mindwars terms, the sequence is straightforward. Phase 2: the article builds the diagnostic script – conspiracy suspicion produces measurable harms on system metrics and therefore warrants soft behavioural tools. Phase 2.5: ERC steps in and institutionalises that script as a flagship project, explicitly titled “Consequences of conspiracy theories” and led by the same author. Phase 3: the outputs from CONSPIRACY_FX are then available to feed into regulation, platform policy and security framing of conspiracist belief as a risk category.
Seen this way, CONSPIRACY_FX is a system-level investment in belief governance. The 2021 review is not a neutral question about harmlessness; it is the harms brief that clears the runway for a multi-year, multi-million-euro infrastructure devoted to managing public suspicion.
How the Paper Pathologises Suspicion Itself
Alongside the harms catalogue, the review offers a psychological story about why people believe conspiracy theories. It leans on a three-need model. First, epistemic needs: people want certainty and clear explanations in messy situations. Second, existential needs: people want to feel safe and in control rather than powerless in the face of large, impersonal systems. Third, social needs: people want to see themselves and their ingroups as competent, moral and distinctive; they may also want to feel unique or “in the know.” Conspiracy beliefs are presented as attractive when these needs are frustrated – when people are anxious, feel politically or economically marginalised, or have strong ingroup narcissism and a desire for distinctiveness.
In that framing, the primary drivers of conspiracy belief are vulnerabilities: anxiety, loss of control, low status, threatened identity, narcissism. Suspicion of powerful groups is not treated as a potentially rational response to real histories of collusion, secrecy or impunity; it is treated as a coping strategy for distressed psyches. In Mindwars terms, the stance is clear: conspiracist thinking is positioned as a maladaptive style of making sense of the world, grounded in unmet psychological needs, rather than as a possibly appropriate inference in some contexts.
Truth barely features. The review notes, briefly, that some philosophers and political theorists see conspiracy theorising as a form of democratic critique or as an attempt to understand how power actually operates. It acknowledges that there might be “positive” aspects, such as stimulating debate or calling elites to account. But this line is closed almost as soon as it is opened: the paper asserts that such positives are heavily outweighed by the negative consequences and then moves on, without developing criteria for when suspicion might be justified.
The net effect is to medicalise distrust. The belief that elites might conspire is treated less as a hypothesis to be tested case by case, and more as a symptom cluster – something to be explained, monitored and, in later phases of the script, managed.
From Diagnosis to Weaponised Solutions (Phase-2 Tools)
After diagnosing conspiracy belief as harmful and psychologically rooted, the review turns to what should be done about it. The menu of “solutions” is familiar. First, promote a collectivist mindset so that people “resist the temptation of conspiracy theories,” especially in crises like COVID. The idea is that if individuals think more in terms of group obligations and shared fate, they will be less inclined to endorse narratives that pit “the people” against “the powerful.” Second, enlist trusted ingroup messengers. Because governments and officials are often seen as outgroups, counterarguments should be delivered by valued insiders: community leaders, influencers, or other figures people already trust. Third, apply inoculation and pre-exposure warnings: provide factual rebuttals and explain common persuasion techniques before people encounter conspiracy content, and warn them in advance that future information might be misleading. Finally, address underlying psychological needs: in the long term, reduce uncertainty, powerlessness and social threat so that conspiracy theories become less appealing as a coping mechanism.
Reframed in Mindwars terms, each of these is an information-warfare tactic. “Collectivism” becomes moral pressure to prioritise group harmony and official narratives over individual scepticism whenever an “emergency” is declared. “Ingroup messengers” become weaponised social trust: when central institutions lack legitimacy, influence is routed through local nodes that are more likely to be believed. “Inoculation” and “prebunking” are pre-emptive framing devices, tagging certain themes and sources as manipulative or untrustworthy so that they are rejected on contact. “Need management” recodes structural conditions—precarity, exclusion, unaccountable power—as individual psychological problems to be therapised rather than political issues to be contested.
Placed back into the elite script cycle, this is all still Phase 2. The article does not itself legislate or regulate; it supplies scientific cover for a suite of soft-control measures that reshape the belief environment while maintaining a “care” optic.
Timing and Structure as Ritual Performance
Seen through a Mills–Gramsci lens, the appearance of this article is not just a response to events but part of a recurring performance. It comes out in 2021, mid-COVID, exactly when global institutions are rolling out “infodemic management,” prebunking campaigns, and new disinformation agendas. The sequence is familiar: a crisis is declared, a threat to truth and order is named, guardians of knowledge and expertise are elevated, and a package of corrective measures is prescribed. The paper sits neatly in that “season,” providing an academic version of the same script.
Its internal structure follows a similarly ritual choreography. First, it names the threat: are conspiracy theories harmless? The answer is quickly given – no, they have damaging consequences. Then it recites the harms: a tour through politics (apathy, extremism), intergroup relations (prejudice), climate and science (denial, inaction), health (risk and refusal), workplace outcomes (disengagement), and COVID behaviour (non-compliance, vandalism). Next, it invokes the core values that Gramsci would recognise as elements of the prevailing “common sense”: democracy, public health, science, social cohesion. Those are treated as background goods in need of protection. After that come the purification measures: inoculation, collectivist framing, trusted messengers, and psychological need management, offered as ways to reduce the appeal and impact of conspiracist thinking. Finally, the paper closes with reassurance: conspiracy theories are associated with a range of negative consequences, and addressing them is a key challenge for future work.
In Mindwars terms, this functions as a legitimacy ritual. It reassures its audience that the main institutions of knowledge and governance remain the rightful guardians of reality and that their projects define what counts as harm.
What the Article Erases or Brackets
The most striking absence in the review is truth. It never systematically asks when conspiracy inferences have been factually correct, or when suspicion of elite collusion is epistemically appropriate. Historical cases where powerful actors did conspire – over surveillance, covert interventions, cartel behaviour, or cover-ups – are not used to build any criteria for justified suspicion. Philosophical work that defends conspiracy theorising as sometimes rational and necessary is briefly cited, but its implications are not integrated into the framework.
Institutional conduct is treated the same way. Harms are scored independently of whether institutions have behaved well or badly. A citizen who withdraws from voting after repeated experience of broken promises, or who distrusts a health agency after policy reversals and opaque decision-making, still appears in the data as a negative outcome – a loss of trust, a drop in participation, a sign that conspiracy beliefs are damaging. That withdrawal is not considered as a possible feedback signal about legitimacy or performance; it remains a symptom.
On the intervention side, autonomy costs are largely invisible. Inoculation and prebunking are presented as straightforwardly beneficial, with little attention to what it means to pre-frame entire classes of claims or actors as untrustworthy before people encounter them. The possibility that official counter-narratives might themselves be partial, interest-laden, or simply wrong is acknowledged, if at all, only in passing. There is no worked-through account of how to handle error or bias on the institutional side of the ledger.
Taken together, these omissions are not incidental. The article’s “total harm” verdict depends on bracketing almost all scenarios in which mistrust, suspicion or dissent might be protective, truth-tracking or democratically vital. Once those possibilities are kept off the table, the conclusion that conspiracy beliefs are net harmful follows almost automatically.
Read as a whole, the review is written from a technocratic perspective that takes the priorities of large institutions as the natural reference point. Harms are harms to coordination; solutions are expert-led adjustments to citizen cognition and behaviour; truth, accountability and consent are secondary or absent. In Mills’s terms, it is a paper written in the interests of the power elite, even if it never names them: it treats their need for trust, participation and compliance as the baseline good, and treats public suspicion of them as a pathology to be diagnosed and corrected rather than as a potential check on their power. On this ledger there is no room for the idea that not voting can be a vote of no confidence in the system – a judgment that locates the fault in the system’s functioning rather than in the citizen’s cognition.
The same logic would apply to contemporary conflicts like London’s Ultra Low Emission Zone (ULEZ) expansion and the rise of the “Blade Runners” who vandalise enforcement cameras. From the harm calculus implicit in Douglas’s review, this looks like straightforward non-compliance and criminal damage, potentially fuelled by misinformation about the scheme’s intent and impact. That framing ignores the underlying political and material conflict. The ULEZ charge imposes a significant financial burden on a specific segment of the population—often lower-income workers with less flexible employment who rely on non-compliant vehicles. The policy is perceived as imposed from a distant metropolitan authority, with limited consent and few viable alternatives. In that light, the “Blade Runner” phenomenon is not merely a “consequence” of conspiracy theories; it is an expression of class-based grievance and political disenfranchisement which, in this case, appears to have been tactically effective, forcing the system to abandon or significantly retreat from its original enforcement design. To recode this collective action as a symptom of a “conspiracy mentality”—a cognitive defect rooted in psychological needs—is an act of political erasure. It allows the system to diagnose and pathologise the reaction while absolving itself of responsibility for the policy’s contested legitimacy and impact. The “harm” of smashed cameras is counted in detail; the systemic harms of regressive taxation, democratic deficit and the fact that only illegal resistance shifted the policy at all never enter the ledger.
The Elite Script Cycle – and Where Douglas 2021 Sits
To make sense of Are Conspiracy Theories Harmless? in structural terms, it helps to step back. Following Mills on the power elite and Gramsci on hegemony, we can sketch a simple elite script cycle for how institutions handle belief patterns that threaten coordination:
- Phase 1 – Deviance detection: Clusters of belief and behaviour that degrade coordination are noticed and named: “radicalisation,” “misinformation,” “conspiracy theory spread,” “polarisation.”
- Phase 2 – Pathology and soft problematisation (cognitive phase): This is where Douglas’s 2021 article sits. Academia and aligned think tanks translate raw deviance into syndromes—conspiracism, misinfo susceptibility, “infodemic.” Harms are defined using system metrics: turnout, trust, compliance, productivity. Recommended tools are soft: inoculation, prebunking, collectivist messaging, “trusted messengers,” resilience training. The takeaway is: these beliefs cause harm; soft steering is legitimate and necessary.
- Phase 3 – Institutionalisation and coercive stabilisation (hard phase): The same framing hardens into rules and infrastructures. “Best practices” and voluntary codes become legal or regulatory duties; risk labels migrate into platform policies, professional standards, funding rules and security frameworks. What was morally and scientifically “unsafe” to believe becomes legally and infrastructurally unsafe.
- Phase 4 – Normalisation and interiorisation: Over time, these boundaries are internalised. People learn, often without noticing, which doubts are “responsible” and which mark them out as cranks, risks or potential extremists. Self-censorship and automatic deference replace overt enforcement.
- Phase 5 – Drift and fracture: New crises, abuses or contradictions generate fresh deviance that doesn’t fit the existing frame; the cycle restarts on a slightly narrower field of acceptable dispute.
Douglas 2021 is a textbook Phase-2 move. It detects a pattern—conspiracy belief—classifies it as a multi-domain risk, explains it via psychological vulnerability, and lines up soft tools to contain it. CONSPIRACY_FX is the obvious continuation: the same author is then funded to build a full research programme around “consequences” in the very domains the review highlights. The point is not that the paper was written “for the grant,” but that the analytic frame and the funding architecture fit each other perfectly: define suspicion of power as a cognitive hazard, then invest in mapping and managing its effects.
Phase 3 is already emerging around this kind of work, as content rules, platform obligations and security framings increasingly use “conspiracy,” “misinformation” and “harm” as operational categories. Douglas’s article is not the cause of that shift, but it is part of the justification layer that makes those moves look like neutral, evidence-based harm reduction rather than a political choice about how far public mistrust is allowed to go.
Closing: Decoding the Article as a Mindwar Node
Read end-to-end, Douglas’s review functions as a cognitive-weapons brief. It recasts broad suspicion of powerful groups as a harmful psychological syndrome and prescribes population-level interventions—collectivist messaging, inoculation, trusted messengers—to manage it. That functional role is the focus here, regardless of whether Douglas herself understands the review in those terms. Trust and compliance are treated as unquestioned goods; “harm” is defined as stress on institutional goals; suspicion is pathologised rather than tested case by case; and behavioural engineering is presented as care.
The timing with CONSPIRACY_FX sharpens the structural question. Within the system, the review is both a synthesis of a research strand and a design document for the kind of programme that then gets funded: define harms in system terms, frame suspicion as a psychological risk, and propose scalable tools to manage it. Again, the point is not that the paper was cynically written “for the grant,” but that the funding architecture and the analytic frame fit each other so well. The same institutions that most need public trust to be stable are the ones investing in research that defines instability of trust as a cognitive hazard.
At this point, the article is no longer an academic curiosity; it demands harder questions:
- In whose interests is this written?
- What power structure does it serve, and why is that structure absent from its own analysis?
- What does it mean when the same system that is most threatened by public suspicion also funds the research that classifies that suspicion as a danger?
Douglas asks whether conspiracy theories are harmless.
The Mindwars question is less comfortable: When does a system decide that citizens’ suspicion is itself a problem to be treated—and what does that decision reveal about the system, and the power it prefers not to name?
A different starting point is possible. A genuinely democratic response to harmful falsehoods would begin not with managing citizen suspicion, but with constraining institutional abuse: radical transparency over lobbying and contracts, meaningful penalties for official lying, strengthened freedom-of-information regimes, and independent oversight with teeth. It would treat some conspiracy thinking as a warning signal about broken accountability, not as a freestanding defect to be engineered away. Only on that ground—where power is answerable and corrigible—does it even make sense to talk about “mitigating” the harms of misinformed belief without sliding into the kind of one-way cognitive management that this review, whether it intends to or not, is helping to normalise.
Published via Journeys by the Styx.
Mindwars: Exposing the engineers of thought and consent.
—
Author’s Note
Produced using the Geopolitika analysis system—an integrated framework for structural interrogation, elite systems mapping, and narrative deconstruction.