Mindwars: The ERC’s Democracy Map – From Popular Sovereignty to Technocratic Stewardship

How the EU’s “democracy” research portfolio quietly converts popular sovereignty into a managed variable in a technocratic control system.

The Mindwars article “CTTs, the Operating Class and the Governance of Suspicion” examined how EU-funded “conspiracy theory” research turns suspicion of power into a managed risk variable inside a post-democratic operating system. That research was conducted inside a wider funding and policy framework defined by the European Research Council’s 2024 report “Democracy in the 21st century: mapping research frontiers for policy”.

But a closer look reveals the report is not a neutral map; it is a technocratic operator’s manual—recasting a curiosity-driven research portfolio into a governance kit that defines threats, assigns roles, and prescribes controls for running ‘democracy’ as a technical system.

Across 215 projects (~€368m), the report compresses research into six risk clusters and wires them to existing files: the Digital Services Act (DSA), the European Media Freedom Act (EMFA), the European Democracy Action Plan, and the Defence of Democracy package. The result is a safe-to-cite catalogue that routes academic claims into platform duties, influence policing, dashboards, and “participation” rituals. What’s missing is any audit of the technocratic machinery itself—how insulated decision chains, incentive structures, and security couplings generate the very “vulnerabilities” the manual is built to manage.

Ostensible Purpose

The European Research Council (ERC) presents this document as a neutral overview of “ERC-funded research projects that examine a wide variety of aspects linked to democracy” which “explores the multifaceted challenges confronting democratic systems while also illuminating the enduring resilience of democratic principles and civic engagement.”

This framing, however, belies its operative function: the document serves less as a field map and more as a pre-certified compliance kit for the European Union’s democracy governance stack—an operator’s manual.

Formally, it is a retrospective curation. The ERC explains that “the projects presented in this report were identified by keyword searches on all Horizon 2020 (2014–2020) and Horizon Europe (2021–2027) ERC-funded projects” and that “the resulting pool was refined, retaining 215 projects.”

This refined portfolio is then “divided into six thematic clusters” covering democratic governance, elections, citizen engagement, human rights and the rule of law, disinformation and social media, and “polarisation, populism & authoritarianism.” Open-ended frontier research is compressed into labelled risk compartments that can be lifted straight into administrative workflows.

Each cluster is then equipped with named tools and deliverables—“new knowledge, concepts, data, and methodologies” presented as “innovative and concrete tools.” These are not abstract outputs; they are explicitly wired into the live policy environment of the DSA, EMFA, the European Democracy Action Plan, and the Defence of Democracy package. The route from research to enforcement is thus presented not as a debate, but as a pre-laid pipeline: projects feed knowledge, knowledge is packaged into tools, and tools are delivered as ready-made inputs for “key EU policy goals”.

The declared audience makes the function plain. Under Horizon Europe, “the European Commission has delegated a new task to the ERC Executive Agency (ERCEA) to identify, analyse and communicate policy relevant research results to Commission services” and ERCEA has built a “Feedback to Policy (F2P) framework” to guide this flow.

The recipients are the Commission’s science-for-policy nodes—the Group of Chief Scientific Advisors and associated Scientific Advice Mechanism, the Joint Research Centre (JRC), the European Group on Ethics in Science and New Technologies (EGE)—and the line Directorates-General (DGs) responsible for justice, research policy and media regulation. They are not being invited into an open methodological argument; they are being handed a vetted, “evidence-based” catalogue whose categories, problems and tools arrive pre-baked for citation in impact assessments, recommendations and delegated acts.

In essence, the report converts frontier research into a governance reservoir. It does not assemble knowledge to audit how democracy is being run—it concentrates that knowledge into a set of authorised risks, instruments and narratives that can be used to administer democracy as an object of ongoing technical management.

Provenance: The Architecture of Anonymous Authority

The report’s authorship is meticulously framed as institutional rather than personal—a design choice that erases individual accountability and presents its conclusions as the inevitable output of the system itself. The cover and front matter attribute it to the European Research Council (ERC), described as “the premier European funding organisation for frontier research” which “funds a rich and diverse portfolio of projects in all fields of science, without any predefined academic or policy priorities.” The report appears under the ERC imprint with the generic title “Mapping ERC Frontier Research: Democracy” and publication metadata, but no named author. Behind this institutional mask sit the ERC Scientific Council and the ERC Executive Agency (ERCEA), whose contributors are thanked in the acknowledgements for “their work and help in preparing this report” and for leading it through the “Scientific Impact and Feedback to Policy Sector.”

This is not a neutral compilation function; it is an internal cartography exercise that retrofits a scattered grant portfolio into a coherent, policy-ready map of “democracy” research.

Individual officials are named only as contributors in a closing acknowledgement list—ERCEA unit staff, proofreaders and liaison officers in the Directorate-General for Justice and Consumers (DG JUST), the Directorate-General for Research and Innovation (DG RTD) and the Joint Research Centre (JRC).

The formal point of reference remains the ERC and its Scientific Council as collective authority. Human judgment is routed through this framework and returns branded as institutional truth: a curated map of democracy-relevant projects that presents itself as evidence rather than selection.

The map is explicitly wired into the Commission’s “science for policy” lattice, a regime that posits “knowledge based decision-making is essential for ensuring robust and effective policies.” Within this lattice, the Scientific Advice Mechanism (SAM) provides integrated counsel, the Joint Research Centre (JRC) supplies evidence and participatory tools, and the European Group on Ethics (EGE) certifies ethical acceptability. Together, they form the synthesis and normalisation stage of the pipeline.

The Joint Research Centre “provides independent, evidence-based knowledge and science to support EU policies” and implements participatory practices through its Competence Centre on Participatory and Deliberative Democracy.

“Strengthening the contribution of research and innovation to policymaking benefits society as a whole, has a positive impact on the planet and eventually improves citizens’ lives and reinforces democracy.”

Taken together, this establishes a three-stage epistemic pipeline: ERC curation via ERCEA’s Feedback to Policy framework; synthesis and normalisation in SAM, JRC and EGE; then instrumentation in the line Directorates-General responsible for democracy, justice, research and media regulation. The ERC report supplies the curated portfolio, the Council conclusions and science-for-policy apparatus define science as the appropriate basis for “strengthening democracy,” and the DGs receive a ready stock of categories, risks and tools to embed into impact assessments, legislative files and compliance obligations. This chain constitutes the operational circuitry of technocratic translation: open-ended academic inquiry enters the pipeline as capital and exits the other end as the rigid geometry of enforceable policy.

Core Mechanism: The Ontological Engine

The report’s function is generative, not descriptive—it operates a three-phase ontological engine that converts political reality into governable substrate.

Phase 1: Agenda → Ontology (The Cartography of Permission)

The starting spell is “democracy … facing a complex array of challenges,” framed through a familiar litany: “socio-economic inequality,” “polarisation and populism,” “the impact of social media and artificial intelligence,” “misinformation and fake news,” and “foreign interference [posing] a significant threat to fair and free elections.” This is presented as neutral diagnosis. In practice it is the template against which the ERC portfolio is retrospectively re-sorted. The earlier “rich and diverse portfolio of projects in all fields of science, without any predefined academic or policy priorities” is now “refined, retaining 215 projects” and divided “into six thematic clusters” on democratic governance, elections, citizen engagement, human rights & rule of law, “disinformation, fake news & social media,” and “polarisation, populism & authoritarianism.”

This act of clustering decides what officially counts as a democracy “challenge.” Project types that match the surface frame—media systems, information flows, citizen attitudes, foreign actors—are elevated as sanctioned concerns. Structural features of EU power that destabilise democracy from upstream—monetary, security and technocratic architectures—are absent from the cluster headings and from the list of highlighted “main issues,” which stays tightly on inequality, polarisation, social media, artificial intelligence and foreign interference. Curiosity-driven research is thus transmuted into an operational ontology: a closed set of named risks whose implied remedies preserve the system that is doing the naming.

Conspicuously absent as a thematic cluster is any examination of how EU institutional design—its monetary union, security frameworks, or comitology procedures—might itself generate democratic strain.

Phase 2: Normalisation → Vocabulary (The Lexicon of Administration)

Once the map is drawn, it is fed into the Commission’s science-for-policy lattice. The report itself places the portfolio inside a “science for policy” regime where “knowledge based decision-making is essential for ensuring robust and effective policies,” and where Council conclusions stress the “contribution of science to reinforce policymaking … strengthening democracy.”

The Scientific Advice Mechanism (SAM), the Group of Chief Scientific Advisors, the European Group on Ethics in Science and New Technologies (EGE), and the Joint Research Centre (JRC) are identified as the key channels “to promote evidence-based policy making” and to integrate scientific knowledge into “a well-informed decision-making process.”

Within this lattice, the portfolio is compressed into a portable lexicon: disinformation, harmful information, foreign information manipulation and interference, vulnerable groups, resilience, electoral integrity, media pluralism. These terms are engineered for frictionless travel—through Council conclusions, democracy strategies, impact assessments, consultations, codes of practice. The same vocabulary appears in the surrounding policy architecture: the European Democracy Action Plan aims to “promote free and fair elections, strengthen media freedom and combat disinformation,” while its toolbox “to counter foreign information manipulation and interference” is coupled to “ensuring more accountability of online platforms … (Code of Practice on Disinformation and the Digital Services Act).”

The political effect is epistemic normalisation: contentious struggles over power, distribution, and sovereignty are recoded as technical problems— “information quality,” “platform accountability,” and citizen “resilience.” Once these categories harden into dashboards, survey instruments and integrity scores, measurement begins to create the reality it claims only to observe. The lexicon becomes self-validating: whatever appears on its indicators is what counts as democracy at risk.

Phase 3: Intervention → Enforcement (The Infrastructure of Compliance)

The final phase turns abstraction into force. The same report that catalogues risks goes on to nest them explicitly inside a tightening policy frame. The European Democracy Action Plan is described as a “comprehensive plan… to promote free and fair elections, strengthen media freedom and combat disinformation,” intended to build “more resilient democracies” in the digital age, with linked initiatives on transparent political advertising, party finance, electoral resilience, media freedom and protection against SLAPPs, alongside “a new set of rules … under the European Media Freedom Act to uphold media pluralism and independence.” The follow-on Defence of Democracy package adds “common transparency and accountability standards for interest representation … carried out on behalf of third countries” plus recommendations on elections and citizen participation.

Here the vocabulary from Phase 2 is instrumented into duties, procedures and code. Projects that produce fake-news detectors, filter-bubble diagnostics, and algorithmic frameworks for “reducing bias and polarisation in online media” supply the technical repertoire for monitoring, ranking and labelling content “to counter the spread of disinformation.”

Platforms are refactored as governed infrastructure; citizen participation is channelled into “participatory and deliberative practices” and European Citizens’ Panels designed and curated by the JRC and Commission. These mechanisms ventilate sentiment and harvest data while leaving the basic power geometry intact.

This infrastructure possesses an inherent security gravity. Once established, the ethics review’s practice of separating “risks for misuse with potential security implications” into a dedicated security review track pulls the entire apparatus toward “potential for misuse” and supplies a ready justification for ongoing data extraction, profiling and cross-border coordination. With tools, processes and oversight networks embedded, rollback becomes politically unthinkable—less because the original threats have been definitively substantiated, more because the institutional and reputational cost of dismantling an apparatus built in the name of “protecting democracy” is prohibitive.

Receipts: How the Engine Is Fed

The pipeline runs on a sizeable and tightly profiled input. The ERC’s own summary notes that the democracy portfolio consists of “215 projects” with a budget of “EUR 368 million,” divided “into six thematic clusters” that cover governance, elections, citizen engagement, human rights, disinformation and populism/authoritarianism. The ethics analysis of a subset of these projects reports that the main considerations relate to “the protection of personal data (~87%); the involvement of human participants … (~79%), and research conducted in non-EU countries and the potential for misuse (~73%),” with 30% of the portfolio assessed as high-sensitivity and often involving sensitive personal data, biometric data, profiling and tracking.

On the output side, the project highlights showcase precisely the tools needed for the governance script: algorithmic fake-news detection and commercial applications for social-media sorting (GoodNews), auditing tools to “identify new disinformation in near real-time and break information bubbles” (FARE_AUDIT), and scalable frameworks “for reducing bias and polarisation in online media.” The policy section then spells out the tie-ins: disinformation and foreign interference workstreams are anchored in the Code of Practice on Disinformation, the Digital Services Act, the European Media Freedom Act, the European Democracy Action Plan and the Defence of Democracy package. And the audience fit is explicit: under Horizon Europe, the Commission has tasked the ERC Executive Agency with “identify[ing], analys[ing] and communicat[ing] policy relevant research results to Commission services” through a dedicated “Feedback to Policy (F2P) framework,” with DG JUST, DG RTD and the JRC named as key interlocutors.

The outcome is a self-sealing loop. The system manufactures certified risks, routes them through a science-for-policy lexicon, builds monitoring and compliance tools around that lexicon, then uses the resulting data streams and security reviews to justify both the initial risk framing and the next cycle of intervention. Political challenge is neutralised through absorption into this closed epistemic–administrative circuit rather than through open contestation. These are the inputs, outputs, and institutional wiring of the ontological engine—a circuit now closed and self-validating.

What It Refuses to Examine: The Strategic Void

The report catalogues a familiar litany of “main issues” confronting democracy: “socio-economic inequality,” “polarisation and populism,” “the impact of social media and artificial intelligence,” “misinformation and fake news,” and “foreign interference [posing] a significant threat to fair and free elections.” Its architecture, however, contains a deliberate and strategic void: there is no corresponding cluster for the internal machinery of European Union (EU) power itself. Comitology committees, delegated and implementing acts, independent agencies, European Central Bank monetary design, security cooperation frameworks, and dense corporate lobbying ecosystems are absent from the six thematic headings and from the subsequent project highlights.

Technocracy as cause is structurally excluded from the field of sanctioned risk.

Within the report’s ontology, concrete political conflicts are re-coded rather than confronted. Distributional losers and sovereignty disputes reappear as “susceptibility to populist exploitation of economic grievances” and as a problem of “polarisation and populism” amplified by social-media echo chambers and fears of “the other.”

Systemic instability is externalised—cast “in particular” as foreign actors “subverting the integrity of electoral processes, manipulating public opinion, and eroding trust in democratic institutions” and as “propaganda by foreign powers.” This externalisation performs immediate political work: it legitimises a reinforced “EU toolbox to counter foreign information manipulation and interference,” directing regulatory energy outward while the opaque, internal decision chains that structure EU policy remain offstage, shielded from scrutiny. This outward-facing posture is reinforced, not questioned, by the report’s own ethics framework.

Within that framework, detailed scrutiny is applied to the protection of personal data, the involvement of human participants, and research conducted in non-EU countries together with the potential for misuse, with nearly 80% of a high-sensitivity cluster flagged for “potential for misuse of research results and/or research methods” and subject to a separate security review track. Yet there is no parallel category for the potential misuse of democracy research by the very institutions commissioning it—to entrench surveillance, consolidate agenda-setting power, or narrow the range of permissible political outcomes.

This is more than an omission; it is a systemic evasion protocol. By corralling all risk onto the informational surface—media systems, online platforms, citizen cognition, hostile foreign actors—the report structurally avoids the possibility that the primary strain on European democracy originates in the EU’s own technocratic model, a model it takes as a neutral frame. The system is meticulously designed to study everything except its own reflection.

How the Framing Operates: The Three-Part Substitution

The report runs a deliberate substitution script: political dynamics are systematically replaced with administrable categories. Democracy is introduced as “facing a complex array of challenges” in which socio-economic inequality, “polarisation and populism,” “misinformation and fake news,” and “foreign interference” structure the narrative terrain. What enters as conflict over power and distribution leaves as a sequence of governable pathologies, metrics, and security threats. The substitution proceeds in three parts: pathology, metricisation, and externalisation.

Pathology: Diagnostic Rebranding of Politics

The raw material of democratic life—majoritarian demands and sovereignty claims—appears in the report predominantly as symptoms. Socio-economic losers are recast as “susceptib[le] to populist exploitation of economic grievances,” while “polarisation and populism emerge as significant forces, exacerbating divisions and hostilities that transcend conventional ideological boundaries.” Politics is reframed as a clinical problem of vulnerability to manipulation, especially through “social media echo chambers” and fears of “the other,” rather than as contestation over concrete policy settlements.

Once diagnosed this way, the appropriate response shifts from bargaining to treatment and management. The surrounding apparatus pivots to “participatory and deliberative practices” and European Citizens’ Panels, designed and facilitated by the Joint Research Centre (JRC), alongside curated consultations under the European Democracy Action Plan. These formats operate as pressure-release valves: they ventilate discontent through organised exercises and reports, while the underlying allocation of authority—Treaties, central bank mandates, comitology chains—stays outside the diagnostic frame.

Metricisation: Trust as a Governable Variable

“Erosion of trust” appears as a core concern, tied to inequality, polarisation, disinformation and foreign interference “eroding trust in democratic institutions.” In the report’s ecosystem, trust is not treated as the outcome of accountable decisions that can be reversed; it is treated as a parameter to be raised through targeted interventions. Projects and policy instruments converge on surveys, behavioural studies, sentiment analyses and algorithmic tools meant to measure attitudes to institutions, media and electoral processes, feeding “new knowledge, concepts, data, and methodologies” into an evidence base for “strengthening democracy.”

This proliferation of dashboards and perpetual measurement enacts a profound substitution: metrics stand in for accountability. The more granular the indicators, the stronger the claim to responsiveness, yet the foundational question—whether citizens can effectively alter monetary, security or regulatory trajectories—is never posed inside the same frame. Data collection and ethical review are foregrounded in the ethics section, which catalogues high rates of personal-data processing, profiling and tracking under the banner of robust safeguards. More data simulates more legitimacy; the system chases higher scores rather than altering the underlying power to decide.

Externalisation: Grievance and the Security Gravity Well

Domestic grievance is finally displaced outward. Social fragmentation and distrust are framed “in particular” through the lens of “foreign interference [that] poses a significant threat to fair and free elections, subverting the integrity of electoral processes, manipulating public opinion, and eroding trust in democratic institutions,” with “propaganda by foreign powers” singled out as a key driver. This framing connects directly to an “EU toolbox to counter foreign information manipulation and interference” and to “ensuring more accountability of online platforms … (Code of Practice on Disinformation and the Digital Services Act).”

The Defence of Democracy package proposes “common transparency and accountability standards for interest representation … carried out on behalf of third countries,” alongside enhanced monitoring of electoral integrity and resilience. This logic justifies registries, alert systems, cross-border data sharing and dedicated security-review tracks for research with “potential security implications.” These installations create a political gravity well: once the regulatory and monitoring pipes are laid, shutting them down would be portrayed as negligence on “foreign interference” and “democracy protection,” so the apparatus persists even if the original claims of dominance or manipulation fade.

Taken together, Pathology, Metricisation and Externalisation form a coherent substitution play. Political conflict over power, distribution and sovereignty is not engaged on its own terms; it is transmuted into system malfunctions to be diagnosed, measured and secured against. The report provides the script for dismantling democratic politics as a site of contestation and reassembling it as a suite of technical–administrative functions. Conflict is not resolved; it is administratively dissolved.

Sovereignty Inverted: The Processing Plant for Popular Will

The report’s systematic recoding culminates in an architectural inversion of sovereignty. While paying lip service to the ideal that “citizens delegate authority... reflecting the collective will of the people,” the ERC framework operationally repositions that collective will as the system’s primary risk input. Popular sovereignty is not executed; it is processed.

To centre “polarisation and populism” as key democracy “challenges” is to shift the baseline of sovereignty. Democracy’s engine is popular rule, yet the overview section construes popular mobilisation as a channel through which socio-economic inequality “poses risks of social unrest and susceptibility to populist exploitation of economic grievances,” and where “polarisation and populism emerge as significant forces” endangering pluralism and trust. The dedicated cluster on “Polarisation, populism and authoritarianism” then links “the rise of authoritarian leaders, backed by popular support” and “a wave of electoral successes of populist politicians” to “a new tendency of democratic backsliding.” Popular mandates that strain existing constraints are framed as early-stage pathology; the institutional settlement itself is treated as the patient to be protected.

Stage 1: The Baseline Swap (Redefining the Substrate)

The report quietly swaps the substrate of sovereignty. “Democracy, the rule of law and fundamental rights” are introduced as “founding values of the European Union” that “underpin all the EU’s achievements,” and the Commission’s yearly rule of law report is cast as guardian of “checks and balances and division of powers between legislative, executive and judiciary branches.” Democracy is thus anchored in a pre-given liberal–administrative architecture: independent courts, agencies, treaty locks and expert review are presented as the neutral ground on which politics takes place.

Within this swapped reality, the electoral and participatory machinery is optimised for citizen experience inside a fixed cage. “Free and fair elections” are described as the “bedrock of democratic societies,” giving citizens “the crucial opportunity to shape the policies and laws that govern them through their voting rights,” yet the practical focus falls on managing voter attitudes, atmospheres and turnout. Projects develop tools for Election Management Bodies “to tailor electoral experiences for first time voters,” testing protocols and issuing guidelines to “re-attract young voters to polling stations.” Citizens interact with a pre-edited menu of parties and candidates whose viability has already been filtered by party structures, finance and media; the constitutional and treaty frame in which those delegates operate—the cage itself—is placed outside the realm of political renegotiation. Movements that attempt to renegotiate that frame are thus predisposed to appear not as politics but as system errors: aberrations in need of containment rather than legitimate exercises of constituent power.

Stage 2: Taxonomic Contamination (Guilt by Conceptual Clustering)

Populism is not treated as a neutral descriptor of certain representative styles or coalitions; it is embedded in a contaminating taxonomy. One of the six headline clusters is explicitly titled “Polarisation, populism and authoritarianism,” and the narrative asserts that “the rise of authoritarian leaders, backed by popular support” and “a wave of electoral successes of populist politicians” have produced reforms that amount to “democratic backsliding.” The cluster summary then links populism to conspiracy theories, anti-elite sentiment and autocratic trajectories, while autocracies are defined by the absence of democratic principles and repression of opposition.

This clustering performs a pre-emptive delegitimisation. It constructs a conceptual slide from populist challenge to authoritarian breakdown, thereby coding any electoral mandate that strains incumbent constraints as a potential harbinger of “democratic backsliding.” By contrast, projects that examine “successful governance,” “policy successes and high performing public organisations,” and the resilience of the EU’s institutional architecture “whatever it takes” are positioned as knowledge for shoring up the existing order. The classification schema thus pre-sorts which expressions of popular will are to be treated as healthy politics and which are to be seen as precursors to regime decay.

Stage 3: Symptomisation (From Grievance to Diagnostic Code)

The final refinement turns political conflict into system data. Policy sections speak of the European Democracy Action Plan’s purpose in “building more resilient democracies across the EU,” outlining the aim to “promote free and fair elections, strengthen media freedom and combat disinformation,” backed by an “EU toolbox to counter foreign information manipulation and interference” and platform accountability under the Code of Practice on Disinformation and the Digital Services Act. The Defence of Democracy package adds “common transparency and accountability standards for interest representation activities” and recommendations on free, fair and resilient elections and citizen participation.

Within this frame, insurgent platforms and disruptive electoral outcomes are approached as vulnerabilities—drivers of “erosion of trust,” “delegitimisation of electoral outcomes” and “hostility towards fellow voters”—to be integrated into resilience engineering. The pivotal question thus changes. It ceases to be:

  • “Do these grievances reveal institutional failures we should address?”

Instead, it becomes:

  • “How do we build systemic resilience against these expressions?”

Political conflict is thereby translated from a substantive negotiation into a diagnostic code for system maintenance.

Net Output: The Certified Will

What exits this sovereignty refinery is a sifted, certified version of popular will. Raw sentiment comes in through elections, protests, surveys and campaigns. It is first reclassified under challenge labels such as “polarisation and populism,” then processed through indicators of trust, resilience and integrity, and finally channelled into the compliance infrastructure of the European Democracy Action Plan, the European Media Freedom Act and the Defence of Democracy package. Outputs that stabilise the existing liberal–administrative architecture are recognised as democratic and used to validate further entrenchment of the same. Outputs that seek structural renegotiation of that architecture are marked as pathology, risk or interference and diverted into monitoring, behavioural management and security response.

The final product of this sovereignty refinery is the Certified Will. This is popular sovereignty stripped of its constitutive, potentially disruptive power—sovereignty as a feedback mechanism, not a foundational authority. The system harnesses the energy of the demos not to power change, but to perpetuate its own stable operation.

IANUS: Scientism as a Governance Protocol

Faced with anxiety about “declining trust,” the EU installs IANUS as a remedy within its creed of “knowledge based decision-making.” The project promises to “strengthen warranted trust in science, research and innovation at a systemic level” by turning research into “a co-creative and inclusive process” where “trust must be inspired by transparency and trustworthiness of knowledge production and established by active participation.” The rhetoric defends robust science; the structure recentres authority on accredited expertise—science becomes the medium through which belief is governed.

The Epistemological Bypass: From Dispute to Dogma

IANUS claims it will “enable societal stakeholders to distinguish valid from non-valid trust, healthy from unreasonable distrust” and cope with “uncertain, incomplete, and multiple viewpoints” in science. Its main outputs are a “conceptual framework of trust,” co-creation labs and media panels, indicators for co-creation, a zine of “tools for informal science communication,” and policy recommendations.

None of this trains citizens to audit the science itself—reading methods, interrogating models, translating jargon into checkable claims. The framework offers workshops rather than methodological literacy, ethics paperwork rather than forensic audits of corporate and institutional influence. Citizens are asked to help perform the ritual of science; they are not equipped to test the claims that ritual produces.

The Janus Protocol: Branded Duplicity

The dual design is explicit. “The acronym IANUS refers to the deity Janus of gateways … looking both at the inside and at the outside of the knowledge production process,” and the logo shows a double head wired into circuitry. Outwardly, the public face offers inclusion: expert-and-citizen co-creation labs and media panels that promise “lessons … on how to gain, maintain and prevent the loss of public confidence and trust in STI,” plus policy labs and cluster conferences on “building supportive policy frameworks for fostering trust in science.” Inwardly, the work centres on measuring and lifting trust itself: diagrams in which “TRUSTING” (A) leads to “COMMITMENT” (C), indicator sets for co-creation, and devices like the “Who Do You Trust?” sticker barometer that score trust in communicators. These are KPIs for belief; there is no symmetric channel through which citizen-level criticism of methods or advisory practice can reshape how evidence is assembled.

The Corrupted Substrate: Engineering Faith on Fractured Ground

IANUS prescribes more “trust” on top of a knowledge base whose comorbidities remain unexamined. In the ERC democracy portfolio, ethics scrutiny focuses on inputs—“protection of personal data (~87%)”, “involvement of human participants … (~79%)” and “research conducted in non-EU countries and the potential for misuse (~73%)”—with almost no attention to structural conflicts in funding and publication. IANUS adds “typologies of conflicts of interest” and suggested institutional responses. The deeper political economy of science—the dependence on corporate and state funders, incentives for positive findings, editorial capture—stays offstage, even as industry payments, low-power flexible analyses and replication failures accumulate in the background. The result is trust-engineering: an audited surface laid over an unaudited core.

The Scientistic Resolution: A Demand Masked as a Solution

IANUS sits in a Horizon Europe cluster on “Societal trust in science, research and innovation” with VERITY and POIESIS whose shared aim is to “develop actionable policy recommendations to improve public trust in research and innovation.” In this setting, trust becomes a policy variable rather than a verdict; scientific authority is treated as fixed, and public attitudes are tuned through communication, co-creation and ethics routines. The crisis in institutional science—financial conflicts, methodological brittleness, irreproducibility—is treated as a communications problem rather than a structural one for lay inspection. A framework aimed at legitimacy would purge conflicts at funding and review points, require preregistration and replication, open advisory synthesis to minority reports, tie policy uptake to falsifiable claims with rollback clauses, and build citizens’ capacity to read and criticise the underlying work. IANUS automates trust rituals around an unreformed substrate—science slides from open method into governance protocol, and Scientism becomes the operating rule.

Final Analysis: The Democracy-Administration Interface

Taken at face value, the ERC report is an input to “build more resilient democracies” and “strengthen democracy” by feeding frontier research into the European Democracy Action Plan, the Defence of Democracy package and allied files. In practice, it sits where democracy and administration fuse—where “democracy” is redefined as the successful operation of a science-for-policy machine and “legitimacy” is inferred from alignment with that machine’s categories, tools and resilience metrics. The fracture line runs through this interface: is the system mapping democracy, or specifying the acceptable operating range for political life inside a pre-given architecture?

If you take the framework on its own terms, real democratic legitimacy would require installing circuit-breakers in the very places the report naturalises. That implies at least three structural reversals:

  • Reversibility as design principle: every monitoring and enforcement apparatus anchored in “protecting democracy” would carry hard-coded sunset clauses and rollback triggers, so that the tools built under crisis rationales can actually be dismantled without institutional self-harm.
  • Symmetrical scrutiny of power: the tools now pointed at “disinformation,” “foreign interference” and populist mobilisation would be turned back onto EU and member-state institutions, corporate funders, and allied security networks with equal intensity, dissolving the assumption that threat is exogenous.
  • Transparent research-to-rule pipeline: the Feedback to Policy machinery would be opened as a public ledger, showing which projects, advisory opinions and models were used where in law- and rule-making, and on what evidential terms, so that “evidence-based” interventions can be contested at source rather than only after they harden into compliance duties.

Absent these reversals, the resilience project looks less like protection of a demos and more like optimisation of a control surface. The system anticipates challenge, routes it through diagnostic categories, participatory formats and integrity regimes, and neutralises it without ever placing its own technocratic core under equivalent diagnostic threat. The interface between “democracy” and “administration” becomes the point where politics is discretised into something the apparatus can safely process.

Therefore, the ultimate verdict on this report lies not in its conclusions, but in the unresolved—and systematically evaded—questions that its very architecture forces into the light.

  1. The diagnostic blind spot: If this framework is a valid instrument for assessing democratic health, by what logic does it exempt EU-level monetary, security and administrative structures from the same risk ontology it applies to citizens, media, platforms and foreign actors? What would a seventh cluster—“technocratic overreach and institutional capture”—do to its map?
  2. The preservation paradox: When “democracy, the rule of law and fundamental rights” are defined as founding values that the Union must protect, and resilience is measured against disruptions to that settlement, is the system fortifying democracy, or fortifying its own particular model of democracy against democratic attempts to renegotiate it?
  3. The means–ends inversion: If the cure for a crisis of democratic legitimacy is further insulation of decision-making inside expert-driven pipelines (SAM, EGE, JRC, ERCEA’s F2P) and platform governance regimes, does that not confirm the original democratic deficit while claiming to treat it? At what point does “science for policy” cross the line into scientistic veto over political choice?
  4. The epistemic outage: Citizens are channelled into participatory rituals but are never equipped to audit the “evidence-based” pipeline itself. Whose judgment, then, authorises democracy? What would it take to redesign the interface so the public can interrogate the system, rather than forever being processed as its primary risk input?

Those questions sit exactly where the report refuses to look. The fracture is already there—embedded in the code. The only live issue is whether it remains buried in technical annexes or is dragged back into the light of public contestation.


Published via Journeys by the Styx.
Mindwars: Exposing the engineers of thought and consent.

Author’s Note
Produced using the Geopolitika analysis system—an integrated framework for structural interrogation, elite systems mapping, and narrative deconstruction.
