Behind the Scenes: Tightening the Analytic Engine

Dear readers,

It’s been a busy couple of weeks since Ursula Edgington and I wrapped the “George and the Philanthropath” series. Since then, apart from moving to Ghost.io, I’ve been rebuilding the AI-based narrative deconstruction engine that powers the analysis underlying many of my articles. This wasn’t cosmetic polishing: it was about making the analytic work easier to check, harder to blur, and more consistent from piece to piece.

The engine is built to read media, doctrine and policy texts for power, incentives, omissions and framing. The recent changes fall into four areas:


1) Clearer discipline: fewer borrowed frames, more direct evidence

Going forward, the engine is stricter about not inheriting a document’s own language as if it were neutral. Terms like “misinformation”, “rules-based order”, “rogue regime” and “public health emergency” will continue to appear in quotation marks and be treated as framing terms rather than facts.
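As a rough sketch of the mechanics, with a stand-in term list rather than the engine’s actual lexicon:

```python
# Sketch of framing-term handling: flagged terms are wrapped in quotes
# and labelled as framing rather than passed through as fact.
# FRAMING_TERMS and mark_framing are illustrative stand-ins.
FRAMING_TERMS = {"misinformation", "rules-based order",
                 "rogue regime", "public health emergency"}

def mark_framing(text: str) -> str:
    """Wrap known framing terms in scare quotes with a [framing] tag."""
    for term in sorted(FRAMING_TERMS, key=len, reverse=True):
        text = text.replace(term, f'"{term}" [framing]')
    return text
```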

You’ll also see more “receipts”: direct quotes and clear pointers to where a claim comes from in the source text. The aim is to let you see what the document says, what I’m inferring, and where you might disagree.
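For those interested in the plumbing, a minimal sketch of a claim-with-receipts record might look like this; the field names are my illustration, not the engine’s actual schema:

```python
# A minimal sketch of a "claim with receipts" record.
# Field names are illustrative, not the engine's actual schema.
from dataclasses import dataclass, field

@dataclass
class Receipt:
    quote: str       # verbatim text from the source document
    location: str    # pointer into the source, e.g. "Executive Summary, para 2"

@dataclass
class Claim:
    statement: str   # what the analysis asserts
    inference: str   # what I am inferring beyond the quoted text
    receipts: list[Receipt] = field(default_factory=list)

claim = Claim(
    statement='The report treats "misinformation" as a settled category.',
    inference="The term is doing framing work, not descriptive work.",
    receipts=[Receipt(quote="the spread of misinformation requires...",
                      location="Executive Summary, para 2")],
)
```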


2) More than one viewpoint: built in and kept visible

Most, if not all, of the texts I choose to explore are written from a single strategic or institutional viewpoint. By default, the AI analytic system now runs multiple viewpoints (lenses) through the same text and keeps the differences visible instead of smoothing them away.

In practice, that means reading a document through more than the default Western policy or academic lens and placing that reading alongside others that focus on state interests outside the Atlantic world. This includes alternative lenses that look at the everyday experience of the man or woman in the street, economic dependency, media positioning, power asymmetries and historical patterns. When those readings clash, they won’t be forced into a neat compromise. The analysis engine has been designed to expose these clashes and show what each perspective highlights or ignores.
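Schematically, and with made-up lens names and a hypothetical `analyse` helper standing in for the underlying model call, the pass looks something like this:

```python
# Sketch of a multi-lens pass: the same text is read through several
# lenses and the disagreements are kept, not averaged away.
# The lens names and `analyse` are stand-ins, not the engine's real interface.

LENSES = [
    "western_policy",       # the default institutional reading
    "non_atlantic_state",   # state interests outside the Atlantic world
    "street_level",         # everyday lived experience
    "economic_dependency",
    "media_positioning",
    "historical_pattern",
]

def read_through_lenses(text: str, analyse) -> dict[str, str]:
    """Run every lens over the same text and return all the readings."""
    return {lens: analyse(text, lens) for lens in LENSES}

def find_clashes(readings: dict[str, str]) -> list[tuple[str, str]]:
    """Pair up lenses whose readings differ, so the tension stays visible."""
    lenses = list(readings)
    return [(a, b) for i, a in enumerate(lenses) for b in lenses[i + 1:]
            if readings[a] != readings[b]]
```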


3) Better context: identifying the professional world behind the text

Some documents come from academic research, some from policy shops, some from advocacy groups or media systems. The system is now more systematic about identifying what kind of professional world produced the text and what that world tends to treat as settled, unquestionable or “out of scope”. It also surfaces fractures within the micro-trades that produce the document, showing what is being suppressed and how differences within the trade are papered over or nullified.

This matters because a paper can be technically careful and still build its conclusions on assumptions that are never tested. The analysis will be clearer about those assumptions and about internal tensions where the evidence doesn’t sit comfortably with the preferred story. But most of all, it will show how some narratives are privileged at the expense of others.
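For the curious, this is roughly the shape of the profile the system builds; the fields and example values are my illustration, not the actual schema:

```python
# Sketch of a provenance profile: what professional world produced the
# text and what that world treats as settled. All fields are illustrative.
from dataclasses import dataclass

@dataclass
class ProvenanceProfile:
    world: str               # e.g. "policy shop", "advocacy group"
    settled: list[str]       # assumptions treated as beyond question
    out_of_scope: list[str]  # questions ruled out in advance
    fractures: list[str]     # internal disputes the text papers over

profile = ProvenanceProfile(
    world="policy shop",
    settled=["the existing order is legitimate"],
    out_of_scope=["whether the institution should exist at all"],
    fractures=["methodological dissent within the field"],
)
```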


4) Countering built-in AI biases and consensus gravity

There’s a separate problem when you use AI to analyse texts: large language models (LLMs) are trained on a corpus of material that strongly favours institutional consensus, credentialed authority, Western frameworks and a style of “balance” that can flatten real power differences.

Left unchecked, that produces a familiar outcome: the centre sounds reasonable, dissent looks irrational, and minority or officially excluded views are treated as marginal or even deviant by default. In essence, you get AI sludge.

So the engine is explicitly designed to resist those defaults:

  • Consensus is treated as a social fact to explain, not a verdict to accept. If something is widely “agreed”, the relevant question is what structures made it dominant and what alternatives were made unthinkable.
  • Institutional language is treated as instruction, not description. Official terms are read as tools that define what counts as normal, responsible, extremist or dangerous.
  • Western dominance is made explicit rather than assumed. If a framework reflects Atlantic strategic or academic priorities, that is identified and compared with how the same issue reads from elsewhere.
  • No false balance. The goal isn’t to give every claim equal status. It’s to stop dominant narratives from quietly defining the only “reasonable” range of thought, especially when minority or negated positions are involved.
  • Disagreement is preserved. When perspectives genuinely conflict, the analysis keeps that tension visible instead of dissolving it into a bland middle.

This doesn’t create neutrality. It creates legibility: you can see the guidance system at work.
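To make that list concrete, here is a toy encoding of those five commitments as checks the engine might run against a draft analysis; the names and fields are mine, not the system’s actual rules:

```python
# Toy encoding of the five guardrails as named checks over a draft analysis.
# The Draft fields and rule names are illustrative, not the real system.
from dataclasses import dataclass

@dataclass
class Draft:
    explains_consensus: bool        # consensus explained as a social fact?
    quotes_official_terms: bool     # institutional language marked as framing?
    names_western_frame: bool       # Atlantic priorities made explicit?
    weighs_claims_by_evidence: bool # claims weighed, not artificially equalised
    preserves_disagreement: bool    # real conflicts kept visible?

GUARDRAILS = {
    "consensus_as_social_fact": lambda d: d.explains_consensus,
    "official_terms_as_instruction": lambda d: d.quotes_official_terms,
    "western_dominance_explicit": lambda d: d.names_western_frame,
    "no_false_balance": lambda d: d.weighs_claims_by_evidence,
    "disagreement_preserved": lambda d: d.preserves_disagreement,
}

def audit(draft: Draft) -> list[str]:
    """Return the names of the guardrails a draft analysis fails."""
    return [name for name, check in GUARDRAILS.items() if not check(draft)]
```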


What this means for future posts

At least to start, while I try out the new system, the next few pieces will be closer to the system’s natural public-facing output format. They will have:

  • Clearer separation between what a text says and how it steers
  • More direct quotes where wording matters
  • Multiple perspectives presented openly, including where they collide
  • Clearer notes on what a text assumes, avoids or treats as already decided

My aim here is not to tell you what to think; it is to show you how the writers and institutions producing these academic articles, think tank papers and government reports are engineering the information space to make their preferred way of thinking feel natural. It is to encourage you to ask deeper questions of power and to seek better answers to those questions.

While I tune the system and get used to it, I encourage you to tell me one practical thing: is the new format clearer, too dense, or missing the context you need to follow the argument?
