Technika XI: The CV Hallucination Problem
Why ChatGPT invents your work history—and what that reveals about the logic of large language models.
The Problem
ChatGPT and similar large language models are increasingly used to generate application materials such as CVs and personal statements. Users supply prior CVs, job ads, and prompts that explicitly prohibit fabrication, expecting the model to summarise their actual history. Instead, they receive documents filled with fictional achievements, idealised skills, and impressive but unearned experiences. This isn’t a bug. It’s a structural function of the system.
What appears as "hallucination" is not an error in the narrow sense. It is the logical output of a narrative engine optimised for persuasive performance rather than literal accuracy. The user wants a stenographer; they get a screenwriter.
The Misapprehension
The root misunderstanding is epistemic. Users believe ChatGPT is designed to extract and rephrase what is literally true within the uploaded material. In practice, the model uses the input as an associative base from which to construct an archetype: a candidate who fits the success profile embedded in its training corpus.
Even when directed to "only use what is in the documents", the model perceives such constraints through the lens of narrative plausibility. It is built to fill gaps, resolve ambiguity, and prioritise coherence over completeness. It will insert what should be there if the user were a more idealised version of themselves. This is not disobedience; it is alignment with reinforcement-trained priorities.
Model Limitations
LLMs are not literalist engines. They are probabilistic sequence generators trained on language patterns, not veracity.
When you ask ChatGPT to write a CV:
- It assumes the goal is to impress an employer
- It draws on high-performing examples in its training data
- It fills gaps with plausible inferences
- It prioritises fluency, structure, and coherence.
The model is not anchored to uploaded files unless it is structurally constrained to stay within them. Most users do not operate within a system architecture that enforces such constraints.
Moreover, there is no native "fact-only" or "cite-source-only" mode within ChatGPT as it is commonly deployed. Instructions like "do not invent anything" are interpreted as stylistic preferences, not hard rules.
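For developers calling the model through an API rather than the chat interface, "structurally constrained" looks something like the sketch below: the source material is injected into the prompt, a system message forbids additions, and temperature is set to zero. This is a minimal sketch assuming the OpenAI Python client; the model name and file path are illustrative, and even this arrangement reduces invention rather than eliminating it.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

source_cv = open("my_cv.txt", encoding="utf-8").read()  # illustrative file name

response = client.chat.completions.create(
    model="gpt-4o",   # illustrative model choice
    temperature=0,    # reduces, but does not eliminate, inventive drift
    messages=[
        {
            "role": "system",
            "content": (
                "You are an editor. Use ONLY facts present in the source text. "
                "If a detail is missing, leave a gap rather than inventing one."
            ),
        },
        {
            "role": "user",
            "content": f"Source CV:\n{source_cv}\n\nRewrite the experience section for clarity.",
        },
    ],
)

print(response.choices[0].message.content)
```

Even here, the constraint is a strongly weighted instruction, not a hard rule: the draft still has to be checked against the source.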
The Bottom Line
The LLM is functioning correctly within its own logic. It is not hallucinating in the way a broken system would. It is role-playing a success-script using user input as suggestive scaffolding.
If you give the system your CV, it will not extract it verbatim. It will use it to build a more polished version of the person you might be if your experiences had followed a more institutionally desirable arc. It will then write that.
This is not corruption; it is convergence. Narrative over factual precision.
User Responsibility
Users must treat ChatGPT-generated CVs and statements as draft simulations, not verified documents.
Outputs should be:
- Treated as indicative, not authoritative
- Fact-checked line by line against real experience
- Parsed for false positives (praise vectors, embellished roles).
Any impressive quote, line, or phrasing must be verified. If it can’t be sourced from your experience, it must be rewritten or removed.
This isn't a limitation to be "fixed" by better prompting. It is an architectural constraint. The solution is to shift the user role from passive recipient to active verifier.
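One way to act as that verifier is a crude screen for terms in the generated draft that never appear in your source material. A minimal sketch, using only the Python standard library; the file names are illustrative, and it flags invented employers, titles, dates, and metrics rather than subtler embellishment that reuses your own vocabulary.

```python
import re

def terms(text: str) -> set[str]:
    """Lower-cased word-like tokens: names, numbers, percentages."""
    return set(re.findall(r"[A-Za-z0-9][\w\-%]*", text.lower()))

def novel_terms(source_text: str, draft_text: str) -> set[str]:
    """Words in the draft that never appear in the source: candidate fabrications."""
    return terms(draft_text) - terms(source_text)

# Illustrative file names: your real CV versus the model's output.
source = open("my_cv.txt", encoding="utf-8").read()
draft = open("chatgpt_draft.txt", encoding="utf-8").read()

for term in sorted(novel_terms(source, draft)):
    print("Not in your source material - check it:", term)
```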
Concluding Directive
Do not ask ChatGPT to write your CV. Ask it to rephrase what you have already written. Break your content into small sections.
Use command structures like these (a sketch of applying them section by section follows the list):
- "Condense this paragraph without adding new information"
- "Rewrite this role summary using only the details provided"
- "Improve clarity and style but do not insert new content"
The model can be a powerful editorial assistant. But unless structurally restrained, it will always default to persuasive projection. It does not know who you are. It knows how to write someone like you into a role you might not have earned.
Published via Journeys by the Styx.
Technika: It’s not about better answers. It’s about better authorship.
—
Author’s Note
Written with the assistance of a conditioned ChatGPT instance. Final responsibility for the framing and conclusions remains mine.