How to fix “garbage in, garbage out” by giving AI better input instead of a bigger model.
A marketing team inside an Irish SaaS firm wanted to speed up their thought-leadership workflow. Their first idea was a simple research agent that scraped pages, lifted patterns and stitched the highlights into a briefing deck. It grabbed whatever looked relevant and blended it into a narrative.
Early tests looked promising. On first pass the output seemed polished: the structure made sense, the tone sounded authoritative, and it even sprinkled in citations.
But contradictions surfaced. One section confidently claimed a competitor had changed its pricing structure. No one could find any evidence of that shift. Another paragraph referenced a “rising adoption trend” that only existed in a report from three years earlier. The issue wasn’t the model’s reasoning; it was what the model had been given to read. Without a controlled reading pack, the system behaved like a rushed junior analyst: fluent, confident and wrong.
The fix was simple: give the agent a defined set of material to work from and stop it pulling from anywhere it pleased. This playbook outlines the two-level framework the team used to make their research agent reliable.
Essentially, instead of manually pasting source text or docs into every prompt, you define a knowledge set – your authoritative base of approved material.
The test for what belongs in it is simple:
“What would I want an analyst to have read before writing this?”
In practical, low-tech form, this can be nothing more than a curated set of vetted documents the agent is pointed at, as in the sketch below.
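To make that concrete, here is a minimal Python sketch of a knowledge set as a fixed manifest of approved files, loaded into a single reading pack. The file names are hypothetical placeholders for whatever sources you actually trust.

```python
# A minimal sketch of a "knowledge set": a fixed manifest of approved sources,
# loaded into one reading pack the model works from. File names are hypothetical.
from pathlib import Path

KNOWLEDGE_SET = [
    "sources/2025_saas_market_report.md",
    "sources/competitor_pricing_notes.md",
    "sources/customer_interview_summaries.md",
]

def build_reading_pack(paths: list[str]) -> str:
    """Concatenate only the approved documents, each labelled by source."""
    sections = []
    for p in paths:
        text = Path(p).read_text(encoding="utf-8")
        sections.append(f"=== SOURCE: {p} ===\n{text}")
    return "\n\n".join(sections)

reading_pack = build_reading_pack(KNOWLEDGE_SET)
# The agent is prompted against `reading_pack` only -- nothing scraped at run time.
```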
For strategic projects, RAG becomes less about retrieval and more about orchestration:
how to get from raw content → insight → artifact (report, deck, narrative).
A simple but rigorous loop:
| Stage | Description | Example Prompt |
| --- | --- | --- |
| Retrieve | Pull relevant knowledge from your knowledge set (manually or via search) | “From the documents, extract all mentions of market headwinds for SaaS in 2025.” |
| Augment | Summarize and reframe for the task at hand | “Condense those points into a narrative on emerging retention risks.” |
| Generate | Create your new content (draft, summary, model) | “Using the above, write a 1-page analysis of strategic implications for mid-market SaaS leaders.” |
| Review | Run self-checks or second passes | “Check for unsupported claims or repetition. Simplify for a C-suite audience.” |
That four-step loop is rigorous RAG, even without custom code.
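If you do want a thin layer of code around that loop, the sketch below shows the four stages as explicit prompt steps. The `ask` function is a deliberate placeholder for however you reach your model (an API client, or even pasting into a chat window), and the prompts simply mirror the table above.

```python
# A sketch of the Retrieve -> Augment -> Generate -> Review loop as four explicit
# prompt stages. `ask` is a stand-in for your LLM call of choice.

def ask(prompt: str) -> str:
    """Placeholder: wire this to your preferred model or chat tool."""
    raise NotImplementedError

def research_loop(reading_pack: str) -> str:
    # 1. Retrieve: pull only what the knowledge set actually says.
    retrieved = ask(
        f"{reading_pack}\n\nFrom the documents above, extract all mentions of "
        "market headwinds for SaaS in 2025, noting the source label for each."
    )
    # 2. Augment: summarize and reframe for the task at hand.
    augmented = ask(
        f"{retrieved}\n\nCondense these points into a narrative on emerging retention risks."
    )
    # 3. Generate: create the new artifact.
    draft = ask(
        f"{augmented}\n\nUsing the above, write a 1-page analysis of strategic "
        "implications for mid-market SaaS leaders."
    )
    # 4. Review: a second pass against the same reading pack.
    return ask(
        f"{reading_pack}\n\nDRAFT:\n{draft}\n\nCheck the draft for unsupported "
        "claims or repetition and simplify it for a C-suite audience."
    )
```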
The mark of rigor is auditability: ask the model to label which source each claim came from, so every point in the output traces back to something in your knowledge set.
That small step dramatically improves trustworthiness and reviewability, which is crucial for strategy work.
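One way to make that check mechanical, assuming you have asked the model to tag each claim with a `[SOURCE: ...]` label from the knowledge set (a convention of this sketch, not a standard), is a small script like this:

```python
# A minimal auditability check: flag paragraphs of the draft whose cited source
# is missing or not in the knowledge set. The [SOURCE: ...] tagging convention
# is an assumption made for this sketch.
import re

def untraceable_claims(draft: str, knowledge_set: list[str]) -> list[str]:
    """Return paragraphs with no citation, or a citation outside the knowledge set."""
    flagged = []
    for para in filter(None, (p.strip() for p in draft.split("\n\n"))):
        cited = re.findall(r"\[SOURCE:\s*([^\]]+)\]", para)
        if not cited or any(src.strip() not in knowledge_set for src in cited):
            flagged.append(para)
    return flagged
```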
If you want more automation (without writing code), tools that let you upload a defined document set and query against it give you “managed retrieval” – basically RAG with a UI.
For strategic work, the biggest uptick comes from your posture toward the model: treat it as an analyst working only from the reading pack you’ve assembled, not as an oracle that already knows the answer.
Adding a “retrieval layer” typically improves LLM response relevance, factual accuracy and trustworthiness by 50% or more, while cutting hallucinations roughly in half. Yet in an examination of over 33,000 real-world LLM queries, only about 6% added contextual documentation. It’s been said that AI content is a study in regression to the mean, but you can instantly improve on the performance most people are seeing by priming your LLM with a strong knowledge base.