Most security assessments end with a report. The report describes what was found, ranks the findings by severity, and recommends remediation steps. It is comprehensive and well-intentioned, and in a significant number of cases it sits unread on a shared drive. The problem is not the content; it is the format. A report is passive: it asks the team to receive information and act on it independently. A workshop is active: the team builds the understanding together, which changes how they relate to the findings.
Why security reports do not stick
A security report is, at its best, a comprehensive record of findings delivered to someone who then has to act on it. The challenge is the gap between "finding recorded" and "finding fixed." That gap is where most security remediation effort is lost. Teams receive a list of issues they did not help identify, written in language they do not work in, with no clear guidance on priority beyond a severity score.
What a workshop creates that a report cannot
The people building your systems understand them better than any external assessor. A workshop starts from that knowledge. The engineering team explains how the system works; the security practitioner works through what that means from a threat perspective. Both sides learn. The output — a shared threat model — reflects both the technical depth of the team and the security expertise of the assessor. It is more accurate, more specific, and more actionable than either party could produce alone.
How Threatplane's workshop process works
The process we use runs over four structured stages. It starts with business context: what does this system do, what happens if it fails, and who are the realistic threat actors? That framing shapes everything that follows. The architecture stage maps how the system actually works, often revealing details that the security team did not know and the engineering team had not thought about from a security perspective. The threat assessment stage works through what could go wrong, using that shared understanding as its foundation. The controls prioritisation stage connects findings to business impact and produces a clear, prioritised roadmap.
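To make the output of those stages concrete, here is a minimal sketch of what a single roadmap entry might look like if captured as structured data. The field names, severity convention, and example values are illustrative assumptions for this article, not Threatplane's actual workshop format.

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch only: the fields and the example below are assumptions,
# not Threatplane's actual output format.

@dataclass
class ThreatModelEntry:
    component: str          # part of the architecture the threat applies to
    threat: str             # what could go wrong, in plain language
    threat_actor: str       # realistic actor identified during business context
    business_impact: str    # why it matters, framed in business terms
    controls: List[str]     # agreed controls, most important first
    priority: int           # 1 = address first; set during controls prioritisation

# Example of an entry a workshop might produce for a hypothetical payments service.
example = ThreatModelEntry(
    component="payments API",
    threat="replayed payment requests create duplicate transactions",
    threat_actor="opportunistic external attacker",
    business_impact="direct financial loss and reconciliation overhead",
    controls=["idempotency keys on payment endpoints", "request signing"],
    priority=1,
)
```

However the entry is recorded, the point is that each item ties a specific threat to a business impact and a prioritised control, so the roadmap reads in the team's own terms rather than as a severity-ranked list.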
The downstream effects of shared understanding
The most significant downstream effect of a well-run threat modelling workshop is what happens months later. Engineering teams make different decisions — better security decisions — because they have a shared threat model in mind. They know which parts of the system carry the most risk. They know which controls are non-negotiable and which are optional. They can have a design conversation about a new feature that includes security thinking without pausing to escalate to a security expert.
What good looks like: comparing approaches
Automated scanning finds what it is trained to find. Traditional consultancy reports record what a skilled assessor found. A collaborative threat modelling workshop produces something different: a shared understanding, built by the people who own the system, of what could go wrong and why it matters. That shared understanding is what drives lasting change: not the findings themselves, but a team that understands them deeply enough to act on them.
The format of a security engagement matters as much as the quality of the findings. A report delivers information; a workshop builds understanding. For engineering teams responsible for complex, bespoke systems, that understanding is what actually changes how they build, and how secure those systems become over time.

Jonny founded Threatplane in 2017. With a background in offensive security, he has spent 15+ years helping organisations across defence, financial services, healthcare, and manufacturing understand and manage their technology risks.