11. Threat Modeling
Guardians of AI, by Richard Diver


To date, threat modeling has been a specialized capability used in software development and systems engineering, requiring deep expertise and domain knowledge to carry out. You can see some of the latest thinking on threat modeling approaches from The Open Group.

Over the last year of working with AI, it has become very obvious that we all need to think about threat modeling in a new way: every team is involved, from business leaders to incident responders, the help desk to information workers, and the legal team to cloud operations experts. Only by bringing together diverse approaches, broad knowledge, and threat intelligence can we begin to get ahead of the next wave of attack types.

NIST CSF + AI RMF

The NIST Cybersecurity Framework (CSF) is a well-established model built around six functions: govern, identify, protect, detect, respond, and recover. Using these functions, it is possible to map any threat scenario end to end and ensure all the relevant variables are considered. CSF v2.0 was released in 2024.

The NIST AI Risk Management Framework (AI RMF) has been developed to provide clear guidance for managing risk in AI systems.

Image of the NIST AI Risk Management Framework: a circle with Govern at the center, with the core functions Map, Measure, and Manage around the outside.
NIST AI Risk Management Framework

The functions of Govern, Map, Measure, and Manage provide a taxonomy that enables responsible design, development, and use of AI systems.

Combine these frameworks with other strong guidance, such as MITRE ATT&CK, MITRE D3FEND, and the OWASP Top 10 for LLM Applications, and you can begin to dig into some detailed threat modeling work.
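To make that combination concrete, here is a minimal sketch (Python, purely illustrative) of how a single scenario could carry references to each framework. The field names and example values are my own assumptions, not an official cross-framework mapping.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names and example values are assumptions,
# not an official mapping between these frameworks.
@dataclass
class FrameworkMapping:
    nist_csf_functions: list[str]            # CSF v2.0: Govern, Identify, Protect, Detect, Respond, Recover
    nist_ai_rmf_functions: list[str]         # AI RMF: Govern, Map, Measure, Manage
    mitre_attack_techniques: list[str]       # relevant ATT&CK technique IDs for the scenario
    mitre_d3fend_countermeasures: list[str]  # relevant D3FEND countermeasures
    owasp_llm_top10: list[str]               # e.g. "LLM01: Prompt Injection"

prompt_injection = FrameworkMapping(
    nist_csf_functions=["Identify", "Protect", "Detect"],
    nist_ai_rmf_functions=["Map", "Measure", "Manage"],
    mitre_attack_techniques=["<look up the relevant technique IDs>"],      # placeholder
    mitre_d3fend_countermeasures=["<look up the relevant countermeasures>"],  # placeholder
    owasp_llm_top10=["LLM01: Prompt Injection"],
)
```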

Inclusive threat model exercises

Before going too deep technically, it is worth taking the time to build up the practice of discussing safety and security as a table-top (or drawing board) exercise, where discussions can flow freely and the question "WHAT IF?" is asked repeatedly. A good source of inspiration for this is Information is Beautiful.

There are four main focus groups for a threat framework:

  • Business and technical risks and vulnerabilities (Analysis / Assessment)
  • Secure design and development (Model / Test)
  • Secure deployment and configuration (Protection / Detection)
  • Security operations (Hunt / Respond)

Threat framework including four key areas of focused discussion
Threat Framework


Structured exploration

It helps to give these conversations some structure and keep them focused on the articulation of threats, not risks (a risk is something to be managed, and is more hypothetical than a tangible threat). For each scenario, consider these headers to record the details (a simple record sketch follows the list):

  • Title: Give a unique name for easy identification.
  • Threat type: Categorize the threat, such as remote code execution or data integrity.
  • Remaining undetected: Note ways that attackers can remain hidden from detection, covering their tracks and blending into their surroundings.
  • Defense tactics: The most useful part of this exercise is building a list of all defenses, showing how many apply to each scenario, and where they are reused across multiple scenarios.
  • Business impact: Note the components and worst-case impacts specific to this scenario.
  • Summary: Provide a summary of the discussions held, outstanding questions not answered, and follow up actions for next review.
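To keep the records consistent, the headers above can be captured in a simple structure. The sketch below is one illustrative way to do it in Python; the example values are assumptions, and whatever tooling your teams already use will work just as well.

```python
from dataclasses import dataclass

@dataclass
class ThreatScenario:
    # Field names follow the headers above; the example values are assumptions.
    title: str                       # unique name for easy identification
    threat_type: str                 # e.g. remote code execution, data integrity
    remaining_undetected: list[str]  # how attackers stay hidden and cover their tracks
    defense_tactics: list[str]       # defenses that apply (reusable across scenarios)
    business_impact: str             # components affected and worst-case impact
    summary: str = ""                # discussion notes, open questions, follow-up actions

example = ThreatScenario(
    title="Prompt injection via pasted document",
    threat_type="data integrity",
    remaining_undetected=["malicious instructions hidden as white text in the document"],
    defense_tactics=["input sanitization", "output filtering", "human review of high-impact actions"],
    business_impact="AI-assisted workflow sends tampered content to customers",
)
```

Because defense tactics are recorded as a simple list, counting how many defenses apply to each scenario, and which ones are reused across scenarios, becomes a quick query over the catalogue.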

Of course, using AI to assist in this effort makes it a lot more interesting too. LLMs are great at conversations like this, providing ideas and provoking new angles.

Diagrams and documents

In previous newsletter editions I shared threat model diagrams; these are very useful for clearly articulating both the threats and the appropriate mitigations. Using the same 3-layer model for AI systems, you can expand these diagrams to cover non-AI systems too.

Diagram shows a 3-layer model for mapping threats across an AI system, including the attack stages and appropriate mitigations in each layer
AI Threat Mapping Template


Start small, map some of the basics, then build on them until you have a catalogue of threats to review regularly and include in your testing cycles. Teams that are good at this practice can end up with hundreds of threat models for a single system. The most important aspect is to make the process repeatable, easy to understand, and clearly articulated, so that it secures appropriate investment and governance.
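Keeping the catalogue repeatable can be as simple as tracking when each threat model was last reviewed. The sketch below assumes each entry carries a last-reviewed date and uses an arbitrary 90-day cadence; both are assumptions for illustration only.

```python
from datetime import date, timedelta

# Minimal sketch of a review-cadence check. The titles, dates, and 90-day
# cadence are assumptions; in practice each entry would be a full threat
# model record rather than a (title, last_reviewed) pair.
catalogue = [
    ("Prompt injection via pasted document", date(2025, 1, 15)),
    ("Training data poisoning via public sources", date(2024, 11, 2)),
]

REVIEW_CADENCE = timedelta(days=90)

due_for_review = [
    title for title, last_reviewed in catalogue
    if date.today() - last_reviewed > REVIEW_CADENCE
]
print("Threat models due for review:", due_for_review)
```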

Here is my favorite quote from this chapter:


Quote by Richard Diver: "The greatest risks are those we don't know about, the greatest threats are those we are not prepared for"


The book is available now on Amazon - Guardians of AI: Building innovation with safety and security.


That brings us to a close on season 2 of the Drawing Cybersecurity newsletter. I hope you enjoyed it, and I look forward to interacting with any feedback and questions you might have.
