11. Threat Modeling
Guardians of AI, by Richard Diver

Until now, threat modeling has been a specialized capability used in software development and systems engineering. Deep expertise and domain knowledge were required to carry it out - you can see some of the latest thinking on threat modeling from The Open Group.

Over the last year of working with AI, it has become very obvious that we all need to think about threat modeling in a new way: every team is involved. From business leaders to incident responders, the help desk to information workers, and the legal team to cloud operations experts. Only by combining diverse approaches, broad knowledge, and threat intelligence can we begin to get ahead of the next wave of attack types.

NIST CSF + AI RMF

The NIST Cybersecurity Framework (CSF) is a well-established model for how to govern, identify, protect, detect, respond, and recover. Using these functions, it is possible to map any threat scenario end to end and ensure all relevant variables are considered. CSF v2.0 was released in 2024.
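To make the end-to-end mapping concrete, a single threat scenario can be sketched as a record keyed by the six CSF 2.0 functions. This is a minimal illustration, not part of the framework itself; the scenario and its entries are invented for the example.

```python
# Sketch: mapping one threat scenario across the six NIST CSF 2.0 functions.
# The scenario content below is illustrative, not taken from the framework.
CSF_FUNCTIONS = ["Govern", "Identify", "Protect", "Detect", "Respond", "Recover"]

scenario = {
    "Govern":   "Policy requires review of all external model endpoints",
    "Identify": "Inventory AI services and the data they can access",
    "Protect":  "Restrict model API keys to least-privilege scopes",
    "Detect":   "Alert on anomalous prompt or token volumes",
    "Respond":  "Revoke keys and isolate the affected service",
    "Recover":  "Restore from a known-good model configuration",
}

def uncovered(scenario: dict) -> list:
    """Return the CSF functions with no entry recorded for this scenario."""
    return [f for f in CSF_FUNCTIONS if not scenario.get(f)]
```

A scenario is only mapped "end to end" when `uncovered(scenario)` returns an empty list; any function names it returns are gaps in the mapping.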

The AI Risk Management Framework (AI RMF) has been developed to provide clear, structured guidance for AI systems.

Image of the NIST AI Risk Management Framework. A circle with governance in the center, around the outside are the core components of Map, Measure, Manage
NIST AI Risk Management Framework

The elements of Govern, Map, Measure, and Manage help provide a taxonomy that enables the responsible design, development, and use of AI systems.

Combine these approaches with other strong guidance, such as MITRE ATT&CK, MITRE D3FEND, and the OWASP Top 10 for LLM Applications, and you can begin to dig into some detailed approaches.

Inclusive threat model exercises

Before getting too deep technically, it is worth taking the time to build up the practice of discussing safety and security as a table-top (or drawing board) exercise, where discussions can flow freely and the question "WHAT IF?" is asked repeatedly. A good source of inspiration for this can be found at Information is Beautiful.

There are four main focus groups for a threat framework:

  • Business and technical risks and vulnerabilities (Analysis / Assessment)
  • Secure design and development (Model / Test)
  • Secure deployment and configuration (Protection / Detection)
  • Security operations (Hunt / Respond)

Threat framework including four key areas of focused discussion
Threat Framework


Structured exploration

It helps to have some structure to these conversations and ensure they remain focused on the articulation of threats, not risks (a risk is managed, and is more hypothetical than a tangible threat). For each scenario, use these headings to record details:

  • Title: Give the scenario a unique name for easy identification.
  • Threat type: Categorize the threat, such as remote code execution or data integrity.
  • Remaining undetected: Note ways that attackers can remain hidden from detection, covering their tracks and blending into their surroundings.
  • Defense tactics: The most useful part of this exercise is building a list of all defenses, showing how many apply to each scenario and where they are reused across multiple scenarios.
  • Business impact: Note the components affected and the worst-case impacts specific to this scenario.
  • Summary: Summarize the discussions held, outstanding questions not yet answered, and follow-up actions for the next review.
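The headings above can be captured as a structured record, so that a catalogue of scenarios can be stored, compared, and reviewed. This is one possible sketch: the field names follow the list above, while the record shape and the helper for spotting reused defenses are assumptions for illustration.

```python
# Sketch: a threat-scenario record using the headings from the exercise.
# The structure and helper are illustrative, not a prescribed format.
from dataclasses import dataclass, field

@dataclass
class ThreatScenario:
    title: str                                   # unique name for identification
    threat_type: str                             # e.g. "remote code execution"
    remaining_undetected: str                    # how attackers stay hidden
    defense_tactics: list = field(default_factory=list)
    business_impact: str = ""
    summary: str = ""

def shared_defenses(catalogue: list) -> dict:
    """Count how often each defense tactic is reused across scenarios."""
    counts: dict = {}
    for scenario in catalogue:
        for tactic in scenario.defense_tactics:
            counts[tactic] = counts.get(tactic, 0) + 1
    return counts
```

Counting defense reuse across the catalogue surfaces the most valuable controls, which is exactly the "most useful part" of the defense-tactics heading.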

Of course, using an AI assistant in this effort makes it a lot more interesting too. LLMs are great at conversations like this, providing ideas and provoking new angles.

Diagrams and documents

In previous newsletter editions I shared threat model diagrams; these are very useful for clearly articulating both the threats and the appropriate mitigations. Using the same 3-layer model for AI systems, you can expand these to cover non-AI systems too.

Diagram shows a 3-layer model for mapping threats across an AI system, including the attack stages and appropriate mitigations in each layer
AI Threat Mapping Template


Start small, map some of the basics, then build upon them until you have a catalogue of threats to review regularly and include in your testing cycles. Teams that are good at this procedure can end up with hundreds of threat models for a single system. The most important aspect is to ensure the process is repeatable and easy to understand, and that it provides clear articulation to secure appropriate investment and governance.

Here is my favorite quote from this chapter:


Quote by Richard Diver "The greatest risks are those we don't know about, the greatest threats are those we are not prepared for"
Quote by Richard Diver


The book is available now on Amazon - Guardians of AI: Building innovation with safety and security.


That brings us to a close on season 2 of the Drawing Cybersecurity newsletter. I hope you enjoyed it, and I look forward to interacting with any feedback and questions you might have.

